
How an Australian scientist tried to stop the US plan to monopolise the nuclear arms race



Mark Oliphant in 1939.
From a collection at the National Portrait Gallery, Canberra. Gift of Ms Vivian Wilson 2004

Darren Holden, University of Notre Dame Australia

Australian scientist Mark Oliphant, who helped push the United States to develop the atomic bombs in World War II, also played a major role during the war in attempting to stop the US dominating the UK in any further development of nuclear weapons.

Details of the Adelaide-born physicist’s efforts are included in new research published today in the CSIRO’s Historical Records of Australian Science, based on documents sourced from the UK Cabinet archives.

These archival documents reveal how Oliphant attempted a British rebellion against scientific collaboration with the US that escalated all the way to the top of Britain’s wartime leadership.




Read more:
How Melbourne activists launched a campaign for nuclear disarmament and won a Nobel prize


The rise of the physicist

Oliphant (1901-2000) described himself as a “belligerent pacifist”, and his humanitarianism and compassion form an indelible image of the gentle giant of Australian science.

After studying at the University of Adelaide he moved to the Cavendish Laboratory at Cambridge in the UK. Oliphant joined a freewheeling cabal of atomic physicists led by fellow antipodean Ernest Rutherford. He later took up a position at Birmingham University.

But soon the war was to change everything for him.

In late 1938, nuclear fission of uranium was discovered in Berlin and within months the thunderclap of war clattered over Europe. After convincing the Americans of the potential of an atomic bomb in 1941, Oliphant joined the Manhattan Project in 1943 as a leading member of the collaborative British Mission.

At war with secrecy

Oliphant found that wartime secrecy was totally opposite to the usual culture of open science. The US military police opened his mail, and the FBI interrogated him on his casual attitude to rules.

In September 1944 Oliphant complained of his restrictions to the US Army’s no-nonsense military head of the project, General Leslie Groves. Groves was frustrated with progress and gave Oliphant a lecture on war and security.

In doing so, according to Oliphant’s notes preserved in the cabinet documents, the normally circumspect Groves also let slip that the US had no intention of honouring an agreement with the British to share atomic technology after the war. Groves stated that even after the war America needed to prepare for an “inevitable war with Russia”.

Oliphant’s notes added:

In this conversation Groves insisted that he spoke for the armed forces and for every thinking man and woman in U.S.A. He said that any effort U.K. might make must be confined to central Canada. He excluded specifically Australia or any other part of the Empire. Every possible source of supply of raw materials would be monopolised and controlled by U.S.A.-U.K.

How to warn the UK?

Oliphant saw weapons development merely as a vehicle for the potential of almost limitless energy, and he was intent on resuming his open research after the war.

He could not risk his mail being opened again. So he headed from Berkeley, California to the British Embassy in Washington to write a secret report to London detailing his conversation with Groves.

Oliphant had a plan. He proposed that, without delay, the entire British Mission leave the Manhattan Project, return to Britain and restart their own programs. In late 1944 he seemingly had traction and the British project, code-named Tube Alloys, was reinvigorated with new plans tabled to construct uranium isotope plants.

Oliphant’s plan escalated up the chain to Lord Cherwell, then Prime Minister Winston Churchill’s scientific advisor, and to Sir John Anderson, the Chancellor of the Exchequer and the authority on atomic matters inside the British War Cabinet.

James Chadwick, the scientific head of the British Mission, was furious at Oliphant’s cavalier approach and wrote to the British authorities arguing that the British Mission must stay in America to complete the task at hand.

Oliphant’s bombast, confidence and directness were famous. As he approached the door of 11 Downing Street (the official residence of the Chancellor of the Exchequer) on January 9, 1945, he was likely optimistic that his meeting with Sir John would result in a decision to follow his new plan.

But Sir John was in a pessimistic mood. There was still a war on, and the Allies were being pushed back by the Nazis at the Battle of the Bulge. Sir John put a stop to talk of this scientific rebellion, and ordered Oliphant back to America to complete the job.

The atomic bombs fell on Japan in August 1945. World War II soon ended.

The wrecked framework of the Museum of Science and Industry in Hiroshima, Japan, shortly after the dropping of the first atomic bomb, on August 6, 1945.
Shutterstock/Everett Historical

After the war

In mid-1946 the newly formed United Nations debated control of atomic technology and Oliphant was in New York as an Australian advisor. He and other scientists pushed a plan to abolish weapons and throw the science open.

The alternative, the scientists argued, would be an escalating arms race. Only openness in science could reduce suspicion between nations.

The US and the Soviet Union almost agreed to the plan. But the Americans refused a Soviet request to first destroy their atomic arsenal and the Soviets refused to allow UN inspections.




Read more:
We may survive the Anthropocene, but need to avoid a radioactive ‘Plutocene’


The US passed its Atomic Energy Act in August 1946, which prevented any collaboration on atomic technology. Oliphant’s prophecy came true. But the scientists had made another prophecy: atomic secrets cannot be contained.

As the critical mass of international scientists that had gathered together for war radiated back out around the world, they carried with them the secrets of the atom.

The British restarted their bomb project in 1947 and tested their first weapon in 1952, and the Soviets tested their first bomb in 1949. The US monopoly on atomic weaponry was a fleeting moment.

So the opportunity to abolish weapons was lost in 1946, and today more than 14,000 nuclear weapons exist, held by nine countries. Even in a post-Cold War world this sword of annihilation hangs by a thread over the heads of us all.

Darren Holden, PhD Candidate, University of Notre Dame Australia

This article was originally published on The Conversation. Read the original article.


How Captain Cook became a contested national symbol


Tracy Ireland, University of Canberra

Captain Cook has loomed large in the federal government’s 2018 budget. The government allocated $48.7 million over four years to commemorate the 250th anniversary of Cook’s voyages to the South Pacific and Australia in 1770. The funding has been widely debated on social media as another fray in Australia’s culture wars, particularly in the context of $84 million in cuts to the ABC.

Closer scrutiny suggests that this latest celebration of Cook may serve as a headline for financial resources already committed to a range of cultural programs, at least some of which could be seen as business as usual. These include the development of digital heritage resources and exhibitions at the National Maritime Museum, National Library, AIATSIS and the National Museum of Australia, as well as support for training “Indigenous cultural heritage professionals in regional areas”.

However, the budget package also includes unspecified support for the “voyaging of the replica HMB Endeavour” and a $25 million contribution towards redevelopment of Kamay Botany Bay National Park, including a proposed new monument to the great man.

So while the entire $48.7 million won’t simply go towards a monument, it’s clear that celebrating the 250th anniversary of Cook’s landing at Botany Bay is a high priority for this federal government.

In 1770 Lieutenant (later Captain) James Cook, on a scientific mission for the British Navy, anchored in a harbour he first called Stingray Bay. He later changed it to Botany Bay, commemorating the trove of specimens collected by the ship’s botanists, Joseph Banks and Daniel Solander.

Cook made contact with Aboriginal people, mapped the eastern coast of the continent, claimed it for the British Crown and named it New South Wales, allowing for the future dispossession of Australia’s First Nations. He would later return to the Pacific on two more voyages before his death in Hawaii in 1779.

Scholars agree that Cook had a major influence on the world during his lifetime. His actions, writings and voyages continue to resonate through modern colonial and postcolonial history.

Cook continues to be a potent national symbol. Partly this is due to the rich historical written and physical records we have of Cook’s journeys, which continue to reward further study and analysis.

But the other side to the hero story is the dispossession of Australia’s Indigenous peoples from their land. As a symbol of the nation, Cook is, and has always been, contested, political and emotional.

Too many Cooks

There are other European contenders for the title of “discoverer of the continent”, such as Dirk Hartog in 1616 and William Dampier in 1699. However, both inconveniently landed on the west coast. Although Englishman Dampier wrote a book about his discoveries, he never became a major figure like Cook.

Cook’s legend began immediately after his death, when he became one of the great humble heroes of the European Enlightenment. Historian Chris Healy has suggested that Cook was suited to the title of founder of Australia because his journey along the entire east coast made him more acceptable in other Australian states. Importantly, unlike that other great contender for founding father, the First Fleet’s Governor Arthur Phillip, Cook was not associated with the “stain of convictism”.

Landing of Captain Cook at Botany Bay, 1770, by Emanuel Phillips Fox, 1902.
Wikimedia

Australians celebrated the bicentenary of Cook’s arrival in 1970, and the bicentenary of the arrival of the First Fleet in 1988. Throughout this period it was widely accepted that Cook was the single most important actor in the British possession of Australia, despite the fact that many other political figures played significant roles.

This perhaps partly explains why Cook has featured so prominently in Aboriginal narratives of dispossession, and why the celebrations in 1970 and 1988 triggered debate around Aboriginal land rights.

Other scholars have examined the Aboriginal perspective on Cook’s landing. In the 1970s archaeologist Vincent Megaw found British artefacts in a midden at Botany Bay. He cautiously suggested that these items might have been part of the gifts given by Cook to the Aboriginal people he encountered.

Historian Maria Nugent has assessed the narratives recounted by Percy Mumbulla and Hobbles Danaiyarri. Both were senior Aboriginal lawmen and knowledge holders who, in the 1970s and ’80s, shared their sagas of the coming of Cook to their lands with anthropologists.

Too pale, stale and male?

Controversy over the celebration of Cook as founding father is not a new thing. It dates back to the 19th century when his first statues were raised.

This latest Captain Cook fanfare comes hot on the heels of broader global debates about the contemporary values and meaning of civic statues of (“pale, stale, male”) heroes associated with colonialism and slavery.

In Australia, there has also been debate about how the events of the first world war have been commemorated so expansively by Australia. A further $500 million was recently allocated for the extension of the Australian War Memorial, at a time when other cultural institutions in Canberra are being forced to shed jobs and tighten their belts.

The view from Captain Cook’s landing in Botany Bay, Kamay National Park.
Wikimedia/Maksym Kozlenko, CC BY-SA

The funding cycle for our contemporary cultural institutions and activities in Australia has been closely linked to anniversaries and their commemoration since at least the 1970 bicentenary. The 2018 budget lists support for programs at a number of cultural institutions and for training Indigenous cultural heritage professionals. It would be interesting to know whether these funds have been diverted away from existing operational budgets and core activities in these institutions to support the Cook celebrations.

The master plan for Kamay Botany Bay National Park has also been in development for some time. While centred on the historical event of Cook’s landing, the plan itself is more about the rehabilitation and activation of this somewhat neglected landscape. Plans have been drawn up in consultation with the La Perouse Aboriginal Land Council.

Should we be devoting scarce financial resources to yet another celebration of Cook? Focal events such as these can divert funds into cultural activities and may allow researchers and creative practitioners to unearth new evidence and develop fresh interpretations. Some of these funds may also go to support initiatives driven by First Nations communities.

There is no escaping the fact that Captain Cook is a polarising national symbol, representing possession and dispossession. Another anniversary of Cook’s landing may give us much to reflect upon, but it also highlights the need for investment in new symbols that grapple with colonial legacies and shared futures.

Tracy Ireland, Associate Professor Cultural Heritage, University of Canberra

This article was originally published on The Conversation. Read the original article.


Sunken Nazi U-boat discovered: why archaeologists like me should leave it on the seabed




Sea War Museum

Innes McCartney, Bournemouth University

The collapsing Nazi government ordered all U-boats in German ports to make their way to their bases in Norway on May 2, 1945. Two days later, the recently commissioned U-3523 joined the mission as one of the most advanced boats in the fleet. But to reach their destination, the submarines had to pass through the bottleneck of the Skagerrak – the strait between Norway and Denmark – and the UK’s Royal Air Force was waiting for them. Several U-boats were sunk and U-3523 was destroyed in an air attack by a Liberator bomber.

U-3523 lay undiscovered on the seabed for over 70 years until it was recently located by surveyors from the Sea War Museum in Denmark. Studying the vessel will be of immense interest to professional and amateur historians alike, not least as a way of finally putting to rest the conspiracy theory that the boat was ferrying prominent Nazis to Argentina. But sadly, recovering U-3523 is not a realistic proposition. The main challenges with such wrecks lie in accurately identifying them, assessing their status as naval graves and protecting them for the future.

U-boat wrecks like these from the end of World War II are the hardest to match to historical records. The otherwise meticulous record keeping of the Kriegsmarine (Nazi navy) became progressively sparser, breaking down completely in the last few weeks of the war. But Allied records have helped determine that this newly discovered wreck is indeed U-3523. The sea where this U-boat was located was heavily targeted by the RAF because it knew newly-built boats would flee to Norway this way.

Identification

The detailed sonar scans of the wreck site show that it is without doubt a Type XXI U-boat, of which U-3523 was the only one lost in the Skagerrak and unaccounted for. These were new types of submarines that contained a number of innovations which had the potential to make them dangerous opponents. This was primarily due to enlarged batteries, coupled to a snorkel, which meant they could stay permanently underwater. Part of the RAF’s mission was to prevent any of these new vessels getting to sea to sink Allied ships, and it successfully prevented any Type XXI U-boats from doing so.

The Type XXI U-3008.
Wikipedia

With the U-boat’s identity correctly established, we now know that it is the grave site of its crew of 58 German servicemen. As such, the wreck should either be left in peace or, more implausibly, recovered and the men buried on land. Germany lost over 800 submarines at sea during the two world wars and many have been found in recent years. It is hopelessly impractical to recover them all, so leaving them where they are is the only real option.

Under international law all naval wrecks are termed “sovereign immune”, which means they will always be the property of the German state despite lying in Danish waters. But Denmark has a duty to protect the wreck, especially if Germany asks it to do so.

Protection

Hundreds of wartime wreck sites such as U-3523 are under threat around the world from metal thieves and grave robbers. The British cruiser HMS Exeter, which was sunk in the Java Sea on March 1, 1942, has been entirely removed from the seabed for scrap. And wrecks from the 1916 Battle of Jutland that also lie partly in Danish waters have seen industrial levels of metal theft. These examples serve as a warning that organised criminals will target shipwrecks of any age for the metals they contain.

Detailed sonar scans have been taken.
Sea War Museum

Germany and the UK are among a number of countries currently pioneering the use of satellite monitoring to detect suspicious activity on shipwrecks thought to be under threat. This kind of monitoring could be a cost-effective way to save underwater cultural heritage from criminal activity and its use is likely to become widespread in the next few years.

Recovery

The recovery cost is only a small fraction of the funds needed to preserve and display an iron object that has been immersed in the sea for many years. So bringing a wreck back to the surface should not be undertaken lightly. In nearly all cases of salvaged U-boats, the results have been financially ruinous. Lifting barges that can raise shipwrecks using large cranes cost tens of thousands of pounds a day to charter. Once recovered, the costs of conservation and presentation mount astronomically as the boat will rapidly start to rust.

The U-boat U-534 was also sunk by the RAF in 1945, close to where U-3523 now lies. Its crew all evacuated that boat, meaning that she was not a grave when recovered from the sea in 1993 by Danish businessman Karsten Ree, allegedly in the somewhat incredible belief that it carried Nazi treasure. At a reported cost of £3m, the operation is thought to have been unprofitable. The boat contained nothing special, just the usual mundane objects carried on a U-boat at war.

U-534 after its recovery.
Les Pickstock/Flickr, CC BY

Similar problems were experienced by the Royal Navy Submarine Museum in the UK when it raised the Holland 1 submarine in 1982. In that case, the costs of long-term preservation proved much greater than anticipated after the initial rust-prevention treatment failed to stop the boat corroding. It had to be placed in a sealed tank full of alkali sodium carbonate solution for four years until the corrosive chloride ions had been removed, and was then transferred to a purpose-built exhibition building to protect it further.

The expensive process of raising more sunken submarines will add little to our knowledge of life at sea during World War II. But each time a U-boat is found, it places one more jigsaw piece in its correct place, giving us a clearer picture of the history of the U-boat wars. This is the true purpose of archaeology.

Innes McCartney, Leverhulme Early Career Fellow, Department of Archaeology, Anthropology and Forensic Science, Bournemouth University

This article was originally published on The Conversation. Read the original article.


In ancient Mesopotamia, sex among the gods shook heaven and earth



The “Burney Relief,” which is believed to represent either Ishtar, the Mesopotamian goddess of love and war, or her older sister Ereshkigal, Queen of the underworld (c. 19th or 18th century BC)
BabelStone

Louise Pryke, Macquarie University

In our sexual histories series, authors explore changing sexual mores from antiquity to today.


Sexuality was central to life in ancient Mesopotamia, an area of the Ancient Near East often described as the cradle of western civilisation, roughly corresponding to modern-day Iraq, Kuwait, and parts of Syria, Iran and Turkey. This was true not only for everyday humans but also for kings and even deities.

Mesopotamian deities shared many human experiences, with gods marrying, procreating and sharing households and familial duties. However, when love went wrong, the consequences could be dire both in heaven and on earth.

Scholars have observed similarities between the divine “marriage machine” found in ancient literary works and the historical courtship of mortals, although the two are difficult to disentangle. The overlap is most famous in so-called “sacred marriages”, which saw Mesopotamian kings marrying deities.




Read more:
Guide to the classics: the Epic of Gilgamesh


Divine sex

Gods, being immortal and generally of superior status to humans, did not strictly need sexual intercourse for population maintenance, yet the practicalities of the matter seem to have done little to curb their enthusiasm.

Sexual relationships between Mesopotamian deities provided inspiration for a rich variety of narratives. These include Sumerian myths such as Enlil and Ninlil and Enki and Ninhursag, where the complicated sexual interactions between deities were shown to involve trickery, deception and disguise.

The goddess Ishtar as depicted in Myths and legends of Babylonia & Assyria, 1916, by Lewis Spence.
Wikimedia

In both myths, a male deity adopts a disguise, and then attempts to gain sexual access to the female deity — or to avoid his lover’s pursuit. In the first, the goddess Ninlil follows her lover Enlil down into the Underworld, and barters sexual favours for information on Enlil’s whereabouts. The provision of a false identity in these myths is used to circumvent societal expectations of sex and fidelity.

Sexual betrayal could spell doom not only for errant lovers but for the whole of society. When the Queen of the Underworld, Ereshkigal, is abandoned by her lover, Nergal, she threatens to raise the dead unless he is returned to her, alluding to her right to sexual satiety.

The goddess Ishtar makes the same threat in the face of a romantic rejection from the king of Uruk in the Epic of Gilgamesh. It is interesting to note that both Ishtar and Ereshkigal, who are sisters, use one of the most potent threats at their disposal to address matters of the heart.




Read more:
Friday essay: the legend of Ishtar, first goddess of love and war


The plots of these myths highlight the potential for deceit to create alienation between lovers during courtship. The less-than-smooth course of love in these myths, and their complex use of literary imagery, have drawn scholarly comparisons with the works of Shakespeare.

Love poetry

Ancient authors of Sumerian love poetry, depicting the exploits of divine couples, show a wealth of practical knowledge on the stages of female sexual arousal. It’s thought by some scholars that this poetry may have historically had an educational purpose: to teach inexperienced young lovers in ancient Mesopotamia about intercourse. It’s also been suggested the texts had religious purposes, or possibly magical potency.

Several texts describe the courtship of a divine couple, Inanna (the Sumerian counterpart of the Semitic Ishtar) and her lover, the shepherd deity Dumuzi. The closeness of the lovers is shown through a sophisticated combination of poetry and sensuous imagery – perhaps providing an edifying example for this year’s Bad Sex in Fiction nominees.

Ancient Sumerian cylinder seal impression showing Dumuzid being tortured in the Underworld by the galla demons.
British Museum

In one of the poems, elements of the female lover’s arousal are catalogued, from the increased lubrication of her vulva, to the “trembling” of her climax. The male partner is presented delighting in his partner’s physical form, and speaking kindly to her. The feminine perspective on lovemaking is emphasised in the texts through the description of the goddess’ erotic fantasies. These fantasies are part of the preparations of the goddess for her union, and perhaps contribute to her sexual satisfaction.

Female and male genitals could be celebrated in poetry. The presence of dark pubic hair on the goddess’s vulva is poetically described through the symbolism of a flock of ducks on a well-watered field, or a narrow doorway framed in glossy black lapis lazuli.

The representation of genitals may also have served a religious function: temple inventories have revealed votive models of pubic triangles, some made of clay or bronze. Votive offerings in the shape of vulvae have been found in the city of Assur from before 1000 BC.

Happy goddess, happy kingdom

Divine sex was not the sole preserve of the gods, but could also involve the human king. Few topics from Mesopotamia have captured the imagination as much as the concept of sacred marriage. In this tradition, the historical Mesopotamian king would be married to the goddess of love, Ishtar. There is literary evidence for such marriages from very early Mesopotamia, before 2300 BC, and the concept persevered into much later periods.

The relationship between historical kings and Mesopotamian deities was considered crucial to the successful continuation of earthly and cosmic order. For the Mesopotamian monarch, then, the sexual relationship with the goddess of love most likely involved a certain amount of pressure to perform.

In ancient Mesopotamia, a goddess’ vulva could be compared to a flock of ducks.
Shutterstock.com

Some scholars have suggested these marriages involved a physical expression between the king and another person (such as a priestess) embodying the goddess. The general view now is that if there were a physical enactment to a sacred marriage ritual it would have been conducted on a symbolic level rather than a carnal one, with the king perhaps sharing his bed with a statue of the deity.

Agricultural imagery was often used to describe the union of goddess and king. Honey, for instance, is described as sweet like the goddess’ mouth and vulva.

A love song from the city of Ur, written between 2100 and 2000 BC, is dedicated to Shu-Shin, the king, and Ishtar:

In the bedchamber dripping with honey let us enjoy over and over your allure, the sweet thing. Lad, let me do the sweetest things to you. My precious sweet, let me bring you honey.

Sex in this love poetry is depicted as a pleasurable activity that enhanced loving feelings of intimacy. This sense of increased closeness was considered to bring joy to the heart of the goddess, resulting in good fortune and abundance for the entire community — perhaps demonstrating an early Mesopotamian version of the adage “happy wife, happy life”.

The diverse presentation of divine sex creates something of a mystery around the causes of the cultural emphasis on cosmic copulation. While the presentation of divine sex and marriage in ancient Mesopotamia likely served numerous purposes, some elements of the intimate relationships between gods show a carry-over to mortal unions.

While dishonesty between lovers could lead to alienation, positive sexual interactions held countless benefits, including greater intimacy and lasting happiness.

Louise Pryke, Lecturer, Languages and Literature of Ancient Israel, Macquarie University

This article was originally published on The Conversation. Read the original article.


FDR’s forest army: How the New Deal helped seed the modern environmental movement 85 years ago



Bridge built by CCC workers, Shady Lake Recreation Area, Arkansas.
Jerry Turner, CC BY-SA

Benjamin Alexander, City University of New York

Eighty-five years ago, on April 5, 1933, President Franklin D. Roosevelt signed an executive order allocating US$10 million for “Emergency Conservation Work.” This step launched one of the New Deal’s signature relief programs: the Civilian Conservation Corps, or CCC. Its mission was to put unemployed Americans to work improving the nation’s natural resources, especially forests and public parks.

Today, when Americans talk about “big government,” the connotation is almost always negative. But as I show in my history of the Corps, this agency infused money into the economy at a time when it was urgently needed, and its work had lasting value.

Corps workers planted trees, built dams and preserved historic battlefields. They left trail networks and lodges in state and national parks that are still widely used today. The CCC taught useful skills to thousands of unemployed young men, and inspired later generations to get outside and help conserve America’s public lands.

CCC recruits at work in Great Smoky Mountain National Park, 1936.

The spiritual value of outdoor work

Roosevelt had sketched out much of his concept for the CCC well before his inauguration on March 4, 1933. Proposing the corps on March 21, he asserted that it would be “of definite, practical value” to the nation and the men it enrolled:

“The overwhelming majority of unemployed Americans, who are now walking the streets and receiving private or public relief, would infinitely prefer to work. We can take a vast army of these unemployed out into healthful surroundings. We can eliminate to some extent at least the threat that enforced idleness brings to spiritual and moral stability.”

Congress enacted the bill on March 31, and Roosevelt signed it that day. Although there was no precedent for such a vast mobilization, enrollment started a week later in New York, Baltimore, Washington, D.C., Pittsburgh and other major cities, then fanned out across the country. By midsummer, some 250,000 men aged 18 to 25 had signed up. Their six-month term might be spent at one camp or several; it might be located across the continent or, rarely, just across town.

Poster by Albert M. Bender, Illinois WPA Art Project, Chicago, 1935.
Library of Congress

Another day, another dollar

CCC recruits came from families on relief. Agents from local welfare offices screened prospects, then passed them along to the Army for a physical examination and a final decision. The Army also managed the huge task of transporting successful applicants to hundreds of work camps. The corps established operations in all 48 states and the territories of Puerto Rico, Alaska, Hawaii and the Virgin Islands, as well as a separate American Indian division.

Most enrollees were young unmarried men, but the CCC also created special companies of war veterans. This policy was Roosevelt’s response to the 1932 Bonus March, in which thousands of World War I veterans camped out in Washington, D.C., demanding early payment on promised military service bonuses, only to be evicted at gunpoint by order of then-president Herbert Hoover. (Some scholars believe this debacle helped clinch Roosevelt’s election later that year.)

CCC recruits could only bring a single trunk; tools were provided on-site. Many Corps members packed musical instruments, and some brought their dogs, which became company mascots. At the start many recruits slept in tents and bathed in nearby rivers. Those without experience in the great outdoors learned key lessons fast, such as how to avoid using poison ivy for toilet paper. Some succumbed to homesickness and dropped out, but most adjusted, forming baseball teams, music combos and boxing leagues.

Although the CCC was a civilian organization, the camps were run by the Army and bore some of its hallmarks. Dining facilities were called mess halls, beds had to be made tightly enough to bounce a quarter off them, and workers woke to the sound of reveille and went to sleep with taps. Commanding officers had final say over most issues.

At work sites, the Agriculture and Interior departments – custodians of U.S. public lands – were in charge. CCC members planted 3 billion trees, earning the nickname “Roosevelt’s tree army.” This work revitalized U.S. national forests and created shelter belts across the Great Plains to reduce the risk of dust storms. The corps also surveyed and treated forests to control insect pests and created forest fire prevention systems. Over its decade of operation, 42 enrollees and five supervisors died fighting forest fires.

Major planting areas for the Shelterbelt Project, 1933-42.
U.S. Forest Service

Corps members created and landscaped 711 state parks, and built lodges and hiking trails in dozens of national parks and monument areas. Many of these facilities are still in use today. Attractions including the Grand Canyon, Grand Teton and Yellowstone National Parks, and Civil War battlefields at Gettysburg and Shiloh bear signatures of CCC work.

For their labors, corps members received $30 a month – but as a condition of enrollment, the CCC sent $22 to $25 each pay period home to their families. Still, at Depression prices, $5 was enough to visit nearby dance halls and meet girls once or twice a week. These forays sometimes ended in fights with jealous local men, but also led to many lifelong marriages.

Ripple effects

In total, close to 3 million workers and their families received support from the CCC between 1933 and 1942. The corps also provided jobs for well over 250,000 salaried employees, including reserve military officers who ran the camps and so-called “local experienced men” – unemployed foresters who lived near the camps and were hired mainly to help supervise enrollees on the job.

Camps also hired unemployed teachers to offer informal evening classes. Some 57,000 enrollees learned to read and write during their CCC stints. Camps offered many other classes, from standard subjects like history and arithmetic to vocational skills such as radio, carpentry and auto repair.

Like other New Deal programs, the CCC had flaws. Party patronage heavily influenced hiring of salaried personnel. Although the law creating the CCC banned racial discrimination, black enrollment was capped. Many African-American enrollees were housed in “colored camps” and could only go into town for recreation and romance if black communities existed to serve them.

A racially mixed CCC Company in Pineland, Texas in 1933, with African-American members grouped at far right.
University of North Texas Libraries., CC BY-ND

The CCC also discriminated socially, enrolling young men with families but excluding rootless transients who wandered from town to town in search of work and food. These men could have reaped great benefits from the CCC, but its leaders imagined an unbridgeable cultural gap between young men who came from families and others who came from the byroads. And the corps only enrolled men, although Eleanor Roosevelt convinced her husband to let her and Labor Secretary Frances Perkins organize a smaller network of “She-She-She” camps for jobless women.

Congress terminated funding for the CCC in 1942, after the United States entered World War II, although Roosevelt argued that it still played an essential role. Many men who had gained physical strength and learned to handle Army discipline in the CCC later entered the armed forces.

The tree army’s legacy

Beyond its physical impact, the corps helped to broaden public support for conservation. In the 1940s and 1950s, youth groups such as the Oregon-based Green Guards volunteered in local forests clearing flammable underbrush, cutting fire breaks and serving as fire lookouts. Others, such as the Student Conservation Association, advocated for wilderness protection and conservation education. Hundreds of former CCC enrollees helped lead these efforts. Today many teenagers work in national parks, forests and wildlife refuges every summer.

Although it is hard to picture a CCC-style initiative winning political support today, some of its ideas still resonate. Notably, the Obama administration’s economic stimulus plan and some proposals for upgrading U.S. infrastructure present federal spending on projects that benefit society as a legitimate way to stimulate economic growth. The CCC combined that strategy with the idea that America’s natural resources should be protected so that everyone could enjoy them.

Benjamin Alexander, Lecturer in social science, New York City College of Technology, City University of New York

This article was originally published on The Conversation. Read the original article.


The day bananas made their British debut


Thomas Johnson’s illustration of his banana plant from The Herball Or Generall Historie of Plantes.
Wikimedia Commons

Rebecca Earle, University of Warwick

When Carmen Miranda sashayed her way into the hearts of Britain’s war-weary population in films such as The Gang’s All Here and That Night in Rio, her combination of tame eroticism and tropical fruit proved irresistible. Imagine having so much fruit you could wear it as a hat. To audiences suffering the strictures of rationing, Miranda’s tropical headgear shouted exoticism and abundance – with a touch of phallic sensuality thrown in.

In 1940s and 1950s Britain, bananas represented luxury, sunshine and sexiness. But entranced cinema-goers might have been surprised to learn that the bananas in Miranda’s tutti-frutti hat were in all probability descended from a strain developed in a hothouse at a stately home in Derbyshire, in England’s picturesque – but decidedly non-tropical – Midlands.

England got its first glimpse of the banana when herbalist, botanist and merchant Thomas Johnson displayed a bunch in his shop in Holborn, in the City of London, on April 10, 1633. He included the woodcut you see at the top of this article in his “very much enlarged” edition of John Gerard’s popular botanical encyclopedia, The herball or generall historie of plantes.

Page 1516 of the Johnson edition of The herball or generall historie of plantes.
Wellcome Images

Johnson’s single stem of bananas came from the recently colonised island of Bermuda. We don’t know what variety it was – but these days the chances are that any banana you will find in a British supermarket will be descended from the Cavendish banana. This strain was developed in the 19th century by the head gardener at Chatsworth House, Joseph Paxton. His creation is called the Cavendish, rather than the Paxton, after the family name of the owners of the Chatsworth estate, the Duke and Duchess of Devonshire.

Paxton spent several years developing his banana. In 1835 his plant finally bore fruit, which won him a prize from the Royal Horticultural Society.

The Cavendish slowly gained popularity as a cultigen, but its current dominance is the result of a calamity. The genetic uniformity of commercial banana plantations is a hostage to ill-fortune. During the 1950s a virulent fungal pathogen wiped out the previously ubiquitous Gros Michel variety. The Cavendish stepped into the space left by the attack of Panama Disease. There is no reason to assume the fate suffered by the Gros Michel will not befall the Cavendish. What then will adorn our bowls of cereal and add volume to our smoothies?




Read more:
Disease may wipe out world’s bananas – but here’s how we might just save them


Taste of the tropics

Europeans have long associated bananas with the exotic pleasures of distant, island paradises. When the exhausted Ilarione da Bergamo arrived in the Caribbean in 1761 after a long sea voyage, the sight of the local fruit convinced the Italian friar that the travails of his protracted journey had been worthwhile. “Thus I began enjoying the delights of America,” he noted in his diary. Travellers marvelled at the exuberance of new-world nature, which – unlike her more parsimonious European sister – offered ripe, sweet fruit all year round.

The opportunity to gorge on sugary fruits became part of the European image of the tropics. The historian David Arnold pointed out that, in English: “One of the earliest and most enduring uses of the adjective ‘tropical’ was to describe fruit.”

De negro e india, china cambuja, by Miguel Cabrera (1695–1768).
Museum of the Americas

And of course these juicy, succulent treasures quickly became associated, not only with the tropics, but also with the sexual allure travellers projected onto women in the torrid zone. Women and tropical fruits merged into one delightful commodity in the overheated imagination of the US journalist, Carleton Beals, as he travelled through Costa Rica in the 1930s. “And the women,” he wrote breathlessly in Banana Gold, “their firm ample flesh seems ready to burst through the satin skin—like ripe fruit!”. Carmen Miranda’s provocative wink and her banana hat played masterfully on this centuries-old association.

Banana republics

Bananas originated in South-East Asia and were brought to the New World by European settlers – who, by the 19th century, were growing them on vast plantations in the Caribbean. Labour conditions on banana plantations were often atrocious. When underpaid workers at a plantation on Colombia’s Caribbean coast struck for better working conditions in 1928, they were gunned down by Colombian troops probably called in at the behest of the United Fruit Company.

The novelist Gabriel García Márquez immortalised this tragedy in a memorable scene in his One Hundred Years of Solitude. “Look at the mess we’ve got ourselves into,” one of his characters remarks, “just because we invited a gringo to eat some bananas”.

Banana plantation in Nicaragua, 1894.
Popular Science Monthly

Far worse messes were to occur in Guatemala in 1954, when the United Fruit Company cooperated closely with the Guatemalan military and the US State Department to overthrow the democratically-elected government of Jacobo Arbenz, who had made the mistake of nationalising some of the unused lands owned by the fruit company. The coup ushered in decades of military rule, during which the government, locked in a struggle with the guerrilla movement that inevitably arose in response, engaged in what many scholars have described as genocide against the Maya population.

Today, bananas are so commonplace – thanks, of course, to industrial-scale production and working conditions that continue to attract critique – that they scarcely conjure up the delight they once inspired in the travel-fatigued Ilarione da Bergamo and weary postwar cinema goers. Since April 10 2018 marks the 385th anniversary of the day in 1633 when bananas were displayed for the first time to Londoners, it’s worth pondering the complex history behind the everyday banana.

Rebecca Earle, Professor of History, University of Warwick

This article was originally published on The Conversation. Read the original article.


The Panama Canal’s forgotten casualties



Panama Canal construction in 1913 showing workers drilling holes for dynamite in bedrock, as they cut through the mountains of the Isthmus. Steam shovels in the background move the rubble to railroad cars.
(Everett Historical/Shutterstock)

Caroline Lieffers, Yale University

It was the greatest infrastructure project the world had ever seen. When the 77 kilometre-long Panama Canal officially opened in 1914, after 10 years of construction, it fulfilled a vision that had tempted people for centuries, but had long seemed impossible.

“Never before has man dreamed of taking such liberties with nature,” wrote journalist Arthur Bullard in awe.

But the project, which employed more than 40,000 labourers, also took immense liberties with human life. Thousands of workers were killed. The official number is 5,609, but many historians think the real toll was several times higher. Hundreds, if not thousands, more were permanently injured.

How did the United States government, which was responsible for the project, reconcile this tremendous achievement with the staggering cost to human lives and livelihoods?

They handled it the same way governments still do today: They doled out a combination of triumphant rhetoric and just enough philanthropy to keep critics at bay.

U.S. engineering might

From the outset, the Canal project was supposed to cash in on the exceptionalism of American power and ability.

Work crew drilling through solid rock to create the Panama Canal, Panama, 1906.
(Everett Historical/Shutterstock)

The French had tried — and failed — to build a canal in the 1880s, finally giving in after years of fighting a recalcitrant landscape, ferocious disease, the deaths of some 20,000 workers and spiralling costs. But the U.S., which purchased the French company’s equipment, promised they would do it differently.

First, the U.S. government tried to broker a deal with Colombia, which controlled the land they needed for construction. When that didn’t work, the U.S. backed Panama’s separatist rebellion and quickly signed an agreement with the new country, allowing the Americans to take full control of a 16 kilometre-wide Canal Zone.

The Isthmian Canal Commission, which managed the project, started by working aggressively to discipline the landscape and its inhabitants. They drained swamps, killed mosquitoes and initiated a whole-scale sanitation project. A new police force, schools and hospitals would also bring the region to what English geographer Vaughan Cornish celebrated as “marvellous respectability.”

A path of destruction

But this was just the beginning. The world’s largest dam had to be built to control the temperamental Chagres river and furnish power for the Canal’s lock system. It would also create massive Gatún Lake, which would provide transit for more than a third of the distance between the Atlantic and Pacific oceans.

The destruction was devastating. Whole villages and forests were flooded, and a railway constructed in the 1850s had to be relocated.

The greatest challenge of all was the Culebra Cut, now known as the Gaillard Cut, an artificial valley excavated through some 13 kilometres of mountainous terrain.

More than 100 million cubic metres of dirt had to be moved; the work consumed more than eight million kilograms of dynamite in three years alone.

Imagine digging a trench more than 90 metres wide, and 10 storeys deep, over the length of something like 130 football fields. In temperatures that were often well over 30 degrees Celsius, with sometimes torrential rains. And with equipment from 1910: Dynamite, picks and coal-fired steam shovels.

Loading shot holes with dynamite to blast a slide of rock in the west bank of the Culebra Cut, February 1912.
(National Archives at St. Louis/local Identifier 185-G-154)

Expendable labour

The celebratory rhetoric masked horrifying conditions.

The Panama Canal was built by thousands of contract workers, mostly from the Caribbean. To them, the Culebra Cut was “Hell’s Gorge.”

They lived like second-class citizens, subject to a Jim Crow-like regime, with bad food, long hours and low pay. And constant danger.

In the 1980s, filmmaker Roman Foster went looking for these workers; most of the survivors were in their 90s.

Only a few copies of Foster’s film Diggers (1984) can be found in libraries around the world today. But it contains some of the only first-hand testimony of what it was like to dig through the spiny backbone of Panama in the name of the U.S. empire.

Constantine Parkinson was one of the workers who told his story to Foster, his voice firm but his face barely able to look at the camera.

He started work on the canal at 15 years old; like many, he may have lied about his age. He was soon a brakeman, probably on a train carrying rocks to a breakwater. On July 16, 1913, a day he would never forget, he lost his right leg, and his left heel was crushed.

Parkinson explains that his grandmother went to the Canal’s chief engineer, George Goethals, to ask for some sort of assistance. As Parkinson tells it, Goethals’s response was simple: “My dear lady, Congress did not pass any law … to get compensation when [the workers] [lose limbs]. However, not to fret. Your grandson will be taken care of as soon as he [is able to work], even in a wheelchair.”

Goethals was only partly right.

At the outset, the U.S. government had essentially no legislation in place to protect the tens of thousands of foreign workers from Barbados, Jamaica, Spain and elsewhere. Administrators like Goethals were confident that the labourers’ economic desperation would prevent excessive agitation.

For the most part, their gamble worked. Though there were scandals over living conditions, injuries seem to have been accepted as a matter of course, and the administration’s charity expanded only slowly, providing the minimum necessary to get men back to work.

Placing granite in the hollow quoin. Dry Dock No. 1, Balboa, June 21, 1915.
(National Archives at St. Louis/local Identifier 185-HR-4-26J164)

Cold comfort

In 1908, after several years of construction, the Isthmian Canal Commission finally began to apply more specific compensation policies. They also contracted New York manufacturer A.A. Marks to supply artificial limbs to men injured while on duty, supposedly “irrespective of colour, nationality, or character of work engaged in.”

A. A. Marks advertising card, showing a customer holding and wearing his artificial legs, late 1800s.
U.S. National Library of Medicine/courtesy Warshaw Collection, Archives Center, National Museum of American History, Smithsonian Institution

There were, however, caveats to this administrative largesse: the labourer could not be to blame for his injury, and the interpretation of “in the performance of … duty” was usually strict, excluding the many injuries incurred on the labour trains that were essential to moving employees to and from their work sites.

Despite all of these restrictions, by 1912, A.A. Marks had supplied more than 200 artificial limbs. The company had aggressively courted the Canal Commission’s business, and they were delighted with the payoff.

A.A. Marks even took out a full-page ad for their products in The New York Sun, celebrating, in strangely cheerful tones, how their limbs helped the many men who met with “accidents, premature blasts, railroad cars.” They also placed similar advertisements in medical journals.

But this compensation was still woefully inadequate, and many men fell through its deliberately wide cracks. Their stories are hard to find, but the National Archives in College Park, Md., hold a handful.

Wilfred McDonald, who was probably from Jamaica or Barbados, told his story in a letter to the Canal administrators on May 25, 1913:

I have ben Serveing the ICC [Isthmian Canal Commission] and the PRR [Panama Railroad] in the caypasoity as Train man From the yea 1906 until my misfawchin wich is 1912. Sir without eny Fear i am Speaking Nothing But the Truth to you, I have no claim comeing to me. But for mercy Sake I am Beging you To have mercy on me By Granting me a Pair of legs for I have lost both of my Natrals. I has a Mother wich is a Whido, and too motherless childrens which During The Time when i was working I was the only help to the familys.

You can still hear McDonald’s voice through his writing. He signed his letter “Truley Sobadenated Clyante,” testifying all too accurately to his position in the face of the Canal Zone’s imposing bureaucracy and unforgiving policies.

With a drop in sugar prices, much of the Caribbean was in the middle of a deep economic depression in the early 1900s, with many workers struggling even to reach subsistence; families like McDonald’s relied on remittances. But his most profound “misfortune” may have been that his injury was deemed to be his own fault.

Legally, McDonald was entitled to nothing. The Canal Commission eventually decided that he was likely to become a public charge without some sort of help, so they provided him with the limbs he requested, but they were also clear that his case was not to set a precedent.

Other men were not so lucky. Many were deported, and some ended up working on a charity farm attached to the insane asylum. A few of the old men in Foster’s film wipe away tears, almost unable to believe that they survived at all.

Their blood and bodies paid mightily for the dream of moving profitable goods and military might through a reluctant landscape.

The Construction of the Panama Canal [1913-1914], 1937 (Reel 1-5 of 5), Office of the Chief Signal Officer, National Archives and Records Administration.

Caroline Lieffers, PhD Candidate, Yale University

This article was originally published on The Conversation. Read the original article.


Vikings exhibit hangs up the sword, and gives us a welcome insight into domestic life



A reconstructed Viking ship.
Caitlin Mills

Tom Clark, Victoria University

The Vikings are in Melbourne. It is hard to see anything labelled “Vikings” without thinking of the seafaring thugs who invaded or raided much of coastal Europe and beyond. As Viking scholar Judith Jesch has reminded us, that is essential to what the word originally meant: Norse-speaking people who got into surprisingly small ships and went in search of adventure, very often violent.

However, this is not the full story. A new exhibition at the Melbourne Museum is at pains to demonstrate this other side.




Read more:
What does the word ‘Viking’ really mean?


The television series Vikings goes out of its way to show how its characters did some pretty amazing things in their rovings – just surviving those sea voyages must rate high on the list – but mostly we know about them because they plundered far from home, to great effect. From 793 until 1066, or thereabouts, many people feared a visit from the Vikings more intensely than they feared their own rulers.

Jesch has also explained how the word broadened its meaning, even as Vikings became increasingly caricatured in popular knowledge (think the Terry Jones movie Erik the Viking). “Vikings” can now mean all people from Denmark, Sweden, Norway, Iceland, the Faroe Islands, Shetland and many other colonies across the North Atlantic who lived during “the Viking Age”.

The Melbourne Museum’s exhibition takes this broader sense of the word and uses it against that other, narrower one. Brought to Melbourne by the Swedish History Museum, which owns the collection, it presents the lives of the Vikings as something far more holistic than just the adventures of those Norsemen who went a-viking.

The approach will disappoint some people. There are weapons on show, some of them remarkably elegant for all the ravages of time, but none are better preserved than the bent sword from a burial mound in Sweden. Archaeologists reckon it was bent precisely to render it useless for violence – to prevent its misuse in the afterlife.




Read more:
Roman gladiators were war prisoners and criminals, not sporting heroes


There are boats, both original and reconstructed. Compared to the palpably seaworthy wonders of Oslo’s Viking Ship Museum, though, the standout here is a half-ship plotted in the abstract by its rivets — the planks have all perished in the boat’s burial site, but the rivets that once fastened them have been suspended in their true positions in mid-air. It offers a haunting impression of the boat that once was.

Rivets from a Viking ship create a ‘ghost ship’.
Swedish History Museum

Still, these are not displays to get the adrenalin pumping. The interactives will not push you to imagine yourself in armour, screaming from behind a wall of shields on some stricken hillside, mead in one hand and great axe in the other.

Instead, this exhibition focuses on domestic life, economy, religion and technology. Nobody should imagine that any visiting show at a museum can do comprehensive justice to even one of those four, but this one gives us plenty of concrete evidence if we wanted to imagine Swedish and similar communities in the Viking Age.

It shows us the basics of Scandinavian clothing, for example, which is so essential for imagining the people in those countries. Its displays of jewellery remind us just how fine the silver and gold smithing traditions of Germanic Europe were — for example, a filigreed pendant depicting Mjölnir (“Mealgrinder”), Thor’s hammer.

Pendant, Thor’s hammer, in gold and silver. The pendant is richly decorated with filigree ornaments and is one of a kind. Erikstorp, Ödeshög, Östergötland.
Swedish History Museum

The Mjölnir pendant is also an example of how this exhibition explores the religious and spiritual dispositions of the Vikings. The gradual progression of Christian conversion through Scandinavia and Iceland meant that some southern communities were converted long before the recognised Viking Age began. Others in the north held to their faith in the Aesir (one of two tribes of Norse gods) until well into the 12th century.

What we miss in that story of incremental northwards progression, though, is how varied and often contradictory the local beliefs were. There may have been as many different schools of Aesir worship as there were settlements across the Norse-speaking lands. Certainly, during the period of Christian conversion, many people practised a dual worship — keeping the old gods alive, even though the new God forbade it.

There is a wealth of riches in the exhibition, as you might expect, which could be chaos if it lacked a strong logic of curation. Importantly, then, elements of the curation speak with great depth. The collectors have clear points to make, and they use the exhibits to make them.

A case in point is the questioning, rather than definitive, discussion of hair combs. Curiously, archaeologists have found these apparently mundane items in most of the Scandinavian burial sites. Were they for carrying into the next world, for a final grooming of the dead person before burial, or something else entirely? If we cannot understand those combs, how can we understand the worlds they joined?

This emphasis on the social and everyday is quite different from many other Viking exhibitions – in English-speaking countries at least – which have tended to focus on the martial vigour of those people who repeatedly invaded “us”. A recent example was the British Museum’s 2014 exhibition Vikings: Life and Legend, which cast them as fighting fanatics for their religion, a medieval precursor of Daesh or ISIS.

Here, the curators are trying first and foremost to redirect our attentions. War was only a part of the Viking life, and only for a segment of Viking society at that. Anyone who wears a horned helmet to see this exhibition may feel an urge to take it off.


Vikings: Beyond the Legend is showing at the Melbourne Museum until August 26 2018.

Tom Clark, Associate Professor, First Year College, Victoria University

This article was originally published on The Conversation. Read the original article.


Australia’s history of live exports is more than two centuries old



A sheep undergoing live export in 2017.
Animals Australia

Nancy Cushing, University of Newcastle

A recent episode of 60 Minutes has captured public attention and the political agenda by airing dramatic video footage from Animals Australia, showing the fate of Australian animals in the live export trade.

Video shot secretly by a crew member shows sheep on five separate voyages from Fremantle to the Middle East last year. They are buffeted by the movement of the ship, strain to breathe in the hot, noisy and acrid atmosphere between decks and trample the dead and dying under their hooves.

But while these glimpses inside a transport ship are new, the practice of live animal export is as old as the European colonisation of Australia.




Read more:
Can live animal export ever be humane?


Animals of the new colony

The first arrival of animals that would later be exported from Australia, including sheep, cattle and goats, can be dated with unusual precision to January 1788.

Like the convict workforce who made up the bulk of the human cargo on the First Fleet, the livestock, purchased mainly at the Cape of Good Hope, were considered necessary to transplant a British society and economy in Antipodean soil. Live animal import from other colonies, like India and Batavia, and from Europe continued throughout the first century of colonisation.

Hoists were used to load and unload live animals in ports without purpose-built ramps. This photograph demonstrates the practice in India in 1895.
Source: William Henry Jackson, World’s Transportation Commission photograph collection. Library of Congress

Breeds that suited the climate and their roles in the colony, especially those that helped displace native plants and animals and Indigenous peoples, were sought after and carefully nurtured.

Gradually the inward flow of animals reversed. Flocks and herds increased to the point where some could be sold on to other destinations. Initially, this was to the other colonies Britain was establishing in the region, such as Van Diemen’s Land (now Tasmania), Western Australia, New Zealand and South Australia. These animals were primarily traded to establish new populations at their destinations.

Animals from New South Wales were also sent to the French colony of New Caledonia, and in small numbers farther afield to Russia, Japan and India. As numbers rose, larger-scale live export for consumption became established.

A hidden process

As in the present, this trade had distinct phases, some more visible than others. The process began where the animals were raised, generally on lightly stocked rangelands in the interior. They were driven on foot or loaded onto rail carriages to be taken to ports, where they waited in open yards to be loaded onto ships.

Thus far, the animals were moving through public spaces, where their treatment and conditions could be seen and in some cases recorded. Members of the public could register their concerns and seek to have mistreatment addressed. And even in a period when animal welfare was still an emerging concept, some did.

Railcars laden with frightened stock led to complaints about overcrowding and lack of access to food and water. One observer labelled such treatment “as gross a case of cruelty as it is possible to conceive”.

However, once the animals were hoisted or walked onto ships, they became invisible. No outsider could see them. Only those involved with the voyage knew how densely they were packed, how secure their pens were, whether their dung was cleared away, or how much food and water they received over journeys that could last for weeks. In the case of sheep, the advice was to pack them like wool bales, so tightly pressed together that they prevented one another from falling over.

In many cases, the animals were barely seen at all, except by one another, being left to their own devices on short voyages. During longer trips they would be tended to minimally, because of the toxic environment created below deck by what were termed their “exhalations of carbonic gases”.

Even the evidence of how many died on the voyages was hidden. Their bodies were thrown overboard before reaching port and few records were kept.

Sheep in pens on a ship’s deck, Sydney Harbour, circa 1929.
Sam Hood photograph, State Library of New South Wales, Home and Away, 4066.

At the other end of the journey, the exported animals came back into view. This was often when the most useful accounts were recorded. Complaints about their poor condition, reduced numbers or the loss of entire shipments of animals were considered worthy of writing about in local newspapers by those who had eagerly awaited their arrival. It is at the receiving end of the export process that accusations of flimsy pens, overcrowding or the loading of animals that were not fit for the voyage can be found.

Taking this longer view of the Australian live export trade shows just how extraordinary the opportunity to see what happens during live export is. Animals Australia has noted that “Australia’s live sheep trade has operated for over five decades with only those financially invested in the trade having visual access to the conditions and welfare implications for the sheep on-board”.




Read more:
Assessing Australia’s regulation of live animal exports


This has been an issue for much longer than 50 years, but it’s now possible for outsiders – including farmers, politicians and members of the public – to see the appalling conditions of the live export trade for themselves.


This article is based on a blog post originally published by White Horse Press.

Nancy Cushing, Associate Professor, University of Newcastle

This article was originally published on The Conversation. Read the original article.


Ancient stone tools found on Sulawesi, but who made them remains a mystery



Limestone ‘tower’ karst region of Maros in the south of Sulawesi, where Leang Burung 2 is located.
D.P. McGahan, Author provided

Adam Brumm, Griffith University

Another collection of stone tools dating back more than 50,000 years has been unearthed on the Indonesian island of Sulawesi. Details of the find, at a rock-shelter known as Leang Burung 2, are described in our paper out today in PLOS ONE.

But we uncovered no human fossils, so the identity of these tool-makers remains a mystery.

In 2016 we reported the discovery of similar stone tools on Sulawesi dating to 200,000 years ago, and we have no idea who made them either.

The earliest Sulawesi tools are so old that they could belong to one of several human species. Candidates include Homo erectus and Homo floresiensis, the dwarf-like “Hobbits” of Flores.




Read more:
World’s scientists turn to Asia and Australia to rewrite human history


Alternatively, they might have been Denisovans, distant cousins of Neanderthals who met early Aboriginal people in Southeast Asia, leaving a genetic legacy in their descendants.

They may even have been Homo sapiens that had ventured out of Africa long before the main exodus of our species.

Or they could be a totally unknown species.

Where did they go?

Not only do we not know who the first inhabitants of Sulawesi were, we have no idea what happened to them.

By 40,000 years ago people were creating rock art on Sulawesi. Given the sophistication of these artworks, their makers were surely Homo sapiens with modern minds like ours.

If the first islanders were a now-extinct group, did they linger long enough to encounter modern cultures?

Sulawesi also holds great promise for understanding the initial peopling of our land.

This large island on the route to Australia might have been the launch pad to these shores up to 65,000 years ago. It could even be where the First Australians met Denisovans.

How our region looked during the Ice Age. Lower sea levels bridged the ocean barrier now separating Australia from New Guinea and joined up numerous islands in Southeast Asia to each other and to the adjacent mainland, with the exception of islands in Wallacea, which have always remained separate. The arrows show how the ancestors of Aboriginal people may have got to Australia up to 65,000 years ago.
Adam Brumm, Author provided

Resolving this mystery is not easy on a huge landmass like Sulawesi. Where do you begin to look? Which brings us to Leang Burung 2.

The original dig

Leang Burung 2 is a limestone rock-shelter in the island’s south. It was first excavated in 1975 by archaeologist Ian Glover.

Sulawesi, showing the location of Leang Burung 2 rock-shelter.
ESRI (right map), Author provided

Glover dug to a depth of 3.6m, uncovering “Ice Age” artefacts dating back 30,000 years. He also found, at the bottom of his trench, a layer of yellow clay containing simpler stone tools and fossils of large mammals (megafauna) that were rare to absent in overlying (that is, younger) “Ice Age” levels.

But before Glover could explore these hallmarks of earlier habitation he had to shut down the dig – large rocks in the trench had made further progress untenable.

Decades later, the late Mike Morwood, of “Hobbit” fame, resolved to extend Glover’s trench to bedrock. He had a hunch that below the undated clay might be evidence that archaic humans existed on Sulawesi until relatively recent times. In fact, Mike thought the ancestors of the “Hobbits” might have come from this island to the north of Flores.

In 2007 Mike’s team (led by Makassan archaeologist Irfan Mahmud) deepened the trench to 4.5m, but the dig was once again halted by rocks.

A new dig and deeper

Later, at Mike’s invitation, and with colleagues from Indonesia’s National Research Centre for Archaeology (ARKENAS), I reopened the trenches in an effort to finally get to the bottom of things.

Indonesian archaeologists at work in Leang Burung 2.
Adam Brumm, Author provided

Over three seasons (2011-13) we excavated to a depth of 6.2m – deeper than ever before. It was a trying dig, requiring the use of heavy-duty shoring to support the unstable walls and specialist drilling equipment to remove huge rocks that had hindered prior work at this site.

Instead of reaching bedrock, we hit groundwater. With water seeping in, our dig was done.

Deep-trench excavation at Leang Burung 2 in 2012.
Adam Brumm, Author provided

Nevertheless, we can confirm that beneath an upper disturbed zone there is indeed evidence of an early human presence: we exposed a rich cultural horizon in a brown clay deep below Glover’s yellow clay.

Among the findings are large, rudimentary stone tools and megafauna fossils. We also turned up a fossil from an extinct elephant, the first known from the site.

Fossil tooth fragment from an extinct elephant, excavated from Leang Burung 2.
M W Moore, Author provided

Dating the new find

We are fortunate to have dating methods that were unavailable in Glover’s day, but the age of the lowermost layers has still proved tricky to nail down.

Our best efforts suggest that the top of Glover’s clay is over 35,000 years old, while the brown clay is about 50,000 years old – and we still have not bottomed out.

The early inhabitants used tools like those made 200,000 years ago on Sulawesi, so the deepest artefacts may be connected to the island’s oldest tool-making culture.

Stone artefacts from the deep deposits at Leang Burung 2, dated to at least 50,000 years ago.
M W Moore, Author provided

These cave dwellers could still have been around when the first rock art appears 40,000 years ago, but owing to dating uncertainties and the erosion of a large amount of sediment from Leang Burung 2, we can’t be sure.




Read more:
Ice age art and ‘jewellery’ found in an Indonesian cave reveal an ancient symbolic culture


Leang Burung 2 rock-shelter.
Adam Brumm, Author provided

A new hope

Digging deeper at Leang Burung 2 is possible but it will require serious effort, including artificially lowering the water table. But while research at this shelter has been challenging, it has led us to another site with better prospects.

Our excavations at nearby Leang Bulu Bettue have unearthed rare “Ice Age” ornaments up to 30,000 years old, and we have now reached deeper and older levels.

Further work at this cave may yield vital clues about the original inhabitants of Sulawesi, including, we hope, the first fossil remains of these enigmatic people.

Adam Brumm, ARC Future Fellow, Griffith University

This article was originally published on The Conversation. Read the original article.

