
From the jarnpa of central Australia to trolls: the many meanings of monsters



The Tanami desert in central Australia is haunted by beings called the jarnpa, which look like people but possess superhuman powers.
Shutterstock.com

Yasmine Musharbash, University of Sydney

The word “monster” derives from two Latin verbs: “monere” (to warn) and “demonstrare” (to show). Together they convey a sense of warning, or a portent. The figure of the monster signals what threatens society.

Monster Anthropology combines the interdisciplinary field of Monster Studies, which explores the meanings of monsters, with anthropology, which is concerned with understanding how different peoples see and experience the world in their own specific ways.
Less focused on fictional monsters in literature and popular culture (such as ghosts, zombies, vampires, aliens, dragons, and elves), it considers the monsters who haunt the people anthropologists work with.

These monsters are more than characters in myths, songs, and stories from around the fire. They are “out there” on the prowl, lurking in the shadows, lying in wait, going about their monstrous business in the real world. They appear in all kinds of shapes, and for all kinds of reasons. Some are cheeky and mischievous, some are mysterious, others are downright evil.

But all monsters make their mark on the communities they haunt.

Fears come to life

In central Australia, for example, many Aboriginal people are terrified of jarnpa. These monsters may look like humans, but they possess superhuman powers. They can fly as fast as a bullet and make themselves invisible. They love to kill and do so with ease, using either sorcery or brute force.

Jarnpa have existed in the Tanami Desert since time immemorial. In the past, when local people moved across the desert in their seasonal rhythms, jarnpa were held responsible for otherwise inexplicable deaths. A person and a jarnpa must have crossed paths, and the jarnpa did what jarnpa do: it killed.

Nowadays, Aboriginal people live in permanent communities dotted across the desert. It is believed these small towns have become magnets for jarnpa, who flock to them to kill. Interestingly, they kill only Aboriginal residents, while non-Indigenous locals are not even afraid of them.

We can interpret jarnpa as providing insights into prevailing inequalities between Indigenous and non-Indigenous people – in particular the fact that Indigenous Australians have a life expectancy around 10 years lower than that of non-Indigenous Australians.

A statue of an Anito.
Wikimedia

Another compelling example of monsters who exert a distinct influence over the people they haunt is the Anito, spirits of the Indigenous Tao people on Lanyu Island, Taiwan. Their presence on the island and in the Tao’s lives is all-encompassing.

As the Anito take great joy in spoiling people’s plans, the Tao will not discuss their intentions out loud. For the same reason, the Tao are taught to keep their emotions hidden.

Anger, for example, is said to draw the Anito in, enabling them to detach the soul from one’s body. To ward off this danger, children are taught to suppress anger from an early age. Through these and more examples, anthropologist Leberecht Funk illustrates how the Anito shape every aspect of Tao life.

Dangerous allies

Other monsters are less intrusive, but this does not mean they are any less potent in meaning. Take the Latharr-gun, for example. This is a big, black, scaly dragon said to live in caverns and underground tunnels in and around Litchfield National Park in the Northern Territory.

The traditional custodians of the land under which the Latharr-gun roams, the Mak Mak Marranunggu people, told anthropologist Joanne Thurman how it can pop up through soft soil and pull you down with it.

In Litchfield National Park, the Latharr-gun lives in caverns and underground tunnels.
Shutterstock

The Mak Mak Marranunggu know how to recognise the “th-d-th-d-th-d” sound signalling its approach. They say they learned how to calm the Latharr-gun from “the old people”. It’s imperative to stand very still, while announcing in the local language that one belongs to the land. Slinging some sweat in the direction of the Latharr-gun also helps, as that way it can smell that one is “from here”.

Put differently, only the custodians can mediate the danger the Latharr-gun poses. In the context of contested land, over which Aboriginal, mining, pastoral, and National Park interests clash, the Latharr-gun becomes a strong if dangerous ally.




Read more:
The ancient origins of werewolves


Icelandic anthropologist Helena Onnudottir describes another monstrous ally: the Tröll. Human-like in appearance but larger, and a bit uncouth and rough, they live in caves and crevasses across Iceland and make their presence felt in a number of ways.

Like other Icelandic monsters, they are the idiom through which Icelanders know their land – and themselves. Further, as Onnudottir describes, in a situation of danger she “called on her Tröll … and the Tröll heeded her call,” ensuring her safe passage.

The Princess and the Trolls, John Bauer, 1913.
Wikimedia

Such ambiguity, at once threatening and familiar, is characteristic of all monsters.

Taking monsters seriously

Monsters always take on specific cultural meanings wherever they are found. Consider ghosts, for example. They are among the most prolific of monsters, existing everywhere across time and space. And yet, they do so differently.

Ghosts in Fiji are recognisably related to other local supernatural beings and take on the same responsibilities as ancestral spirits. According to anthropologist Geir Henning Presterudstuen, they reinforce central cultural beliefs about Fijian cosmology, joining in with ancestors protecting the wellbeing of land and people. As they haunt people they also reflect the same concerns about ethnic and social relations that preoccupy the locals, such as sexual morality and maintaining racial borders.




Read more:
Friday essay: why YA gothic fiction is booming – and girl monsters are on the rise


Meanwhile ghosts in North Maluku, Indonesia, as anthropologist Nils Ole Bubandt reports, are part of the current political climate. For instance, a series of unnerving events was understood to be caused by the ghost of a woman whose husband had been killed in a conflict.

The woman had joined in herself, only to be raped, killed, and dumped in the forest. Her haunting the living echoed her own trauma and that of the conflict more widely.

The study of monsters can be a shortcut towards understanding different fears and how they manifest culturally. This is why taking other people’s monsters seriously becomes ever more urgent in these apocalyptic times of climate change, wars, inequality, terrorism, deforestation, extinction, floods, fires, and droughts.

Yasmine Musharbash, Senior Lecturer of Anthropology, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How huge floods and complex infrastructure could have triggered ancient Angkor’s demise



A monument to urban frailty?
Javier Gil/Wikimedia Commons, CC BY-SA

Dan Penny, University of Sydney

A series of floods that hit the ancient city of Angkor would have overwhelmed and destroyed its vast water network, according to a new study that provides an explanation for the downfall of the world’s biggest pre-industrial city.

Our research, published in Science Advances, explains how the damage to this vital network would have triggered a series of “cascading failures” that ultimately toppled the entire city. And it holds lessons for today’s cities about the danger posed when crucial infrastructure is overwhelmed.

Angkor, in modern-day Cambodia, was founded in 802 AD and abandoned during the 15th century. Its demise coincided with a period of highly variable rainfall in the late 14th and early 15th centuries, with prolonged droughts and extremely wet years.

We know Angkor’s water distribution network was heavily damaged by flooding during that period. But we didn’t have an explanation of how this triggered the city’s eventual collapse and abandonment.

Flooding fate

Angkor is an unusual archaeological site because the remains of the city can still be seen on the ground and, particularly, from the air. It is thus possible to map precisely the constructed features that made up its urban fabric and, from this, to interpret the function and flow of the living city.

We used existing archaeological maps of Angkor to chart the city’s water distribution network, which was made up of hundreds of excavated canals and embankments, temple moats, reservoirs, natural river channels, and other features. This sprawling network, covering more than 1,000 square km, provided both irrigation and flood defence.

We then used a computer model to simulate the effects of flooding, such as would have occurred during huge monsoonal rains, to see how the system would have coped with the biggest deluges.

We found that large floods would have been channelled into just a few major pathways, which would have suffered significant erosion as a result. Other parts of the network, meanwhile, would have had less water flow and would have begun to fill up with sediment.

The resulting feedback loop would have caused damage to cascade through the network, ultimately fragmenting Angkor’s water infrastructure.

A watery end.
Alcyon/Wikimedia Commons, CC BY-SA

There are two main messages from our research. First, it demonstrates how climatic variability in the 14th and 15th centuries could have triggered the demise of the city.

Second, it shows how Angkor’s fate resonates with today’s concerns about the resilience of our own urban infrastructure – not just to extreme weather (although that is important), but also to other potentially damaging events such as terrorism.




Read more:
What’s critical about critical infrastructure?


Angkor was once the largest city on Earth. But its huge growth made it unworkable, unwieldy, and ultimately irreparable. Its critical urban infrastructure was both complex and interdependent, meaning that a seemingly small disruption (such as a flood) could fracture the network and bring down the entire city.

Ancient Angkor, it seems, experienced the same challenges as modern urban networks. As we move further into a period characterised by extreme weather events, the resilience of our urban infrastructure will be tested.

As cities grow, their infrastructure becomes more complex. Eventually, networks such as roads, water infrastructure or electricity grids reach a critical state that is neither predicted nor designed by those who operate them. In these networks, small errors or outages in one part of the network can quickly propagate to become a much larger failure. One example would be an electrical fault that triggers a wide-scale blackout.
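The cascading-failure mechanism can be sketched in a few lines of code. The model below is a hypothetical toy (the node names, loads, and capacities are invented), not the simulation used in the study: each element of a network carries a load and has a fixed capacity, and when one element fails its load spills onto its neighbours, which may then fail in turn.

```python
# Toy cascading-failure model. Each node carries a load and has a fixed
# capacity; when a node fails, its load is shared equally among its
# surviving neighbours, and any neighbour pushed over capacity fails too.

def cascade(graph, load, capacity, start):
    failed = {start}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        neighbours = [n for n in graph[node] if n not in failed]
        if not neighbours:
            continue
        share = load[node] / len(neighbours)  # redistribute the failed load
        for n in neighbours:
            load[n] += share
            if load[n] > capacity[n] and n not in failed:
                failed.add(n)
                frontier.append(n)
    return failed

# A small ring network: one local failure takes down every node.
graph = {
    "A": ["B", "E"], "B": ["A", "C"], "C": ["B", "D"],
    "D": ["C", "E"], "E": ["D", "A"],
}
load = {n: 1.0 for n in graph}
capacity = {n: 1.4 for n in graph}

print(sorted(cascade(graph, load, capacity, "A")))
# prints ['A', 'B', 'C', 'D', 'E']
```

Raising a node’s capacity (redundancy) or cutting links so overloads cannot spread (modularity) contains the cascade to a small part of the network.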

Government agencies around the world have developed or are developing strategies to deal with threats to critical infrastructure, including from terrorism, natural disasters and, increasingly, extreme weather events related to climate change. Resilience can be built into infrastructural networks by increasing redundancy (or alternative flow paths) and emphasising modularity, so that cascading failures, if they occur, can be localised while maintaining the function of the wider network.

Our research on the demise of Angkor’s infrastructure sounds a warning from history about the dangers of the complex urban environments in which most humans now live, and the urgent need to prepare for a more variable future.

Dan Penny, Associate Professor, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The ancient origins of werewolves



In Ancient Greek texts, the king Lycaon is punished for misdeeds by being turned into a wolf.
Wikimedia

Tanika Koosmen, University of Newcastle

The werewolf is a staple of supernatural fiction, whether it be film, television, or literature. You might think this snarling creature is a creation of the Medieval and Early Modern periods, a result of the superstitions surrounding magic and witchcraft.

In reality, the werewolf is far older than that. The earliest surviving example of man-to-wolf transformation is found in The Epic of Gilgamesh from around 2100 BC. However, the werewolf as we now know it first appeared in ancient Greece and Rome, in ethnographic, poetic and philosophical texts.

These stories of the transformed beast are usually mythological, although some have a basis in local histories, religions and cults. In 425 BC, Greek historian Herodotus described the Neuri, a nomadic tribe of magical men who changed into wolf shapes for several days of the year. The Neuri were from Scythia, land that is now part of Russia. Using wolf skins for warmth is not outside the realm of possibility for inhabitants of such a harsh climate: this is likely the reason Herodotus described their practice as “transformation”.

A werewolf in a German woodcut, circa 1512.
Wikimedia

The werewolf myth became integrated with the local history of Arcadia, a region of Greece. Here, Zeus was worshipped as Lycaean Zeus (“Wolf Zeus”). In 380 BC, Greek philosopher Plato told a story in the Republic about the “protector-turned-tyrant” of the shrine of Lycaean Zeus. In this short passage, the character Socrates remarks: “The story goes that he who tastes of the one bit of human entrails minced up with those of other victims is inevitably transformed into a wolf.”

Literary evidence suggests cult members mixed human flesh into their ritual sacrifice to Zeus. Both Pliny the Elder and Pausanias discuss the participation of a young athlete, Damarchus, in the Arcadian sacrifice of an adolescent boy: when Damarchus was compelled to taste the entrails of the young boy, he was transformed into a wolf for nine years. Recent archaeological evidence suggests that human sacrifice may have been practised at this site.




Read more:
Friday essay: the female werewolf and her shaggy suffragette sisters


Monsters and men

The most interesting aspect of Plato’s passage concerns the “protector-turned-tyrant”, also known as the mythical king, Lycaon. Expanded further in Latin texts, most notably Hyginus’s Fabulae and Ovid’s Metamorphoses, Lycaon’s story contains all the elements of a modern werewolf tale: immoral behaviour, murder and cannibalism.

An Athenian vase depicting a man in a wolf skin, circa 460 BC.
Wikimedia

In Fabulae, the sons of Lycaon sacrificed their youngest brother to prove Zeus’s weakness. They served the corpse as a pseudo-feast, attempting to trick the god into eating it. A furious Zeus slew the sons with a lightning bolt and transformed their father into a wolf. In Ovid’s version, Lycaon murdered and mutilated a protected hostage of Zeus, but suffered the same consequences.

Ovid’s passage is one of the only ancient sources that goes into detail on the act of transformation. His description of the metamorphosis uses haunting language that creates a correlation between Lycaon’s behaviour and the physical manipulation of his body:

…He tried to speak, but his voice broke into
an echoing howl. His ravening soul infected his jaws;
his murderous longings were turned on the cattle; he still was possessed
by bloodlust. His garments were changed to a shaggy coat and his arms
into legs. He was now transformed into a wolf.

Ovid’s Lycaon is the origin of the modern werewolf, as the physical manipulation of his body hinges on his prior immoral behaviour. It is this that has contributed to the establishment of the “monstrous werewolf” trope of modern fiction.

Lycaon’s character defects are physically grafted onto his body, manipulating his human form until he becomes that which his behaviour suggests. And, perhaps most importantly, Lycaon begins the idea that to transform into a werewolf you must first be a monster.

The idea that there was a link between biology (i.e. appearance) and “immoral” behaviour developed fully in the late 20th century. However, minority groups were more often the target than mythical kings. Law enforcement, scientists and the medical community joined forces to find “cures” for socially deviant behaviour such as criminality, violence and even homosexuality. Science and medicine were used as a vehicle through which bigotry and fear could be maintained, as shown by the treatment of HIV-affected men throughout the 1980s.

However, werewolf stories show the idea has ancient origins. For as long as authors have been changing bad men into wolves, we have been looking for the biological link between man and action.

Tanika Koosmen, PhD Candidate, University of Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Columbus believed he would find ‘blemmyes’ and ‘sciapods’ – not people – in the New World



The statue of Christopher Columbus in Columbus Circle, New York City.
Zoltan Tarlacz/Shutterstock.com

Peter C. Mancall, University of Southern California – Dornsife College of Letters, Arts and Sciences

In 1492, when Christopher Columbus crossed the Atlantic Ocean in search of a fast route to East Asia and the southwest Pacific, he landed in a place that was unknown to him. There he found treasures – extraordinary trees, birds and gold.

But there was one thing that Columbus expected to find that he didn’t.

Upon his return, in his official report, Columbus noted that he had “discovered a great many islands inhabited by people without number.” He praised the natural wonders of the islands.

But, he added, “I have not found any monstrous men in these islands, as many had thought.”

Why, one might ask, had he expected to find monsters?

My research and that of other historians reveal that Columbus’ views were far from abnormal. For centuries, European intellectuals had imagined a world beyond their borders populated by “monstrous races.”

Of course the ‘monstrous races’ exist

One of the earliest accounts of these non-human beings was written by the Roman natural historian Pliny the Elder in 77 A.D. In a massive treatise, he told his readers about dog-headed people, known as cynocephali, and the astomi, creatures with no mouths and no need to eat.

Across medieval Europe, tales of marvelous and inhuman creatures – of cyclopes, blemmyes (creatures with heads in their chests) and sciapods (who had a single leg with a giant foot) – circulated in manuscripts hand-copied by scribes who often embellished their treatises with illustrations of these fantastic creatures.

A 1544 woodcut by Sebastian Münster depicts, from left to right, a sciapod, a cyclops, conjoined twins, a blemmye and a cynocephalus.
Wikimedia Commons

Though there were always some skeptics, most Europeans believed that distant lands would be populated by these monsters, and stories of monsters traveled far beyond the rarefied libraries of elite readers.

For example, churchgoers in Fréjus, an ancient market town in the south of France, could wander into the cloister of the Cathédrale Saint-Léonce and study monsters on the more than 1,200 painted wooden ceiling panels. Some panels portrayed scenes of daily life – local monks, a man riding a pig and contorted acrobats. Many others depicted monstrous hybrids, dog-headed people, blemmyes and other fearsome wretches.

The ceiling of the Cathédrale Saint-Léonce depicts an array of monstrous creatures.
Peter C. Mancall, Author provided

Perhaps no one did more to spread news of monsters’ existence than a 14th-century English knight named John Mandeville, who, in his account of his travels to faraway lands, claimed to have seen people with the ears of an elephant, one group of creatures who had flat faces with two holes, and another that had the head of a man and the body of a goat.

Scholars debate whether Mandeville could have ventured far enough to see the places that he described, and whether he was even a real person. But his book was copied time and again, and likely translated into every known European language.

Leonardo da Vinci had a copy. So did Columbus.

Old beliefs die hard

Even though Columbus didn’t see monsters, his report wasn’t enough to dislodge prevailing ideas about the creatures Europeans expected to find in parts unknown.

In 1493 – around the time Columbus’ first report began to circulate – printers of the “Nuremberg Chronicle,” a massive volume of history, included images and descriptions of monsters. And soon after the explorer’s return, an Italian poet offered a verse translation describing Columbus’ journey, which its printer illustrated with monsters, including a sciapod and a blemmye.

Indeed, the belief that monsters lived at the Earth’s edge remained for generations.

In the 1590s, the English explorer Sir Walter Raleigh told readers about the American monsters he heard about in his travels to Guiana, some of which had “their eyes in their shoulders, and their mouths in the middle of their breasts, & that a long train of haire groweth backward between their shoulders.”

Soon after, the English natural historian Edward Topsell translated a mid-16th-century treatise on the various animals of the world, a book that appeared in London in 1607, the same year that colonists established a small community at Jamestown, Virginia. Topsell was eager to integrate descriptions of American animals in his book.

But alongside chapters on Old World horses, pigs and beavers, readers learned about the “Norwegian monster” and a “very deformed beast” that Americans called an “haut.” Another, known as a “su,” had “a very deformed shape, and monstrous presence” and was “cruell, untamable, impatient, violent, [and] ravening.”

Of course, in the New World, the gains for Europeans came at a terrifying cost for Native Americans: The newcomers stole their land and treasures, enslaved them, introduced Old World diseases and spurred long-term environmental change.

In the end, perhaps these indigenous Americans saw the invaders of their homelands as a ‘monstrous race’ of its own – creatures who destabilized their communities, took their possessions and threatened their lives.

Peter C. Mancall, Andrew W. Mellon Professor of the Humanities, University of Southern California – Dornsife College of Letters, Arts and Sciences

This article is republished from The Conversation under a Creative Commons license. Read the original article.


World politics explainer: the Russian revolution


To try and understand the Russian revolution outside of the broader social context of the time is to neglect the development of nationhood in the region.
Wikicommons

Mark Edele, University of Melbourne

This article is part of our series of explainers on key moments in the past 100 years of world political history. In it, our authors examine how and why an event unfolded, its impact at the time, and its relevance to politics today.


For most people, the term “Russian Revolution” conjures up a popular set of images: demonstrations in Petrograd’s cold February of 1917, greatcoated men in the Petrograd Soviet, Vladimir Lenin addressing the crowds in front of the Finland station, demonstrators dispersed during the July days and the storming of the Winter Palace in October.

What happened?

These were all important events that forced the Tsar to abdicate, brought the Bolsheviks to power, took Russia out of the first world war, prompted British, American, and Japanese interventions, and sent the Romanov empire careening into years of bloody civil war.

Among revolutionary socialists, they still inspire daydreams of future revolutions. Historians on the political right, by contrast, promote them as warnings of what happens if you try to change the world. In Russia, meanwhile, they pose complex challenges for constructing a past that can inspire the present.

The standard story summarised by these pictures goes something like this:

Demonstrations in Petrograd, February 1917.
Wikicommons
Riot on Nevsky Prospekt, July 1917.
Viktor Bulla/Wikicommons
Storming of the Winter Palace, October 1917.
Wikicommons

The Russian empire, already under severe political and social strain in 1914, broke apart under the pressures of modern warfare. In 1916, a massive uprising against labour conscription shook central Asia.

In 1917, it was the turn of the Russian heartland. Industrial strikes, protests over food shortages, and women’s demonstrations combined to create a revolutionary crisis in Petrograd, the capital of the empire.

Eventually, this crisis convinced both the political and the military elites to pressure the Tsar to abdicate. These events are known as the February revolution.

They turned out to be only the first step. Throughout 1917, the revolution radicalised until in October, the most radical wing of the Russian Social Democrats – Lenin’s Bolsheviks – took power in the name of the revolutionary working class. The October revolution, in turn, triggered the Russian Civil War which was eventually won by the Bolsheviks.

But this focus on events in Petrograd in 1917 is misleading. If we want to understand the significance of the Russian revolution for today’s world, we need to understand both its position in a wider historical process and its very complexity.




Read more:
Friday essay: Putin, memory wars and the 100th anniversary of the Russian revolution


The larger context

What happened in 1917 was not just a beginning. It was also a moment in the larger trajectory of the Romanov empire (the pre-Soviet Russian Empire) embroiled in a world war it was poorly prepared to fight.

1917 is part of the story of how an empire, built between the 15th and the 18th centuries on the basis of peasants tied to the land of their masters (serfdom) and the indisputable power of the Tsar (autocracy), tried to come to grips with a changing world in the 19th and early 20th centuries, a world of overseas empires, industrialisation, and emerging mass society.

It is but a snapshot in the history of imperialism, economic and social change, and decolonisation. These are all ongoing processes that still trouble the region today.

This sequence of events began with the lost Crimean War of 1853-56, which triggered the Great Reforms of the 1860s and 1870s.

Together with a determined push in the 1890s to industrialise the country, these reforms brought a new, more modern, more urban, and more educated society into being.

This more complex society then faced its first test in 1904-05. A disastrous war against Japan destabilised the empire enough to trigger a first revolution in 1905. It forced the Tsar to make concessions towards modern politics through the creation of a pseudo-parliament, legal parties, and decreased control of the media.

Then came the first world war. The military campaign went poorly, disgruntling the elites with an obviously incompetent regime, dislocating populations on a massive scale, intensifying national feelings in this multi-ethnic empire, triggering an economic crisis of immense proportions, and further polarising social divisions between the haves and have-nots.

The result was a cluster of wars, revolutions, and civil wars that dragged on into the early 1920s. The Union of Soviet Socialist Republics that emerged from this catastrophe united most of the lands the Romanovs had ruled. Finland, Latvia, Estonia, Lithuania, and Poland went their own way, meanwhile, at least until the second world war.

Map of former USSR States.
Wikicommons, CC BY-SA

Contemporary relevance

The “Russian revolution”, then, was not just Russian and not just a revolution. It was also a moment when modern nations were born.

Notwithstanding earlier histories, today’s Ukraine, Belarus, Lithuania, Latvia, and Estonia began their lives in the crucible of war and revolution. Independent Finland and Poland, too, saw the light of day in 1917.

As one historian has pointed out in a compressed overview of events in Ukraine, “the Ukrainian revolution is not the Russian revolution.” Neither were the more democratic revolutions in Omsk, Samara, and Ufa the same as the Bolshevik revolution in Petrograd, to say nothing of those beyond the peaks of the Caucasus, or the grassroots rural revolutions all over the empire. These other revolutions, often forgotten but as much part of the process as the iconic events in Petrograd, amounted to the catastrophic breakdown of the empire in 1918.

But the revolutionary period saw more than just the replacement of one empire by another. It also changed matters decisively. For one, the Soviet empire was not capitalist, notwithstanding the limited market mechanisms allowed under the New Economic Policy (NEP), introduced in 1921 to deal with the catastrophic economic crisis engendered by war, revolution, and civil war.

The new empire was also much more national in form than its Romanov predecessor had been. The aspirations of the non-Russian peoples had to be accommodated in some way and hence a pseudo-federal state was erected, where “Union republics” (such as Ukraine, Belarus, or Russia) were joined together in a Union of Soviet Socialist Republics (or USSR). In 1991, it would break apart along the borders of these Union republics, lines drawn, by and large, as a result of the reconquest of the Romanov lands by the revolutionary Red Army.

These lines became more significant over time, because of a second, far-reaching aspect of the national transformation of the multi-ethnic Romanov empire in the crucible of the “Russian” revolution. In order to deal with the threat of nationalism, the Soviet Union became an “affirmative action empire”, which gave non-Russian minorities space and resources to develop their languages and cultures. This affirmation of the national principle was meant to disarm nationalism and help the development of socialism. Instead, it inadvertently “promoted ethnic particularism”.

As a result, many of the nationalisms we encounter in the region today are to a considerable degree a result of this paradoxical Soviet nation-making.

Mark Edele, Hansen Chair in History, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The road to here: rivers were the highways of Australia’s colonial history



A river in Van Diemen’s Land, charted during Nicolas Baudin’s 1802 journey.
National Library of Australia, CC BY-NC

Imogen Wegman, University of Tasmania

On November 2, 1816, Charles Repeat, “a poor old man”, was driving his master’s cart along the short route between Hobart and New Town in Van Diemen’s Land (Tasmania). He accidentally drove over a small tree stump, and was thrown from the cart and killed immediately.

By that time the British had been established in New Town for about 12 years, and this road was part of the route to several settlements further out of town. It was not an unused back street but a main road, and yet drivers still had to avoid the deadly perils of tree stumps.

Even main roads could be in poor condition. Macquarie Street in Hobart, 1833.
State Library Victoria

It was not that roads were not important – a network of overland routes was quickly spreading to connect the growing colony – but they were not the only transport routes. Waterways were also vital to transport systems, as overland routes were rough and slow. In Australia, rivers played a pivotal role in giving European settlers access to the land beyond the immediate coastlines, and shaped the modern cities we know today.

Charts of Van Diemen’s Land from expeditions by Abel Tasman, James Cook, and Nicolas Baudin reveal the extent to which these colonial explorers relied on rivers. The maps show water depths, fresh water supplies, and sources of timber for ship repairs. Mountain ranges are depicted as lines of peaks, as they would have appeared from the deck of a ship following the coastline. The world beyond navigable waterways was a place for speculation, not exploration.




Read more:
New law finally gives voice to the Yarra River’s traditional owners


Once the British arrived in Australia, one of their main concerns was finding sites for further expansion. Surveyors and adventurers recorded the landscape around the primary settlements, sometimes combining their surveys, as in the case of one chart described as “A map of all those parts of … New South Wales which have been seen by any person belonging to the settlement”. Other charts were surveyed and drafted by one person, such as James Meehan’s 1804 chart of the land alongside Hobart’s River Derwent.

Chart showing exploration route along the Upper Derwent River, 1828.
Tasmanian Archives and Heritage Office

In Tasmania’s hilly landscape, valleys were often more accessible than the scrub-covered hills. Rivers also served as stable landmarks to identify important points in the environment, and were useful for retracing steps. Even some 30 years after the British were firmly settled in Tasmania, rivers remained the starting point for pushing out into areas they had not yet explored.

River reasons

There were plenty more sensible reasons for concentrating on rivers, besides ease of access. They also provided the necessities of daily life: drinking water, irrigation for kitchen gardens, and a sewer for removing the less picturesque elements. This preoccupation with waterways is captured on charts showing the Tasmanian colony throughout the early 19th century, where all the settlements are based on the banks of rivers.

Settlements around Hobart, based along the waterways.
Reconstructed by Imogen Wegman, original from Tasmanian Archives and Heritage Office

Even places that could not be reached by river, such as Bothwell, 60km north of Hobart, needed fresh water. Settlements like New Norfolk, 20km from Hobart, were used as transport hubs between Bothwell and the colony’s governing centre. Goods could move between river and road along these routes, depending on the infrastructure, urgency and weather.

Individual properties were often oriented toward the rivers as well, their front doors facing the main thoroughfare – the river. Tour guides at Woolmers Estate in northern Tasmania will tell you that the house was originally orientated towards the river. This was common among grand houses and small cottages alike. It was not until roads became more reliable that new properties began facing them instead.

The Archer family at Woolmers renovated and built a grand new entrance, now facing an overland access route. This was a power move, as it made sure that guests approaching the house would pass through the most impressive land, and their first sighting of the house would be the entrance. They would be duly awestruck by the grandeur (and therefore wealth) of their hosts.




Read more:
A home for everyone? Property ownership has been about status and wealth since our convict days


The site of today’s Hobart central business district was chosen largely because of the waterways. The River Derwent was deep and suitable for ships, while the Hobart Rivulet (and others) provided fresh water for daily life and industry. Priorities change, however, and the rivulet has now been “all but obliterated from the city centre”, squashed into a series of culverts and tunnels.

The history of Australia’s colonial-era reliance on waterways will not be so easily buried, however. In May 2018, Hobart was hit by storms that brought 100mm of rain in a few hours. Hobart’s rivulets and streams broke their banks with spectacular vigour, washing over streets and into buildings. This was not the first time the Hobart Rivulet has brought the city to a standstill, and it will doubtless not be the last.

Floodwaters in Hobart, 10 May 2018 (ABC News)

For those of us who live in today’s Australian cities, waterways can be easy to dismiss as simply picturesque places to paddle a kayak or have a swim. But historically they were so much more. In fact, without rivers, the people who sowed the seeds of our modern cities would not have got very far at all.

Imogen Wegman, Project officer, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.


World politics explainer: the end of Apartheid


Anti-Apartheid protests in the 1980s are mere snapshots of time in the long journey towards equality, paved by the sweat and blood of those in the African National Congress and beyond.
Paul Weinberg/Wikicommons, CC BY-SA

David Robinson, Edith Cowan University

This article is part of our series of explainers on key moments in the past 100 years of world political history. In it, our authors examine how and why an event unfolded, its impact at the time, and its relevance to politics today.


Racial divisions emerged in South Africa as early as the 1600s, with Dutch settlement. Europeans maintained segregation and hierarchy between themselves, their slaves (many brought from Asia), and local African populations.

Once the Cape of Good Hope was seized by the British during the Napoleonic period, race-based policies in the colony became increasingly formalised.

The 1806 Cape Articles of Capitulation, which secured the Dutch settlers’ surrender in exchange for the protection of their existing rights and privileges, bound the British to respect prior Dutch legislation and gave segregation an enduring place within the legal system of the South African colonies.

What happened?

Under British control during the 1800s, various laws were passed to limit the political, civil and economic rights of non-whites in South Africa.

This included denying them the right to vote, limiting their right to own land, and requiring the carrying of passes for movement within colonies.

Despite resistance to discriminatory laws in the first half of the 20th century by groups like the African National Congress (ANC), these laws persisted over the decades.

Signage in Durban reflecting apartheid values, 1989.
Guinnog/Wikicommons, CC BY-SA

However, social change accelerated in South Africa during the second world war, with African labourers increasingly drawn to urban areas. This was due to industrial production increasing to service Europe’s wartime demands for minerals and local manufacturing replacing imports, empowering rebellious workers and ANC activists in the process.

The threat of social change was palpable, leading South Africa’s white population to elect the Afrikaner-dominated Herenigde Nasionale Party (National Party) in 1948, over the more progressive United Party.

The National Party, which then ruled South Africa until 1994, offered white South Africans a new programme of segregation called Apartheid – which translates to “separateness”, or “apart-hood”.

Apartheid was based on a series of laws and regulations that formalised identities, divisions, and differential rights within South Africa. The system classified all South Africans as “White”, “Coloured”, “Indian”, and “African” – with Africans classified into 10 tribal groups.

From 1950, the Population Registration Act and the Group Areas Act assigned all South African citizens a racial status, and determined in which physical areas of South Africa different races could live.

Future legislation would embed these regional divisions, and provide a façade of self-government for the African regions.

The 1949 Prohibition of Mixed Marriages Act and 1950 Immorality Act outlawed interracial romantic relationships, and by 1953 the Reservation of Separate Amenities Act and Bantu Education Act segregated all kinds of public spaces, services and amenities.

Sign erected during the apartheid era.
Shutterstock

Racial policies also intermingled with rhetoric against communism. The 1950 Suppression of Communism Act was central to banning any party advocating a subversive ideology. Virtually any progressive opponent of the National Party regime could be defined as communist, particularly if they disrupted “racial harmony”, which severely limited anti-Apartheid activists’ ability to organise.

More generally, the government also maintained very socially conservative laws for all citizens regarding sexuality, reproductive health, and vices like gambling and alcohol.

The impact of and response to apartheid policies

In this context, the ANC youth wing (including a young lawyer by the name of Nelson Mandela) came to dominate the party and adopt a confrontational black nationalist programme. This group advocated strikes, boycotts and civil disobedience.

In March 1960, police attacked a demonstration against Apartheid’s racial pass system in the Sharpeville township. They killed 69 people, arrested over 18,000 more, and banned the ANC and the smaller Pan-Africanist Congress.

Painting of the Sharpeville massacre in 1960.
Godfrey Rubens/Wikicommons, CC BY-SA

This pushed resistance towards more radical, underground tactics. Following authorities’ further brutal treatment of a 1961 labour strike, the ANC launched armed struggle against Apartheid through a military wing: Umkhonto we Sizwe (MK). As a leader of MK, Nelson Mandela was arrested in 1962 and subsequently sentenced to life in jail.

Anti-Apartheid resistance dimmed during the 1960s due to the harsh repression of activist activities and the arrests of many anti-Apartheid leaders. But in the 1970s, it was revitalised by a growing Black Consciousness Movement.

The independence of nearby Angola and Mozambique from Portugal, and discriminatory education policies that led to the 1976 Soweto Uprising, were hopeful examples of change. By the 1980s, township rebellions, boycotts, union militancy, and growing political organisations pushed South Africa’s Botha government into a state of emergency, forcing dramatic concessions that escalated to negotiations with Mandela.

Although the British and American governments classified the ANC as a terrorist organisation during the 1980s, growing international criticism of Apartheid – spurred by disruptive resistance in South Africa – and the waning of the anti-communist imperative at the end of the Cold War eventually moved those states to implement trade sanctions against South Africa.

In 1990, President Frederik de Klerk freed Mandela and unbanned anti-Apartheid political parties, to allow negotiations for a path to majority-rule democracy.

Frederik de Klerk (left) with Nelson Mandela, 1992.
World Economic Forum/Wikicommons, CC BY-SA

Despite right-wing backlash and outbreaks of violence, the white minority overwhelmingly approved negotiations for a democratic transition. Mandela sought peaceful racial reconciliation, through a negotiated process of transition to free, inclusive elections, and the post-Apartheid operations of the Truth and Reconciliation Commission.

Receiving the 1993 Nobel Peace Prize and then winning South Africa’s 1994 elections, Mandela was thus personally integral to the peaceful transition from Apartheid to multiracial democracy.

Contemporary relevance

What legacy has the end of Apartheid thus left?

Globally, Mandela became an icon, associated with resistance, justice, and Christ-like self-sacrifice. The popular perception of Mandela and the anti-Apartheid movement, though acknowledging some elements of the struggle’s history, generally demonstrates a shallow understanding of what actually occurred.

These narratives predominantly fail to engage with Mandela’s leadership of military struggle, and the widespread militant, and violent, action that forced the Apartheid regime to negotiate. They often highlight international campaigns against Apartheid, but are mute on the strong military and financial support for Apartheid South Africa by western states throughout the Cold War.

While leaving a general message that opposition to injustice can win, the anti-Apartheid movement’s history encapsulated by Mandela is probably as well understood as the iconic image of Che Guevara printed on t-shirts.


Shutterstock

Regionally, the end of Apartheid ended much of Southern Africa’s conflict, and allowed black-ruled states to unite in far greater cooperation for social and economic development.

The intervention of South African troops (and mercenaries) throughout Africa was also greatly reduced. However, conflict has continued in many areas of Africa, as have operations of the African Union and increasingly the United States’ Africa Command.

Meanwhile, though still a regional hegemon, post-Apartheid South Africa failed to effectively support neighbouring democracies, allowing questionable regimes such as Mugabe’s ZANU-PF in Zimbabwe to persist without adequate intervention. Newly stable southern Africa was also increasingly open to trade and investment from China – its enhanced global reach and influence an unforeseen result of freedom in many developing countries.

Nationally, though entering power with principles seeking redistribution of wealth and a general raising of living standards, the ANC gradually embraced neoliberal policies that have only led to an increase in poverty and inequality in South Africa over the past two decades.

The ANC’s overwhelming dominance of government throughout this period – with an absolute majority – has stifled development of effective parliamentary democracy (though South African civil society remains vibrant and active), and corruption throughout the ANC and the South African state has become endemic. Narratives of “white genocide” in South Africa are not supported by facts, although crime and racial enmity remain virulent in South African society. Yet South Africa also persists as one of the world’s most multicultural and inclusive countries.

Despite its troubles, South Africa is a nation with an inspiring story of struggle – even though an accurate vision of the country’s past and present requires engagement with many complexities.

The South African example shines a light on sometimes unpleasant realities of history, as well as enduring aspects of human nature. For those who are willing to seek out the details and contemplate the contradictions, the end of Apartheid leaves a legacy of insight most valuable in our turbulent age.

David Robinson, Lecturer of History, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


World politics explainer: The twin-tower bombings (9/11)



The South Tower being hit during the 9/11 attacks. The events of September 11 2001 have significantly shaped American attitudes and actions towards fighting terrorism, surveilling citizens and othering outsiders.
NIST SIPA/Wikicommons

Barbara Keys, University of Melbourne

This article is part of our series of explainers on key moments in the past 100 years of world political history. In it, our authors examine how and why an event unfolded, its impact at the time, and its relevance to politics today.


At 8:46am on a sunny Tuesday morning in New York City, a commercial jet plane flew into the North Tower of the World Trade Centre, cutting through floors 93 to 99.

As the news was beamed around the world, shaken reporters wondered whether the crash had been an accident or an act of terrorism. At 9:03am, viewers watching the smoke billowing from the gash in the building were stunned to see a second jet plane dart into view and fly directly into the South Tower. Suddenly, it was clear that the United States was under attack.

The scale of the assault became apparent about 40 minutes later, when a third jet crashed into the Pentagon. Not long after, in the fourth shock of the morning, the South Tower of the World Trade Centre unexpectedly crumbled to the ground in a few seconds, its structural integrity destroyed by the inferno set off by the plane’s thousands of gallons of jet fuel. Its twin soon succumbed to the same fate.

Fire fighters on scene after the 9/11 attack.
Mike Goad/Wikicommons

What happened?

Over the next days and weeks, the world learned that 19 militants belonging to the Islamic terrorist group, al Qaeda, armed with box cutters and knives missed by airport security, had hijacked four planes.

Three hit their targets. The fourth, intended for the White House or the Capitol, crashed in a field in Pennsylvania when passengers, who had learned of the other attacks, struggled for control of the plane. All told, close to 3,000 people were killed and 6,000 were injured.

Immediate impact of the attacks

The events of 9/11 seared the American psyche. A country whose continental states had not seen a major attack in nearly 200 years was stunned to find that its financial and military centres had been hit by a small terrorist group based thousands of miles away. More mass attacks suddenly seemed not just probable but inevitable.




Read more:
How the pain of 9/11 still stays with a generation


The catastrophe set in motion a sequence of reactions and unintended consequences that continue to reverberate today. Its most lasting and consequential effects are interlinked: a massively expensive and unending “war on terror”, heightened suspicion of government and the media in many democratic countries, a sharp uptick in Western antagonism toward Muslims, and the decline of US power alongside rising international disorder – developments that aided the rise of Donald Trump and leaders like him.

War without end?

Just weeks after 9/11, the administration of US President George W. Bush invaded Afghanistan with the aim of destroying al Qaeda, which had been granted safe haven by the extremist Taliban regime. With the support of dozens of allies, the invasion quickly toppled the Taliban government and crippled al Qaeda. But it was not until 2011, under President Barack Obama, that US forces found and killed al Qaeda’s leader and 9/11 mastermind – Osama bin Laden.

American soldiers in Afghanistan, 2001.
Marine Corps New York/Flickr, CC BY

Though there have been efforts to end formal combat operations since then, over 10,000 US troops remain in Afghanistan today, fighting an intensifying Taliban insurgency. It is now the longest war the United States has fought. Far from being eradicated, the Taliban is active in most of the country. Even though the war’s price tag is nearing a trillion dollars, domestic pressure to end the war is minimal, thanks to an all-volunteer army and relatively low casualties that make the war seem remote and abstract to most Americans.

Even more consequential has been the second major armed conflict triggered by 9/11: the US-led invasion of Iraq in 2003. Although Iraqi dictator Saddam Hussein was not linked to 9/11, officials in the administration of George W. Bush were convinced his brutal regime was a major threat to world order. This is largely due to Saddam Hussein’s past aggression, his willingness to defy the United States, and his aspirations to build or expand nuclear, chemical, and biological weapons programs, making it seem likely that he would help groups planning terrorist attacks on the West.

The invading forces quickly ousted Saddam, but the poorly executed, error-ridden occupation destabilised the entire region.

In Iraq, it triggered a massive, long-running insurgency. In the Middle East more broadly, it boosted Iran’s regional influence, fostered the rise of the Islamic State, and created lasting disorder that has led to civil wars, countless terrorist attacks, and radicalisation.

In many parts of the world, the war fuelled anti-Americanism; in Europe, public opinion about the war set in motion a widening estrangement between the United States and its key European allies.

Monetary and social costs

Today, the United States spends US$32 million every hour on the wars fought since 9/11. The total cost so far exceeds US$5.6 trillion. The so-called war on terror has spread into 76 countries where the US military is now conducting counter-terror activities, ranging from drone strikes to surveillance operations.

The mind-boggling sums have been financed by borrowing, which has increased social inequality in the United States. Some observers have suggested that government war spending was even more important than financial deregulation in causing the 2007-2008 Global Financial Crisis.

Eroding democracy

The post-9/11 era has eroded civil liberties across the world. Many governments have cited the urgent need to prevent future attacks as justification for increased surveillance of citizens, curbing of dissent, and enhanced capacity to detain suspects without charge.

The well publicised missteps of the FBI and the CIA in failing to detect and prevent the 9/11 plot, despite ample warnings, fed public distrust of intelligence and law enforcement agencies. Faulty intelligence about what turned out to be nonexistent Iraqi “weapons of mass destruction” (WMDs) undermined public confidence not only in the governments that touted those claims but also in the media for purveying false information.

The result has been a climate of widespread distrust of the voices of authority. In the United States and in other countries, citizens are increasingly suspicious of government sources and the media — at times even questioning whether truth is knowable. The consequences for democracy are dire.

Increasing Islamophobia

Across the West, 9/11 also set off a wave of Islamophobia. Having fought a decades-long Cold War not long before, Americans framed the attack as a struggle of good versus evil, casting radical Islam as the latest enemy. In many countries, voices in the media and in politics used the extremist views and actions of Islamic terrorists to castigate Muslims in general. Since 9/11, Muslims in the United States and elsewhere have experienced harassment and violence.

Cartoon highlighting Islamophobia in Europe.
Carlos Latuff/Flickr, CC BY-SA

In Western countries, Muslims are now often treated as the most significant public enemy. European populists have risen to power by denouncing refugees from Muslim majority countries like Syria, and the willingness and ability of Muslims to assimilate is viewed with increasing scepticism.

A week after his inauguration, US President Donald Trump kept a campaign promise by signing the so-called “Muslim ban”, designed to prevent citizens of six Muslim-majority countries from entering the United States.

Following attacks

One of the most widely expected consequences of 9/11 has so far been averted. Though Islamic terrorists have engaged in successful attacks in the West since 9/11, including the 2002 Bali bombings, the 2004 Madrid train bombings, and the 2015 attacks in Paris, there has been no attack on the scale of 9/11. Instead, it is countries with large Muslim populations that have seen a rise in terrorist attacks.

Yet the West still pays the price for its militant and militarised response to terrorism through the weakening of democratic norms and values. The unleashing of US military power that was supposed to intimidate terrorists has diminished America’s might, creating a key precondition for Donald Trump’s promise to restore American greatness.

Although many of the issues confronting us today have very long roots, the world we live in has been indelibly shaped by 9/11 and its aftermath.

Barbara Keys, Associate Professor of US and International History, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Rome: City + Empire contains wonderful objects but elides the bloody cost of imperialism



Coins from the Hoxne Treasure. Hoxne, England, late 4th – early 5th century CE. Silver. 1994,0401.299.1-20.
© Trustees of the British Museum, 2018. All rights reserved

Caillan Davenport, Macquarie University and Meaghan McEvoy, Macquarie University

“What have the Romans ever done for us?” asks Reg from the People’s Front of Judaea in Monty Python’s comedy classic, Life of Brian. Rome: City + Empire, now showing at the National Museum of Australia, offers visitors a clear answer: they brought civilization.

This collection of more than 200 objects from the British Museum presents a vision of a vast Roman empire, conquered by emperors and soldiers, who brought with them wealth and luxury. Quotations from ancient authors extolling the virtues of Rome and the rewards of conquest stare down from the walls. This is an exhibition of which the Romans themselves would have been proud.

Portrait head resembling Cleopatra. Italy, 50–30 BCE. Limestone. 1879,0712.15.
© Trustees of the British Museum, 2018. All rights reserved

Indeed, the major issue is that the displays present a largely uncritical narrative of Roman imperialism. One section, called “Military Might”, features a statue of the emperor Hadrian in armour, a defeated Dacian, and a bronze diploma attesting to the rewards of service in the Roman army. An explanatory panel informs us that those who resisted were “treated harshly” while those “who readily accepted Roman domination, benefited”. This is especially troubling to read in an Australian context.

The exhibition is beautifully laid out, with highly effective use of lighting and colour to emphasise the different themes: “The Rise of Rome”, “Military Might”, “The Eternal City”, “Peoples of the Empire” and “In Memoriam”. And it boasts impressive busts and statues of emperors, imperial women, priests and priestesses, gods and goddesses, most displayed in the open, rather than behind glass. This allows visitors to view them up close from many angles.

Mummy portrait of a woman. Rubaiyat, Egypt, 160–170 CE. Encaustic on limewood. 1939,0324.211.
© Trustees of the British Museum, 2018. All rights reserved

The use of imagery is one of the exhibition’s greatest strengths. Close-ups of coins and other small artefacts are projected against the wall, while enlarged 18th-century Piranesi prints of famous monuments such as the Pantheon provide a stunning backdrop.

There are some excellent curatorial choices. The number of images of women is commendable, enabling the exhibition to move beyond emperors, soldiers and magistrates to emphasise women as an intrinsic part of the life of Rome.

Stories of key monuments, such as the Colosseum, the Baths of Caracalla, and the Pantheon, are accompanied by busts of the emperors who built them as well as associated everyday objects such as theatre tickets and strigils. However, there is no map of the city of Rome to allow visitors to place these buildings in context. And the evidence for the true cost of Roman conquest is not sufficiently highlighted.

Where are the slaves?

Coins show emperors subduing prostrate peoples, including one featuring Judaea, where Vespasian and Titus cruelly crushed a revolt between 66 and 73 CE. The accompanying plaque refers obliquely to Roman “acts of oppression”, but one has to turn to the exhibition catalogue to find the true list of horrors, including the thousands enslaved and the sacking of the Temple of Jerusalem. Nor is there any mention that the construction of the Colosseum, profiled just a few feet away in the exhibition, was funded by the spoils of the Jewish War.

Relief showing two female gladiators. Halicarnassus (modern Bodrum), Turkey, 1st–2nd century CE. Marble. 1847,0424.19.
© Trustees of the British Museum, 2018. All rights reserved

The walls are covered with quotations extolling the Romans’ own imperialistic vision. “The divine right to conquer is yours”, a line from Virgil’s Aeneid, greets visitors at the start. Even more troubling is a quotation from Pliny the Elder which looms over the “Peoples of the Empire” section:

Besides, who does not agree that life has improved now the world is united under the splendour of the Roman Empire?

Toothpick from the Hoxne Treasure. Hoxne, England, late 4th – early 5th century CE. Silver and niello with gold gilding. 1994,0408.146.
© Trustees of the British Museum, 2018. All rights reserved

This section is full of objects displaying the luxurious lifestyle of provincial elites under Roman rule, from the stunning decorated spoons and bracelets of the British Hoxne treasure to beautiful funerary reliefs of rich Palmyrenes. The exhibition trumpets the “diversity” of Rome’s peoples, but this curious set of objects does not tell any coherent story beyond the comfortable lives of the privileged.

Slavery – the most horrifying aspect of Roman society – is all but absent. There are incidental references (a gladiator given his freedom, the funerary urn of a former slave), but they are presented with little context. Scholars have estimated that slaves composed at least 10 per cent of the empire’s total population of 60 million. They undertook domestic and agricultural labour, educated children, and served in the imperial household. Their stories remain largely untold.




Read more:
Mythbusting Ancient Rome: cruel and unusual punishment


Alternative narratives

The absence of any counterpoint to the Romans’ story in this exhibition is all the more surprising given that the catalogue contains an essay from the NMA that does show awareness of these problems. Curators Lily Withycombe and Mathew Trinca explore how the narrative of Roman conquest influenced imperial expansion in the modern age, including the colonisation of Australia.

Particularly revealing is their statement: “While the Classics may have once been in the service of British ideas of empire, they are now more likely to be taught using a critical postcolonial lens.” Yet this nuance does not make it into the exhibition itself.

Ring with sealstone depicting Mark Antony. Probably Italy, 40–30 BCE. Gold and jasper. 1867,0507.724.
© Trustees of the British Museum, 2018. All rights reserved

A very different narrative about the Roman world could have been presented. Even in their own time, Roman commentators were aware of the darker side of imperialism. In his account of the influx of Roman habits and luxuries into Britain, the historian Tacitus remarked:

The Britons, who had no experience of this, called it ‘civilization’, although it was a part of their enslavement. (Agricola 21, trans. A. R. Birley).

The colossal head of the empress Faustina the Elder from a temple in Sardis is a spectacular object, but its overwhelming size should remind us of the asymmetrical power dynamics of Roman rule. Emperors and their family members were meant to be figures of awe to peoples of the empire, to be feared like gods. Tacitus memorably described the imperial cult temple at Colchester in Britain as a “fortress of eternal domination”.




Read more:
Guide to the Classics: Virgil’s Aeneid


The Rome of the exhibition is a curiously timeless world. The grant of Roman citizenship to all free inhabitants of the empire in 212 CE goes unmentioned, and the coming of Christianity is presented almost as an afterthought.

There are some spectacular items from the vibrant world of Late Antiquity (3rd-7th centuries CE), such as the gold glass displaying Peter and Paul and parts of the Esquiline treasure. But this section is marred by factual errors and it misses the opportunity to explore the dynamics of fundamental religious and cultural change.

Horse-trappings from the Esquiline Treasure, Rome, Italy, 4th century CE. Silver and gold gilding. 1866,1229.26.
© Trustees of the British Museum, 2018. All rights reserved

Rome: City + Empire is a wonderful collection of objects, displayed in an engaging manner, which will be of interest to all Australians. The exhibition is likely to be a hit with children – there is a playful audio-guide specifically for kids and many hands-on experiences dotted throughout: from the chance to electronically “colour-in” the funerary relief of a Palmyrene woman on a digital screen, to feeling a Roman coin or picking up a soldier’s dagger.

But visitors should be aware that it presents a distinctly old-fashioned tale of Rome’s rise and expansion, which is out of step with contemporary scholarly thinking. The benefits of empire came at a bloody cost.

Rome: City + Empire is at the National Museum of Australia until 3 February 2019.

Caillan Davenport, Senior Lecturer in Roman History, Macquarie University and Meaghan McEvoy, Associate Lecturer in Byzantine Studies, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Bogs are unique records of history – here’s why


Henry Chapman, University of Birmingham; Ben Gearey, University College Cork; Jane Bunting, University of Hull; Kimberley Davies, Plymouth University, and Nicola Whitehouse, Plymouth University

Peat bogs, which cover 3% of the world’s land surface, are special places. While historically often considered as worthless morasses, today they are recognised as beautiful habitats providing environmental benefits from biodiversity to climate regulation. However, they are threatened by drainage, land reclamation for agriculture and peat cutting for fuel, which has significantly reduced the extent and condition of these ecosystems on a global scale. Bogs are fragile and sensitive to change, whether by human hands or by processes such as climate change.

A less well known aspect of bogs is their remarkable archaeological potential. In their undisturbed state at least, bogs are anoxic (oxygen-free) environments due to their saturation. These conditions are hostile to the microbes and fungi that would normally decay organic material such as the remains of plants, which are the principal constituents of the peat. The same anoxic conditions also offer protection from decay for organic archaeological remains. The vast majority of objects and structures used by our ancestors were made from organic materials (in particular wood). These are normally lost on dryland archaeological sites but can be preserved in peatlands.

The saturated conditions mean that even soft tissue can survive, including both skin and internal organs. Probably the best known archaeological finds are the remains of “bog bodies” such as the famous prehistoric Tollund Man in Denmark, Lindow Man in the UK, or the more recent Irish discoveries of Clonycavan Man, Old Croghan Man and Ireland’s oldest known bog body, Cashel Man, dated to the Bronze Age.

Excavating a trackway on Hatfield Moors, South Yorkshire.
© Henry Chapman

Seeing hidden landscapes

But archaeology is only part of the story these environments have to tell. They are important archives of the past in other ways: the layers of moss and other vegetation that make up peat are themselves immensely valuable as records of past environments (palaeoenvironments). Because peat accumulates layer upon layer, the deposits have stratigraphic integrity: each layer contains macroscopic and microscopic remains of plants and other organisms that shed light on landscape change and biodiversity over timescales ranging from centuries to millennia. The high organic content of peat means that these records can be dated using the radiocarbon method.

The best known such records are probably pollen grains which provide evidence of past vegetation change. But evidence from other organic material can be used to reconstruct other past environmental processes. For example, single-celled organisms called testate amoebae, preserved in sub-fossil form, are highly sensitive to peatland hydrology and have been extensively used in recent years to reconstruct a history of climatic changes. Meanwhile, fossil beetles can tell us how the biodiversity and nutrient status of a peatland has altered over time.

Fossil beetle remains associated with Old Croghan Man bog body, Ireland.
© Nicki Whitehouse, Author provided

The potential of bogs to preserve both environmental and archaeological records means they can be regarded as archives of “hidden landscapes”. The accumulating peat literally seals and protects evidence of human activity, from the macroscopic (archaeological sites, artefacts and larger plant and animal remains) through to the microscopic (pollen, testate amoebae and other remains), which provides contextual evidence of environmental processes.

Through detailed integrated analyses these records can provide evidence of past human activity ranging from the everyday exploitation of economic resources of peatlands, through to the ceremonies associated with prehistoric human sacrifice and the deposition of the so-called bog bodies. The associated palaeoenvironmental record can be used to situate these cultural processes within long term patterns of environmental changes.

A bog in Estonia seen from above.
FotoHelin/Shutterstock.com

Taming the wild

There has been extensive study of the palaeoenvironmental record from bogs and notable archaeological excavations of sites and artefacts, but there have been relatively few concerted attempts to integrate these approaches. In part this is because generating sufficient data to model the development of a bog in four dimensions (the fourth being time) is a formidable research challenge. But some peatlands have seen relatively extensive archaeological and palaeoenvironmental research over the last few decades, providing an excellent starting point. Hatfield and Thorne Moors, situated primarily in South Yorkshire, are two such peatlands.

These two largest surviving areas of lowland bog in England are located within a wider lowland region known as the Humberhead Levels. After decades of industrial peat extraction, these bogs are now nature reserves managed by Natural England, and are once again becoming the “wild” bogs they used to be. We are attempting to reconstruct the wildscape and bring the complex histories of this vast and dynamic boggy landscape to life.

Flora on Thorne Moors.
© Peter Roworth, Author provided

These moors are just two surviving parts of a once rich mosaic of wetland landscapes. In the past, this landscape was famed for its wildness – a remnant of an extensive complex of mires, rivers, meres and extensive floodplain wetlands. Antiquarians such as John Leland visited the area in the 16th century, and his descriptions provide a “window onto what must have been a truly fabulous ‘everglades-like’ landscape”, as described by local historian Colin Howes.

Now largely drained, tamed and converted to farmland, it’s hard to imagine the vast wetland landscapes that once characterised these areas. Following large-scale land reclamation in the 17th century, many of the traditional practices such as fishing, fowling, grazing and peat-cutting (turbary) rights were no longer available to commoners. Consequently, the connections between people and place became increasingly defined by the new dryland landscape, disconnected from the wetlands that were once so central to people’s lives.

Sphagnum moss on Thorne Moors.
© Peter Roworth

We are investigating and reconstructing this dynamic and changing wildscape throughout its history, reconnecting communities to these wetland landscapes. We are drawing together previous research with targeted archaeological fieldwork and palaeoenvironmental analyses, and combining these with newly available digital data and sophisticated modelling techniques to reconstruct the interwoven landscape and human histories. Together, for the first time, we are beginning to see the complexity of the dynamic and changing landscape that once characterised the Humberhead Levels.

Henry Chapman, Professor of Archaeology, University of Birmingham; Ben Gearey, Lecturer in Environmental Archaeology, University College Cork; Jane Bunting, Reader in Geography, University of Hull; Kimberley Davies, Research Assistant, Wildscape Project, Plymouth University, and Nicola Whitehouse, Associate Professor (Reader) in Physical Geography, Plymouth University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

