In 1866, Adelaide colonist George Hamilton published An Appeal for the Horse, arguing against the harsh treatment of animals. He claimed that in “treating [the horse] as a machine, [people] have forgotten the higher attributes of his nature, and considered only his bone and muscle”.
Hamilton was a trailblazer in challenging the cruel treatment of animals by humans. He loved horses better than many people, and frequently likened them to children, wives, or friends. Although Australian anti-cruelty laws were passed as early as 1837, more specific prohibitions against cruelty to animals were not introduced until the 1860s. Hamilton’s challenge to the boundary we have constructed between humans and animals anticipated considerable recent research that questions this divide.
In contrast, Hamilton’s compassion for Aboriginal people was conspicuously lacking. Empathy is always politicised, and emotional narratives such as Hamilton’s tell us whose lives are worthy of compassion and therefore valuable.
In 1839, on a journey droving 350 head of cattle from Port Phillip to Adelaide, Hamilton had a tense confrontation with two Aboriginal men, in which he drew and cocked his pistol, ready to fire. The moment passed, and as he later explained:
Although it was my intention to fire upon my black relations, it was with no desire to kill them. No … they would have been merely winged, shot through the leg or arm, or in some place not vital.
Hamilton contrasted his willingness to harm, if not kill, Aboriginal people with what he saw as the hypocrisy of those “pious persons” who cared more about “ignorant pagan black monsters” than “their white brethren who are, from poverty, neglect and vicious teaching, fast falling into a savagedom far more frightful”. Like many at this time, he pitted the rights of Indigenous people against those of poor whites, whether in Britain or in the colonies.
Picturing colonial violence
As an “overlander” who helped “open up” land routes between Sydney, Melbourne and Adelaide during the years from 1836 to 1845, Hamilton was a veteran of frontier conflict. His fine-grained narratives and carefully observed drawings and prints provide a valuable insight into the story of white settlement in South Australia.
In 1845 Hamilton wrote:
The black man who roams over these wilds their lord and master is little elevated by nature from the beasts that inhabit his native forests, so far beneath the rest of his species that he seems to be standing on the line of demarcation between instinct and reason.
It seems that humans have always needed to elevate ourselves in contrast to those who are radically different – the “other” – whether that be animals, non-white races or women. This process was fundamental to imperialism and, more often than not, it was violent.
There is a shift in Hamilton’s views from his first forays into the bush around 1836 to his late-19th-century reminiscences. Some of his early drawings, such as “Meeting natives on the Campaspi plains, Victoria, June 1836”, express a friendly curiosity, offering many details of material culture and exchange between white travellers and local Aboriginal people.
But after the Myall Creek Massacre of June 1838 in north-western New South Wales, colonists became more circumspect regarding frontier clashes with Aboriginal people. Following this massacre, seven white men were hanged for the murder of 28 Wererai people, a verdict that outraged white colonists and heightened racial tensions over subsequent decades. Recent research by Lyndall Ryan documents the sites where thousands of Aboriginal people and dozens of settlers were killed in south-east Australia.
During the 1840s, Hamilton began to produce less sympathetic images, including many drawings and prints depicting frontier violence. These sometimes showed conflict in relatively objective terms, such as his ink drawing “Overlanders Attacking the Natives”.
Even “Natives Spearing the Overlanders’ Cattle”, while showing Aboriginal people as aggressors, remains relatively neutral.
But a series of lithographs from the late 1840s have a nastier edge, losing their quality of realist observation and descending into caricature. These show Aboriginal people attacking white colonists, with ironic titles such as “The Harmless Natives”, or “The Persecuting White Men”. Here Hamilton directs our sympathy from black to white.
As Hamilton’s legacy shows us, emotional narratives and images are a powerful way of defining our relations with others.
This process is also fundamental to modern global warfare, as Judith Butler argues in her analysis of war journalism. It can be seen at work wherever interests compete. For Hamilton, Aboriginal people’s defence of kin and country challenged his own right to colonise and posed a personal threat. This made it easy for him to demonise Aboriginal people.
On the violent frontier, Hamilton was typical in defining the white colonist as victim and Indigenous Australian as persecutor, declaring in 1845:
We may soon look forward to the time when murders perpetrated by the savage on the settler will be considered something more than a peccadillo, and we may hope to see the settler at liberty to protect his life and property without the fear of escaping the blacks’ tomahawk only to run his neck in the hangman’s noose.
Here we see the emotional logic of Hamilton’s imperial cultural hierarchy and his political deployment of compassion. Suddenly, the seeming incongruity of Hamilton’s scorn for threatening Aboriginal people alongside his sympathy for the faithful horse makes perfect sense.
For centuries, the bloody gladiator conflicts that the Romans staged in amphitheatres throughout the empire have engrossed and repelled us. When it comes to gladiators, it is almost impossible to look away. But the arena is also the place where the Romans feel most foreign to us.
The gladiator was the product of a unique environment. He could exist only within a very particular set of religious, social, legal, political and economic circumstances. It is not surprising that this form of spectacle has been seen neither before nor since the Romans. To acknowledge this is also to acknowledge that gladiators are only ever going to be partially comprehensible to us.
Sadly, this is not a view shared by the Queensland Museum, which last week opened its new exhibition, Gladiators: Heroes of the Colosseum. The exhibition brings together 117 objects from Italian museums, most notably the collection of the Colosseum at Rome. Highlights include some extremely well preserved and intricately decorated gladiatorial helmets and pieces of armour from Pompeii, as well as some very fine carved reliefs depicting scenes of combat.
Yet, while the quality of the individual objects is without question and certainly worth the price of admission alone, the intellectual framework of the exhibition is far more problematic.
This is not an exhibition that is plagued by doubts or uncertainties. It firmly knows who gladiators were and what they stood for – gladiators, the opening panel of the exhibition proclaims, were the “elite athletes” of the ancient world. The ancient equivalent of today’s mixed martial arts fighters, if you like.
Sporting analogies pepper the exhibition. Spectators are routinely referred to as “fans” and the catalogue promises that this is an exhibition that “touches on many issues that have parallels with modern-day sport and sporting culture”.
At times, the exhibition also feels like it has taken its cues from contemporary video-game culture. The special weapons of the various types of gladiators are spelled out and visitors are invited to contemplate who would win between a gladiator fighting with a net and trident (known to the Romans as a retiarius) and a heavily armoured pursuer (the secutor). A video-game spin-off from the exhibition is easy to imagine.
Rogues not heroes
Gladiatorial combat was certainly popular among the Romans. Evidence for gladiators is found in every province of the Roman Empire.
These fights initially began as contests of matched pairs as part of funeral rites honouring the dead. However, over time their popularity grew. By the time of the Roman Empire, hundreds of gladiators might be involved in spectacles that could last as long as 100 days.
These games were never just displays of gladiatorial fighting. At their most elaborate they involved beast hunts with exotic animals, executions of criminals, naval battles staged in flooded arenas, musical entertainments and dances.
The Queensland Museum is not the first to try to understand gladiators as sporting heroes. However, this analogy causes more problems than it solves.
The vast majority of gladiators were either prisoners of war or criminals sentenced to death. Gladiators were the lowest of the low: violent murderers, thieves and arsonists. Even your most badly behaved football team at their most morally blind would have had no trouble in rejecting this crew.
Gladiators in Rome were regarded as fundamentally untrustworthy and outside of legal protection. It is more useful to think of gladiators as prisoners on death row than as David Beckham with a net and trident. The section in the exhibition where children are encouraged to dress up as gladiators would have appalled any respectable Roman parent (that said, it’s great fun).
The Queensland Museum can’t escape the lowly, servile and criminal origins of the gladiators, but it does attempt to moderate our opinion of them by suggesting that some free citizens wilfully chose to be gladiators in search of “eternal fame and glory”. In fact, the evidence of such citizen gladiators is extremely slim. It was almost certainly extreme desperation that forced them into the arena rather than a desire to be remembered by posterity.
At another point, the exhibition suggests that the crowd saw reflected in gladiators the virtues of the soldiers who guarded the empire. Such talk would have had any self-respecting Roman legionary reaching for his short sword.
Gods and monsters
Representing gladiatorial combat as sport also inevitably underplays the religious dimension of the fighting. The exhibition includes some fabulous tomb paintings from the city of Paestum, which illustrate the origins of gladiatorial combat in the funerary rites for the dead. These are wonderful works, which deserve to be much better known; however, they are a rare intrusion into an otherwise secular narrative.
Gladiatorial combats never stopped being religious events. Every day of the games would begin with a “solemn procession” with sacrifices on altars. The gladiators themselves were deeply implicated in the Roman theology of the divine, death, and the relationship between mortal and immortal. These spectacles were Roman sermons written in blood.
The final problem with focusing on gladiators as sporting heroes is that it tends to isolate their combat from the other elements that made up the games. Beast hunts and the executions of criminals were just as popular, possibly even more so. They were not precursors to the main event or entertainment for the intervals.
The executions of criminals could involve extravagant mythological tableaus. Prisoners were dressed as Hercules and burnt alive. The fatal flight of Icarus towards the sun might be re-enacted for the audience.
Certainly, these elaborate, gruesome affairs captured the attention of ancient writers far more than the gladiators who accompanied them. Wealthy Romans seemed far more preoccupied with obtaining suitably rare fauna for their spectacles.
For the poorer members of the audience, the beast hunts had an added attraction. Often the animal meat was distributed to the audience members to take home. They were literally watching their dinner being butchered in front of them.
One of the most intriguing items in the exhibition doesn’t relate to gladiatorial combat but to one of these beast hunts. It is a second-century CE mosaic that features what appears to be a female hunter facing off against a giant tiger. Who is this woman? Evidence for female hunters (like female gladiators) is practically non-existent. Is she part of some mythological tableau? A woman pretending to be an Amazon? Or a man dressed up as a woman? Is this a scene from real life at all?
She is an enigma and a worthy reminder that the real secret of the appeal of Roman combat spectacle is that it raises more questions than it answers.
Most of the voters who will be casting their ballots in the general election on Thursday June 8 will take their right to do so for granted, unaware of the contested history of this now familiar action. It’s actually less than 100 years since all adult males in the UK were awarded the franchise for parliamentary elections, in 1918, in the wake of World War I. That right wasn’t extended to all adult women for a further ten years.
Even today, it might be argued, the democratic principle of “one person, one vote” has not been fully implemented, since the royal family and members of the House of Lords are not allowed to vote in parliamentary elections. And even after the mass enfranchisement of the early 20th century, university graduates and owners of businesses retained a double vote, the former in their university constituencies as well as where they lived. These privileges were only abolished in 1948, in the face of overwhelming Conservative opposition.
How Britain votes today is also a relatively late development in electoral history. Until 1872, parliamentary electors cast their votes orally, sometimes in front of a crowd, and these choices were then published in a poll book. Public voting was often a festive, even riotous affair. Problems of intimidation were widespread, and sanctions might be applied by landlords and employers if voters failed to follow their wishes, though this was widely accepted at the time as the “natural” state of affairs.
Open voting even had its defenders, notably the political radical John Stuart Mill, who regarded it as a manly mark of independence.
But as the franchise was partially extended in the 19th century, the campaign for secrecy grew. The method that was eventually adopted was borrowed from Australia, where the use of polling booths and uniform ballot papers marked with an “X” was pioneered in the 1850s.
More recent reforms took place in 1969, when the voting age was lowered from 21 to 18. Party emblems were also allowed on the ballot paper for the first time that year. It’s this kind of paper that will be used on June 8.
Staying at home
What no one predicted when these franchise and balloting reforms were first implemented, however, is that voters would simply not bother to turn out, abstaining in such considerable numbers.
To be sure, this is a relatively recent phenomenon. Turnout at general elections remained high for much of the 20th century, even by European standards. The highest turnout came in the 1950 general election, when some 84% of those eligible voted. The figure didn’t dip below 70% until 2001, when only 59% voted. Since then things have improved slightly: turnout was 65% in 2010 and 66% in 2015. But the fact remains that, today, a massive one-third of those eligible to vote fail to do so, preferring instead to stay at home (and the situation in local elections is far worse).
What was a regular habit for a substantial majority of the electorate has now become a more intermittent practice. Among the young and marginalised, non-voting has become widely entrenched. Greater personal mobility and the decline of social solidarity have made the decision to vote a more individual choice, which may or may not be exercised according to specific circumstances, whereas in the past it was more of a duty to be fulfilled.
Voters rarely spoil their papers in the UK, whereas in France it is a traditional form of protest that has reached epidemic proportions: some 4m ballot papers were deliberately invalidated in the second round of the recent presidential election. Like the rise in abstention in both countries, it surely reflects disenchantment with the electoral process as well as disappointment with the political elite.
In these circumstances, the idea of compulsory voting has re-emerged, though in liberal Britain the idea of forcing people to the polling station has never exerted the same attraction as on the continent. The obligation to vote is a blunt instrument for tackling a complex political and social problem. When the interest of the electorate is fully engaged, as in the recent Scottish or EU referendums, then turnout can still reach the 75% to 80% mark.
However, in the forthcoming parliamentary election, following hard on the heels of its predecessor in 2015, the EU vote and elections to regional assemblies in 2016, plus the local elections in May, voter fatigue may take a toll. It’s hard to envisage more than two-thirds of those entitled to do so casting their ballot on June 8. Given the relatively small cost involved in conducting this civic act, which is the product of so much historical endeavour, such disaffection must be a cause for significant concern.
Before dawn on the morning of June 4 1629, the Batavia, a ship of the Dutch East India Company, struck a reef at the Abrolhos Islands, some 70 kilometres off the Western Australian coast. More than seven months earlier the ship had left the Netherlands to make its way to the city of Batavia (present-day Jakarta), carrying silver, gold and jewels and 341 passengers and crew. During the shipwreck, 40 of them drowned. The others found safety on a nearby island.
Since there was no fresh water on the island they would name Batavia’s Graveyard (now Beacon Island), Commander Pelsaert and about 45 others took a longboat in search of water on the mainland. Unsuccessful in his search, Pelsaert decided to sail on to the city of Batavia to get help. By the time he returned in mid-September, the followers of Jeronimus Cornelisz, the man he had left in charge, had murdered 115 men, women and children.
It was not just the extent of the killings that shocked Pelsaert, but also their sheer cruelty: victims had been repeatedly stabbed, had their throats slit with blunt knives, or their heads split with an axe. In his account of the events, Pelsaert tried to comprehend what had happened. No Christian man could ever have done this. It had to be the work of the devil.
Within a few months of the shipwreck, the first short accounts appeared in print in the Netherlands. In 1647 these were followed by the publication of Pelsaert’s notes under the title Ongeluckige Voyagie, Van ‘t Schip Batavia.
Unsurprisingly, Pelsaert’s sensational eyewitness account proved a considerable success. It was republished several times over the following decades.
The gruesome Abrolhos murders somewhat faded from view during the 18th and early 19th centuries. But by the 1890s they had re-entered the public imagination, not least because Perth’s Western Mail chose, somewhat curiously, its Christmas issue (1897) to publish a full English translation of Pelsaert’s account.
Since then there have been numerous novels and retellings of the tale. Bruce Beresford directed a 1973 TV movie. Many stories have been accompanied by illustrations. But the wreck has provoked surprisingly little response from visual artists.
Meditating on mortality
In the new exhibition, two Perth-based artists, Robert Cleworth and Paul Uhlmann, collaborated with a team of archaeologists from the University of Western Australia, who recently excavated several new burials of the murder victims on Beacon Island. The exhibition features a presentation of these recent digs and projections of the grave sites alongside works by Cleworth and Uhlmann. By referencing skeletons and skulls, the two artists create new forms of contemporary memento mori, or artworks that remind us we all must die.
Much of the work on display is inspired by the art and life of Johannes Torrentius, a Dutch painter convicted in 1628 for his alleged blasphemy, heresy and Satanism. Although not aboard the Batavia, Torrentius was widely believed to have inspired Cornelisz in his gruesome deeds.
Besides his heretical statements on religion, Torrentius had offended Dutch Calvinists with a number of bawdy pictures. All of these transgressive works were destroyed, yet titles such as A Woman Pissing in a Man’s Ear give some indication of their subject matter.
Ironically, the only Torrentius painting to have survived is an allegorical still life that warns against immoderate behaviour. During his lifetime, the painter would have created numerous vanitas paintings, works that address life’s vanities, assisted by a camera obscura, a darkened box in which a lens projects an external image – a forerunner to our modern cameras.
Uhlmann has used the same device to create a triptych of photo prints that show the skull of one of the Batavia murder victims from three different angles. The skull, recovered in 1964, was missing a small bone fragment, the result of a blow to the head. This fragment was unearthed during the latest excavations. Uhlmann has used both the skull and the fragment in his study to demonstrate the impermanence of life.
Skulls also feature prominently in the paintings on display by Cleworth, and not just skulls of humans but also that of a wallaby. The skull testifies to the hunger and hardship of the victims: wallabies were not indigenous to Beacon Island and must have been brought there by the shipwreck survivors. This is another example of how art and science are brought together in this show.
A second painting by Cleworth shows two hands hovering in front of a deep-blue background. The broad brushstrokes evoke the sea surrounding the islands. The hands are those of the lead mutineer, Cornelisz.
Somewhat ironically, no one died by these hands during the reign of terror. Cornelisz had ordered his cronies to kill, rather than committing the murders himself. Nevertheless, when Pelsaert returned to Batavia’s Graveyard and immediately dispensed justice, he ordered Cornelisz’s hands be chopped off before he was hanged on the gallows.
These artworks don’t simply retell the story of the Batavia and its cruel aftermath. They explore the nexus of art and science, using processes similar to those of the 17th century. They not only offer reflections on the unimaginable cruelty that took place four centuries ago, but provoke a new reading of past events.
This week the marketing office of Dove, a personal care brand of Unilever, found itself in hot water over an ad that many people have taken to be racially insensitive. Social media users called for a boycott of the brand’s products.
The offending ad showed a black woman appearing to turn white after using its body lotion. This online campaign was swiftly removed but had already hurtled through social media after a US makeup artist, Naomi Blake (Naythemua), posted her dismay on Facebook, calling the ad “tone deaf”.
The company then followed up with a longer statement: “As a part of a campaign for Dove body wash, a three-second video clip was posted to the US Facebook page … It did not represent the diversity of real beauty which is something Dove is passionate about and is core to our beliefs, and it should not have happened.”
One has to ask, were the boys destined for Dove marketing kicking on at the pub instead of going to their History of Advertising lecture, the one with the 1884 Pears’ soap ad powerpoint? Jokes aside, Dove’s troubling ad buys into a racist history of seeing white skin as clean, and black skin as something to be cleansed.
Dove has missed the mark before. In a 2011 ad, three progressively paler-skinned women stand in towels under two boards labelled “Before” and “After”, implying transitioning to lighter skin was the luminous beauty promise of Dove (Dove responded that all three women represented the “after” image).
Many of the indignant comments reference the longstanding trope of black babies and women scrubbed white. Australia has particular form on this front. Gamilaraay Yuwaalaraay historian Frances Peters-Little (filmmaker and performing artist) has demanded an apology from Dove. She posted a 1901 advertisement for Nulla Nulla soap on Facebook to show the long reach of racism through entrenched tropes still at work in the Dove ads.
Wiradjuri author Kathleen Jackson has also written about the Nulla Nulla ad and the kingplate, a badge of honour given by white settlers to Aboriginal people, labelled “DIRT”. She explains that whiteness was seen as purity, while blackness was seen as filth, something that colonialists were charged to expunge from the face of the Earth. Advertising suggested imperial soap had the power to eradicate indigeneity.
This coincided with policies that were expressly aimed at eliminating the “native”. In Australia the policy of assimilation was based on the entirely spurious scientific whimsy of “biological absorption”, that dark skin and indigenous features could be eliminated through “breeding out the colour”.
In New South Wales, “half-caste” girls were targeted for removal from their families and placed as domestic servants in white homes where it was assumed “lower-class” white men would marry them. These women were often vulnerable to sexual violence. Any resulting children, however begotten, would be fairer-skinned, due implicitly to the bleaching properties of white men’s semen.
Aboriginal mothers were vilified as unhygienic and neglectful. In fact, they battled against often impossible privation to turn their children out immaculately in the hope police would have less cause to remove them.
Cleanliness and godliness, whiteness and maternal competency: these are the lacerations Dove liberally salted with its history-blind ad. It unwittingly strikes at the resistance and resilience of Aboriginal families who for generations fended off fragmentation, draconian administration and intrusive surveillance by state administrators. Its myopic implied characterisation of beauty as resulting from shedding blackness is mystifying.
In 2004, Dove kicked off a campaign for “Real Beauty”. It proclaims itself “an agent of change to educate and inspire girls on a wider definition of beauty and to make them feel more confident about themselves”. Dove’s online short films about beauty standards – including Daughters, Onslaught, Amy and Evolution – have been recognised with international advertising awards.
Yet Dove also sits in Unilever with Fair and Lovely, a skin whitening product and brand developed in India in 1975. This corporate cousin to Dove touts its bleaching agent as the No. 1 “fairness cream” and purports to work through activating “the Fair and Lovely vitamin system to give radiant even toned skin”. It is sold in over 40 countries.
Skin whitening products (there is also a Fair and Handsome for men, not associated with Unilever) are popular in Asia, where more than 60 companies compete in a market estimated at US$18 billion. They enforce social hierarchies around caste and ethnicity. Since the 1920s the racialised politics of skin lightening have spread around the globe as consumer capitalism reached into China, India and South Africa.
Dove responded to its controversial ad by saying that “the diversity of real beauty… is core to our beliefs”. But “core” here seems skin-deep when it fails to penetrate into the pores of its parent company and its subsidiaries.
Over the past two decades Australian archaeologists have been slowly uncovering the World Heritage-listed ancient theatre site at Paphos in Cyprus. The Hellenistic-Roman period theatre was used for performance for over six centuries from around 300 BC to the late fourth century AD. There is also considerable evidence of activity on the site after the theatre was destroyed, particularly during the Crusader era.
The excavation of the site, and of the architectural remains in particular, is contributing significantly to our understanding of the role of theatre in the ancient eastern Mediterranean and the development of theatre architecture to reflect contemporary performance trends in the ancient world.
When we return to the site this month we will take archaeologists, surveyors, architects, specialist researchers of ancient materials, students and volunteers. We will also take contemporary artists.
As incongruous as this relationship sounds, the project is part of a wider momentum in contemporary Australian art that lauds working across disciplines. And the link between antiquity and today allows for fascinating insights to the benefit of both.
At the birth of archaeology as a discipline in the 19th century, it was common practice to take artists on expeditions. Illustrations of exotic sites and impressive archaeological finds filled journals in Europe and the United States, such as the Illustrated London News. These reports allowed an eagerly awaiting audience to participate in the rediscovery. The rediscovery of ancient artistic traditions had a profound effect on art movements of the 18th and 19th centuries too, from Neoclassicism to French Realism.
By the 20th century, however, archaeology as a discipline had become very focused on objective observation and detailed evidence-based analysis. Archaeological illustration became a form of technical drawing or scientific illustration, and the archaeological photograph developed clear standards for accurate recording. Any creative and emotive response to the past was pushed aside.
Recently, however, there has been something of a renewal of this relationship between the scientist and the artist. Mark Dion in 1999 used archaeological finds from London as the basis of his work Tate Thames Dig, arranging found objects in a cabinet for display.
In Australia, Ursula K. Frederick, who has a background in archaeology, explores the aesthetics of car cultures in Australia, Japan and the US. Izabela Pluta’s photographs explore ruin and place.
The responses of artists working in Paphos are often compelling, enabling ways of thinking that archaeologists had not previously considered. Media artist Brogan Bunt, for example, speaks of the irony of ephemeral digital platforms that cause what was new technology in 2006 to be unusable by 2017. For him, the ancient theatre site has maintained its identity for millennia, while digital virtual heritage is far more fragile than the places it sets out to document and preserve.
“My photographs combine visual exploration of actual sites and objects with original research into the quantum leap made by digital photography.” – Bob Miller
“I perceive the photography of sites as a memory aid, as a historical resource, as well as a reflective form of art.” – Rowan Conroy
“By mixing artistic and archaeological images we get a new grammar of looking.” – Derek Kreckler
“My research proposes a relationship between material landscapes and the immaterial and invisible spiritual, psychological and intellectual landscapes created through the artist’s gaze.” – Lawrence Wallen
“In my work I approach memories somewhat like an analyst, but perhaps more like a reflective archaeologist.” – Jacky Redgate
“Animation is for me, the physical, material perception of time.” – Hannah Gee
“The artistic motif crosses between eras, travelling back and forth in a temporal instability.” – Angela Brennan
“Drawing is a tool of thought allowing a larger framework for other meanings to emerge.” – Diana Wood Conroy
“My casting and patination process makes a connection to the narratives of archaeology.” – Penny Harris
“Digital ephemerality draws into curious relation with the loss and disappearance affecting the ancient world.” – Brogan Bunt
It is a little-known piece of history that Saddam Hussein was a great fan of ancient Mesopotamian literature. His enthusiasm for epics written in cuneiform – the world’s oldest known form of writing – can be seen in his own efforts at writing political romance novels and poetry. Hussein’s first novel, Zabibah and the King, blended the Epic of Gilgamesh with the 1001 Nights, and was adapted into a television series and a musical.
Indeed, the Iraqi dictator was said to be so immersed in his novel-writing that he left much of the military strategising to his sons leading up to the 2003 war. He continued writing in prison, using a card table as a writing desk. This example from the modern genre of “dictator literature” provides an unusual insight into the diverse reception of cuneiform literature in the modern day.
The decipherment of cuneiform in the 19th century, a tale of academic virtuosity and daring, revealed a “forgotten age” and challenged the traditional, biblical view of history. One scholar was even put on trial for heresy for the wonders he uncovered in the translated script.
For over 3,000 years, cuneiform was the primary medium of written communication throughout the Ancient Near East (roughly corresponding to the Middle East today) and into parts of the Mediterranean. The dominance of cuneiform writing in antiquity has led scholars to refer to it as “the script of the first half of the known history of the world”. Yet it had disappeared from use and understanding by 400 CE, and the processes and causes of the script’s vanishing act remain somewhat enigmatic.
Cuneiform is composed of wedge-shaped characters pressed into clay tablets (the marks are often likened to those made by a chicken scratching in the mud). Unlike other ancient writing media, such as the papyri or leather scrolls used in Ancient Greece and Rome, cuneiform tablets survive in great abundance: hundreds of thousands have been recovered from ruined Mesopotamian cities.
The discoveries yielded from the recovery of cuneiform writing continue to unfold in unexpected and exciting ways. In August this year, mathematicians at an Australian university made international headlines with their discovery involving a 3,700-year-old clay tablet containing a trigonometric table. The researchers said the cuneiform table reveals a sophisticated understanding of trigonometry — in some ways more advanced than in modern-day mathematics!
Lost in translation
It is difficult to overstate the influence of cuneiform literature in the ancient world. Many languages throughout a vast geographical span over thousands of years were written in cuneiform, including Sumerian, Hittite, Hurrian and Akkadian. Among these, Akkadian (an early Semitic relative of Hebrew and Arabic) became the lingua franca of the Near East, including Egypt, during the Late Bronze Age.
Cuneiform was used to preserve the official royal correspondences between leaders of empires, but also simple transactions and record-keeping that were part of daily life. Over time, the skill of writing moved outside the main institutions of cities, such as temples and scribal schools, into the hands of citizens, as well as into private homes.
Despite its dominance in antiquity, the use of cuneiform ceased entirely at some point between the first and third centuries CE. The great empires of the Ancient Near East experienced a long decline over many centuries, which ultimately resulted in the loss of Egyptian hieroglyphs and cuneiform as written languages.
Cuneiform’s sphere of influence shrank after the sixth century BCE, before vanishing entirely. The disappearance of cuneiform accompanied, and likely facilitated, the loss of Mesopotamian cultural traditions from the ancient and modern worlds.
There are several schools of thought surrounding the disappearance of cuneiform, including competition with alphabetic scripts (in which letters correspond to sounds) such as Aramaic and Greek, and the decline of writing traditions. However, the process of the transition from cuneiform to alphabet is yet to be clearly understood.
Deciphering the code
The resurrection of cuneiform writing systems was described by legendary Sumerologist Samuel Noah Kramer as an “eloquent and magnificent achievement of 19th century scholarship and humanism”.
In the 15th century, cuneiform inscriptions were observed in Persepolis (in modern-day Iran). The script’s patterned dashes were not immediately recognised as writing. The name “cuneiform” (a Latin-based word meaning “wedge-shaped”) was given to the undeciphered writings by Oxford professor Thomas Hyde in 1700.
Hyde viewed the cuneiform markings as decorative rather than conveying language — a widely held view in academic circles of the 18th century. Despite some efforts to popularise the name “arrow writing”, “cuneiform” gained general acceptance. Yet cuneiform remained cryptic, and its ancient masterpieces buried and inscrutable.
The modern-day decipherment of cuneiform owes a great debt to the rulers of the Persian Achaemenid dynasty, who reigned in what is modern-day Iran in the first millennium BCE. These rulers made cuneiform inscriptions recording their achievements.
The most important of these inscriptions for the decipherment of cuneiform was the Behistun inscription, which recorded the same message in three languages: Persian, Elamite and Akkadian. This trilingual inscription was carved into the face of a cliff in Behistun in what is now western Iran.
Detailing the successes of King Darius I of Persia, the Behistun inscription was inscribed on rock some 100 metres off the ground around 520 BCE. In 1835, Henry Creswicke Rawlinson was training troops of the Shah of Iran when he encountered the inscription. In order to reach the writings and transcribe them, Rawlinson needed to dangle from the cliffs, or to stand on the very top rung of a long ladder. From these precarious positions, he copied as much of the inscription as possible.
A “Kurdish boy”, whose name seems to be lost to history, assisted the daring endeavour. The boy was said to have used pegs dug into the rock wall as anchors to swing across the cliffs and reach the most inaccessible parts of the writing. Returning home, Rawlinson began working to unlock the secret of the lost script, perhaps with his pet lion cub by his side.
Of the three languages, the Old Persian was the first to be decoded by Rawlinson. Scholars working on deciphering the script gained a sense of the chronological placement of the inscription and recognised some repeated signs, thereby gleaning something of the content and structure of the writings.
The presence of king lists in the Behistun inscription, which could be compared with lists in Herodotus’ Histories, provided a point of reference for deciphering the signs. Other Greek historians, and the Bible, were also consulted in the process. Through the contributions of a number of scholars in the first half of the 19th century, cuneiform slowly began to reveal its secrets.
The significance of the Behistun inscription in the translation of cuneiform is often likened to the importance of the Rosetta Stone for deciphering Egyptian hieroglyphs. In recent years, the inscription has been the focus of restorative efforts, after sustaining various types of damage — notably when Allied troops used the inscription for target practice during World War II. It is now a UNESCO World Heritage site.
As the deciphering went on, divisions developed in the academic community over whether efforts to unravel cuneiform had proven successful. Part of the controversy stemmed from the extreme intricacy of the writing system. Cuneiform languages are made up of a collection of signs, and the meaning of these signs shows a great deal of variety.
In the Akkadian language, for example, a cuneiform sign may have a phonetic value — but not always the same phonetic value — or it may be a logogram, symbolising a word (such as “temple”), or a determinative sign, such as for a place or an occupation. This gives the translation of cuneiform a puzzle-like quality. The translator must select the value of the sign that appears best suited to the context.
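This polyvalence can be illustrated with a toy lookup table. The sign AN is a well-known real example of a sign with phonetic, logographic and determinative readings, though the data structure here is a simplification for illustration only, not a real sign list:

```python
# One cuneiform sign, several possible values: the translator must choose
# the reading best suited to the context. AN is a classic example, but the
# representation below is a simplified sketch, not scholarly apparatus.
SIGN_VALUES = {
    "AN": [
        ("phonetic", "the syllable 'an'"),
        ("logogram", "DINGIR, the word 'god'"),
        ("determinative", "marks the following word as a deity"),
    ],
}

def readings(sign):
    """Return every candidate value a translator must weigh for a sign."""
    return SIGN_VALUES.get(sign, [])

for role, value in readings("AN"):
    print(f"{role}: {value}")
```

Each occurrence of the sign in a text forces a choice among these candidates, which is what gives cuneiform translation its puzzle-like quality.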
Some scholars probably had sensible reasons for questioning the deciphering of cuneiform. Others held the inaccurate view that ancient Assyrians would have lacked the capacity to comprehend such a difficult writing system. To resolve the controversy, the British scientist W.H. Fox Talbot suggested a kind of cuneiform competition.
The British Royal Asiatic Society held the contest in 1857. Four scholars – Fox Talbot, Rawlinson, a Dr Hincks and a Dr Oppert – made independent translations of a single, previously unseen, cuneiform inscription. Each scholar then sent their translation in strict confidence to the society for comparison. After opening the sealed letters and examining the four translations, the society decided that the similarities between them were sufficiently compelling to declare cuneiform deciphered.
The rediscovery of cuneiform literature was not without further controversy. Fierce debates were conducted in eloquent handwritten letters over who had contributed to the discovery and decipherment of texts, and who deserved credit for the achievement.
As well as this, the content of the literature caused friction in the academic communities of the 19th century. Prior to the rediscovery of cuneiform, the most prominent source for the Ancient Near East was the Hebrew Bible. The ability of cuneiform literature to provide a new perspective on the rich history of Egypt and Mesopotamia was embraced by many, but viewed with suspicion by others. For some, the translation of the long-forgotten writings raised the possibility of conflict between cuneiform sources and biblical literature.
Perhaps one of the most overt examples of these tensions in scholarly circles can be seen in the career of Nathaniel Schmidt from Colgate University. Schmidt was tried for heresy in 1895, due to the view that many of his translations of cuneiform appeared contrary to biblical traditions. He was dismissed from his position at Colgate in 1896. Following his dismissal, the eminent scholar was recruited by Cornell University (his controversial departure from Colgate made his appointment something of a “bargain”), where he taught Hebrew, Arabic, Aramaic, Coptic, Syriac and many other ancient languages.
From cuneiform to the stars
The recovery of cuneiform has provided access to an embarrassment of textual riches, including hundreds of thousands of legal and economic records, magico-medical texts, omens and prophecies, wisdom literature and lullabies.
Cuneiform has also helped solve scientific mysteries. Babylonian records of a solar eclipse, written in cuneiform, have helped astronomers work out how much Earth’s rotation has slowed.
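The underlying arithmetic can be sketched as a toy calculation. If each day grows slightly longer over the centuries, the gap between a uniform clock and time kept by Earth’s rotation accumulates quadratically, which is why an eclipse recorded at a known place and hour millennia ago constrains the slowdown. The rate and dates below are illustrative assumptions, not the astronomers’ published figures:

```python
# Back-of-the-envelope sketch, not the actual published method.
DAYS_PER_CENTURY = 36525   # Julian days in 100 years
RATE = 1.8e-3              # assumed lengthening: ~1.8 ms per day, per century

def delta_t_seconds(centuries_ago):
    """Accumulated offset between a uniform clock and Earth-rotation time.

    If each day is RATE * t seconds longer after t centuries, the total
    drift is the integral of that linear growth:
    0.5 * RATE * t**2, scaled by the number of days in a century.
    """
    return 0.5 * RATE * centuries_ago**2 * DAYS_PER_CENTURY

# A Babylonian eclipse record from roughly 700 BCE (~27 centuries ago)
# implies an accumulated offset of several hours.
print(f"accumulated offset: {delta_t_seconds(27) / 3600:.1f} hours")
```

A discrepancy of this size between where an eclipse “should” have been visible under constant rotation and where the tablets say it was observed is what lets astronomers pin down the slowing.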
The decipherment of the cuneiform script has reopened a timeless dialogue between ancient and modern civilisations, providing continued opportunities to better understand the world around us, and beyond.
Note: This essay contains details from the article “Comparative Translations”, in the Journal of the Royal Asiatic Society 18, 1861. My grateful thanks to the Royal Asiatic Society for generously allowing access to their collection.
Previous reports had described the polar party of the British Antarctic Expedition striking out confidently just 2.5° of latitude from their objective: the geographic South Pole. The journals and letters recovered from the bodies, however, told a tale of heartbreak and desperation: the explorers were shattered to find themselves beaten to the pole by Norwegian rival Roald Amundsen, and weakened terribly during their journey back to base.
Of the five men in Captain Robert F. Scott’s party, Petty Officer Edgar Evans was the first to die, while descending from the high-altitude Antarctic Plateau. Then, while searching in vain on the vast Ross Ice Shelf for the dog sleds ordered to speed their return to base, Captain Lawrence “Titus” Oates realised that his ever-slowing pace was threatening the others, and famously walked out into a blizzard with the parting words: “I am just going outside and may be some time.”
Pushing on with limited supplies, the remaining men (Scott, Dr Edward Wilson and Henry “Birdie” Bowers) found themselves trapped by a nine-day blizzard. All three wrote messages to friends and loved ones while waiting, until eventually their food ran out on about March 29, 1912.
The actions of another expedition member – Lieutenant Edward “Teddy” Evans, Scott’s second-in-command and no relation to Edgar Evans – raise the possibility that he played a role in the deaths of the five men. Furious at not being included in the attempt on the pole, Lieutenant Evans was returning to base when he collapsed with scurvy. He was the only expedition member to develop scurvy, most probably due to his refusal to eat fresh seal meat, a known preventive measure.
His companions Tom Crean and William Lashly heroically saved Evans’s life, a tale made famous in no small measure by expeditioner Apsley Cherry-Garrard’s classic book on the expedition, The Worst Journey in the World.
Foul play over food?
Buried in the British Library was a crucial piece of evidence about Evans’s trip back to camp. Seven pages of notes detail meetings held in April 1913 between Lord Curzon, president of the Royal Geographical Society, and Scott’s and Wilson’s widows, both of whom had read their late husbands’ diaries and correspondence.
According to the notes, Kathleen Scott reported that:
Scott’s words in his diary on exhaustion of food & fuel in depots on his return… It appears Lieut Evans – down with Scurvy – and the 2 men with him must on return journey have entered & consumed more than their share.
Several days later, also according to the meeting notes, Oriana Wilson described how:
…there was a passage in her husband’s diary which spoke of the “inexplicable” shortage of fuel & pemmican [sledging ration] on the return journey… This passage however she proposes to show to no one and to keep secret.
Closer examination of diary entries suggests that the food in question went missing from a depot at the southern end of the Ross Ice Shelf. Letters from the time indicate that Curzon immediately shut down the inquiry he was planning to hold. It is not unreasonable to assume Curzon concluded that Evans was dangerously ill, and that had he not taken the food he, too, would have died.
But the account of exactly when Evans came down with scurvy changed over time. Returning to civilisation in 1912, Evans described in a letter how he was stricken when he was 300 miles from base, a distance confirmed by media interviews from the time.
But by the following year, this figure had changed to 500 miles, a distance also reported in the book Scott’s Last Expedition. This would put the onset of his sickness at the southern end of the Ross Ice Shelf, precisely where the food appears to have gone missing.
Unwittingly, Cherry-Garrard published a substantially embellished version of Lashly’s sledging diary in The Worst Journey in the World, in which Evans’s sickness was shifted one week earlier to align with the public timeline.
Overall, the evidence strongly suggests that Evans took the cached food before he had succumbed to scurvy, possibly because of his anger at having been sent back early and forced to drag his sledge with just two men. The timing of the various pieces of evidence suggests that his story was later changed to fit with the idea that he took the food because he was ill.
With its round amphitheatre, the Globe is the most famous playhouse associated with Shakespeare – indeed, a working pop-up replica of it is currently in Melbourne. But long before Shakespeare or his plays appeared at the Globe, another, forgotten stage was the Bard’s temporary home.
It is even possible that the first purpose-built stage to house Shakespeare was at a playhouse that stood a mile south of the Thames in London, at the Newington Butts junction. Rather than round, the playhouse would have been relatively small and rectangular – a conversion of an existing commercial building.
It was here, in June 1594, that theatre entrepreneur Philip Henslowe recorded the first known performances of the Lord Chamberlain’s Men, a theatre troupe of which Shakespeare was a founding member, playwright and actor. The company performed versions of Hamlet, Taming of the Shrew, and Titus Andronicus over 11 days.
The evidence also suggests that the actor Richard Burbage wouldn’t have been at the Newington Butts playhouse. Yet most have assumed Hamlet was a play Shakespeare wrote for Burbage.
While Shakespeare’s plays were performed at smaller venues such as inns and courtyards (possibly as early as 1589), the Newington Butts shows were very likely the first on a major Elizabethan stage constructed specifically for the kind of theatre for which he was about to become famous. The playhouse soon vanished from history, and was largely forgotten by Shakespeare scholars.
But using 18th-century maps, I’ve been able to figure out where it likely once stood. This historically significant site is likely now under a shopping centre south of the Thames.
The Newington Butts playhouse was built in 1575 and continued operating until 1594. To be financially viable, it would have needed at least two tiers of seating around the perimeter, holding about 700 to 800 patrons. It was closed down when the new leaseholder, Paul Buck, agreed to convert it to some other purpose – most likely tenement housing.
One of the reasons the playhouse has been easy to forget, and difficult to locate, is that there are no maps from the period that show the junction there. From the perspective of the Elizabethan mapmaker, there was not much to see south of the Thames – London was located on the north side of the river, and the road to the south quickly ran into fields only pockmarked by the occasional dwelling place or church.
While early modern maps and panoramas have been very helpful in locating the more famous playhouses like the Globe on London’s Bankside, they provide no help in searching for the playhouse at Newington Butts.
Some maps of the roads survive from at least 1681. In 1955, surveyor Ida Darlington pointed out that a property to the east of the junction on the 1681 map was the same as that on which the playhouse stood. However, the map is of too poor a quality to yield a precise location.
I used another map from 1746 drawn up by surveyor John Rocque to pinpoint the playhouse. The building north of the junction has remained in the same place for several hundred years. It began as stables, later becoming the Elephant and Castle Inn. Knowing this, and using early leases that record the site of the playhouse, I could figure out that the playhouse stood southeast of the inn.
In 1960, the Newington Butts junction was replaced by the Elephant and Castle roundabout. The site of the playhouse now likely lies under the Elephant and Castle shopping centre, named after the inn that stood there until 1960. Any archaeological remains, if they survived the redevelopment, would thus be under where the market stalls are situated. Unfortunately, their survival seems unlikely, as the shopping centre’s foundations were dug very deep.
So where did Shakespeare’s troupe go after Newington Butts? Their next known stopping point was in Marlborough. By the end of 1594, they ended up performing at the Theatre in Shoreditch, the first of the famed round theatres. In 1598, the Theatre was closed down and the more famous Globe was built in 1599.
One could say the consequences of the planet’s warming climate can be seen on fashion week runways and the shelves of Anthropologie and H&M. Silhouettes shrink as midriffs and backs open. Sheer fabrics, breathable textiles and flowy draping are in. And in response to climate change’s rapid pace, some corners of the fashion industry are moving toward implementing sustainable business practices and incorporating more flexibility within their designs.
Today people may see global warming as a modern phenomenon, but fashion has a long history of responding to worldwide climate change.
The only difference is that while we sweat, early modern Europeans froze. The Little Ice Age was an interval of erratic cooling that ravaged the Northern Hemisphere roughly between the 14th and 19th centuries. And like today’s designers, Renaissance fashion designers were forced to contend with shifting temperatures and strange weather.
A menacing chill settles on Europe
Scientists have yet to determine the primary cause of the Little Ice Age, and historians are still pinning down its exact chronological parameters. But voices from the era describe a rapidly cooling climate.
“At this time there was such a great cold that we almost froze to death in our quarters,” a soldier wrote in his diary while traveling through Germany in 1640. “And,” he continued, “on the road, three people did freeze to death: a cavalry-man, a woman, and a boy.”
The entry was from August.
Scholars do agree that the Little Ice Age impacted our shared global history in myriad traceable ways. Its unpredictable temperature fluctuations and sudden freezes devastated harvests, escalated civil unrest and left thousands to starve. It may have inspired the menacingly chilly settings of Shakespeare’s “King Lear” and Charles Dickens’s “A Christmas Carol.” Darkness and clouds haunt the skies of paintings created during the period.
And the Little Ice Age also altered the history of fashion. As the cold ramped up in the 16th century, fashion championed warmer styles: Heavy drapery, multiple layers and sleeves that trailed on the floor became more common across the visual and material record, while examples of the oldest surviving European gloves, hats, capes and coats from the era populate museum costume collections today.
“No one in Egypt used to know about wearing furs,” a Turkish man traveling through northern Africa wrote in 1670. “There was no winter. But now we have severe winters and we have started wearing furs because of the cold.”
Staying fashionably warm
This change can be observed by comparing medieval and Renaissance dress.
In one French medieval manuscript (illustrated between 1115 and 1125), the knight’s skirt is slit to the hip, and his squire’s hemline stops above the knee. There are no capes, fur or headgear; the garments are light and loose – especially compared to what men wore 400 years later, when the Little Ice Age was in full swing.
Take Hans Holbein’s iconic 1533 painting “The Ambassadors,” which depicts two French visitors to the court of King Henry VIII. The man on the left, wearing thick, dark velvets and a heavily fur-lined overcoat, is the French ambassador to England, Jean de Dinteville. Georges de Selve, the bishop of Lavaur, stands on the right.
The cleric has donned a floor-length coat befitting his godly station. But it would have also been very effective against cold. Both men sport fashionable caps and undergarments. The laced collar of de Selve’s undershirt peeks above his robes, and those white slashes in de Dinteville’s shiny pink shirt show off his hidden layers.
As with all portraits from the era, these men dressed to impress for the sitting – meaning their fanciest clothes were possibly their warmest.
Women’s clothing also had to withstand the temperature fluctuations of the Little Ice Age, which tended toward the cold. In a 16th-century portrait of Katherine Parr, the sixth wife of Henry VIII, Parr wears a headdress and a multi-layered gown with billowing sleeves.
Several petticoats would have been required to maintain the bell shape of her skirts. If you look closely, you’ll see a thin, translucent layer of fabric that shields her exposed skin where the neckline ends. Meanwhile, a large fur mantle – at the time, an essential accessory – is draped over her arms.
Surviving garments show similar layering. One Spanish dress, for example, is outfitted with a cape atop the thick fabrics that make up the bodice, skirt and stacked sleeves. Beneath this densely layered gown, the wearer would have also needed to don several tiers of skirts and undergarments.
A British lady’s jacket from around 1616 also may hint at cold weather. Tailored from linen, silk and metal, this tight bodice probably kept its wearer very warm. (Early modern clothing often featured cloth-of-gold thread, which was made from actual thin strips of gold metal and painstakingly wrapped around sewing thread.)
Portraits and preserved garments from the Little Ice Age tend to have one thing in common: They are all the pictures or products of elites who enjoyed the means to have a likeness made of themselves. Their wealth is evident in the very existence of these images and the expensive clothes they wear.
Their opulence ignores the various crises of the era. While countless peasants were displaced from their homes and died from starvation or rampant disease, the rich simply transitioned to sable-lined sleeves and mantles threaded with gold.
It’s dangerous to oversimplify historical narrative. But the parallels to our current situation are hard to ignore. Climate change is a looming threat, with deep social and political ramifications.
Yet for many, it remains a distant phenomenon, something that – beyond buying lighter, looser clothing – is easy to dismiss.
13. I do set my bow in the cloud, and it shall be for a token of a covenant between me and the earth. 14. And it shall come to pass, when I bring a cloud over the earth, that the bow shall be seen in the cloud: 15. And I will remember my covenant, which is between me and you and every living creature of all flesh; and the waters shall no more become a flood to destroy all flesh. 16. And the bow shall be in the cloud; and I will look upon it, that I may remember the everlasting covenant between God and every living creature of all flesh that is upon the earth.