
‘Palace letters’ show the queen did not advise, or encourage, Kerr to sack Whitlam government



AAP/EPA/Toby Melville

Anne Twomey, University of Sydney

For more than four decades, the question has been asked: did the queen know the governor-general, Sir John Kerr, was about to dismiss the Whitlam government, and did she encourage or support that action?

The release of the “palace letters” between Kerr and the palace can now lay that question to rest. The answer was given, unequivocally, by the queen’s private secretary, Sir Martin Charteris, in a letter to Kerr on November 17 1975. He said:

If I may say so with the greatest respect, I believe that in NOT informing The Queen what you intended to do before doing it, you acted not only with perfect constitutional propriety but also with admirable consideration for Her Majesty’s position.

Certainly, Kerr had kept the palace up to date with the various developments in Australia. While governors-general usually communicate with the queen only three or four times a year during ordinary times, it is common during a crisis for updates on the political situation to be made every few days – particularly if there is a risk of the queen becoming involved or of a reserve power being exercised.




Read more:
The big reveal: Jenny Hocking on what the ‘palace letters’ may tell us, finally, about The Dismissal


Drawing the palace into the crisis

In 1975, there were multiple issues that might have drawn the palace into the crisis.

First, there was the question of whether Kerr should exercise a reserve power to refuse royal assent to an appropriation bill that had been passed by the House of Representatives but not the Senate. Fortunately, Whitlam dropped this idea, so that controversy disappeared.

Then there was the question of whether state premiers would advise state governors to refuse to issue the writs for a half-Senate election, and whether Whitlam would then advise the queen to instruct the governors to issue the writs. This didn’t happen either, because Whitlam did not get to hold his half-Senate election. But the prospect was enough to worry the palace.

The Whitlam government was dismissed on November 11 1975.
AAP/National Archives of Australia

Next there was the issue of what to do with the Queensland governor, Sir Colin Hannah. Hannah, in a speech, had referred to the “fumbling ineptitude” of the Whitlam government. Hannah held a “dormant commission” to act as administrator of the Commonwealth when the governor-general was away.

Whitlam, contrary to the advice of both the Department of the Prime Minister and Cabinet and the Attorney-General’s Department, advised the queen to remove Hannah’s commission to be administrator.

Separately, the Queensland opposition petitioned for Hannah to be removed as governor, but that required the advice of British ministers, as Queensland was still in those days a “dependency” of the British Crown.

So the palace had to juggle advice on Hannah from two different sources.

A race to the palace

Another pressing question was what should be done if Whitlam advised Kerr’s dismissal. Kerr’s letters more than once referred to Whitlam talking of a “race to the Palace” to see whether he could dismiss Kerr before Kerr dismissed him.

Kerr saw these “jokes” as having an underlying menace. Kerr knew he didn’t have to race to the palace – he could dismiss the prime minister immediately. But he also knew, after Whitlam advised Hannah’s removal merely for using the words “fumbling ineptitude”, that Whitlam wouldn’t hesitate to act.

Sir John Kerr.
AAP/National Archives of Australia

The letters also show Kerr had been told that while the “Queen would take most unkindly” to being told to dismiss her governor-general, she would eventually do so because, as a constitutional sovereign, she had no option but to follow the advice of her prime minister. This would inevitably have brought her into the fray in an essentially Australian constitutional crisis.

Kerr explained in a letter after the dismissal that if he had given Whitlam 24 hours to advise a dissolution or face the prospect of dismissal, there was a considerable risk Whitlam would advise the queen to dismiss Kerr. He wrote:

[…] the position would then have been that either I would in fact be trying to dismiss him whilst he was trying to dismiss me, an impossible position for The Queen, or someone totally inexperienced in the developments of the crisis up to that point, be it a new Governor-General or an Administrator who would have to be a State Governor, would be confronted by the same implacable Prime Minister.

Advice from the palace

The letters reveal much of Kerr’s thinking, but little from the palace. Charteris rightly accepted the reserve powers existed, but they were to be used “in the last resort and then only for constitutional and not for political reasons”.

Charteris stressed the exercise of such powers was a

heavy responsibility and it is only at the very end when there is demonstrably no other course that they should be used.

This did not give Kerr any “green light” or encouragement to act. No-one suggested to him that the end had come and there was no other course to be followed. That was for Kerr to judge, and rightly so, because the powers could only be exercised by him – not the queen.

Whether the end had come and there was no other course is essentially what continues to be debated today. Should Kerr have waited? Should he have warned Whitlam? Was another course of action available?

All of these questions may justly be debated. But, no, the queen did not direct Kerr to dismiss Whitlam. He was not encouraged to do so. He was only encouraged to obey the Australian Constitution, which is something we all should do. The Conversation

Anne Twomey, Professor of Constitutional Law, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Did ancient Americans settle in Polynesia? The evidence doesn’t stack up



Andres Moreno-Estrada

Lisa Matisoo-Smith, University of Otago and Anna Gosling, University of Otago

How did the Polynesian peoples come to live on the far-flung islands of the Pacific? The question has intrigued researchers for centuries.

Norwegian explorer Thor Heyerdahl brought the topic to public attention when he sailed a balsa-wood raft called the Kon-Tiki from Peru to Polynesia in 1947. His goal was to demonstrate such voyages were possible, supporting theories linking Polynesian origins to the Americas.

Decades of research in archaeology, linguistics and genetics now show that Polynesian origins lie to the west, ultimately in the islands of southeast Asia. However, the myth of migrations from America has lingered in folk science and on conspiracy websites.

Pacific migrations: red arrows show expansion from island southeast Asia, blue arrows show Polynesian expansion, yellow arrows show proposed contact with the Americas.
Anna Gosling / Wilmshurst et al. (2011), Author provided

New evidence for American interlopers?

A new study published in Nature reports genetic evidence of Native American ancestry in several Polynesian populations. The work, by Alexander Ioannidis and colleagues, is based on a genetic analysis of 807 individuals from 17 island populations and 15 indigenous communities from South and Central America.

Other researchers have previously found evidence of indigenous American DNA in the genomes of the modern inhabitants of Rapa Nui. (Rapa Nui, also known as Easter Island, is the part of Polynesia closest to South America.)

The estimated timing of these interactions, however, raised concerns. Analyses of DNA from ancient Rapa Nui skeletal remains found no evidence of such mingling, or admixture. This suggests the “Amerindian” genetic component was likely introduced later via Chilean colonists.

Ioannidis and colleagues found southern South American Indigenous DNA in the genomes – the genetic material – of modern Rapa Nui, but they claim it represents a second pulse of contact. They also found signs of earlier contact, coming from as far north as Colombia or even Mexico.

More novel was the fact that this earlier signal was also found in modern DNA samples collected in the 1980s from the Marquesas and the Tuamotu archipelagos. The researchers argue this likely traces to a single “contact event” around 1200 AD, and possibly as early as 1082 AD.

Both suggested dates for this first event are earlier than those generally accepted for the settlement of Rapa Nui (1200-1250 AD). The earlier date predates any archaeological evidence for human settlement of the Marquesas or any of the other islands on which it was identified.

Ioannidis and colleagues make sense of this by suggesting that perhaps “upon their arrival, Polynesian settlers encountered a small, already established, Native American population”.




Read more:
What wind, currents and geography tell us about how people first settled Oceania


Follow the kūmara

The 1200 AD date and the more northerly location of the presumed contact on the South American continent are not unreasonable. They are consistent with the presence and distribution of the sweet potato, or kūmara.

This plant from the Americas is found throughout Eastern Polynesia. It gives us the strongest and most widely accepted archaeological and linguistic evidence of contact between Polynesia and South America.

Kūmara remains about 1,000 years old have been found in the Cook Islands in central Polynesia. When Polynesian colonists settled the extremes of the Polynesian triangle – Hawai’i, Rapa Nui, and Aotearoa New Zealand – between 1200 and 1300 AD, they brought kūmara in their canoes.

So contact with the Americas by that time fits with archaeological data. The suggestion that it was Native Americans who made the voyage, however, is where we think this argument goes off the rails.

Polynesian voyagers travelled in double-hulled canoes much like the Hokule’a, a reconstruction of a traditional vessel built in the 1970s.
Phil Uhl / Wikimedia, CC BY-SA

A great feat of sailing

Polynesians are among the greatest navigators and sailors in the world. Their ancestors had been undertaking voyages on the open ocean for at least 3,000 years.

Double-hulled Polynesian voyaging canoes were rapidly and systematically sailing eastwards across the Pacific. They would not have stopped until they hit the coast of the Americas. Then, they would have returned home, using their well-proven skills in navigation and sailing.

While Heyerdahl showed American-made rafts could make it out to the Pacific, Indigenous Americans have no history of open ocean voyaging. Similarly, there is no archaeological evidence of pre-Polynesian occupation on any of the islands of Polynesia.




Read more:
Chickens tell tale of human migration across Pacific


The limitations of genetic analysis

Genetic analyses attempting to reconstruct historical events based on data from modern populations are fraught with potential sources of error. Addressing questions where only a few hundred years make a major difference is particularly difficult.

Modelling population history needs to consider demographic impacts such as the massive depopulation caused by disease and other factors associated with European colonisation.

Ioannidis and colleagues took this into account for Rapa Nui, but not for the Marquesas. Estimates of population decline in the Marquesas from 20,000 in 1840 to around 3,600 by 1902 indicate a significant bottleneck.

The choice of comparative populations was also interesting. The only non-East Polynesian Pacific population used in analyses was from Vanuatu. Taiwanese Aboriginal populations were used as representatives of the “pure” Austronesian ancestral population for Polynesians.

This is wrong and overly simplistic. Polynesian genomes themselves are inherently admixed. They result from intermarriages between people probably from a homeland in island southeast Asia (not necessarily Taiwan) and other populations encountered en route through the Pacific.

Polynesian Y chromosomes and other markers show clear evidence of admixture with western Pacific populations. Excluding other Oceanic and Asian populations from the analyses may have skewed the results. Interestingly, the amount of Native American admixture identified in the Polynesian samples correlates with the amount of European admixture found in those populations.

Finally, like many recent population genetic studies, Ioannidis and colleagues did not look at sequences of the whole genome. Instead, they used what are called single nucleotide polymorphism (SNP) arrays.

SNP arrays are designed based on genetic variation identified through studies of primarily Asian, African and European genomes. Very few Pacific or other indigenous genomes were included in the databases used to design SNP arrays. This means variation in these populations may be misinterpreted or underestimated.

Summing up

While the results presented by Ioannidis and colleagues are very interesting, to fully understand them will require a level of scholarly engagement that may take some time.

Did contact between Polynesians and indigenous Americans happen? Significant evidence indicates that it did. Do these new data prove this? Perhaps, though there are a number of factors that need further investigation. Ideally, we would like to see evidence in ancient genetic samples. Engagement with the Pacific communities involved is also critical.

However, if the data and analyses are correct, did the process likely occur via the arrival of indigenous Americans, on their own, on an island in eastern Polynesia? This, we argue, is highly questionable. The Conversation

Lisa Matisoo-Smith, Professor of Biological Anthropology, University of Otago and Anna Gosling, Research Fellow, University of Otago

This article is republished from The Conversation under a Creative Commons license. Read the original article.


When France extorted Haiti – the greatest heist in history



Haitian President Jean-Pierre Boyer receiving Charles X’s decree recognizing Haitian independence on July 11, 1825.
Bibliotheque Nationale de France

Marlene Daut, University of Virginia

In the wake of George Floyd’s killing, there have been calls for defunding police departments and demands for the removal of statues. The issue of reparations for slavery has also resurfaced.

Much of the reparations debate has revolved around whether the United States and the United Kingdom should finally compensate some of their citizens for the economic and social costs of slavery that still linger today.

But to me, there’s never been a more clear-cut case for reparations than that of Haiti.

I’m a specialist on colonialism and slavery, and what France did to the Haitian people after the Haitian Revolution is a particularly notorious example of colonial theft. France instituted slavery on the island in the 17th century, but, in the late 18th century, the enslaved population rebelled and eventually declared independence. Yet, somehow, in the 19th century, the thinking went that the former enslavers of the Haitian people needed to be compensated, rather than the other way around.

Just as the legacy of slavery in the United States has created a gross economic disparity between Black and white Americans, the tax on its freedom that France forced Haiti to pay – referred to as an “indemnity” at the time – severely damaged the newly independent country’s ability to prosper.

The cost of independence

Haiti officially declared its independence from France in 1804. In October 1806, the country was split into two, with Alexandre Pétion ruling in the south and Henry Christophe ruling in the north.

Despite the fact that both of Haiti’s rulers were veterans of the Haitian Revolution, the French had never quite given up on reconquering their former colony.

In 1814 King Louis XVIII, who had helped overthrow Napoléon earlier that year, sent three commissioners to Haiti to assess the willingness of the country’s rulers to surrender. Christophe, having made himself a king in 1811, remained obstinate in the face of France’s exposed plan to bring back slavery. Threatening war, the most prominent member of Christophe’s cabinet, Baron de Vastey, insisted, “Our independence will be guaranteed by the tips of our bayonets!”

A portrait of Alexandre Pétion.
Alfred Nemours Archive of Haitian History, University of Puerto Rico

In contrast, Pétion, the ruler of the south, was willing to negotiate, hoping that the country might be able to pay France for recognition of its independence.

In 1803, Napoléon had sold Louisiana to the United States for 15 million francs. Using this number as his compass, Pétion proposed paying the same amount. Unwilling to compromise with those he viewed as “runaway slaves,” Louis XVIII rejected the offer.

Pétion died suddenly in 1818, but Jean-Pierre Boyer, his successor, kept up the negotiations. Talks, however, continued to stall due to Christophe’s stubborn opposition.

[Deep knowledge, daily. Sign up for The Conversation’s newsletter.]

“Any indemnification of the ex-colonists,” Christophe’s government stated, was “inadmissible.”

Once Christophe died in October 1820, Boyer was able to reunify the two sides of the country. However, even with the obstacle of Christophe gone, Boyer repeatedly failed to successfully negotiate France’s recognition of independence. Determined to gain at least suzerainty over the island – which would have made Haiti a protectorate of France – Louis XVIII’s successor, Charles X, rebuked the two commissioners Boyer sent to Paris in 1824 to try to negotiate an indemnity in exchange for recognition.

On April 17, 1825, the French king suddenly changed his mind. He issued a decree stating France would recognize Haitian independence but only at the price of 150 million francs – or 10 times the amount the U.S. had paid for the Louisiana territory. The sum was meant to compensate the French colonists for their lost revenues from slavery.

Baron de Mackau, whom Charles X sent to deliver the ordinance, arrived in Haiti in July, accompanied by a squadron of 14 brigs of war carrying more than 500 cannons.

Rejection of the ordinance almost certainly meant war. This was not diplomacy. It was extortion.

With the threat of violence looming, on July 11, 1825, Boyer signed the fatal document, which stated, “The present inhabitants of the French part of St. Domingue shall pay … in five equal installments … the sum of 150,000,000 francs, destined to indemnify the former colonists.”

French prosperity built on Haitian poverty

Newspaper articles from the period reveal that the French king knew the Haitian government was hardly capable of making these payments, as the total was more than 10 times Haiti’s annual budget. The rest of the world seemed to agree that the amount was absurd. One British journalist noted that the “enormous price” constituted a “sum which few states in Europe could bear to sacrifice.”

A facsimile of the bank note for the 30 million francs that Haiti borrowed from a French bank.
Lepelletier de Saint-Remy, ‘Étude Et Solution Nouvelle de la Question Haïtienne.’

Forced to borrow 30 million francs from French banks to make the first two payments, it was hardly a surprise to anyone when Haiti defaulted soon thereafter. Still, the new French king sent another expedition in 1838 with 12 warships to force the Haitian president’s hand. The 1838 revision, inaccurately labeled “Traité d’Amitié” – or “Treaty of Friendship” – reduced the outstanding amount owed to 60 million francs, but the Haitian government was once again ordered to take out crushing loans to pay the balance.

Although the colonists claimed that the indemnity would only cover one-twelfth the value of their lost properties, including the people they claimed as their slaves, the total amount of 90 million francs was actually five times France’s annual budget.

The Haitian people suffered the brunt of the consequences of France’s theft. Boyer levied draconian taxes in order to pay back the loans. And while Christophe had been busy developing a national school system during his reign, under Boyer, and all subsequent presidents, such projects had to be put on hold. Moreover, researchers have found that the independence debt and the resulting drain on the Haitian treasury were directly responsible not only for the underfunding of education in 20th-century Haiti, but also for the lack of health care and the country’s inability to develop public infrastructure.

Contemporary assessments, furthermore, reveal that with the interest from all the loans, which were not completely paid off until 1947, Haitians ended up paying more than twice the value of the colonists’ claims. Recognizing the gravity of this scandal, French economist Thomas Piketty acknowledged that France should repay at least US$28 billion to Haiti in restitution.

A debt that’s both moral and material

Former French presidents, from Jacques Chirac, to Nicolas Sarkozy, to François Hollande, have a history of punishing, skirting or downplaying Haitian demands for recompense.

In May 2015, when French President François Hollande became only France’s second head of state to visit Haiti, he admitted that his country needed to “settle the debt.” Later, realizing he had unwittingly provided fuel for the legal claims already prepared by attorney Ira Kurzban on behalf of the Haitian people – former Haitian President Jean-Bertrand Aristide had demanded formal recompense in 2002 – Hollande clarified that he meant France’s debt was merely “moral.”

To deny that the consequences of slavery were also material is to deny French history itself. France belatedly abolished slavery in 1848 in its remaining colonies of Martinique, Guadeloupe, Réunion and French Guyana, which are still territories of France today. Afterwards, the French government demonstrated once again its understanding of slavery’s relationship to economics when it took it upon itself to financially compensate the former “owners” of enslaved people.

The resulting racial wealth gap is no metaphor. In metropolitan France 14.1% of the population lives below the poverty line. In Martinique and Guadeloupe, in contrast, where more than 80% of the population is of African descent, the poverty rates are 38% and 46%, respectively. The poverty rate in Haiti is even more dire at 59%. And whereas the median annual income of a French family is $31,112, it’s only $450 for a Haitian family.

These discrepancies are the concrete consequence of stolen labor from generations of Africans and their descendants. And because the indemnity Haiti paid to France is the first and only time a formerly enslaved people were forced to compensate those who had once enslaved them, Haiti should be at the center of the global movement for reparations. The Conversation

Marlene Daut, Professor of African Diaspora Studies, University of Virginia

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Britain’s monument culture obscures a violent history of white supremacy and colonial violence



The statue of slave trader Robert Milligan was removed from outside the London Docklands Museum.
Emma Tarrant/Shutterstock

Rebecca Senior, University of Nottingham

After 125 years in the same spot, the bronze statue of slaver Edward Colston lies at the bottom of Bristol Harbour. Unsurprisingly, many were unhappy with the move by Black Lives Matter protestors. Only a day later a group of dissenters attempted to fish the heavy statue from its watery grave.

It would seem that such attempts at retrieval are as futile as resisting the long-awaited reckoning on the nature and meaning of public monuments across Britain. Colston’s removal has featured prominently in the international press and sparked debates about history and erasure across the world. It has also prompted widespread conversation around which statues and monuments should be scrutinised for their celebration of violent colonialism and white supremacy.

The significance of this moment cannot be overlooked by art historians. Monuments demonstrate how visual and material culture can be weaponised to obscure the violence that characterised British colonial expansion. From single statues to elaborate multi-figure designs, monuments represent a visual culture that has been mobilised as a means to celebrate and justify white supremacy throughout history. To this end, they did not solely rely on sculptural statues of colonial “heroes” such as Colston, but also other types of visual communication to misrepresent empire as a noble and heroic pursuit.

Artistic state propaganda

Sculptures of allegorical figures are the omnipresent artistic symbol of state propaganda and oppression on sculpted monuments. Fictional female figures such as “Victory”, “Peace”, “Justice” and “Britannia” gained popularity during the most aggressive period of British imperial expansion in the 18th and 19th centuries.

A statue of Victory at the top of the Victoria Memorial in front of Buckingham Palace, London.
Anibal Trejo/Shutterstock

It is important that the role of these figures is not forgotten in this current moment. Firstly, because they demonstrate how monuments enabled sculptors to not only commemorate the deceased, but also to propagate the message of British colonialism and white supremacy for future generations. Secondly, because understanding them enables the public to recognise how visual culture can obfuscate state oppression. The “Victory” figures that appeared on British monuments in the 18th century were reused en masse for Confederate monuments over a hundred years later.

It was not a coincidence that, ahead of the civil rights protest, Bristol City Council decided to cover Colston’s statue with a canvas. Much like history as a discipline itself, monuments are not neutral records but revisionist objects that mobilise art as a means to oppress. Acknowledging them as state-sponsored attempts to transform slavery and genocide into palatable subjects for public consumption exposes them as a visual accompaniment to Britain’s violent programme of colonisation.

Campaigns for removal

More statues commemorating white perpetrators of colonial atrocities are coming down daily, such as that of Leopold II outside Antwerp Museum in Belgium. New resources are also being developed to identify which should be next. The statue of slaver Robert Milligan has already been removed from outside the Museum of London Docklands. The decision was made by the Canal and River Trust in response to petitions calling for its removal, showing that institutions can and should take decisions into their own hands.

However, those advocating for removal are increasingly met with the now-familiar argument that such moves represent an erasure of history. It’s an argument that has been firmly established in debates around Confederate monuments in the United States. It’s an argument that has become louder in the UK over the past few days.

Over the coming weeks, figures from British history will be scrutinised and judged in an unprecedented way. This process will educate more people on the real history of Britain and bring new meaning to the monuments people walk by daily. This will hopefully, as the historian David Olusoga has argued, achieve tangible results where previous campaigns have failed.

#RhodesMustFall is a movement that started in 2015 in Cape Town, South Africa.
JeremyRichards/Shutterstock

London’s mayor, Sadiq Khan, has announced that he will be establishing a Commission for Diversity in the Public Realm to review London’s landmarks. But the move echoes one by New York City mayor Bill de Blasio in 2017 to establish a Mayoral Advisory Commission on City Art, Monuments, and Markers, which recommended only one removal. This was a monument to the torturer James Marion Sims, who in the 19th century performed horrific experiments on enslaved black women. Rather than being fully removed, it was relocated to a public cemetery in Brooklyn. There are fears that there will be similar outcomes in the London review.

Bolstered by the recent protests against anti-Black racism and state violence, public art continues to be reckoned with across the world, with ongoing campaigns such as #RhodesMustFall in Oxford and Take Em Down Nola in New Orleans, US.

Allegorical figures are one of the ways that monuments fictionalise history through visual culture. Understanding the role art played as a sanitiser of violence shows that destroying or removing monuments from public view does not erase history. Instead, monuments were designed to do just that by obfuscating state oppression and white supremacy through a thin veil of sculptural order. The Conversation

Rebecca Senior, Postdoctoral research fellow, University of Nottingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Shillings, gods and runes: clues in language suggest a Semitic superpower in ancient northern Europe



Dido building Carthage, or The Rise of the Carthaginian Empire. Joseph Mallord William Turner, c 1815.
The National Gallery, CC BY-NC-SA

Robert Mailhammer, Western Sydney University

Remember when Australians paid in shillings and pence? New research suggests the words for these coins and other culturally important items and concepts are the result of close contact between the early Germanic people and the Carthaginian Empire more than 2,000 years ago.

The city of Carthage, in modern-day Tunisia, was founded in the 9th century BCE by the Phoenicians. The Carthaginian Empire took over the Phoenician sphere of influence, with its own sphere of influence from the Mediterranean in the east to the Atlantic in the west and further into Africa in the south. The empire was destroyed in 146 BCE after an epic struggle against the Romans.

Carthaginian sphere of influence.
Adapted from Kelly Macquire/Ancient History Encyclopedia, CC BY-NC-SA

The presence of the Carthaginians on the Iberian Peninsula is well documented, and it is commonly assumed they had commercial relations with the British Isles. But it is not generally believed they had a permanent physical presence in northern Europe.

By studying the origin of key Germanic words and other parts of Germanic languages, Theo Vennemann and I have found traces of such a physical presence, giving us a completely new understanding of the influence of this Semitic superpower in northern Europe.

Linguistic history

Language can be a major source of historical knowledge. Words can tell stories about their speakers even if there is no material evidence from archeology or genetics. The many early Latin words in English, such as “street”, “wine” and “wall”, are evidence for the influence of Roman civilisation.




Read more:
Uncovering the language of the first Christmas


Punic was the language of the Carthaginians. It is a Semitic language and closely related to Hebrew. Unfortunately, there are few surviving texts in Punic and so we often have to use Biblical Hebrew as a proxy.

Proto-Germanic was spoken in what is now northern Germany and southern Scandinavia more than 2,000 years ago, and is the ancestor of contemporary Germanic languages such as English, German, Norwegian and Dutch.

Identifying traces of Punic in Proto-Germanic languages tells an interesting story.

Take the words “shilling” and “penny”: both words are found in Proto-Germanic. The early Germanic people did not have their own coins, but it is likely they knew coins if they had words for them.

Silver double shekel of Carthage.
© The Trustees of the British Museum, CC BY-NC-SA

In antiquity, coins were used in the Mediterranean. One major coin minted in Carthage was the shekel, the current name for the currency of Israel. We think this is the historical origin of the word “shilling” because of the specific way the Carthaginians pronounced “shekel”, which is different from how it is pronounced in Hebrew.

The pronunciation of Punic can be reasonably inferred from Greek and Latin spellings, as the sounds of Greek and Latin letters are well known. Punic placed a strong emphasis on the second syllable of shekel and had a plain “s” at the beginning, instead of the “esh” sound in Hebrew.

But to speakers of Proto-Germanic – who normally put the emphasis on the first syllable of words – it would have sounded like “skel”. This is exactly how the crucial first part of the word “shilling” is constructed. The second part, “-(l)ing”, is undoubtedly Germanic. It was added to express an individuating meaning, as in Old German silbarling, literally “piece of silver”.

This combining of languages in one word shows early Germanic people must have been familiar with Punic.

Similarly, our word “penny” derives from the Punic word for “face”, panē. Punic coins were minted with the face of the goddess Tanit, so we believe panē would have been a likely name for a Carthaginian coin.

A silver coin minted in Carthage, featuring the Head of Tanit and Pegasus.
© The Trustees of the British Museum, CC BY-NC-SA

Cultural and social dominance

Sharing names for coins could indicate a trade relationship. Other words suggest the Carthaginians and early Germanic people had a much closer relationship.

By studying loan words between Punic and Proto-Germanic, we can infer the Carthaginians were culturally and socially dominant.

One area of Carthaginian leadership was agricultural technology. Our work traces the word “plough” back to a Punic verb root meaning “divide”. Importantly, “plough” was used by Proto-Germanic speakers to refer to a more advanced type of plough than the old scratch plough, or ard.

Close contact with the Carthaginians can explain why speakers of Proto-Germanic knew this innovative tool.

The Old Germanic and Old English words for the nobility, for example æþele, are also most likely Punic loanwords. If a word referring to the ruling class of people comes from another language, this is a good indication the people speaking this language were socially dominant.

Intersections of language and culture

We found Punic also strongly influenced the grammar of early Germanic, Germanic mythology and the Runic alphabet, which was used in inscriptions in Germanic languages until the Middle Ages.

Four of the first five letters of the Punic alphabet and the first four letters of the Germanic Runic alphabet.
Mailhammer & Vennemann (2019), Author provided

This new evidence suggests many early Germanic people learnt Punic and worked for the Carthaginians, married into their families, and had bilingual and bicultural children.

When Carthage was destroyed this connection was eventually lost. But the traces of this Semitic superpower remain in modern Germanic languages, their culture and their ancient letters. The Conversation

Robert Mailhammer, Associate Dean, Research, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Did a tragic family secret influence Kate Sheppard’s mission to give New Zealand women the vote?



Kate Sheppard (seated at centre) with the National Council of Women in Christchurch. 1896.

Katie Pickles, University of Canterbury

The family of pioneering New Zealand suffragist Kate Sheppard kept an important secret – one that possibly explains a lot about her life, her beliefs and her motivation.

The secret involved her father, Andrew Wilson Malcolm, and what happened to him after Kate was born. An extensive and painstaking quest by her great-great-niece Tessa Malcolm has revealed the truth about his fate.

Sadly, Tessa died in 2013 before publishing her decades-long research. I am now completing her work and hope to publish a new biography of Sheppard in 2023, the 130th anniversary of New Zealand becoming the first place in the world to give women the vote.

Solving the mystery of Andrew’s death deepens our understanding of Kate and her extraordinary life.

What happened to Kate Sheppard’s father?

Following family leads and with detailed searches of official and military records, wills and graves, Tessa finally established the truth: Andrew Malcolm died aged 42 of delirium tremens (DTs) in New Mexico on January 26, 1862.

The DTs are a severe form of alcohol withdrawal and a horrible way to die. Symptoms include fever, seizures and hallucinations.

Kate Sheppard.

It had already been a long and difficult slog for Andrew. He was one of thousands of Scotsmen who served in overseas armies throughout the 19th century, motivated by a lust for adventure, sympathy for a cause, financial reward, a desire to emigrate or just to escape their lives at home.

When he died he was months short of completing ten years’ service in the Union Army. His burial site at Fort Craig was recently looted, which led to the official exhumation and reburial of bodies, Andrew’s remains possibly among them.

So we now know the Scottish father of a leader in the New Zealand Women’s Christian Temperance Union (WCTU) died an alcoholic amid the horrors of the American Civil War. He had served and sacrificed his life on US soil, far from his wife and five children at home in the British Isles.




Read more:
NZ was first to grant women the vote in 1893, but then took 26 years to let them stand for parliament


The personal becomes political

As is well known, after the family left Scotland and re-grouped in New Zealand, Kate went on to play a key role in the movement to grant women the vote.

The late Tessa Malcolm, great-great-niece of Kate Sheppard.
Author provided

The peaceful campaign was closely aligned with the temperance movement. It argued that moral, enfranchised women were needed to clean up society by voting against the “demon drink”.

A New Zealand tour in 1885 by Mary Leavitt of the American WCTU was a catalyst for local organising. Sheppard became the secretary of the WCTU franchise department.

With her own family experience and connection with America, we can certainly speculate that for Kate temperance was more than a platform from which women could gain the vote. It’s highly probable that her quests for a sober society and votes for women were personally entwined.

A missing page from history

So why did Andrew’s death remain a secret? Stigma, a sense of shame, or just the natural desire for privacy could all be explanations.

In her 1992 biography of Kate Sheppard, Judith Devaliant dedicated only two pages to Kate’s life prior to her 1869 migration to New Zealand around the age of 21. Of Andrew she wrote: “His death has not been traced with any accuracy, although it is known that he died at an early age leaving his widow to cope with five young children.”




Read more:
Hundred years of votes for women: how far we’ve come and how far there’s still to go


The biography is also vague about the details of his life. He was born in Dunfermline, Fifeshire, in 1819 and married Jemima Crawford Souter on Islay in the Hebrides in 1842. Documents describe his occupation variously as lawyer, banker, brewer’s clerk and legal clerk.

There is no mention of Andrew in either the New Zealand History Net or Book of New Zealand Women entries on Kate Sheppard. Until now, the focus has been on Kate’s adult life and work, with family taking a back seat.

Even in her own 1993 entry on Kate in the Dictionary of New Zealand Biography, Tessa simply wrote: “Her father died in 1862”. The implication was that Andrew had died in Scotland, although Dublin and Jamaica also appear in genealogical records.

Ruins of the officers’ quarters, Fort Craig, New Mexico, USA: last resting place of Kate Sheppard’s father.
CC BY-SA

The search goes on

But Tessa was already aware of Andrew’s New Mexico fate by 1990, two years before Devaliant’s book was published. After following dead ends and disproving family rumours she had solved the puzzle of what really happened to the ancestor she referred to as the “bete noire” of her research.

Can we conclusively say that Kate Sheppard’s temperance and suffragist work was directly linked to knowledge of her father’s death? Or are we dealing with an irony of history, albeit a sad one?

As yet we can’t be sure. But Kate’s mother definitely knew the cause of Andrew’s death and we know she greatly influenced Kate. I believe it was also likely known by other senior (and also influential) family members, but kept quiet.

The fact that the truth was hidden so well suggests a degree of deliberate concealment. By building on Tessa’s groundbreaking research I hope to reveal more of a remarkable story that connects Scotland, America and New Zealand to a global first for women. The Conversation

Katie Pickles, Professor of History at the University of Canterbury, University of Canterbury

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Border closures, identity and political tensions: how Australia’s past pandemics shape our COVID-19 response


Susan Moloney, Griffith University and Kim Moloney, Murdoch University

Tensions over border closures are in the news again, now that states are gradually lifting travel restrictions for all except Victorians.

Prime Minister Scott Morrison says singling out Victorians is an overreaction to Melbourne’s coronavirus spike, urging the states “to get some perspective”.

Federal-state tensions over border closures and other pandemic quarantine measures are not new, and not limited to the COVID-19 pandemic.

Our new research shows such measures are entwined in our history and tied to Australia’s identity as a nation. We also show how our experiences during past pandemics guide the plans we now use, and alter, to control the coronavirus.




Read more:
National and state leaders may not always agree, but this hasn’t hindered our coronavirus response


Bubonic plague, federation and national identity

In early 1900, bubonic plague broke out just months before federation, introduced by infected rats on ships.

When a new vaccine was available, the New South Wales government planned to inoculate just front-line workers.

Journalists called for a broader inoculation campaign and the government soon faced a “melee” in which:

…men fought, women fainted and the offices [of the Board of Health] were damaged.

Patients and contacts were quarantined at the North Head Quarantine Station. Affected suburbs were quarantined and sanitation commenced.

The health board openly criticised the government for its handling of the quarantine measures, laying the groundwork for quarantine policy in the newly independent Australia.

Quarantine then became essential to a vision of Australia as an island nation where “island” stood for immunity and where non-Australians were viewed as “diseased”.

Public health is mentioned twice in the Australian constitution. Section 51(ix) gives parliament the power to quarantine, and section 69 requires states and territories to transfer quarantine services to the Commonwealth.

The Quarantine Act was later merged to form the Immigration Restriction Act, with quarantine influencing immigration policy.

Ports then became centres of immigration, trade, biopolitics and biosecurity.

Spanish flu sparked border disputes too

In 1918, at the onset of the Spanish flu, quarantine policy included border closures, quarantine camps (for people stuck at borders) and school closures. These measures initially controlled widespread outbreaks in Australia.

However, Victoria quibbled over whether NSW had accurately diagnosed this as an influenza pandemic. Queensland closed its borders, despite only the Commonwealth having the legal powers to do so.




Read more:
This isn’t the first global pandemic, and it won’t be the last. Here’s what we’ve learned from 4 others throughout history


When World War I ended, many returning soldiers broke quarantine. Quarantine measures were not coordinated at the Commonwealth level; states and territories each went their own way.

Quarantine camps, like this one at Wallangarra in Queensland, were set up during the Spanish flu pandemic.
Aussie~mobs/Public Domain/Flickr

There were different policies about state border closures, quarantine camps, mask wearing, school closures and public gatherings. Infection spread and hospitals were overwhelmed.

The legacy? The states and territories ceded quarantine control to the Commonwealth. And in 1921, the Commonwealth created its own health department.

The 1990s brought new threats

Over the next seven decades, Australia linked quarantine surveillance to national survival. It shifted from prioritising human health to biosecurity and protection of Australia’s flora, fauna and agriculture.

In the 1990s, new human threats emerged. Avian influenza in 1997 led the federal government to recognise Australia may be ill-prepared to face a pandemic. By 1999 Australia had its first influenza pandemic plan.




Read more:
Today’s disease names are less catchy, but also less likely to cause stigma


In 2003, severe acute respiratory syndrome (or SARS) emerged in China and Hong Kong. Australia responded by discouraging nonessential travel and by health screening incoming passengers.

The next threat, 2004 H5N1 Avian influenza, was a dry run for future responses. This resulted in the 2008 Australian Health Management Plan for Pandemic Influenza, which included border control and social isolation measures.

Which brings us to today

While lessons learned from past pandemics are with us today, we’ve seen changes to policy mid-pandemic. March saw the formation of the National Cabinet to endorse and coordinate actions across the nation.

Uncertainty over border control continues, especially surrounding the potential for cruise and live-export ships to import coronavirus infections.




Read more:
Coronavirus has seriously tested our border security. Have we learned from our mistakes?


Then there are border closures between states and territories, creating tensions and a potential high court challenge.

Border quibbles between states and territories will likely continue in this and future pandemics due to geographical, epidemiological and political differences.

Australia’s success as a nation during COVID-19 is in part due to its quarantine policy being so closely tied to its island nature, and to lessons learned from previous pandemics.

Lessons learnt from handling COVID-19 will also strengthen future pandemic responses and hopefully will make them more coordinated.




Read more:
4 ways Australia’s coronavirus response was a triumph, and 4 ways it fell short


The Conversation


Susan Moloney, Associate Professor, Paediatrics, Griffith University and Kim Moloney, Senior Lecturer in Global Public Administration and Public Policy, Murdoch University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Coronavirus is taking English pubs back in time



A tapster delivers a frothing tankard to seated alehouse customers in this 1824 etching.
British Museum, CC BY-NC-SA

James Brown, University of Sheffield

The announcement by Boris Johnson, the UK prime minister, that pubs in England will be allowed to resume trading from July 4 was greeted with rousing cheers from some. But having a pint in the pandemic era will be slightly different. While two-metre social distancing rules are being relaxed to one metre to ensure economic viability for publicans, pubs will, where practical, be restricted to “table service” to maintain the safety of customers and staff.

Standing at the bar is one of the most cherished rituals of the British pub experience – and many people are worried that the new rules could be the beginning of the end of a tradition that dates back centuries. Except, it doesn’t – the bar as we now know it is of relatively recent vintage and, in many respects, the new regulations are returning us to the practices of a much earlier era.

Before the 19th century, propping up the bar would have been an unfamiliar concept in England’s dense network of alehouses, taverns and inns. Alehouses and taverns in particular were seldom purpose-built, but were instead ordinary dwelling houses made over for commercial hospitality. Only their pictorial signboards and a few items of additional furniture distinguished them from surrounding houses. In particular, there was no bar in the modern sense of a fixed counter over which alcohol could be purchased and served.


Check out: Intoxicating spaces


Instead, beverages were ferried directly to seated customers from barrels and bottles in cellars and store rooms by the host and, in larger establishments, drawers, pot-boys, tapsters and waiters. The layout of Margaret Bowker’s large Manchester alehouse in 1641 is typical: chairs, stools and tables were distributed across the hall, parlours, and chambers, while drink was stored in “hogsheads”, “barrels”, and “rundlets” in her cellar.

Five customers receive table service from a tapster in this woodcut illustration from a late 17th-century ballad.
English Broadside Ballad Archive

The bar as we know it didn’t emerge organically from these arrangements, but rather from the introduction of a new commodity in the 18th century: gin.
Originally it was imported from the Netherlands and distilled in large quantities domestically from the later decades of the 17th century, but the emergence of a mass market for gin in the 1700s gave rise to the specialised gin or dram shop. Found mainly in London – especially in districts such as the East End and south of the river – an innovation of these establishments was a large counter that traversed their width.

Along with a lack of seating, this maximised serving and standing space and encouraged low-value but high-volume turnover from a predominantly poor clientele. The flamboyant gin palaces of the later 18th and early 19th century – described by caricaturist and temperance enthusiast George Cruikshank as “gaudy, gold be-plastered temples” – retained the bar, along with other features drawn from the retail sector such as plate-glass windows, gas lighting, elaborate wrought iron and mahogany fittings, and displays of bottles and glasses. While originally regarded as alien to local drinking cultures, by the 1830s these architectural elements started making their way into all English pubs, with the bar literally front and centre.

An 1808 aquatint after Thomas Rowlandson, showing human and canine customers standing at the bar in a gin shop.
Metropolitan Museum of Art

As architectural historian Mark Girouard has pointed out, the adoption of the bar was a “revolutionary innovation” – a “time-and-motion breakthrough” that transformed the relationship between customers and staff. It brought unprecedented efficiencies that were especially important in the expanding and industrialising cities of the early 1800s.

In particular, a fixed counter with taps, cocks and pumps connected to spirit casks and beer barrels was more efficient than employees scurrying between cellars, storerooms and drinking areas. This was especially the case for “off-sales” – customers purchasing drinks to take home – which had always been a large component of the drinks trade and still accounted for an estimated one-third of takings into the 19th century.

An 1833 lithograph depicting an ‘obliging bar-maid’ using a beer engine.
Wellcome Collection, CC BY-NC

Posterity has paid little attention to the armies of service staff who kept the world of the tavern spinning on its axis before the age of the bar. But they are occasionally glimpsed in historical sources – such as Margaret Sephton, who was “drawing beer” at Widow Knee’s Chester alehouse in 1629, when she gave evidence about a theft of linen. While skilled – one tapster at a Chester tavern styled himself rather grandly in 1640 as a “drawer and sommelier of wine” – drink work was poorly paid. Staff were often paid in kind with food and lodgings and the work was usually undertaken by people who were young, poor, or new to the community.

The lack of a bar made the job especially challenging. It was physically demanding – in 1665 a young tapster at a Cheshire alehouse described how during her shift she was “called to and fro in the house and to other company”, testifying to the constant back and forth. The fact that drinks were not poured in front of patrons made staff more vulnerable to accusations of adulteration and short measure – sometimes with good reason – and close physical proximity to customers when serving and collecting payment meant such disputes could more readily turn violent. For female employees, the absence of the insulating layer of material and space later provided by the bar meant they were much more exposed to sexual abuse from male patrons.

What can the historical record teach proprietors of any newly bar-less pubs? There are, of course, modern advantages such as apps and other digital tools – plus the example of European and North American establishments, where table service was never fully displaced. But there are practical lessons to be learned from the past all the same. Publicans today might streamline the range of drinks on offer and encourage the use of jugs for refills. Landlords could develop careful zoning for their staff – in larger alehouses and taverns tapsters were allocated specific booths and rooms. Most importantly they need to establish and enforce clear rules about behaviour towards staff – especially in terms of physical contact. Better to have premodern pubs than no pubs at all, after all. The Conversation

James Brown, Research Associate & Project Manager (UK), University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Comets, omens and fear: understanding plague in the Middle Ages



A comet depicted in medieval times in the Bayeux tapestry.
Bayeux Museum, Author provided

Marilina Cesario, Queen’s University Belfast and Francis Leneghan, University of Oxford

On August 30 2019, a comet from outside our solar system was observed by amateur astronomer Gennady Borisov at the MARGO observatory in Crimea. This was only the second time an interstellar comet had ever been recorded. Comet 2I, or C/2019 Q4 as it is now known, made its closest approach to the sun on December 8 2019, roughly coinciding with the first recorded human cases of COVID-19.

While we know that this is merely coincidence, in medieval times authorities regarded natural phenomena such as comets and eclipses as portents of natural disasters, including plagues.

One of the most learned men of the early Middle Ages was the Venerable Bede, an Anglo-Saxon monk who lived in Northumbria in the late seventh and early eighth centuries. In chapter 25 of his scientific treatise, De natura rerum (On the Nature of Things), he describes comets as “stars with flames like hair. They are born suddenly, portending a change of royal power or plague or wars or winds or heat”.

Plagues and natural phenomena

Outbreaks of the bubonic plague were recorded long before the Black Death of the 14th century. In the 6th century, a plague spread from Egypt to Europe and lingered for the next 200 years. At the end of the seventh century, the Irish scholar Adomnán, Abbot of Iona wrote in book 42 of his Life of St Columba of “the great mortality which twice in our time has ravaged a large part of the world”. The effects of this plague were so severe in England that, according to Bede, the kingdom of Essex reverted to paganism.

The Anglo-Saxon Chronicle records that in 664 “the sun grew dark, and in this year came to the island of Britain a great plague among men (‘micel man cwealm’ in Anglo-Saxon)”. The year 664 held great significance for the English and Irish churches: a great meeting (or synod) was held in Whitby in Northumbria to decide whether the English church should follow the Irish or Roman system for calculating the date of Easter. By describing the occurrence of an eclipse and plague in the same year as the synod, Bede makes this important event in the English Church more memorable and meaningful.

In the Middle Ages, comets like 2019’s C/2019 Q4 signalled a calamitous event on earth to come.
NASA, ESA & D. Jewitt (UCLA), CC BY

Plague and medieval religion

In the Middle Ages, occurrences like plague and disease were thought of as expressions of God’s will. In the Bible, God uses natural phenomena to punish humankind for sin. In the Book of Revelation 6:8, for example, pestilence is described as one of the signs of Judgement Day. Medieval scholars were aware that some plagues and diseases were spread through the air, as explained by the seventh-century scholar Isidore of Seville in chapter 39 of his De natura rerum (On the Nature of Things):

Pestilence is a disease spreading widely and infecting by its contagion whatever it touches. When plague (‘plaga’) smites the earth because of mankind’s sins, then from some cause, that is, either the force of drought or of heat or an excess of rain, the air is corrupted.

Bede based his On the Nature of Things on this work by Isidore. In a discussion of plague in the Old English version of Bede’s Ecclesiastical History we find a reference to the “an-fleoga”, meaning something like “the one who flies” or “solitary flier”. This same idea of airborne disease is a feature of Anglo-Saxon medicine. One example comes from an Old English poem we call a metrical charm, which combines ancient Germanic folklore with Christian prayer and ritual. In the Nine Herbs Charm, the charmer addresses each herb individually and invokes its power over disease:

This is against poison, and this is against the one who flies,

this is against the loathsome one that travels throughout the land …

if any poison come flying from the east,

or any come from the north,

or any from the west over the nations of men,

Christ stood over the disease of every kind.

As well as fearing plague, medieval scholars attempted to pinpoint its origins and carefully recorded its occurrence and effects. Like us, they used whatever means they could to protect themselves from disease. But it is clear medieval chroniclers presented historical events as part of a divine plan for humankind by linking them with natural phenomena like plagues and comets.The Conversation

Marilina Cesario, Senior Lecturer, School of Arts, English and Languages, Queen’s University Belfast and Francis Leneghan, Associate Professor of Old English, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Friday essay: how a ‘gonzo’ press gang forged the Ned Kelly legend



Destruction of the Kelly Gang. Drawn by Thomas Carrington during the siege.
State Library of Victoria

Kerrie Davies, UNSW and Willa McDonald, Macquarie University

Washington Post publisher Philip L. Graham famously declared that journalism is the “first rough draft of history”. It’s also the first rough draft of inspiration for movies and books “based on a true story”.

Since four Victorian journalists witnessed Ned Kelly’s last stand on June 28 1880, their vivid accounts have influenced portrayals of the bushranger – from the world’s first feature film in 1906 to Peter Carey’s 2000 novel, True History of the Kelly Gang, adapted into a gender-bending punk film earlier this year.

In the hours before the Glenrowan siege, the four newspaper men – Joseph Dalgarno Melvin of The Argus, George Vesey Allen of the Melbourne Daily Telegraph, John McWhirter of The Age and illustrator Francis Thomas Dean Carrington of The Australasian Sketcher with Pen and Pencil – received a last-minute telegram to join the Special Police Train from Melbourne to confront the Kelly Gang.

The rail journey would prove to be one hell of an assignment and inspiration for Kelly retellings over the next 140 years.




Read more:
True History of the Kelly Gang review: an unheroic portrait of a violent, unhinged, colonial punk


All aboard

The journalists have a fleeting scene in the 1970 Ned Kelly film starring a pouty Mick Jagger. Two characters rush up to the train, holding huge pads of paper to signal their press credentials to the audience.

It’s a cinematic glimpse of the journalists whose historic descriptions continue to influence the Ned Kelly cultural industry that is the cornerstone of Australia’s bushranger genre.

Four reporters (plus a volunteer) huddle in the train’s press carriage in an image drawn by Carrington.
T. Carrington/SLV

The train left Melbourne late Sunday evening. Carrington, “embedded” along with the others, described the journey:

… the great speed we were going at caused the carriage to oscillate very violently … The night was intensely cold.

McWhirter’s take was somewhat more upbeat, suggesting a thrill in the cold evening air. He wrote the night was

a splendid one, the moon shining with unusual brightness whilst the sharp, frosty air caused the slightest noise in the forest beyond to be distinctly heard.

After 1am Monday, the train arrived at Benalla, where it picked up more troopers, horses and “Kelly hunter” Superintendent Francis Hare, played by Geoffrey Rush in Gregor Jordan’s 2003 adaptation of Robert Drewe’s novel, Our Sunshine.

Sometime later, the train was flagged down before Glenrowan by schoolteacher Thomas Curnow, alerting the travelling party to the dangerous Kelly gang ahead. In a follow-up article about the siege, Melvin reported the first details of the teacher’s bravery. This would become a pivotal scene in future Kelly recreations: “Kindling a light behind a red handkerchief, he improvised a danger signal”.

When the train arrived at Glenrowan station, the horses were released and bolted “pell-mell into a paddock”, wrote Carrington, as the Kellys opened fire.

A 1906 Australian-made production is thought to be the world’s first feature-length narrative movie.

Part of the story

Unhindered by modern media ethics, the journalists became actively involved in the siege. Their involvement is a nod to “gonzo journalism” practices – made famous nearly a century later by writer Hunter S. Thompson – in which journalists join the action rather than neutrally report on it.

Kelly had a love-hate relationship with the press. He once wrote:

Had I robbed, plundered, ravished and murdered everything I met, my character could not be painted blacker than it is at present, but I thank God my conscience is as clear as the snow in Peru …

Early in the siege, the journalists sheltered from the gunfire at the station, until they saw Hare bleeding from the wrist. Carrington wrote:

We plugged each end of the wound with some cotton waste and bound it up with a silk pocket handkerchief … Mr Hare again essayed to start for the hotel. He had got about fifty yards when he turned back and reeled. We ran to him and supported him to a railway carriage, and there he fainted from loss of blood … Some of the bullets from the verandah came whistling and pinging about us.

As the siege continued into the early hours, the journalists recorded the wails of the Glenrowan Inn’s matron, Ann Jones, when her son was shot, as well as the eerie tapping of Kelly’s gun on his helmet, which Carrington described as “the noise like the ring of a hammer on an anvil”.

Their interviews with released hostages revealed that gang member Joe Byrne was shot as he reached for a bottle of whiskey, a moment that, like Curnow flagging down the train, has become another key Kelly siege scene.

In one frame, drawn during the siege by Carrington, 25 prisoners are released.
State Library of Victoria

Man in the iron mask

Of all the gripping details the journalists recorded, their first descriptions of the bushranger emerging in his armour in the morning mist were what proved most inspiring to subsequent Kelly creators.

Allen wrote the helmet was “made of ploughshares stolen from the farmers around Greta”, describing the cutting blade construction, and called him “the man in the iron mask”. Carrington wrote:

Presently we noticed a very tall figure in white stalking slowly along in the direction of the hotel. There was no head visible, and in the dim light of morning, with the steam rising from the ground, it looked, for all the world, like the ghost of Hamlet’s father with no head, only a very long, thick neck.

After Kelly was shot in the legs, the writer described his collapse and his dramatic unmasking:

The figure staggered and reeled like a drunken man, and in a few moments afterwards fell near the dead timber. The spell was then broken, and we all rushed forward to see who and what our ghostly antagonist was […] the iron mask was torn off, and there, in the broad light of day, were the features of the veritable bloodthirsty Ned Kelly himself.

Precious footage of the 1906 film The Story of the Kelly Gang, the world’s first feature film, restored by Australia’s National Film and Sound Archive, shows Kelly shooting at police in his iconic armour, then collapsing by a dead trunk on the ground, surrounded by police. The scene is just as Carrington and his colleagues described it in their reports.

Perhaps the most faithful rendering of Carrington’s Kelly description is Peter Carey’s fictional witness in the preface of True History of the Kelly Gang.

Carey’s witness echoes the description of Kelly as a “creature” and describes its “headless neck”.

After he was shot in the legs, the witness recounts, Kelly “reeled and staggered like a drunken man” and fell near dead timber. The book’s preface and Melvin’s first Argus report both describe Kelly after he fell as “a wild beast brought to bay”.

Carey’s witness may be fictional, but his account draws on the journalists’ reports of witnessing Kelly’s capture. Carey credited many of his research sources to Kelly historian Ian Jones, who republished Carrington’s account, Catching the Kellys – A Personal Narrative of One who Went in the Special Train, along with its illustrations, in Ned Kelly: The Last Stand, Written and Illustrated by an Eyewitness.

‘Hunted like a dog’

The journalists helped the police strip Kelly of his armour and carry him back to the station, cut off his boots and kept him warm, all the while interviewing him as the siege continued with the remaining bushrangers inside the inn.

McWhirter remarked the bushranger was “composed”.

“I had several conversations with him, and he told me he was sick of his life, as he was hunted like a dog, and could get no rest,” Carrington wrote. He described Kelly’s clothes underneath the armour – a Crimean shirt (a coloured flannel shirt without buttons) with large black spots.

The journalists then turned their attention to the burning of the inn, featured in the background of Sidney Nolan’s 1946 painting, Glenrowan, which depicts a fallen Kelly towering in his armour over policemen and Aboriginal trackers.

Kelly was hanged in Melbourne in November 1880, a few months after the journalists’ train ride and the siege.

The journalists continued their careers, with Melvin becoming the most prominent of the four in participatory journalism. After a stint as a war correspondent, he joined the ship Helena as a crew member to investigate, undercover, the “blackbirding” trade that indentured South Pacific Islanders to the Australian cane fields.



In a 1906 review of the first feature film, The Story of the Kelly Gang, and its exhibition, The Age’s critic wrote, “if there were any imperfections in detail probably few in the hall had memories long enough to detect them”.

Yet the 1906 film was criticised by The Argus for not being faithful to the descriptions of Kelly’s “bushman dandy” dress recorded by Carrington and his colleagues on the day.

The art may be in the interpreting eye, but the scenes are from that first rough draft of history.The Conversation

Kerrie Davies, Lecturer, School of the Arts & Media, UNSW and Willa McDonald, Senior Lecturer, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

