
As it celebrates its 25th birthday, how does the Clinton administration look today?




Joseph Sohm/Shutterstock.com

Patrick Andelic, Northumbria University, Newcastle

Bill Clinton is about to mark the 25th anniversary of his inauguration as the 42nd US president. Until the night of November 8 2016, millions of voters and experts assumed that he would be celebrating that milestone as the First Gentleman in a second Clinton administration, and that when he returned, he would be welcomed by the party and country both.

On both fronts, they were wrong. Instead, Clinton’s quarter-century anniversary on January 20, 2018 is also the first anniversary of Donald Trump’s inauguration – and while Clinton was once beloved of his country, his star has apparently started to fall.

For years, Clinton was a popular figure both nationally and within the Democratic Party. His 2012 speech to the Democratic convention, backing Barack Obama’s reelection bid, was enthusiastically received both inside and outside the hall; Politico wrote that he “stated the case for the 44th president’s reelection in language that was crisper and more compelling than the case Obama so far has made for himself”.

But lately, public opinion of Clinton has soured. For the first time since he left office in 2001, more Americans view Clinton unfavourably than favourably. After peaking at 69% in 2013, his favourability rating has slumped to 45%. This trend is unusual among retired presidents. Most can count on nostalgia to sanctify even the most benighted tenure; even the once heinously unpopular George W. Bush enjoyed a favourability rating of 59% as of late 2016.

Two major events kickstarted this unflattering reassessment. First came the 2016 presidential campaign, during which both the Democratic primary and the general election saw his legacy picked over without mercy. The Hillary Clinton-Bernie Sanders duel put Bill Clinton’s policies on welfare, financial regulation, and criminal justice reform under the microscope. Meanwhile, Donald Trump lambasted the North American Free Trade Agreement (NAFTA), which Clinton signed into law in 1993, as the “worst trade deal ever made”.

More recently, the #MeToo movement has prompted a reassessment of Clinton’s personal history, particularly longstanding, unresolved and unproven allegations against him of sexual harassment, sexual assault, and even rape. New York senator Kirsten Gillibrand, a onetime protégé of Hillary Clinton, recently suggested that it would have been “appropriate” for Clinton to have resigned the presidency over the Monica Lewinsky scandal.

These are the political and personal fissures that still cleave Clinton’s legacy 25 years after he took office, and they can seem impossible to close. Was he a pseudo-liberal who enacted watered-down Republicanism or the saviour who brought the Democrats out of the wilderness? A roguish lothario or a sexual predator?

A different kind of president

Clinton was an anomaly from the off. His election marked a transition between generations. He was the first Baby Boomer president and the first not to have served in World War II. He was also a profoundly unlikely president.

1992 was not supposed to be a Democratic year. The incumbent Republican president, George H.W. Bush, was still surfing a wave of popularity following the first Gulf War. Better-known Democratic contenders declined to run, leaving an opening for an obscure Arkansas governor to win the party’s presidential nomination.

Clinton ran as a representative of the New Democrat movement, a faction that emerged in response to the party’s continued political misfortunes. The Democratic candidate had lost in every presidential election since 1976, and the New Democrats blamed the party’s leftward shift, which they claimed alienated Middle Americans. They sought to move the party to the centre by embracing market solutions and limited government, rejecting “identity politics”, and avoiding the appearance of dovishness in foreign policy.

Passing the torch: Bill Clinton takes over from George Bush Senior.
Smithsonian via Wikimedia Commons

Clinton pursued a New Democrat agenda in the White House, out of both choice and necessity (he had to contend with a Republican-controlled Congress after the 1994 midterms). This makes his legislative legacy a curious hybrid of liberal and conservative measures.

In his first year, Clinton signed a major gun control law, mandating background checks on most firearm purchases, and pushed unsuccessfully to enact sweeping healthcare reform. He also oversaw the repeal of Glass-Steagall, the law that kept commercial and investment banking separate, and signed the Defense of Marriage Act, prohibiting the federal government from recognising same-sex marriages.

But the Clinton presidency will always be defined by its most dramatic confrontation: the impeachment that resulted from the revelation that Clinton had conducted an affair with a White House intern, Monica Lewinsky. Though the conflict was ferocious, Clinton not only survived, but emerged politically strengthened. His approval ratings peaked at 73% in December 1998, just after the House of Representatives voted to impeach him. Though dismissed by many at the time as an irrelevant foible, Clinton’s relationship with Lewinsky, and the abuse of power that it entailed, are now being reevaluated.

If Bill Clinton faces a personal reckoning, what about “Clintonism”? A comparison between Bill Clinton’s two presidential campaigns and that of Hillary Clinton in 2016 reveals a Democratic Party that has been moving leftwards since the 1990s, on both economic and social issues. Though still centrist in tone, Hillary Clinton’s 2016 platform was – to quote none other than Bernie Sanders himself – the “most progressive platform in party history”.

At one time, it seemed Bill Clinton represented the future of centre-left politics; the “Third Way” philosophy he pioneered was taken up by other leaders, most notably Tony Blair and New Labour. But now his first inauguration shares an anniversary not with his wife’s, but with Donald Trump’s – and even the party he once led seems to be turning away from his legacy.

Patrick Andelic, Lecturer in American History, Northumbria University, Newcastle

This article was originally published on The Conversation. Read the original article.


The larrikin as leader: how Bob Hawke came to be one of the best (and luckiest) prime ministers



Prime Minister Bob Hawke celebrates the final cabinet meeting in Old Parliament House, 1988.
National Archives of Australia

Frank Bongiorno, Australian National University

The rise of Bob Hawke to the prime ministership now seems to have been so unstoppable, so inevitable, that it is hard to imagine Australian political history might have unfolded differently.

But what if, instead of entering the House of Representatives at the 1980 election, Hawke had retired from his leadership of the union movement into, say, a business career? What if he’d not had the willpower to give up the booze? What if he’d lacked the inclination to tone down his image as a larrikin union leader?

In that event, we might perhaps recall Hawke as a gifted union leader – probably a bit of a “character” – but one who had lacked the personal discipline to fulfil his potential. Perhaps we would remember him as epitomising those olden days when mighty trade unions imagined they were a kind of fifth estate, and when their big bosses were giants whose power rivalled, and sometimes eclipsed, that of leading politicians and capitalists. Hawke might have justly been recalled as a symbol of the pride before the fall.

Instead, Hawke is recalled as one of our greatest prime ministers and certainly among the most influential. It is a strength of the ABC’s upcoming two-part documentary, Hawke: The Larrikin and the Leader, narrated by Richard Roxburgh, that it evokes the industrial world that gave Hawke both a long and rich apprenticeship in public life and a remarkable celebrity status. Some of the 1960s and 1970s footage is marvellous. You can almost smell the beer and Brylcreem.

Bob Hawke is still frequently called on to scull a beer – often at the cricket.
Reuters/David Gray

But we are also reminded of the personal transformation that was needed before Hawke could be seriously considered for national political leadership. As the pollster Rod Cameron comments in the program, the public might have been willing to tolerate, while frowning on, a womanising prime minister, but they would not take a drunkard.

The larrikin side of the Hawke personality is now a popular favourite at events, where the octogenarian acquiesces to the urgings of an adoring public by sculling a beer – a reprise of his record-breaking student effort at Oxford. But the beer-swilling larrikin, who would still be there at closing time in the bar of Melbourne’s John Curtin Hotel, had to be placed in the shade in the 1980s.

The reformed larrikin, of course, is a familiar type in Australian culture, most famously embodied in Bill, the protagonist of C.J. Dennis’s The Songs of a Sentimental Bloke. Bill gives up stoushing to become a properly domesticated husband and father, “Livin’ an’ lovin”. Hawke did a lot of both. The program’s discussion of his philandering is more coy than its handling of his drinking, but the expression on Hawke government minister Susan Ryan’s face when discussing Hawke’s relationship with women paints a thousand words.

The treatment of Hawke in this series is rather generous. Hawke was himself interviewed, and all the talking heads clearly admire him to a greater or lesser extent – mainly greater. There are occasional hints of a darker side. Graham Richardson says Hawke did some pretty appalling things under the influence of drink, but will not tell us what; only that Hawke would not have made it to the prime ministership in the age of the internet and the mobile phone.

Hawke’s 1971 Victorian Father of the Year award is treated ironically. The news footage has Hawke looking decidedly sheepish; the long-suffering Hazel privately wondered whether the judges had been on opium. Neal Blewett, a minister in Hawke’s government but a Bill Hayden supporter, thought Hawke and the party’s brutal treatment of Hayden on the eve of the 1983 election did long-term damage to the Labor Party’s morality.

The documentary does bring together many of the threads that help explain Hawke’s success as a politician. There was the sense of destiny, instilled in this Congregational minister’s son from childhood. His mother claimed that her Bible was forever marvellously opening at Isaiah 9:6: “For unto us a child is born, unto us a son is given: and the government shall be upon his shoulder”.

We are reminded of Hawke’s love affair with the Australian people, the “almost mystical bond” with voters. During that golden period of about 18 months after the 1983 election – as the drought broke, the recession ended and Australia II triumphed in the America’s Cup – Hawke was lucky, but he also knew how to exploit the brightening national mood to the full. Hawke did not just ride the wave of national pride and optimism during what Jim Davidson has aptly called the “Age of the Winged Keel”. He embodied it.

For a time at least. The 1984 election, in which Labor lost ground, took off much of the shine. Then there was the “banana republic” crisis of 1986, but the documentary does not pause long over economic policy. It does recognise that Hawke was immensely lucky in the depth and breadth of talent in his ministries, but that he was also skilled in bringing out the best in those he worked with. His ego was colossal, but he had the wisdom to share power.

There would be more election victories – in 1987 and 1990 – but things were never the same once his relationship with his younger treasurer and natural successor, Paul Keating, degenerated into acrimony. Yet, to the very end, as his approval rating plunged during “the recession we had to have”, Hawke clung to the idea that his relationship with voters was special. Like so many others, he failed to grasp the opportunity to leave office on his own terms.

Hawke: The Larrikin and the Leader moves along rather breezily. The episodes in Hawke’s career that reveal his attachment to high moral principle, such as his hostility to racism, or those achievements that rhyme with the present preoccupations of progressive politics – environmental protection and Medicare – receive loving attention. Hawke’s failures are not ignored, but get more superficial treatment. An exception is the abandonment of national Aboriginal land rights legislation and the proposal for a treaty, which figures in a melancholy few minutes towards the end of the second episode. But Hawke always has good intentions.

This is a nostalgic program that begins by noting that Australians today “have never been so distrusting of politicians. But there was a time when things were different”. So, how did we get from there to here? On this question, Hawke: The Larrikin and the Leader is silent.

But it may be that for all of Hawke’s achievements, the era’s narrowing of political possibilities – the equation of economic efficiency with good government, and of national productivity and competitiveness with national achievement – planted the seeds of both later economic success and political decay.

Frank Bongiorno, Professor of History, ANU College of Arts and Social Sciences, Australian National University

This article was originally published on The Conversation. Read the original article.


The Assyrian City of Tushan


The link below is to an article examining the archaeological work being done on the Assyrian city of Tushan, which is soon to be lost to a dam in Turkey.

For more visit:
https://www.theguardian.com/science/2018/feb/07/archaeologists-discovered-an-ancient-assyrian-city-only-to-lose-it-again


Relics of colonialism: the Whitlam dismissal and the fight over the Palace letters


A reversion to imperial imbalance in the British-Australian relationship began with the Whitlam government’s election and ended with its dismissal.
AAP/NAA

Jenny Hocking, Monash University

This piece is republished with permission from Commonwealth Now, the 59th edition of Griffith Review. Its articles are a little longer than most published on The Conversation, presenting in-depth analysis of the relevance of the Commonwealth of Nations in today’s geopolitical landscape.


We will make better decisions on all the great issues of the day and for the century to come, if we better understand the past. – Gough Whitlam

The celebration of the “Queen’s birthday” in Australia is a perfect reflection of a fading, remnant, relationship. Commemorated in the Australian states as a public holiday on three different days – none of which is her birthday – and honouring an event of dubious significance, the “Queen’s birthday” reminds us that, despite our national independence, the symbolic ties of colonial deference remain.

The “Queen’s birthday” may seem a fitting if absurd genuflection to a powerless relic of a former time, and in itself confirmation that the Queen no longer has a role in post-dominion matters. But things are not always as they seem.

Neither sovereignty nor national independence flowed neatly from federation. The Commonwealth of Australia Constitution Act created Australia as a federation of the former colonies and a constitutional monarchy, with all the tension inherent in that term – between a democratic government chosen by the people and a monarchical head of state whose ultimate constitutional power stemmed solely from inherited aristocratic assumption and unchallenged legal privilege.

The gradual devolution of Australian autonomy appeared assured at the Imperial Conference of 1926. This affirmed the relationship between Great Britain and its dominions as being that of:

… autonomous communities within the British Empire, equal in status, in no way subordinate one to another in any aspect of their domestic or external affairs, though united by a common allegiance to the Crown.

The critical qualifier in this proclamation of an imperial gift of national autonomy, equality and independence is this: “though united by a common allegiance to the Crown”.

The imperial assertion of continued dominion allegiance to the Crown was a stark counterpoint to the proclaimed national autonomy. Indeed, it undermined the very autonomy and equality of nations the conference so proudly affirmed.

Five years after the Imperial Conference, the Statute of Westminster gave statutory expression to the principles of equality established at the Imperial Conference and vested full legislative authority and independence in the “dominions”.

Nevertheless, it remained the case that some bills would continue to require the Queen’s assent to be passed into law. The Statute of Westminster also granted dominion ministers the right of direct access to the sovereign. This access had previously been available only indirectly through UK ministers and reflected their then incomplete post-colonial status.

The ‘Queen’s birthday’ is celebrated in Australia on three different days – none of which is her actual birthday.
AAP/Jodie Richter

A fight for independence

Yet, in reality, neither of these critical junctures in the evolving British–Australian relationship created the clear-cut path to national independence that these paternalistic statements of ceded imperial power might suggest.

Although the dominions were entitled to separate representation at the League of Nations and subsequently the United Nations, as made clear at the Imperial Conference, the cultural expectation of continued British primacy and Australian dominion subservience remained.

It can be seen in the British attitude toward the efforts of Australia’s minister for external affairs, H.V. Evatt, to champion the role of the smaller nations against the Great Powers at the San Francisco conference in 1945 that established the ground rules for the UN.

Evatt’s insistence that Australia would take its own independent position as an autonomous nation in these high-level international negotiations infuriated the British representatives at the fledgling discussions over the UN.

At a preliminary meeting of Commonwealth nations in London, British Prime Minister Winston Churchill had bemoaned Evatt’s defiant independent stance.

Describing the Commonwealth as “the third of the Great Powers”, Churchill argued that the Commonwealth could only maintain its influence by ensuring unity among members and speaking with one voice – and that one voice of course would be Britain’s, not Australia’s.

These expectations of British administrative, legal and political authority, based more on the established imperial mindset, behaviours and networks than on any exercise of formal political control, remained powerful obstacles to change throughout the 20th century.

The undercurrent of lasting imperial privilege and hierarchy proved to be a major obstacle in ending the complex web of residual colonial ties across legal, constitutional and political domains.

In particular, continued allegiance to the British Crown as the imperial condition of dominion nationhood was a political oxymoron. It cast an impossible constraint on the form of national autonomy, while Australian allegiance to the British Crown was superimposed on the representative model of parliamentary democracy.

The fundamental contradiction this established at the heart of the Australian polity remained largely dormant during the long years under the avowed Anglophile prime minister Sir Robert Menzies, until inevitably rupturing along the faultlines of divided allegiance – to the British Crown on the one hand and to Australian democratic governance on the other – with the 1972 election of the Whitlam Labor government.

Australia’s continued allegiance to the British Crown above all else was a theme of the Menzies/Churchill years.
Keystone-France/Getty Images

Whitlam tries to loosen the ties

Gough Whitlam came to office with a core policy agenda of ending the residual colonial ties between Australia and Britain.

Although largely seen as ceremonial and symbolic, these colonial links were to be immensely significant in the trajectory of the Whitlam government and its dismissal three years later.

Whitlam moved rapidly on some of these. He ended the British honours system and introduced Australian honours, introduced an Australian national anthem to replace God Save the Queen, changed the Queen’s title by removing arcane references to God and Empire, and, in 1974, removed the words “God save the Queen” from the official proclamation dissolving parliament.

Eighteen months after his second election victory in the double dissolution of May 1974, Whitlam was peremptorily removed from office by the Queen’s representative in Australia, Governor-General Sir John Kerr, without warning and despite Whitlam maintaining a clear majority in the House of Representatives at all times.

Concerns were immediately raised over the possible role of Buckingham Palace and British authorities in this unprecedented vice-regal action. The suspicion that the Queen knew more about Kerr’s intentions than has ever been publicly acknowledged has grown in recent years with the Queen’s embargo of her correspondence with Kerr at the time of Whitlam’s dismissal.

Of all the residual colonial ties, the one that Whitlam found particularly abhorrent, and was determined to sever, was the right of appeal from some state supreme courts to the Privy Council. In his view:

No people with an ounce of self-respect would allow decisions made by their own judges … to be overruled by judges sitting in another country.

Whitlam described this as an “absurd” and “ludicrous” situation. Yet his efforts to end remaining state Privy Council appeals were stymied at every point.

Whitlam’s attorney-general, Lionel Murphy, reported he had struck nothing but intransigence, non-co-operation and obstruction from the British authorities in the government’s moves to implement this core policy.

Returning from his first visit to England as prime minister in 1973, Whitlam was clearly frustrated by the UK’s reluctance to end colonial ties when he told reporters, more in hope than confidence:

We are a separate country from Britain. We are an entirely independent country.

A tense meeting with Edward Heath, the British Conservative prime minister, the following year saw little change. An exasperated Whitlam again declared that:

All these colonial relics are incompatible with the position of Australia as a separate, sovereign country.

When the Whitlam government was removed from office by Kerr, three years later, these state-based appeals to the Privy Council remained, unchanged.

Gough Whitlam aimed to end the residual colonial ties between Australia and Britain.
AAP/Alan Porritt

What we know about the Palace’s role

The archival records of the British Foreign and Commonwealth Office (FCO) covering these official visits are at once illuminating and disturbing. They show a troubling lack of respect for such a significant engagement with a senior member of a new Australian government.

Murphy’s visit was, after all, the first official visit by any of Whitlam’s cabinet to England. And yet, even before his arrival, the FCO files show that British authorities viewed Murphy, and indeed the Whitlam government itself, as a troublesome interloper whose presence they barely tolerated and whose policy concerns they did not share.

More than mere intransigence, or even simply a refusal to accept the legitimacy of the Whitlam government, these archival records disclose profound breaches of confidence, secrecy and even deception of Whitlam by the FCO, the British High Commission in Canberra, and the Queen’s private secretary Sir Martin Charteris. They show a partisan pattern of disrespect for and undermining of the new Labor government.

Most significantly, far from any equality of national status, “in no way subordinate one to another” professed at the Imperial Conference, these files reveal the FCO’s brazen presumption – “our right as the colonial power” – to deceive the prime minister, to liaise in secret with the conservative states and, ultimately, to intervene in Australian politics to prevent the government holding a half-Senate election to resolve a stalemate in the Senate over the passage of supply bills.

From October 16, 1975, opposition senators refused to vote on the government’s supply bills, which provided the annual funds for government expenditure. In the new political vernacular, supply was “blocked”.

Calling the half-Senate election, which was then due, had been Whitlam’s resolution to this unprecedented situation since the day supply was first blocked. The Labor caucus had voted unanimously in support of Whitlam calling the half-Senate election “at a time of his choosing”.

The FCO files document a rapid breakdown and reversion to imperial imbalance in the British-Australian administrative relationship that began with the election of the Whitlam government and ended with its dismissal. They reveal a deep suspicion of the new government that quickly led to secrecy, deception and to routine breaches of the highest levels of confidentiality by both the British prime minister’s office and the Palace throughout the terms of the Whitlam government.

Most alarming is that the FCO files also reveal overt British involvement in Australian politics in the weeks before the dismissal – specifically with the half-Senate election due at that time and which Whitlam was to call on November 11, 1975, to end the blocking of supply in the Senate.

Kerr’s papers in the National Archives of Australia provided the first glimpse of the Palace’s role in the dismissal.

Although there are some who continue to claim that the Palace was not involved, this has increasingly become more a matter of faith than fact. Revelations from Kerr’s papers, the Palace letters, and the FCO’s files have rendered that position untenable.

We now know that Charteris wrote to Kerr in October 1975 to discuss action the Palace would take if Whitlam became aware of Kerr’s plans to remove him from office and sought to recall him as governor-general. Charteris told Kerr that the Palace would, in that instance, “try to delay things”.

This communication between the Queen’s private secretary and the governor-general over the position of the governor-general himself is politically and constitutionally shocking. It reveals the Palace to be in deep intrigue with Kerr, to protect his tenure as governor-general, in the weeks before the dismissal – unknown to Whitlam.

It was also a breathtaking rupture of the vice-regal relationship. At the heart of this relationship in a constitutional monarchy is that the appointment of the governor-general is made by the Queen on the advice of the Australian prime minister alone. This has certainly been the case since 1930, when King George V accepted Labor prime minister James Scullin’s advice to appoint Sir Isaac Isaacs as governor-general.

Despite being vehemently opposed to Isaacs’ appointment, the King told Scullin:

… being a constitutional monarch I must, Mr Scullin, accept your advice.

For the Queen’s private secretary to intervene with Kerr himself on the question of the governor-general’s tenure was a staggering breach of that relationship.

From this point on, knowing that Kerr was considering dismissing Whitlam and concerned that Whitlam might then recall him, and having agreed to a course of action in order to protect Kerr’s position should Whitlam do so, the Palace was already involved in the dismissal.

Buckingham Palace was in deep intrigue with Sir John Kerr in the weeks before he dismissed the Whitlam government.
EPA/Will Oliver

The fight over the Palace letters

The letters between Charteris and Kerr are part of the so-called “Palace letters”. This is the secret correspondence between the governor-general and the Queen, her private secretary, and Prince Charles, in the weeks before the dismissal.

Although these letters are among Kerr’s papers and held by the National Archives in Canberra, they are closed to us. This is because the Palace letters are considered “personal” and not official “Commonwealth” records. This is despite Kerr’s own description of them as his “duty” as governor-general, and despite their obvious significance to our history.

The Palace letters are embargoed until 2027, “at her Majesty the Queen’s instructions”, with the Queen’s private secretary retaining an indefinite veto over their release even after this date. It is quite possible, then, that they will never be released.

The Palace letters are extraordinarily significant historical documents. They are contemporaneous real-time communications between the Queen and her representative in Australia, written at a time of great political drama, and are a vital part of our national historical record.

At the heart of this still-secret vice-regal correspondence was the prospect of the dismissal of the Whitlam government, which Kerr had already raised in September 1975 with Prince Charles and Charteris.

The designation of the Queen’s correspondence with her representative in Australia as “personal” means they do not come under Australia’s Archives Act, which relates only to official “Commonwealth records”.

And so, in a rather neat catch-22, the decision by the National Archives to deny access to the correspondence cannot be appealed to the Administrative Appeals Tribunal.

There is only one way to challenge this decision: through a Federal Court action, which is a complex, expensive and onerous proposition. This is clearly an area in need of legislative reform to ensure a viable appeal process is in place for records described as “personal” in this way.

In an effort to secure the release of the Palace letters, I launched an action against the National Archives in the Federal Court last year, with a legal team working on a pro-bono basis and supported by a crowdfunding campaign. This concluded in September 2017; the decision is anticipated within months.

At the heart of the case is this central question of just what constitutes “personal” as opposed to “Commonwealth” records. Lead barrister Antony Whitlam (Gough Whitlam’s eldest son) argued to the court that “personal records” would be records covering matters “unrelated to the performance of Sir John’s official duties”, and that this could not extend to correspondence between the Queen and her representative in Australia prior to the dismissal. He said:

It cannot seriously be suggested that there was a personal relationship between the Queen and Sir John Kerr.

It is difficult to see, from common sense alone, how the correspondence between the Queen and her representative in Australia could in any way be regarded as “personal”. The precise legal points on which the question of the Palace letters’ status will turn – whether they are personal or Commonwealth records – are a different matter.

The case itself has brought to light a significant amount of new historical and contemporary material on the relationship between the Queen and the governor-general and its implications for Australian national sovereignty.

One thing that can be said is that from the moment this case came before the court, the question of the release of the Palace letters changed irrevocably. Their status and their release will now be determined by an Australian court, according to Australian law – and not as a quasi-imperial grant of release by the Queen.

This alone is an historic and important outcome that ends one of the few remaining “colonial relics” that continue to deny us access to historical documents relating to the Queen about a historical episode also relating to the Queen.

The continued embargo by the Queen of the Palace letters and the revelations from the British archives of the FCO all point to the lingering imperial power that comes from an incomplete severance of colonial ties. They show above all that the residues of colonialism, the “imperial aftermath” in Whitlam’s words, can never be fully extinguished until Australia becomes a fully independent republic.

It is surely absurd that in the 21st century we can still see the Australian prime minister giving an Australian knighthood to the Queen’s consort, Prince Philip, and that the governor-general, the Queen’s representative in Australia, can still dismiss an elected government on the basis of claimed “reserve powers” derived from, and in the name of, the Queen.

As an independent autonomous nation, Australia has a right to know its own history, including and in particular the records pointing to British involvement in that history, if we are to ensure such a profound rupture in our political structures and denial of our national sovereignty cannot happen again.

This troubling time in our history and in the Australian–British relationship is also critical to our decisions as we recommence the debate over the inevitable move toward a republic.

The fundamental issues to be confronted in that debate will relate absolutely to the events surrounding the dismissal of the Whitlam government: how to protect the institutions of democratic parliamentary governance, how to secure the formation of government in the House of Representatives, and what the powers of the new, Australian, head of state should be.


You can read other essays from Griffith Review’s latest edition here.

Jenny Hocking, Emeritus Professor, Monash University

This article was originally published on The Conversation. Read the original article.


How the Australian Constitution, and its custodians, ended up so wrong on dual citizenship



Members of the Australasian Federation Conference, 1890.
Parliamentary Education Office

Hal Colebatch, UNSW

For those who take only an ordinary interest in politics, the drama over citizenship and eligibility to be a member of parliament has been puzzling. Surely these people looked at the rule book, the Australian Constitution, before deciding to stand for election? Why were their nominations accepted if they weren’t qualified?

Well, it’s not quite that simple. The constitution is not the rule book, but the record of a deal between the leaders of six self-governing colonies to form a federation; it covers what they wanted to cover, and it means what relevant people make it mean.

It doesn’t say that there has to be a prime minister, but it does say that “there shall be an Inter-State Commission”. That we do have a prime minister and don’t have an inter-state commission reflects the way relevant people have used the words in the constitution.

What did the constitution writers think they were doing?

The constitution was put together by many hands over ten years. The qualifications for candidature were drafted by the Tasmanian attorney-general, Andrew Inglis Clark, in a straightforward and inclusive way: at least 21 years old, resident of the electorate, and a subject of the Queen (which would have included New Zealanders, Canadians and Britons).




Read more: If High Court decides against ministers with dual citizenship, could their decisions in office be challenged?


But Samuel Griffith, the Queensland prime minister (as colonial premiers were then styled), wanted a section on disqualification. This would cover felony, bankruptcy and:

any person who has taken an oath or made a declaration or affirmation of allegiance to a Foreign Power or done any act whereby he becomes a subject or citizen … of a Foreign Power.

So there were separate sections on qualifications and disqualifications, from different sources and reflecting different values, and they took this form in the successive drafts of the constitution.

In the smoke-filled room: the drafting committee

The final session of the constitutional convention was held in Melbourne early in 1898. There was no further discussion of what became the now-infamous section 44, and a drafting committee took over to prepare a final draft.

Edmund Barton – soon to become Australia’s first prime minister – was the chair and dominant figure. He insisted on working till 4 or 5am, even though the other two members of the committee had gone to bed and only Robert Garran, the secretary, was left to maintain the illusion of a committee.

Sir Edmund Barton, who snuck in 400 amendments to the constitution at the last minute.
Parliamentary Education Office

After four days of drafting, Barton presented the convention, on its second-last day, with 400 amendments. He proposed a three-hour break for the delegates to study them, after which they could be put to the vote en bloc.

Barton assured the convention that there was only one amendment of substance – to section 44(ii). What he did not say was that section 44(i) had been completely rewritten, changing it from an active voice (“done any act whereby”) to a passive voice (“is a subject or citizen … or is entitled to”).

No attention was drawn to this change, there was no explanation of it, and there was no time for debate on any clause unless someone objected to it. The constitutional text that proved so significant more than a century later was a last-minute change, drafted in private and accepted out of weariness.

In his history of the convention, J.A. La Nauze points out that, by this stage, the delegates “had had enough”, but muses:

it may one day interest a curious lawyer to inquire whether judicial review has lingered with significant consequences on new words approved on trust and intended … merely ‘to put the wishes of the convention in more complete and concise form’.

As it turned out, it interested more than the curious lawyer, and created a problem which has yet to be adequately managed.

Appealing to the umpire?

The constitution was rather unclear about how these provisions would be enforced. It said both that questions about qualification could be settled by each house, and that “any person” who believed that an elected representative was disqualified by section 44 could sue them in “any court of competent jurisdiction”.

In any case, there was little call for either until the High Court decided in 1999 that the UK was a foreign power.

Even then it refused to hear a case calling for Tony Abbott and Julia Gillard to produce evidence they had renounced their UK citizenship, on the basis that they had declared that they were qualified, and so the court should presume that they were. To do otherwise would be a vexation and an abuse of the court’s time.

But when the court did deign to interest itself in the matter, it took the traditional High Court view: it was concerned not with the problem, or with what the writers of the constitution were trying to do, but only with the meaning that a black-letter lawyer could squeeze from these words, irrespective of the impact on the governing of Australia.

Where does this leave us?

The situation now is that the qualifications for candidature for the Australian parliament are set by the parliament, but the disqualifications are largely set by foreign governments via the High Court. This diminishes the ability of electorates to choose the representative they want (though, when given the chance, electorates show what they think of the High Court’s action by returning the ousted members in the ensuing byelection).




Read more: Grattan on Friday: Voters just want citizenship crisis fixed – but it isn’t that easy


And the High Court’s escapade in the china shop is not yet over, for it has yet to rule on the disqualification of those who are “entitled to” foreign citizenship, even if they have not applied for it. If the court applied the same logic that it has used in the cases already decided, this would disqualify not only any Jew, but also anyone with a Jewish parent, grandparent or spouse, all of whom are entitled to Israeli citizenship under the Israeli Law of Return.

The best course would be to start with recognising the problem, rather than searching for a preferred solution. In contemporary Australia, identities are often complex, and citizenship entitlements may be multiple and overlapping. How these are to be recognised in the qualifications for candidature demands a period of public discussion culminating in political action.

The only way we could get this is to take the matter out of the hands of the High Court and foreign governments and return the task of defining qualifications and disqualifications for candidature to parliament. This could be done by adding to section 44 the phrase “until the parliament otherwise provides”, which is used in section 30 on qualifications, and at a number of other points in the constitution.

This would be a logical and constitutional response to the political problem that has landed on us. If the five main parties in the parliament (all of which have had their parliamentary representation threatened by the High Court’s actions) supported a referendum to achieve this change, it would probably be carried.

The voters, too, as they showed in New England and Bennelong, have had enough. They want the political leaders to lead.

Hal Colebatch, Visiting Professorial Fellow, UNSW

This article was originally published on The Conversation. Read the original article.


Cool Britannia: TV drama doesn’t capture the story being unearthed of the Roman invasion



Kelly Reilly as the Briton warrior Kerra in Britannia.
IMDB

Craig Barker, University of Sydney

The new TV series Britannia, now airing (produced for Sky Atlantic in the UK and screening on Foxtel’s Showcase in Australia), is undoubtedly influenced by the scale and success of Game of Thrones. Created by acclaimed English playwright Jez Butterworth, the nine-part series is an ambitious exploration of a profoundly important era in British history.

It tells the story of the second Roman invasion of Britain in 43 AD, when the Roman fleet under the control of the ruthless general Aulus Plautius (a real historical figure, played in the series by David Morrissey) lands on the coast of the near-mythical island. It is the story, too, of tribal Celtic Britain and its Machiavellian interplay with the new Roman arrivals.

Trailer for season one of Britannia.

Although reviews to date have been mixed, the series is a bold undertaking exploring the clash of cultures. It is the second significant pop cultural exploration of key historical moments in the relationship between Britain and Europe since the Brexit vote. (Christopher Nolan’s film Dunkirk was the first). It also comes at a time when the relationship between historical fact and fiction is being hotly debated, particularly in relation to the Netflix series The Crown.




Read more: Britannia, Druids and the surprisingly modern origins of myths


While the series does suffer from various historical inaccuracies, Butterworth has said he was more interested in exploring the drama around two religions, Roman and druidic, coming together.

Most of what we think we know about the Britons’ religion was proposed by later writers; no contemporary accounts survive. In Britannia, the role of the druids, the Celtic priests, within society is presented in an otherworldly, trance-like way. The main druid character, Veran (played by MacKenzie Crook), feels like a hallucination.

A druidic ceremony performed in Britannia.
Sky Vision

This renewed popular cultural interest in the establishment of Roman Britain and the Celtic response to the Roman arrival coincides with an exciting and profound period of archaeological discovery and historical reinterpretation of this historical event.

A nod to Caesar

Plautius’s invasion of Britain, on the orders of the Emperor Claudius, established Roman rule in much of Britain for nearly 400 years. But it was not the first time Rome came into contact with the tribes of the island.

Julius Caesar’s invasions of Britain in 55 BC and 54 BC are mentioned briefly in an opening title of the TV series, which suggests that fear of the Celts drove the Romans to abandon ideas of permanent occupation of the island. The real reason is far more complex. Caesar’s account provided the Romans with their first description of the island and its inter-tribal fighting.

David Morrissey as Aulus Plautius.
Sky Vision

In 2016 and 2017, excavations by the University of Leicester demonstrated that Caesar’s 54 BC fleet was blown off course and landed on the sandy shores of Pegwell Bay in Kent.

Although Caesar withdrew without leaving a military force in Britain, he established a series of client relationships with British royal families in the south-east. These allegiances may have assisted the later Claudian invasion. At the very least, they mean the arrival of Roman forces wasn’t the surprise it is presented as in Britannia.

The Claudian Invasion

The real general Aulus Plautius was highly regarded in Rome, but like his fictional counterpart he struggled with rumblings among his soldiers. The Roman historian Cassius Dio writes that he “had difficulty in inducing his army to advance beyond Gaul. For the soldiers were indignant at the thought of carrying on a campaign outside the limits of the known world”.




Read more: Guide to the Classics: Suetonius’s The Twelve Caesars explores vice and virtue in ancient Rome


Why did the emperor Claudius choose to invade Britain? It seems most likely that, like a number of modern political leaders, the invasion allowed Claudius to distract public attention from domestic political issues.

No contemporary account survives of the real invasion. It seems most likely that, as depicted in the show, Plautius landed 40,000 men on the coast of Kent and attempted to negotiate truces and restore Roman-friendly monarchs. These efforts ultimately failed. The Romans then undertook a campaign of “shock and awe”, even bringing war elephants to the fight, and established their first provincial capital at Camulodunum (today’s Colchester).

A distinctive Roman brick wall excavated at Camulodunum (Colchester)
Wikicommons

A Roman view

Most of what is known about the Britons, including the dubious druids, was written not by them but by the Romans. The main literary source was written in the late first century AD by Tacitus, whose father-in-law, Agricola, served as governor of Roman Britain (after the events of the series).

Drawing of Gaius Cornelius Tacitus by an unknown artist, said to be based on an ancient bust.
Wikicommons

The traditional Tacitean historical narrative is now being questioned by both archaeological evidence and new historical interpretation. These point out that, in creating a biography of Agricola, Tacitus was presenting a heroic figure, freed from the moral corruption of Rome. In this narrative, he needed worthy adversaries in the form of rebels such as the British chief Calgacus.

It was obviously a complex relationship between native and conqueror, politically and culturally. Mary Beard has famously described Britain as “Rome’s Afghanistan”, an endless struggle to win hearts and minds in the four centuries that followed the arrival of Plautius’s forces. Hers is a provocative but valid description.

The brawling Britons

In the first century AD, Britain was politically fragmented, with a series of constant wars between various tribes (at least according to Roman sources). In the series, tribal warfare is represented by the fictitious Cantii (led by King Pellenor, played by Ian McDiarmid) and Regni tribes (led by Queen Antedia, played by Zoë Wanamaker).




Read more: From Elizabeth I to high fashion, the tales behind Game of Thrones’ costumes


This plotline reflects the Roman literary trope of the brawling Britons, but it appears the reality of the internal political structure was far more complex than Roman writers could ever comprehend. Many of the political groups of the south-east of England had already adopted some Mediterranean cultural traits before the invasion, through trade and other contacts with the Roman world. New archaeological excavations at Silchester, for example, demonstrate urban planning during the Iron Age, prior to Roman occupation.

British society certainly seems to have been more egalitarian than Roman, with both men and women holding political and military power. The character of the warrior Kerra (played by Kelly Reilly) appears to be presented as some sort of precursor to a figure like Boudica, a Celtic queen who would lead a rebellion within decades.

What about Stonehenge?

Trailers for forthcoming episodes suggest Stonehenge will play a significant role in Britannia’s plot. (It has been filmed using a scale replica constructed in the Czech Republic.) Meanwhile, the real Stonehenge has undergone a series of recent excavations supervised by Mike Parker Pearson, which have revolutionised the way we think about the site, and destroyed many myths as well.

Again, ultimately, very little is known about the use of Stonehenge at the time of the invasion, nor about Roman conceptions of the structure. (Despite this, modern druids have associated themselves with the structure.)

‘Druid’ summer solstice service at Stonehenge 1956.
Getty Images

Irrespective of the quality and historical accuracy of Britannia, the series is a dynamic presentation of an important period of British history. But the real story being slowly teased out by the archaeologist’s trowel is every bit as dynamic as the fictional one, and in many ways more dramatic and exciting.

Craig Barker, Education Manager, Sydney University Museums, University of Sydney

This article was originally published on The Conversation. Read the original article.


Explainer: the evidence for the Tasmanian genocide



The painting Group of Natives of Tasmania, 1859, by Robert Dowling.
Wikimedia

Kristyn Harman, University of Tasmania

At a public meeting in Hobart in the late 1830s, Solicitor-General Alfred Stephen, later Chief Justice of New South Wales, shared with the assembled crowd his solution for dealing with “the Aboriginal problem”. If the colony could not protect its convict servants from Aboriginal attack “without extermination”, said Stephen, “then I say boldly and broadly exterminate!”

Voluminous written and archaeological records and oral histories provide irrefutable proof that colonial wars were fought on Australian soil between British colonists and Aboriginal people. More controversially, surviving evidence indicates the British enacted genocidal policies and practices – the intentional destruction of a people and their culture.

When lawyer Raphael Lemkin formulated the idea of “genocide” after the second world war, he included Tasmania as a case study in his history of the concept. Lemkin drew heavily on James Bonwick’s 1870 book, The Last of the Tasmanians, to engage with the island’s violent colonial past.

An image of Wooreddy by English artist Benjamin Duterrau.
Wikimedia

Curiously, books published before and since Bonwick’s have stuck to a master narrative crafted during and immediately after the Tasmanian conflict. This held that the implementation and subsequent failure of conciliatory policies were the ultimate cause of the destruction of the majority of Tasmanian Aboriginal people. The effect of this narrative was to play down the culpability of the government and senior colonists.

More recent works have challenged this narrative. In his 2014 book, The Last Man: A British Genocide in Tasmania, Professor Tom Lawson made a compelling case for the use of the word “genocide” in the context of Tasmania’s colonial war in the 1820s and early 1830s, a time when the island was called Van Diemen’s Land. As Lawson writes, in the colony’s early decades, “extermination” and “extirpation” were words used by colonists when discussing the devastating consequences of the colonial invasion for the island’s Aboriginal inhabitants.

Nick Brodie’s 2017 book, The Vandemonian War: The Secret History of Britain’s Tasmanian Invasion, argues that the war was a highly orchestrated, yet deliberately downplayed, series of campaigns to efface Tasmanian Aboriginal people from their country. Brodie’s book makes extensive use of over 1,000 pages handwritten by Colonel George Arthur, revealing exactly how he prosecuted the Vandemonian War. (Disclaimer: Nick Brodie is my partner and occasional research collaborator.)

Arthur’s correspondence tells all

In his dual roles as lieutenant-governor of the colony and colonel commanding the military, Arthur directed a series of offensives against Aboriginal people.

Imperial soldiers, paramilitaries and volunteer parties were regularly deployed. Some parties were assigned Aboriginal auxiliaries as guides. Arthur’s war eventually included the largest ground offensive in Australian colonial history.

The last four Tasmanian Aboriginal captives at Oyster Cove Aboriginal Station. This photo was taken in the 1860s.
Wikimedia

Shortly after he arrived in the colony in 1824, Arthur began stockpiling weapons. He blurred the lines between military men and civilians. Military officers and soldiers were given civil powers.

Former soldiers were encouraged to settle in Van Diemen’s Land and to help quell Aboriginal resistance. Settlers were issued with hundreds of guns and thousands of rounds of ammunition. Convicts who fought against Aboriginal people were rewarded.

Military and civilian parties scoured the island for Aboriginal people, taking some prisoner and injuring or killing others. They destroyed Aboriginal campsites and caches of weapons.

Arthur knew his war parties were killing their opponents, but continued to send them out regardless. He feigned ignorance after John Batman, leader of one of the parties and later founding father of Melbourne, fatally shot two injured Aboriginal prisoners in his custody.

Colonial strategy became more severe over time. Bounties were introduced at £5 for an adult Aboriginal person and £2 per child to encourage colonists to bring in live captives. These payments were later extended to cover not only the living but also the dead.

Arthur’s regime leaked stories to the press to manage the public’s understanding of the war. It publicly announced the retirement of parties that it continued to support, and selectively recorded evidence given to an investigative committee.

As the war progressed, Arthur ordered men to conduct many covert operations. While there were some expressions of empathy for Aboriginal people, many reports painted them as aggressors, thereby justifying government action and even secrecy.

Ultimately, a couple of thousand soldiers, settlers and convicts were recruited for a general movement against Aboriginal people in late 1830. During this major campaign, Arthur rode his horse up and down the lines. He personally oversaw the operation. He sent dedicated skirmishing parties out in front of “the line”. Surviving records do not reveal how many casualties may have resulted.

Map of Indigenous Tasmania.
Wikimedia

In the latter stages of the war, Arthur sent George Augustus Robinson to carry out so-called diplomatic “friendly missions” to Aboriginal people. While these were taking place, Arthur continued to orchestrate military and paramilitary operations, including some conducted by nominally diplomatic operatives.

Eventually, Arthur declared that details of the war had to become a military secret. He then continued with a series of major military offensives against the island’s remaining Aboriginal population.

By the mid-1830s almost all of Tasmania’s surviving Aboriginal inhabitants lived on small islands in Bass Strait, some with sealers and others at the Aboriginal Establishment on Flinders Island. From an Aboriginal population numbering somewhere in the thousands on the eve of invasion, within a generation just a few dozen remained.

Whereas the master narrative framed this state of affairs as proof of a benign government caring for unfortunate victims of circumstance, the colony’s archives reveal that Aboriginal people were removed from their ancient homelands by means fair and foul. This was the intent of the government, revealed by its actions and instructions and obfuscations. In the language of the day the Aboriginal Tasmanians had been deliberately, knowingly and wilfully extirpated. Today we could call it genocide.

Learning from New Zealand

As well as legacies of death and dispossession, the colony left a legacy of deliberate forgetting. Our neighbours across the Tasman Sea acknowledge and now formally commemorate the 19th-century New Zealand wars. The first Rā Maumahara, a national day of remembrance, was held on October 28 2017.

Yet today in Australia people quibble over whether the nation’s colonial conflicts ought to be called “wars”, or indeed whether any conflicts took place.

Despite some differences, wars prosecuted in the Australian colonies share strong similarities with the New Zealand wars. British colonists and imperial soldiers fought against Indigenous people who took up arms to protect their families, land, resources and sovereignty.

Yet colonists perceived their Indigenous opponents differently. Through British eyes, Māori were feared as a martial foe. Australian Aboriginal people, on the other hand, were considered incapable of organising armed resistance despite extensive evidence to the contrary.

New Zealand has begun a new chapter of national commemoration for the wars fought on its soil. Is Australia ready to follow suit? Or will it, by omission, continue to perpetuate the secrecies of its own wartime propaganda?

Kristyn Harman, Senior Lecturer in History, University of Tasmania

This article was originally published on The Conversation. Read the original article.


Mythbusting Ancient Rome: cruel and unusual punishment



It is commonly thought that anyone in ancient Rome who killed his father, mother, or another relative was subjected to the ‘punishment of the sack’. But is this true?
Creative Commons

Shushma Malik, University of Roehampton and Caillan Davenport, Macquarie University

Early Roman history is full of stories about the terrible fates that befell citizens who broke the law. When a certain Tarpeia let the enemy Sabines into Rome, she was crushed and thrown headlong from a precipice above the Roman forum.

Such tales not only served as a warning for future generations, they also provided a backstory for some of Rome’s cruellest punishments. Tarpeia is one of many legendary figures who appear in Livy’s History from the Foundation of the City; regardless of whether she was a real person, it became established practice to throw traitors from the “Tarpeian Rock”.

However, not all of the cruel and unusual punishments we associate with the Romans were carried out in practice or uniformly enforced, and some changed significantly over time.

Obey thy father

Roman society was fundamentally hierarchical and patriarchal. A Roman paterfamilias (the family’s oldest living male) had, in theory, the power to kill someone within his household with impunity. This included not only those physically living under his roof, but the wider family of brothers, sisters, nieces, and nephews as well.

However, historians have debated whether the power may have been largely symbolic and little used in practice. Filippo Carlà-Uhink has argued that the power did exist, but didn’t give heads of the household carte blanche to act as they pleased. For example, the senator Quintus Fabius Maximus Eburnus is said to have killed his son for his “dubious chastity”. But punishing a crime of a sexual nature was not seen as the proper use of a father’s power, so Quintus himself was tried and exiled.

In order for the use of such power to be justified, the son had to have committed a crime against the state. When Aulus Fulvius was killed by his father for his involvement in the conspiracy of Catiline (63 BC), the head of the household was not prosecuted. This was because Catiline and his followers had committed treason by plotting to murder the consul Cicero and seize power for themselves.

A watery and crowded grave

One of the most pervasive misconceptions about Roman criminal justice concerns the penalty for parricide. Anyone who killed his father, mother, or another relative was subjected to the “punishment of the sack” (poena cullei in Latin). This allegedly involved the criminal being sewn into a leather sack together with four animals – a snake, a monkey, a rooster, and a dog – then being thrown into a river. But was such a punishment ever actually carried out?

The epitome of Livy’s History from the Foundation of the City records that in 101 B.C.:

Publicius Malleolus, who had killed his mother, was the first to be sewn into a sack and thrown into the sea.

In practice, the penalty for parricide often just involved feeding the offender to wild beasts.
Creative Commons, CC BY-SA

There is no mention here of any animals in the sack, nor do they appear in contemporary evidence for legal procedure in the late Roman Republic. In 80 B.C., Cicero defended a young man called Sextus Roscius on a charge of parricide, but the murderous menagerie is conspicuously absent from his defence speech.

The animals are attested in a passage from the writings of the jurist Modestinus, who lived in the mid-third century A.D. This excerpt survives because it was later quoted in the Digest compiled at the behest of the emperor Justinian in the sixth century A.D.:

The penalty of parricide, as prescribed by our ancestors, is that the culprit shall be beaten with rods stained with his blood, and then shall be sewed up in a sack with a dog, a rooster, a snake, and a monkey, and the bag cast into the depth of the sea, that is to say, if the sea is near at hand; otherwise, he shall be thrown to wild beasts, according to the constitution of the Deified Hadrian.

The snake and the monkey feature in the satirical poems of Juvenal (writing during the age of Hadrian), who suggested that the emperor Nero deserved to be “sacked” with multiple animals for murdering his mother Agrippina. But the dog and the rooster do not appear until the third century A.D., when Modestinus was writing.

The punishment fits the (Roman) crime

So was anyone ever actually punished with all these creatures? The emperor Constantine’s penalty for parricide only specified that snakes should be added to the sack. Parricides were commonly punished in other ways such as being condemned to the beasts, which was very popular in the Roman world.

One of the four animals that was said to have been placed in the sack was a snake.
Creative Commons, CC BY-SA

Many historians have thought that the practicalities of sewing up a dog, a monkey, a rooster, a snake, and a human in a sack together indicate that the penalty was never actually enforced – for one thing, it would be as much a punishment for the executioners as it would for the condemned.

The Romans themselves believed the poena cullei was an ancestral custom – but as with many customs, it was based on preconceptions about the nature of ancient punishments. The best-known version of the penalty for parricide, with all the ferocious fauna included, was a product of the later Roman empire. It was designed to terrify, rather than to be enforced.

The poena cullei entered the standard accounts of Roman criminal law because it fascinated medieval scholars who tried to identify the symbolism of the animals. Florike Egmond has shown that this inspired the introduction of the sack filled with creatures as a punishment in Germanic law, reflecting the belief that a civilised society should follow Roman judicial practices.

To the relief of Germans in the medieval and early modern period, such punishments were rarely carried out. On one occasion, images of the animals were sewn into the sack, as they were considered sufficient substitutes for the real thing.

Thinking of dodging the census?

There was a steep price to pay if Romans did not take part in the census.
Creative Commons

Taking part in the Roman census was compulsory as the state needed a complete record of citizens’ property for tax purposes. According to the first-century B.C. historian Dionysius of Halicarnassus, the sixth king of Rome Servius Tullius decreed that anyone who did not participate in the census would lose their property and be sold into slavery.

But questions remain over whether this punishment actually happened – Dionysius was writing centuries after the sixth king’s reign, and Servius Tullius was probably fictional anyway. Dionysius’ contemporary, Livy, records a different penalty – citizens who failed to register were threatened with death and imprisonment.

There is no recorded example of either penalty being enforced. Ancient historian Peter Brunt has proposed that this may have been because Romans always turned up to be registered in order to ensure that their rights as citizens would be guaranteed. It’s worth noting, however, that neither Dionysius nor Livy suggested that the law was still in use in their own time – the harsh punishments may have reflected a later conception of cruelty in early Rome, rather than any historical reality.

Writing in the late Republic, the famous lawyer and politician Cicero states that one man, Publius Annius Asellus, decided not to present himself at the census in order to circumvent an inheritance law – and he only lost the right to vote. The Roman authorities had bigger problems as they were rarely able to carry out the census effectively in the first century B.C. (the first #censusfail). Besides, if you were fighting abroad, living outside of Italy, or unable to travel owing to extreme poverty, the Romans in charge could be quite lenient.

The penalties of census slavery, the power of the father, and the punishment of the sack reflect the Romans’ own conception of their ancestors and the idea that authorities must impose harsh penalties in order to deter offenders. But we need to be careful in reconstructing the histories of such punishments. As the case of parricide shows, the versions we are familiar with today are often a collage of sources from different periods assembled to create one specific punishment that seems authentically “Roman”.

Shushma Malik, Lecturer in Classics, University of Roehampton and Caillan Davenport, Senior Lecturer in Roman History, Macquarie University

This article was originally published on The Conversation. Read the original article.


The stories behind Aboriginal star names now recognised by the world’s astronomical body



Milky Way star map by Bill Yidumduma Harney, Senior Wardaman Elder.
Bill Yidumduma Harney, CC BY

Duane W. Hamacher, Monash University

Four stars in the night sky have been formally recognised by their Australian Aboriginal names.

The names include three from the Wardaman people of the Northern Territory and one from the Boorong people of western Victoria. The Wardaman star names are Larawag, Wurren and Ginan in the Western constellations Scorpius, Phoenix and Crux (the Southern Cross). The Boorong star name is Unurgunite in Canis Major (the Great Dog).

They are among 86 new star names drawn from Chinese, Coptic, Hindu, Mayan, Polynesian, South African and Aboriginal Australian cultures.

These names represent a step forward by the International Astronomical Union (IAU) – the global network of the world’s roughly 12,000 professional astronomers – in recognising the importance of traditional language and Indigenous starlore.

What’s that star called?

Many cultures around the world have their own names for the stars scattered across the night sky. But until 2016, the IAU never officially recognised any popular name for any star.

Instead, each star is assigned a Bayer designation, thanks to a book published in 1603 by German astronomer Johann Bayer. He systematically assigned visible stars a designation: a Greek letter combined with the genitive (possessive) form of the Latin name of the constellation in which the star is found.

He gave the brightest star in a constellation the letter Alpha, then the next brightest star Beta, and so on down the list. For example, the brightest star in the Southern Cross is Alpha Crucis.
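As a rough illustration of how this ordering works, here is a minimal sketch in Python that assigns Bayer-style labels by sorting a constellation’s stars from brightest to faintest (a lower apparent magnitude means a brighter star). The function name, the star list and the magnitude values below are illustrative assumptions for this example only, not Bayer’s actual catalogue.

GREEK = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta", "Eta", "Theta"]

def bayer_labels(constellation_genitive, magnitudes):
    """Pair Greek letters with stars ordered from brightest to faintest."""
    # Lower apparent magnitude = brighter star, so sort ascending.
    ordered = sorted(magnitudes.items(), key=lambda item: item[1])
    return {name: f"{letter} {constellation_genitive}"
            for (name, _), letter in zip(ordered, GREEK)}

# Example: a few stars of the Southern Cross (Crux, Latin genitive "Crucis"),
# with approximate apparent magnitudes.
crux = {"Acrux": 0.8, "Mimosa": 1.3, "Gacrux": 1.6, "Imai": 2.8, "Ginan": 3.6}
print(bayer_labels("Crucis", crux))
# {'Acrux': 'Alpha Crucis', 'Mimosa': 'Beta Crucis', ..., 'Ginan': 'Epsilon Crucis'}

On these example values, Ginan falls fifth in the ordering and so receives the label Epsilon Crucis, consistent with the designation mentioned later in this article.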

Alpha Crucis is the bottom star on the Southern Cross constellation on the right of this image, photographed from the Northern Territory over a two minute exposure.
Flickr/Eddie Yip, CC BY-SA

The IAU recognised that the lack of official star names was a problem. So the Working Group on Star Names (WGSN) was formed in 2016 to officially assign popular names to the hundreds of stars visible in the night sky.

That year the working group made 313 star names official, derived mainly from the most commonly used Arabic, Roman and Greek names in astronomy. But the list contained few Indigenous or non-Western names.

That changed last year when the WGSN formally approved the 86 new star names drawn from other cultures. Aboriginal Australian cultures stretch back at least 65,000 years, so their star names are likely the most ancient on the list.

The WGSN is looking to identify even more star names from Australia and other Indigenous cultures around the world. As Indigenous cultures have a rich collection of names for even the faintest stars, many new star names could gain IAU recognition.

So what do we know about these four stars and the origin of their names?

Wardaman star names

The Wardaman people live 145km southwest of Katherine in the Northern Territory. The Wardaman star names come from Senior Elder Bill Yidumduma Harney, a well-known artist, author and musician.

He worked with Dr Hugh Cairns to publish some of his traditional star knowledge in the books Dark Sparklers (2003) and Four Circles (2015). These books remain the most detailed records of the astronomical knowledge of any Aboriginal group in Australia.

Uncle Bill Yidumduma Harney, Senior Wardaman Elder.
Jayne Nankivell, Author provided

Larawag (Epsilon Scorpii)

The stars of the Western constellation Scorpius feature prominently in Wardaman traditions, which inform the procedures of initiation ceremonies.

Merrerrebena is the wife of the Sky Boss, Nardi. She mandates ceremonial law, which is embodied in the red star Antares (Alpha Scorpii). Each star in the body of Scorpius represents a different person involved in the ceremony.

Larawag is the signal watcher, noting when only legitimate participants are present and in view of the ceremony. He gives the “All clear” signal, allowing the secret part of the ceremony to continue.

Epsilon Scorpii is an orange giant star, lying 63.7 light years away.

Epsilon Scorpii in the constellation Scorpius. Scorpius is not to be confused with the Wardaman scorpion constellation, Mundarla, in the Western constellation Serpens.
International Astronomical Union, CC BY

Wurren (Zeta Phoenicis)

Wurren means “child” in Wardaman. In this context it refers to the “Little Fish”, a child of Dungdung – the life-creating Frog Lady. Wurren gives water to Gawalyan, the echidna (the star Achernar), and together they direct earthly initiates to carry it in small bowls. The water came from a great waterfall used to cool the people during ceremony.

Just as the water at the base of the waterfall keeps people cool and rises to the sky as mist, the water in the initiates’ bowls keeps them cool and symbolically transforms into clouds that bring the wet rains of the monsoon season. These ceremonies occur in late December when the weather is hot and these stars are high in the evening sky, signalling the start of the monsoon.

Zeta Phoenicis comprises two blue stars orbiting each other, 300 light years away. From our perspective, these two stars eclipse each other, changing in brightness from magnitude 3.9 to 4.4 every 1.7 days.

Zeta Phoenicis in the constellation Phoenix.
International Astronomical Union, CC BY

Ginan (Epsilon Crucis)

Ginan is the fifth-brightest star in the Southern Cross. It represents a red dilly-bag filled with special songs of knowledge.

Ginan was found by Mulugurnden (the crayfish), who brought the red flying foxes from the underworld to the sky. The bats flew up the track of the Milky Way and traded the spiritual song to Guyaru, the Night Owl (the star Sirius). The bats fly through the constellation Scorpius on their way to the Southern Cross, trading songs as they go.

The song informs the people about initiation, which is managed by the stars in Scorpius and related to Larawag (who ensures the appropriate personnel are present for the final stages of the ceremony).

The brownish-red colour of the dilly bag is represented by the colour of Epsilon Crucis, which is an orange giant that lies 228 light years away.

Epsilon Crucis in the constellation Crux (the Southern Cross).
International Astronomical Union, CC BY

Boorong star name

Unurgunite (Sigma Canis Majoris)

The Boorong people of the Wergaia language group near Lake Tyrrell in northwestern Victoria pride themselves on their detailed astronomical knowledge. In the 1840s, they imparted more than 40 star and planet names and their associated stories to the Englishman William Stanbridge, who published them in 1857.

In Boorong astronomy, Unurgunite is an ancestral figure with two wives. The Moon is called Mityan, the quoll. Mityan fell in love with one of the wives of Unurgunite and tried to lure her away.

Unurgunite discovered Mityan’s trickery and attacked him, leading to a great fight in which Mityan was defeated. The Moon has been wandering the heavens ever since, the scars of the battle still visible on his face.

Mityan, the Moon (the quoll) in Boorong traditions.
Wikimedia/Michael J Fromholtz, CC BY-SA

Unurgunite can be seen as the star Sigma Canis Majoris (the Great Dog), with the two brighter stars on either side representing his wives.

One of the wives (Delta Canis Majoris) lies further away from Unurgunite and is closer to the Moon than the other wife (Epsilon Canis Majoris). This is the wife Mityan tried to lure away.

On rare occasions, the Moon passes directly over the wife of his desires, symbolising his attempts to draw her away. He also passes over Unurgunite, representing their battle in the sky. But Mityan, the Moon, never passes over the other wife (with the Arabic name Adhara).

Sigma Canis Majoris is an orange-red supergiant that lies 1,120 light years away.

Sigma Canis Majoris in the constellation Canis Major.
International Astronomical Union, CC BY

Duane W. Hamacher, Senior Research Fellow, Monash University

This article was originally published on The Conversation. Read the original article.


Transforming the Parramatta Female Factory institutional precinct into a site of conscience


Bonney Djuric, UNSW; Lily Hibberd, UNSW, and Linda Steele, University of Technology Sydney

With the inclusion of the Parramatta Female Factory institutional precinct on the national heritage list, the federal government has recognised for the first time that institutionalisation has been, and remains, a central part of Australia’s welfare system across two centuries.

The listing is testament to this precinct’s unique capacity to tell the stories of institutionalised women and generations of Australians who experienced out-of-home care, known as forgotten Australians, child migrants and Stolen Generations. It is now up to national, state and local interests to embrace this change.

The Parramatta Female Factory was identified as a site of abuse by the Royal Commission into Institutional Responses to Child Sexual Abuse, which has now made its final recommendations.

It is timely to ask how past sites of institutional abuse can be transformed from places of incomprehensible violence and suffering into places that can be harnessed to achieve the commission’s goals of redress, justice and the prevention of future institutional abuse.

The long wait for justice

The Parramatta Female Factory institutional precinct has been in continuous use since an assignment depot for female convicts was established there in 1821. In 1847, the original site was repurposed as Parramatta Lunatic Asylum, and again, in 1983, as the present-day Cumberland Hospital.

The adjacent Roman Catholic orphanage site, founded in 1844, became Parramatta Girls Industrial School in 1887, and operated as Norma Parker Women’s Detention Centre until 2010. An estimated 30,000 women and children passed through the portals of the child welfare and Female Factory institutional complex alone.

This is Australia’s longest-operating site of institutional incarceration and violence against women and girls. It is also a place of punitive incarceration of children, women, Indigenous Australians and those labelled as mentally ill. Why did it take so long for this site to be added to the national heritage register?

If not for former residents of Parramatta Girls Home, this listing would never have happened. Parragirls founder Bonney Djuric lodged the original national heritage application in 2011, which became the basis for the final listing in 2017.

Parragirls have fought for more than a decade to preserve this place so that the injustices they suffered will never be repeated.

But, until today, the neglect of the girls’ home and the entire precinct has replicated the abandonment the women have experienced in seeking justice for themselves and the thousands who passed before them.

Girls interned at Parramatta Girls Home experienced systematic and endemic levels of violence and neglect – the effects of which are endured by survivors to this day. These violations have been recorded by the royal commission.


Read more: Explainer: royal commission into child sex abuse


Findings from the commission’s investigation into the girls’ home catalogue a regime of discipline, punishment and emotional trauma, including physical and medical control as well as physical and sexual abuse. Compensation and civil claim processes related to the home also came in for criticism in its report.

The problem confronting both the commission and Australians more generally is how to contend with personal and collective trauma on this scale. With the site now earmarked for redevelopment under the Parramatta North urban transformation plan, the New South Wales government faces this same challenge.

Creating a site of conscience

Apologies, stone memorials and trauma tourism no longer suffice for those living with the consequences of serious abuse. We urgently need a new imaginary for our past, where we make use of Australian heritage to do justice.

Former residents of Parramatta Girls Home have shown us how this is done by implementing a singular vision to transform this forgotten place. It’s called a site of conscience.

In principle, the site of conscience global movement proposes the reclamation of places of human suffering to make common ground for dignity, respect and civil participation, instead of abuse and neglect.

By engaging with a site’s history in this way, government, civil society and the public can better understand contemporary social justice issues and build a future society that does not repeat the wrongs of the past.


Read more: When it comes to redress for child sexual abuse, all victims should be equal


In practice, on the grounds of Parramatta Girls Home, a site of conscience has been brought into being through the community activities of Parragirls and PFFP memory project. Launched in 2012, the memory project has enabled Parragirls to supplant isolation, shame and silence with shared memory, creativity and social gathering.

Activities include inaugurating an annual children’s day and memory garden, collaborative exhibitions and performances, and Stolen Generations’ songwriting and live music events. The memory project has also enabled Parragirls to contribute to the design of the Parramatta Girls Home memorial and to impact academic research on ethics and policy on child welfare records.

Agency is crucial to the activation of this institutional precinct as a site of conscience. This means, first and foremost, that those who experienced injustice – its former occupants – are empowered to determine how we remember the past and how to use it to build a better present and future.

Transformative justice

Imagine a living public memorial that includes all Australians in the commitment to ensure our children are protected both now and in the future.

From this precinct, we can learn how past legacies and social issues impact contemporary practices of institutionalisation and systemic violence against women and children.

It is here, in this very place of inordinate pain and loss, that we can best put justice to work and make use of past wrongs for future good. And this enables us, as a nation, to put into action the royal commission’s goals of redress, justice and the prevention of future institutional abuse.

This vision calls for our collective embrace of transformative justice. It also demands our civic engagement to hold the government to account in the development and future use of Australia’s principal site of institutional welfare heritage.

Bonney Djuric, Adjunct Lecturer, UNSW Art & Design, UNSW; Lily Hibberd, ARC DECRA Research Fellow, UNSW, and Linda Steele, Senior Lecturer, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

