Today (August 5) marks the 75th anniversary of Australia’s largest prison escape: the Cowra breakout, in New South Wales, during the second world war. In fact, it is one of the largest prison escapes in world history, but unless you are a keen war historian you may never have heard of it.
A small farming community was forever changed in 1944, when the sound of a bugle cut through the crisp night air at the Cowra Prisoner of War camp.
Rushing through a hail of bullets fired by the Australian guards, hundreds of prisoners escaped into the countryside. In the following days, 334 prisoners were recaptured.
As the dust settled, many would question why the prisoners would attempt such a bold and ultimately lethal escape plan. How do we as a society make sense of such bloodshed?
From non-fiction to fiction
While there have been a number of non-fiction works written on this event by authors such as Hugh Clarke, Charlotte Carr-Gregg, and Harry Gordon, it is works of fiction that have sought to fill in the gaps of history. They give us a way of understanding the incomprehensible.
The first author to do so was Australian poet and novelist Kenneth Seaforth Mackenzie. Mackenzie was stationed at Cowra during WWII and was on duty the night of the breakout.
His novel Dead Men Rising was based on his experiences. Because of this, the book was initially withheld from Australian release due to the publisher’s fears of libel claims.
The book was released in the UK and USA in 1951 but Australian readers had to wait until 1969, several years after Mackenzie’s death, to read his interpretation of the event.
Dead Men Rising focuses largely on camp life through the eyes of the guards in the lead-up to the breakout. There is little interaction with the Japanese inmates, who are represented as “un-human”, “animal-like” and “unpredictable”.
Mackenzie depicts them as utterly foreign and incomprehensible to the Australian soldiers. This narrative likely reflects attitudes at the time with anti-Japanese sentiment still high in the early post-war years.
A Japanese perspective
Several years later, in 1967, Japanese author and former military doctor Teruhiko Asada wrote Hiroku Kaura no Bōdō, a title that translates as “The Secret Record of the Cowra Riot”.
It was received eagerly by English-speaking audiences when it was translated by former Australian soldier and interpreter Ray Cowan in 1970 under the sensationalist title The Night of a Thousand Suicides.
Presented as a first-person narrative, the story had an intimate feel lacking in previous accounts, which led to some claiming the book was more fact than fiction, no doubt reinforced by Cowan’s inclusion of photographs from the Australian War Memorial. But this attribution is problematic given Asada was never imprisoned at Cowra.
Alternating between an Australian and a Japanese perspective of the war, this novel highlights the unlikely similarities shared between the story’s two opposing protagonists, an ex-farmer from occupied New Guinea and an imprisoned Japanese Sergeant.
The coming-of-age novel depicts a young boy who seeks to help the “samurai” escape from the POW camp, amid a backdrop of familial trauma and the hardships of rural life.
The boy’s innocence highlights the inherent racism, bigotry and violence that permeate the town’s pleasant façade, disrupting the notion that the “enemies” are the ones behind the barbed wire fence.
In 1989 Thomas Keneally revised and republished his 1965 novel The Fear.
The 1965 edition drew on his boyhood memories of the breakout, briefly depicting the camp and the subsequent escape in the latter half of the book.
With this fresh perspective, Keneally returned again to the breakout in 2013 with Shame and the Captives which is set in the town of Gawell, a fictionalised version of Cowra.
Keneally said in his introduction that now, rather than drawing on his faulty memories of childhood, he spent considerable time researching the historical event which informs his work.
By aiming to create a “a truth in this fiction” Keneally hoped to “interpret the phenomenon of Cowra”. His reimagining included explorations of Italian and Korean POWs who were also held at Cowra, but whose stories are often overlooked.
The most recent work to revisit the breakout is Barbed Wire and Cherry Blossoms, by Wiradjuri author Anita Heiss.
Issues of race, discrimination and loyalty take on a new sense of urgency in this wartime setting, yet also highlight that while much has changed in the last 75 years, so much has stayed the same.
Heiss echoed this view when she asserted there “are lessons still to be learned from the history of Cowra”, lamenting the regression in Australia’s treatment of detainees in centres such as Manus Island or Don Dale.
From this bloody chapter of history, the township of Cowra – today, a four-hour drive inland from Sydney – has moved forward to promote itself as a beacon of peace, friendship, and understanding.
In a show of respect for the dead, the Cowra RSL Sub-branch cared for the Japanese burial ground informally until eventually the graves were relocated to what is now the Cowra Japanese War Cemetery, which opened in 1964.
The gardens and the cemetery were symbolically linked by an avenue of cherry blossoms in 1988, and in 1992 Cowra’s peace efforts received further recognition with the Australian World Peace Bell.
This Major League Baseball season, fans may notice a patch on the players’ uniforms that reads “MLB 150.”
The logo commemorates the Cincinnati Red Stockings, who, in 1869, became the first professional baseball team – and went on to win an unprecedented 81 straight games.
As the league’s first openly salaried club, the Red Stockings made professionalism – which had been previously frowned upon – acceptable to the American public.
But the winning streak was just as pivotal.
“This did not just make the city famous,” John Thorn, Major League Baseball’s official historian, said in an interview for this article. “It made baseball famous.”
Pay to play?
In the years after the Civil War, baseball’s popularity exploded, and thousands of American communities fielded teams. Initially most players were gentry – lawyers, bankers and merchants whose wealth allowed them to train and play as a hobby. The National Association of Base Ball Players banned the practice of paying players.
At the time, the concept of amateurism was especially popular among fans. Inspired by classical ideas of sportsmanship, its proponents argued that playing sport for any reason other than love of the game was immoral, even corrupt.
Nonetheless, some of the major clubs in the East and Midwest began disregarding the rule prohibiting professionalism and secretly hired talented young working-class players to get an edge.
After the 1868 season, the national association reversed its position and sanctioned the practice of paying players. The move recognized the reality that some players were already being paid, and that was unlikely to change because professionals clearly helped teams win.
Yet the taint of professionalism restrained virtually every club from paying an entire roster of players.
The Cincinnati Red Stockings, however, became the exception.
The Cincinnati experiment
In the years after the Civil War, Cincinnati was a young, growing, grimy city.
The city had experienced an influx of German and Irish immigrants who toiled in the multiplying slaughterhouses. The stench of hog flesh wafted through the streets, while the black fumes of steamboats, locomotives and factories lingered over the skyline.
Nonetheless, money was pouring into the coffers of the city’s gentry. And with prosperity, the city sought respectability; it wanted to be as significant as the big cities that ran along the Atlantic seaboard – New York, Philadelphia and Baltimore.
Cincinnati’s main club, the Red Stockings, was run by an ambitious young lawyer named Aaron Champion. Prior to the 1869 season, he budgeted US$10,000 for his payroll and hired Harry Wright to captain and manage the squad. Wright was lauded later in his career as a “baseball Edison” for his ability to find talent. But the best player on the team was his 22-year-old brother, George, who played shortstop. George Wright would end up finishing the 1869 season with a .633 batting average and 49 home runs.
Only one player hailed from Cincinnati; the rest had been recruited from other teams around the nation. Wright had hoped to attract the top player in the country for each position. He didn’t quite get the best of the best, but the team was loaded with stars.
As the season began, the Red Stockings and their new salaries attracted little press attention.
“The benefits of professionalism were not immediately recognized,” Greg Rhodes, a co-author of “Baseball Revolutionaries: How the 1869 Red Stockings Rocked the Country and Made Baseball Famous,” told me. “So the Cincinnati experiment wasn’t seen as all that radical.”
The Red Stockings opened the season by winning 45 to 9. They kept winning and winning and winning – huge blowouts.
At first only the Cincinnati sports writers had caught on that something special was going on. Then, in June, the team took its first road trip east. Playing in hostile territory against what were considered the best teams in baseball, they were also performing before the most influential sports writers.
The pivotal victory was a tight 4-to-2 win against what had been considered by many the best team in baseball, the powerful New York Mutuals, in a game played with Tammany Hall “boss” William Tweed watching from the stands.
Now the national press was paying attention. The Red Stockings continued to win, and, by the conclusion of the road trip in Washington, they were puffing stogies at the White House with their host, President Ulysses Grant.
The players chugged home in a boozy, satisfied revel and were met by 4,000 joyous fans at Cincinnati’s Union Station.
The Red Stockings had become a sensation. They were profiled in magazines and serenaded in sheet music. Ticket prices doubled to 50 cents. They drew such huge crowds that during a game played outside of Chicago, an overloaded bleacher collapsed.
Most scores were ridiculously lopsided; during the 1869 season the team averaged 42 runs a game. Once they even scored 103. The most controversial contest was in August against the Haymakers of Troy, New York. The game was rife with rumors of $17,000 bets, and bookmakers bribing umpires and players. The game ended suspiciously at 17 to 17, when the Haymakers left the field in the sixth inning, incensed by an umpire’s call. The Red Stockings were declared the winners.
The season climaxed with a road trip west on the new transcontinental railroad, which had just opened in May. The players, armed with rifles, shot at bison, antelope and even prairie dogs from the train windows, and slept in wooden Pullman cars lighted with whale oil. More than 2,000 excited baseball fans greeted the team in San Francisco, where admission to games was one dollar in gold.
Cincinnati ended its season with an undefeated record: 57 wins, 0 losses. The nation’s most prominent sports writer of the day, Henry Chadwick, declared them “champion club of the United States.”
Despite fears that other clubs would outbid Cincinnati for their players, every Red Stockings player demonstrated his loyalty by signing a contract to return for the 1870 season.
The demise begins
The winning streak continued into the next season – up until a June 14, 1870, game against the Brooklyn Atlantics.
After nine innings, the teams were tied at 5. Under the era’s rules, the game could have been declared a draw, leaving the streak intact. Instead Harry Wright opted to continue, and the Red Stockings ended up losing in extra innings after an error by the second baseman, Charlie Sweasy.
The 81-game win streak had ended.
The Red Stockings did not return in 1871. Ticket sales had fallen after their first loss, and other teams began to outbid the Red Stockings for their star players. Ultimately the cost of retaining all of its players was more than the Cincinnati club could afford.
Yet the team had made its mark.
“It made baseball from something of a provincial fare to a national game,” Thorn explained.
A few years later, in 1876, the National League was founded and still exists today. The Cincinnati Reds were a charter member. And not surprisingly, some of the biggest 150-year celebrations of the first professional baseball team are occurring in the town they once called Porkopolis.
No one doubts the job of president of the United States is stressful and demanding. The chief executive deserves downtime.
But how much is enough, and when is it too much?
These questions came into focus after Axios’ release of President Donald Trump’s schedule. The hours blocked off for nebulous “executive time” seem, to many critics, disproportionate to the number of scheduled working hours.
While Trump’s workdays may ultimately prove to be shorter than those of past presidents, he’s not the first to face criticism. For every president praised for his work ethic, there’s one disparaged for sleeping on the job.
Teddy Roosevelt, locomotive president
Before Theodore Roosevelt ascended to the presidency in 1901, the question of how hard a president toiled was of little concern to Americans.
Except in times of national crisis, his predecessors neither labored under the same expectations, nor faced the same level of popular scrutiny. Since the country’s founding, Congress had been the main engine for identifying national problems and outlining legislative solutions. Congressmen were generally more accessible to journalists than the president was.
But when Roosevelt shifted the balance of power from Congress to the White House, he created the expectation that an activist president, consumed by affairs of state, would work endlessly in the best interests of the people.
Roosevelt, whom Sen. Joseph Foraker called a “steam engine in trousers,” personified the hard-working chief executive. He filled his days with official functions and unofficial gatherings. He asserted his personality on policy and stamped the presidency firmly on the nation’s consciousness.
Taft had a tough act to follow
His successor, William Howard Taft, suffered by comparison. While it’s fair to observe that nearly anyone would have looked like a slacker compared with Roosevelt, it didn’t help that Taft weighed 300 pounds, which his contemporaries equated with laziness.
Taft helped neither his cause nor his image when he snored through meetings, at evening entertainments and, as author Jeffrey Rosen noted, “even while standing at public events.” Watching Taft’s eyelids close, Sen. James Watson said to him, “Mr. President, you are the largest audience I ever put entirely to sleep.”
An early biographer called Taft “slow-moving, easy-going if not lazy” with “a placid nature.” Others have suggested that Taft’s obesity caused sleep apnea and daytime drowsiness, a finding not inconsistent with historian Lewis L. Gould’s conclusion that Taft was capable of work “at an intense pace” and “a high rate of efficiency.”
It seems that Taft could work quickly, but in short bursts.
Coolidge the snoozer
Other presidents were more intentional about their daytime sleeping. Calvin Coolidge’s penchant for hourlong naps after lunch earned him amused scorn from contemporaries. But when he missed his nap, he fell asleep at afternoon meetings. He even napped on vacation. Tourists stared in amazement as the president, blissfully unaware, swayed in a hammock on his front porch in Vermont.
This, for many Republicans, wasn’t a problem: The Republican Party of the 1920s was averse to an activist federal government, so the fact that Coolidge wasn’t seen as a hard-charging, incessantly busy president was fine.
Biographer Amity Shlaes wrote that “Coolidge made a virtue of inaction” while simultaneously exhibiting “a ferocious discipline in work.” Political scientist Robert Gilbert argued that after Coolidge’s son died during his first year as president, Coolidge’s “affinity for sleep became more extreme.” Grief, according to Gilbert, explained his growing penchant for slumbering, which expanded into a pre-lunch nap, a two- to four-hour post-lunch snooze and 11 hours of shut-eye nightly.
For Reagan, the jury’s out
Ronald Reagan may have had a tendency to nod off.
“I have left orders to be awakened at any time in case of a national emergency – even if I’m in a cabinet meeting,” he joked. Word got out that he napped daily, and historian Michael Schaller wrote in 1994 that Reagan’s staff “released a false daily schedule that showed him working long hours,” labeling his afternoon nap “personal staff time.” But some family members denied that he napped in the White House.
Journalists were divided. Some found him “lazy, passive, stupid or even senile” and “intellectually lazy … without a constant curiosity,” while others claimed he was “a hard worker,” who put in long days and worked over lunch. Perhaps age played a role in Reagan’s naps – if they happened at all.
Clinton crams in the hours
One president not prone to napping was Bill Clinton. Frustrated that he could not find time to think, Clinton ordered a formal study of how he spent his days. His ideal was four hours in the afternoon “to talk to people, to read, to do whatever.” Sometimes he got half that much.
Two years later, a second study found that, during Clinton’s 50-hour workweek, “regularly scheduled meetings” took up 29 percent of his time, “public events, etc.” made up 36 percent of his workday, while “thinking time – phone & office work” constituted 35 percent of his day. Unlike presidents whose somnolence drew sneers, Clinton was disparaged for working too much and driving his staff to exhaustion with all-nighters.
Partisanship at the heart of criticism?
The work of being president of the United States never ends. There is always more to be done. Personal time may be a myth, as whatever the president reads, watches or does can almost certainly be applied to some aspect of the job.
Trump’s “executive time” could be a rational response to the demands of the job or life circumstances. Trump, for example, reportedly gets only four or five hours of sleep a night, which suggests he has more waking hours than the rest of us to tackle his daily duties.
But, as his predecessors learned, the appearance of taking time away from running the country garners criticism. Though they can sometimes catch 40 winks, presidents can seldom catch a break.
If the aim of statue removal is to build a more racially just South, then, as many analysts have pointed out, putting these monuments in storage is a lost opportunity. Simply unseating Confederate statues from highly visible public spaces is just the first step in a much longer process of understanding, grieving and mending the wounds of America’s violent past. Merely hiding away the monuments does not necessarily change the structural racism that birthed them.
Studies show that the environment in which statues are displayed shapes how people understand their meaning. In that sense, relocating monuments, rather than eliminating them, can help people put this painful history into context.
For example, monuments to Confederate war heroes first appeared in cemeteries immediately following the Civil War. That setting likely evoked in visitors a direct and private honoring and grieving of the dead.
By the early 1900s, hundreds of Confederate statues dotted courthouse lawns and town squares across the South. This prominent, centrally located setting on government property sent an intentionally different message: that local officials endorsed the prevailing white social order.
So what should we do with rejected Confederate monuments? We have a modest proposal: a Confederate statue graveyard.
Lessons from the Soviet past
Our research as cultural geographers recognizes that Confederate monument controversies – while typically considered regional or national issues – are in fact part of global struggles to recognize and heal from the wounds of racism, white supremacy and anti-democratic regimes.
The idea of a Confederate monument graveyard is modeled after ways that the former communist bloc nations of Hungary, Lithuania and Estonia have dealt with statues of Soviet heroes like Joseph Stalin and Vladimir Lenin.
Under communist Soviet rule between 1945 and 1991, Eastern European countries suffered mass starvation, land theft, military rule and rigid censorship. An estimated 15 million people in the Soviet bloc died during this totalitarian reign.
Despite these horrors, many countries have opted not to destroy or hide their Soviet-era monuments. But they haven’t left them towering over city halls or public plazas, either.
Rather, governments in Eastern Europe have altered the meaning of these politically charged Soviet statues by relocating them. Dozens of Soviet statues across Hungary, Lithuania and Estonia have been pulled from their pedestals and placed in open-air parks, where interested visitors can reflect on their new significance.
The idea behind relocating monuments is to dethrone dominant historical narratives that, in their traditional places of power, are tacitly endorsed.
A statue graveyard
The Eastern European effort to create a new memorial landscape has been met with mixed public reaction.
In Hungary, some see it as a step in the right direction. But in Lithuania, some have said that re-erecting the statues of known dictators is in “poor taste” – an affront to those who suffered under totalitarianism.
The relocation of Soviet statues in Estonia has taken an even more interesting turn.
For the past decade, the Estonian History Museum has been collecting former Soviet monuments with the intention of making an outdoor exhibition out of them. For years it kept a decapitated Lenin and a noseless Stalin, among other degraded Soviet relics, in a field next to the museum.
The statues weathered Eastern European winters and languished in a defunct, toppled state. Weeds grew over them. The elements took their toll.
Travel writer Michael Turtle, who visited the museum in 2015, called the field a “statue graveyard.”
“Everything here seems to fit into some kind of purgatorial limbo,” he wrote on his blog. “The statues are not respected enough to be displayed as history but are culturally significant enough to not just be destroyed.”
To this we would add that these old statues, when repurposed thoughtfully and intentionally, have the potential to mend old wounds.
Confederate monument graveyard
What if the United States created its own graveyard for the distasteful relics of its own racist past?
We envision a cemetery for the American South where removed Confederate statues would be displayed, perhaps, in a felled position – a visual condemnation of the white supremacy they fought to uphold. Already crumpled monuments, like the statue to “The Boys Who Wore Grey” that was forcibly torn down in downtown Durham, North Carolina, might be placed in the Confederate statue graveyard in their defunct state.
One art critic has even suggested that old monuments be physically buried under tombstones with epitaphs written by the descendants of those they enslaved.
We are not the first to suggest relocating Confederate statues; museums are a frequently proposed destination. But that approach has proven challenging for curators.
When The University of Texas moved a statue of the Confederate President Jefferson Davis from its pedestal on campus to a campus museum, some students criticized the ensuing exhibit’s “lack of focus on racism and slavery.” One suggested that the statue’s new setting inadvertently glorified Davis, given the inherent value conferred on objects in museums.
And since statues in museums are typically exhibited in their original, upright position, Confederate generals like Robert E. Lee still tower over visitors – maintaining an imposing sense of authority.
We believe felled and crumpled monuments, in contrast, would create a somber commemorative atmosphere that encourages visitors to grieve – without revering – their legacy. A carefully planned and aesthetically sensitive Confederate monument graveyard could openly and purposefully undermine the power these monuments once held, acknowledging, dissecting and ultimately rejecting the Confederacy’s roots in slavery.
Planning a Confederate monument graveyard will prompt many questions. Where should it be located? Will there be one central Confederate monument graveyard or many? Who will design and plan the graveyard?
Answering these questions would not just be part of a conversation about steel and stone but about the serious pursuit of peace, justice and racial healing in the nation — and about putting the Old South to rest.
In this series, we look at under-acknowledged women through the ages.
When Grata Flos Matilda Greig walked into her first law school class at the University of Melbourne in 1897, it was illegal for women to become lawyers. But though the legal system did not even recognise her as a person, she won the right to practice and helped thousands of other women access justice. In defying the law, Greig literally changed its face.
That she did so is a story worthy of history books. And how she achieved this offers key insights for women a century later as they navigate leadership roles in the legal profession and beyond.
Flos, as she was known, grew up in a household full of possibilities unlimited by gender boundaries. Born in Scotland, as a nine-year-old she spent three months sailing to Australia with her family to settle in Melbourne in 1889. Her father founded a textile manufacturing company. Both parents believed that Flos and her siblings – four sisters and three brothers – should be university educated at a time when women rarely were.
She grew up firm in the knowledge that women could thrive in professional life, and witnessed that reality unfold as older sisters Janet and Jean trained to become doctors. Another sister, Clara, would go on to found a tutoring school for university students. The fourth sister, Stella, followed Flos to study law.
Women could not vote or hold legislative office, let alone be lawyers, when 16-year-old Flos began to study law. Yet she did not let this deter her. As she approached graduation she focused on “the many obstacles in the path of my full success. I resolved to remove them”.
Other feminine aspirants, she noted, had previously wished to enter the profession, “but the impediments in the way were so great, that they concluded, after consideration, it was not worthwhile”.
Flos felt otherwise. She declared, even in 1903 when women were largely excluded from public life: “Women are men’s equals in every way and they are quite competent to hold their own in all spheres of life.”
‘The Flos Greig Enabling Bill’
Six years after entering the University of Melbourne, Flos witnessed the Victorian Legislative Assembly’s passing of the Women’s Disabilities Removal Bill, also known as the Flos Greig Enabling Bill. Suddenly, women could enter the practice of law. How had she made this happen?
While childhood had provided Flos with role models from both sexes, she did have to rely upon a series of men to navigate her entry into the exclusively male club of the legal profession. Her male classmates had initially questioned the capabilities of a woman lawyer and resisted her presence, but she soon persuaded them otherwise.
Not only did Flos graduate second in her class, but the men took a vote to declare – affirmatively – that women should be allowed to practice law. Their support undoubtedly fuelled her ambitions.
Next, Flos turned to one of her lecturers, John Mackey, who happened to also be a member of the Victorian Legislative Assembly. Together they worked with other supporters to craft the legislative change. Mackey argued that by passing the law, Parliament could ease the concerns of women who believed they could not get justice from a legislative body made up only of men.
Still, Flos needed to complete a period of supervised training known as “articling” before she could be sworn into the bar. No Australian woman had ever engaged in the “articles of clerkship” before. A Melbourne commercial law solicitor, Frank Cornwall, employed her, and she was officially admitted to the practice of law on August 1, 1905.
At her swearing-in ceremony, Chief Justice John Madden described Flos as “the graceful incoming of a revolution”. He also expressed some scepticism about her future success:
Women are more sympathetic than judicial, more emotional than logical. In the legal profession knowledge of the world is almost if not quite as essential as knowledge of the law, and knowledge of the world, women, even if they possess it, would lie loth to assert.
Flos would prove him wrong about her knowledge of the world, both in law and in her other passion, travel.
‘What did I wear? Don’t ask me!’
At the ceremony, her name was the third called – in alphabetical order – before what was reportedly an “unusually large gathering of lawyers, laymen, and ladies … seldom seen in halls of justice”. Attendees noticed smiles that “flickered over the faces of the judges as they entered the crowded chamber” at the sight of Flos among her “somberly-clad male” counterparts.
News accounts focused more on the physical attributes of the first lady lawyer than on her qualifications. When questioned by a reporter about her clothing choice for the occasion, Flos blushed: “What did I wear? Don’t ask me!” But then she confessed: “Well, if you insist! I wore grey, with a greenish tinted hat, trimmed with violets!”
Another news reporter critiqued the flower-adorned hat as “a most unlegal costume”. As if there was any basis for making such an assessment – until that moment the nation had never seen the “costume” of a female lawyer. The media’s fixation with female lawyers’ appearance endures more than a century later.
Flos soon established a solo practice in Melbourne focusing on women and children. Among other endeavours, she represented the Women’s Christian Temperance Union in lobbying to establish the Children’s Court of Victoria.
Media fascination with Flos’s attire did not diminish once she was admitted to practice. In 1905 she delivered a speech to the third annual National Congress of Women of Victoria, based on a paper she had written titled “Some Points of the Law Relating to Women and Children”.
The reporter noted that Flos “treated her subject in a masterly manner, and gave an immense amount of useful and, at times, startling information”. But Flos’s “stylish, yet simple, gown of grey voile, with cream lace vest” proved equally newsworthy, as did “her pretty black hat and white gloves”. The fashion choices of other (male) speakers went unmentioned.
Flos also helped open the legal profession to other women. She founded The Catalysts’ Society in 1910. Two years later it became the prestigious Lyceum Club in Melbourne, devoted to advancing the careers of women and offering networking opportunities.
After the launch of the Women’s Law Society of Victoria in 1914, Flos was elected its first president. She cared deeply about the right of all women to vote, arguing in a 1905 debate that if “politics were not fit” for women, “the sooner they were made so the better.” (In 1908 Victorian women won the right to vote.)
Law was not Flos’s only pursuit. She travelled extensively. Two decades after graduating from law school, she took a lengthy trip through Asia, spending time in Singapore, China, Bali, Java, Malaysia and two weeks in the Burma jungle. She stayed in local homes and, on her return, spoke to audiences about the experience, delighting them with tales of “leopards, tigers, wild pigs, peacocks, … and wild jungle fowl”. She lectured publicly and on radio about the geography, religion and people of the places she visited.
The end of her career took Flos to Wangaratta in Northern Victoria. She practised at a law firm headed by Paul McSwiney, and was known to explore the countryside in a “Baby Austin” tourer. She remained an activist, supporting higher education for women and the Douglas Credit Party, a political party that aimed to remedy the economic hardships of the 1930s depression.
Flos died in 1958. While she did not live to see other female firsts, such as the appointment of the first female Chief Justice of the Supreme Court of Victoria in 2003, Flos’s capacity to envision women as equals under the law places her among the profession’s greatest innovators.
Renee Newman Knake’s book Shortlisted: Women, Diversity, the Supreme Court & Beyond will be published by New York University Press in 2020.
The role Australia played in relaying the first television images of astronaut Neil Armstrong’s historic walk on the Moon 50 years ago this July features in the popular movie The Dish.
But that only tells part of the story (with some fictionalisation as well).
What really happened is just as dramatic as the movie, and needed two Australian dishes. Australia actually played host to more NASA tracking stations than any other country outside the United States.
Right place, right time
Our geographical location was ideal as US spacecraft would pass over Australia during their first orbit, soon after launch. Tracking facilities in Australia could confirm and refine their orbits at the earliest possible opportunity for the mission teams.
To maintain continuous coverage of spacecraft in space as the Earth turned, NASA required a network of at least three tracking stations, spaced 120 degrees apart in longitude. Since the first was established in the US at Goldstone, California, Australia was in exactly the right longitude for another tracking station. The third station was near Madrid in Spain.
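The geometry behind that 120-degree spacing can be sketched in a few lines of Python. This is a schematic model only, not NASA's actual coverage calculation: it uses idealised station longitudes and assumes a very distant spacecraft is visible whenever it is above a station's horizon.

```python
# Illustrative check: for a very distant spacecraft, a ground station has
# line of sight roughly whenever the sub-spacecraft longitude is within
# 90 degrees of the station's longitude. With three stations spaced
# 120 degrees apart, at least one station is always in view.

stations = [0, 120, 240]  # station longitudes in degrees (schematic, not real sites)

def in_view(spacecraft_lon, station_lon, horizon=90):
    """True if the spacecraft's sub-point is within `horizon` degrees of longitude."""
    diff = abs(spacecraft_lon - station_lon) % 360
    return min(diff, 360 - diff) <= horizon

# Scan every degree of longitude and record any not covered by some station.
coverage_gaps = [lon for lon in range(360)
                 if not any(in_view(lon, s) for s in stations)]
print(coverage_gaps)  # -> [] : no longitude is out of view
```

In practice stations need the target some degrees above the true horizon, so the usable swath is narrower than the idealised 90 degrees here; the 120-degree spacing builds in overlap to absorb that margin.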
Australia’s world-leading place in radio astronomy was another factor, having played a key role in founding the science after the second world war. Consequently, Australian engineers and scientists developed great expertise in designing and building sensitive radio receivers and antennas.
While these were great at discovering pulsars and other celestial objects, they also excelled at tracking spacecraft. When the CSIRO Parkes radio telescope opened in 1961 it was the most advanced and sensitive dish in the world. It became the model for NASA’s large tracking antennas.
The Commonwealth Rocket Range at Woomera, South Australia, also allowed Australians to gain experience in tracking missiles and other advanced systems.
The dish you need is at Honeysuckle Creek
NASA invested a considerable amount in its Australian tracking facilities, all staffed and operated by Australians under a nation-to-nation treaty signed in February 1960.
For human spaceflight, the main tracking station was at Honeysuckle Creek, near Canberra. Its 26-metre dish was designed as NASA’s prime antenna in Australia for supporting astronauts on the Moon.
NASA’s nearby Deep Space Network station at Tidbinbilla also had a 26-metre antenna but with a more sensitive radio receiver. It was called on to act as a wing station to Honeysuckle Creek, enhancing its capabilities, and ultimately tracked the orbiting command module during Apollo 11.
Over in Western Australia, Carnarvon’s smaller 9-metre antenna was used to track the Apollo spacecraft when initially in Earth orbit, as well as to receive signals from the lunar surface experiments.
To augment the receiving capabilities of these stations, the 64-metre Parkes radio telescope was asked to support Apollo 11 while astronauts were on the lunar surface. The observatory’s director, John Bolton, was prepared to accept a one-line contract:
The Radiophysics Division would agree to support the Apollo 11 mission.
The original plan
The decision to broadcast the first moonwalk was almost an afterthought.
Originally, the tracking stations were to receive only voice communications and spacecraft and biomedical telemetry. What mattered most to mission control was the vital telemetry on the status of the astronauts and the lunar module systems.
Since Parkes was an astronomical telescope, it could only receive the signals, not transmit. It was regarded as a support station to Honeysuckle Creek, which was also tasked with receiving the signals from the lunar module, Eagle.
When the decision was made to broadcast the moonwalk, Parkes came into its own. The large collecting area of its dish provided extra gain in signal strength, making it ideal for receiving a weak TV signal transmitted 384,000km from the Moon, using the same power output as two LED lights today.
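That 384,000km figure also sets a hard lower bound on the signal delay. A one-line calculation (approximate figures, assuming straight-line propagation at the speed of light) shows the TV pictures reached Earth over a second after leaving the Moon:

```python
# Back-of-the-envelope one-way radio delay from the Moon at the quoted distance.
distance_km = 384_000
speed_of_light_km_s = 299_792.458  # speed of light in vacuum

delay_s = distance_km / speed_of_light_km_s
print(f"{delay_s:.2f} s")  # roughly 1.28 s one way
```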
One giant leap
On Monday, July 21 1969, at 6.17am (AEST), astronauts Neil Armstrong and Buzz Aldrin landed the Eagle lunar module on the Sea of Tranquillity.
It occurred during the coverage period of the Goldstone station, while the Moon was still almost seven hours from rising in Australia.
The flight plan had the astronauts sleeping for six hours before preparing to exit the lunar module. Parkes was all set to become the prime receiving station for the TV broadcast.
This changed when Armstrong exercised his option for an immediate walk – five hours before the Moon was to rise at Parkes. With this change of plan, it seemed the moonwalk would be over before the Moon even rose in Australia.
But as the hours passed, it became evident that the process of donning the spacesuits took much more time than anticipated. The astronauts were being deliberately careful in their preparations. They also had some difficulty in depressurising the cabin of the lunar module.
Meanwhile, moonrise was creeping closer in Australia. Staff at Honeysuckle Creek and Parkes began to hope they might get to track the moonwalk after all – at least as a backup to Goldstone in the US.
Bad weather hits
The weather at Parkes on the day of the landing was miserable. It was a typical July winter’s day – grey overcast skies with rain and high winds. During the flight to the Moon and the days in lunar orbit, the weather at Parkes had been perfect, but this day, of all days, a violent squall hit the telescope.
Still, the giant dish of the Parkes radio telescope was fully tipped down to its 30-degree elevation limit (the telescope’s horizon is 30 degrees above the true horizon), waiting for the Moon to rise in the north-east.
As the Moon slowly crept up to the telescope’s horizon, dust was seen racing across the country from the south. The dish, being fully tipped over, was at its most vulnerable, acting like a huge sail.
The winds picked up and two sharp gusts exceeding 110km/h struck the large surface, slamming it back against the zenith angle drive pinions that controlled the telescope’s up and down motion. The control tower shuddered and swayed from the battering, alarming everyone present.
The atmosphere in the control room was tense, with the wind alarm ringing and the 1,000-ton telescope ominously rumbling overhead.
Parkes had two radio receivers installed in the focus cabin of the telescope. The main receiver was on the focus position and a second, less sensitive receiver was offset a very short distance away, which gave it a view just below the main receiver.
Fortunately, as the winds abated, the Moon rose into the field-of-view of the telescope’s offset receiver, just as Aldrin activated the TV at 12.54pm (AEST). It was a remarkable piece of timing.
The 64m antenna at Goldstone, the 26m antenna at Honeysuckle Creek and the 64m dish at Parkes all received the signal simultaneously.
At first, NASA switched between the signals from Goldstone and Honeysuckle Creek, searching for the best-quality TV picture.
After finding Goldstone’s image initially upside down and then of poor quality, Houston selected Honeysuckle’s incoming signal as the one used to broadcast Armstrong’s “one giant leap” to the world.
Eight minutes into the broadcast, at 1.02pm (AEST), the Moon finally rose high enough to be received by Parkes’ main, on-focus receiver. The TV quality improved, so Houston switched to Parkes and stayed with it for the remainder of the two-and-a-half-hour moonwalk.
Honeysuckle continued to concentrate on their main task of communications with the astronauts and receiving that vital telemetry data.
Throughout the moonwalk, the weather remained bad at Parkes. The telescope operated well outside safety limits for the entire duration. It even hailed toward the end, but there was no degradation in the TV signal.
The moonwalk lasted a total of 2 hours, 31 minutes and 40 seconds, from the time the Eagle’s hatch opened to the time the hatch closed.
Australians saw it first
In Australia, the Apollo 11 feed was split. One feed was sent to NASA mission control for broadcast around the world. The other went directly to the ABC’s Gore Hill studios, in Sydney, for distribution to Australian TV networks.
As a result Australians watched the moonwalk, and Armstrong’s first step through Honeysuckle, just 300 milliseconds before the rest of the world.
An estimated 600 million people, one-sixth of the world’s population at the time, watched the historic Apollo 11 moonwalk live on TV. At the time it was the greatest television audience in history. As a proportion of the world’s population, it has not been exceeded since.
The success of the Apollo 11 mission was due to the combined effort, dedication and professionalism of hundreds of thousands of people in the United States and around the planet.
Australians from Canberra to Parkes, remote Western Australia to central Sydney played a critical role in helping broadcast that historic moment to an awestruck world.
You can hear more about the Moon landing in our special podcast series, To the Moon and beyond.
It’s 50 years since the two Apollo 11 astronauts – Neil Armstrong and Buzz Aldrin – spent 22 hours collecting samples, deploying experiments and sometimes just playing in the Sea of Tranquillity on the Moon.
In doing so, they created an archaeological site unique in human history.
Now, with what’s been called the New Space Race and plans to return to the Moon, the Apollo 11 and other lunar sites are under threat. We need to protect this heritage for future generations.
Apollo 11’s archaeological site
The archaeological site of Tranquillity Base consists of the hardware left behind, as well as the marks made in the lunar surface by the astronauts and instruments.
The hardware component includes the landing module, the famous flag (no longer standing), experiment packages, cameras, antennas, commemorative objects, space boots and many other discarded objects – more than 106 in total.
Around these objects are the first human footprints on the Moon as well as the tracks the astronauts made walking around, and the places where they dug out samples of rock and dust to take back to Earth for scientific analysis.
The artefacts, traces and the landscape constitute an archaeological site. The relationships between them can be used by archaeologists to study human behaviour in this environment so different to Earth, with one-sixth terrestrial gravity and no atmosphere.
Assessing the heritage value
Not only this, but the site has heritage value for people on Earth. To assess this, we can look at a number of categories of cultural significance. Those in the Burra Charter are widely used across the world for heritage assessment.
Historic: There is no doubt that, as the first place where humans set foot on another celestial body, this is a very important place in global history. It also represents the ideologies of the Cold War (1947-1991) between the US and the USSR.
Scientific: What can we learn from the site? More particularly, what questions would we no longer be able to answer if Tranquillity Base was damaged or destroyed?
This is not just about archaeological research into human behaviour on the Moon. Apollo 11 has been exposed to the harsh lunar environment for 50 years. The surfaces of the hardware are accidental experiments in themselves: they carry the record of 50 years of micrometeorite and cosmic ray bombardment. Finding out how well the materials have survived can also provide information about how to design future missions.
Aesthetic: This type of cultural significance is about how we experience a place. While we can’t assess it in person, there are films and photographs that give us a feeling for the place. This includes the light, shadows and colours of the lunar surface from the perspective of the human senses. The aesthetic qualities have inspired many artists and musicians, including astronaut Alan Bean who devoted his post-Apollo 12 life to painting the Moon.
Social: This is about the value that contemporary communities place on the site. For the 600 million-plus people who watched the television broadcast of the landing, it was a life-changing moment representing the ingenuity of human technology and visions of a space-age future.
But the mission did not mean the same for everyone. Some African-Americans protested against Apollo 11, seeing it as a waste of resources when there was such great economic and social disparity between white and black communities in the US. For them, it was a sign of human failure rather than a triumph.
The larger the community that has an interest in a heritage place, the higher its level of social significance. It could be argued that Apollo 11 has outstanding universal significance, like places on the World Heritage List (unfortunately the World Heritage Convention cannot be applied to space).
What are the threats?
In the past few years we have seen an increase in proposed missions to return to the Moon. Some have stated their intention to revisit the Apollo sites, by human crew or robot – and this could lead to the removal of material, for souvenirs or science.
But the sites are both fragile and unprotected. The two primary risks to their survival are uncontrolled looting, and damage from abrasive and sticky lunar dust.
Removing material from the sites damages the integrity of the artefacts and the relationships between them. A casual visit could erase the original footprints and astronaut traverses. The corrosive dust disturbed by surface activities could wear away the materials.
Dust was a problem for all the crewed lunar missions. Apollo 16 commander John Young said: “Dust is the number one concern in returning to the Moon.”
The dust can be stirred up by plumes from landing or ascending vehicles, driving vehicles, walking on the surface, or, in the next phase of lunar settlement, by construction and industrial activities, such as mining.
Attempts at protection
The Outer Space Treaty of 1967 forbids making territorial claims in space. Applying any national heritage legislation to a place on the Moon could be interpreted as a territorial claim.
The US states of California and New Mexico have placed the Apollo 11 artefacts left on the Moon on a heritage list. They can do this because, under the treaty, the US legally owns the artefacts. But this does not protect the site itself.
NASA has established a set of heritage guidelines for its sites on the Moon. The guidelines propose buffer zones around these areas, inside which no-one should enter. They make recommendations for approaching the sites to minimise dust disturbance.
In May 2019, a bill called the One Small Step to Protect Human Heritage in Space Act was introduced to the US Congress. Its purpose is:
To require any Federal agency that issues licences to conduct activities in outer space to include in the requirements for such licences an agreement relating to the preservation and protection of the Apollo 11 landing site, and for other purposes.
But the bill applies only to Apollo 11 and does not have similar requirements for the five other Apollo landing sites. It also applies only to US missions. It’s a step in the right direction, but there is still much more to be done.
Only in the last decade has the idea of space archaeology gained legitimacy. Until recently, there was no urgency to establish an international framework to manage the cultural values of lunar heritage.
Now we’re in a new situation. On Earth, it’s common for industrial or urban activities that disturb the environment to be subject to an environmental impact assessment, which includes heritage.
Even when there are no laws to force companies to pay attention to heritage, many consider it important to seek a Social Licence to Operate – support from stakeholder communities to continue their activities.
Everyone on Earth is a stakeholder in the heritage of the Moon. Fifty years from now, what will remain of the Apollo 11 and other sites? What new meanings will people draw from it?
Tramping artisans who marched thousands of miles a year are proof that Britain was built by migrants
“If you believe you are a citizen of the world, you are a citizen of nowhere” – so said British prime minister Theresa May in a speech which captured the tone of the Conservative government’s long-running campaign to crack down on immigration. From creating a “hostile environment” for illegal immigrants, to ramping up visa restrictions and pursuing a Brexit deal to end freedom of movement between the UK and Europe, the Conservative government has made strenuous efforts to prevent immigration to the UK.
What’s perhaps more surprising is that the opposition felt compelled to say something similar: the Labour party’s manifesto declares it would honour the EU referendum result and end freedom of movement, replacing it instead with “fair immigration rules”, as yet not clearly defined.
Both parties’ stances contain a grain of irony. The Conservatives – seen in the past as supporting businesses that make money from international labour – now seek to tighten the borders. Labour – a party descended from unions set up to support the worldwide movement of labour – now shows little sense of solidarity with international or EU workers.
But as a professor researching labour history and media communication, I find the greatest irony is that migration helped forge the very social, cultural and economic infrastructures that Britain now seeks to wall off from the rest of the world.
A brief history of British migration
Between 1815 and 1930, an estimated 11m Britons left for North America, Australasia and South Africa. During the same period, 7m Irish shipped out to the US and the British dominions. Migration on this massive scale contributed to imperial and labour diasporas – economic migrants shifting across international borders during a period of great change.
At the same time, between 1840 and 1911 around 4.5m people moved from the countryside to British cities such as London, Leeds, Liverpool, Sheffield, Glasgow, Birmingham and Newcastle to take up work and learn new skills. With this came the need to help those without jobs.
Until the Trade Union Act of 1871, UK trade unions were prevented from organising for political purposes. Instead, workmen banded together as mutual self-help societies. They provided sickness and funeral funds, set up regional support networks and offered members financial support during periods of unemployment.
From the early 1800s onwards, UK labour unions built sophisticated structures to support the movement of people locally, regionally and globally. The general workings were similar: societies issued members with travelling documents indicating their good standing, as well as information on union contacts strung along a circuit of towns.
Travellers presented themselves to such representatives (available in the evening, usually in a pub or meeting space), where they would be issued with an official note for lodgings, offered food and drink and paid a small sum for distances tramped (between a half-penny and a penny per mile). If work was forthcoming, they would be directed to relevant employers; if not, they continued onwards.
In such ways, tramping artisans would often cover huge distances over a course of many months. In one extreme case from 1848, a tramping typographer marched over 1,800 miles, leaving London to take in the delights of Southampton, Bristol, Glasgow, Stirling and 21 different Irish towns, before returning to his old haunts a year later.
A global network
International movement was part of that mix. Throughout the 19th and early 20th centuries, union-sponsored emigration grants offset travel costs of union members, enabling them to circulate along transnational routes as part of the British Empire’s colonial expansion in places such as Australia, Canada, New Zealand, South Africa and India.
The Scottish Typographical Association, for example, operated a structured emigration scheme for its members. Between 1903 and 1912 it paid out over £1,626 in emigration grants – worth £625,000 in modern currency. Travel subsidies usually averaged between £5 and £10 per member (worth £500 to £1,000 in modern currency), depending on how long they had been a member of the union. This was quite substantial during a period when you could enjoy a pint of bitter in your local pub for a penny, travel from Birmingham to London for 20p, and the average earnings in 1908 were £70 a year.
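As a rough cross-check of those figures (an illustration only; the record does not say how many members were actually assisted), dividing the total paid out by the typical grant range bounds the plausible number of assisted members:

```python
# Sanity check on the quoted Scottish Typographical Association figures.
# Assumption: every grant fell in the quoted 5-10 pound range.
total_paid = 1626        # pounds paid out, 1903-1912
low, high = 5, 10        # typical grant size, pounds

members_if_small_grants = total_paid // low    # if every grant was at the low end
members_if_large_grants = total_paid // high   # if every grant was at the high end
print(members_if_large_grants, members_if_small_grants)  # roughly 162 to 325 members
```

On those assumptions, the scheme would have helped on the order of a few hundred printers emigrate over the decade.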
Governments and civilians in British settlements were often complicit in subjugating, suppressing and destroying indigenous cultures in pursuit of colonial expansion. The ongoing impacts of colonialism in these places are many and complex. Yet migration played its part in shaping those regions in ways that have since defined their national identities, bringing trade skills and knowledge.
Migrants supported by union schemes started businesses that were central to shaping the economies of emerging communities and towns, such as Lawrence in New Zealand, Ballarat in Australia and Kimberley in South Africa. They applied their knowledge and expertise, and passed it on to others they encountered on their travels.
The unions that emerged in the 19th century developed complex information and support networks to respond to the need for trade worker movement. They were used to support those who could not find long-term work, and to create global knowledge and skills exchange systems.
British people should recognise that the working world today has been greatly shaped by a freedom of movement that was once encouraged and supported. The flotsam and jetsam of the past, also once despised as citizens of nowhere, often became civic leaders thanks to union links and support, repaying that generosity with communal spirit and proven worth. It’s best not to forget such lessons in today’s turbulent times.
Algebra, alchemy, artichoke, alcohol, and apricot all derive from Arabic words which came to the West during the age of Crusades.
Even more fundamental are the Indo-Arabic numerals (0-9), which replaced Roman numerals during the same period and revolutionised our capacity to engage in science and trade. This came about through the Latin world’s discovery of the work of the ninth-century Persian scholar Al-Khwarizmi (whose name gives us the word algorithm).
This debt to Islamic civilisation contradicts the claim put forward by political scientist Samuel Huntington in his book The Clash of Civilizations some 25 years ago, that Islam and the West have always been diametrically opposed. In 2004, historian Richard Bulliet proposed an alternative perspective. He argued civilisation is a continuing conversation and exchange, rather than a uniquely Western phenomenon.
Even so, Australia and the West still struggle to acknowledge the contributions of Islamic cultures (whether Arabic speaking, Persian, Ottoman or others) to civilisation.
In an initial curriculum proposed by the Ramsay Centre for Western Civilisation, only one Islamic text was listed, a collection of often-humorous stories about the Crusades from a 12th-century Syrian aristocrat. But Islamic majority cultures have produced many other texts with a greater claim to shaping civilisation.
Philosophical and literary influences
Many of the scientific ideas and luxury goods from this world came into the West following the peaceful capture of the Spanish city of Toledo from its Moorish rulers in 1085.
Over the course of the next century, scholars, often in collaboration with Arabic-speaking Jews, became aware of the intellectual legacy of Islamic culture preserved in the libraries of Toledo.
Their focus was not on Islam, but the philosophy and science in which many great Islamic thinkers had become engaged. One was Ibn Sina (also known as Avicenna), a Persian physician and polymath (a very knowledgeable generalist) who combined practical medical learning with a philosophical synthesis of key ideas from both Plato and Aristotle.
Another was Ibn Rushd (or Averroes), an Andalusian physician and polymath, whose criticisms of the way Ibn Sina interpreted Aristotle would have a major impact on the Italian theologian and philosopher Thomas Aquinas in shaping both his philosophical and theological ideas in the 13th century. Thomas was also indebted to a compatriot of Ibn Rushd, the Jewish thinker Moses Maimonides, whose Guide to the Perplexed was translated from Arabic into Latin in the 1230s.
While there is debate about the extent to which the Italian writer Dante was exposed to Islamic influences, it is very likely he knew The Book of Mohammed’s Ladder (translated into Castilian, French and Latin), which describes the Prophet’s ascent to heaven. The Divine Comedy, with its account of Dante’s imagined journey from Inferno to Paradise, was following in this tradition.
Dante very likely heard lectures from Riccoldo da Monte di Croce, a learned Dominican who spent many years studying Arabic in Baghdad before returning to Florence around 1300 and writing about his travels in the lands of Islam. Dante may have criticised Muslim teaching, but he was aware of its vast influence.
Islam also gave us the quintessential image of the Enlightenment, the self-taught philosopher. This character had his origins in an Arabic novel, Hayy ibn Yaqzan, penned by a 12th-century Arab intellectual, Ibn Tufayl. It tells the story of how a feral child abandoned on a desert island comes through reason alone to a vision of reality.
Hayy ibn Yaqzan was published in Oxford, with an Arabic-Latin edition in 1671, and became a catalyst for the contributions of seminal European philosophers including John Locke and Robert Boyle. Translated into English in 1708 as The Improvement of Human Reason, it also influenced novelists, beginning with Daniel Defoe’s Robinson Crusoe in 1719. The sources of the Enlightenment are not simply in Greece and Rome.
Civilisation is always being reinvented. The civilisation some call “Western” has been, and still is, continually shaped by a wide range of political, literary and intellectual influences, all worthy of our attention.