Department stores began as retail innovators. They arose within a changing consumer environment, distributing mass-produced goods. The scale of their operations and the breadth of their product ranges helped establish retail dominance. Bit by bit, that competitive advantage has eroded.
The latest results from Myer and David Jones don’t inspire confidence. Myer experienced an 80% drop in profits over the past year, while for David Jones it was 25%.
Myer chief executive Richard Umbers pointed to “heightened competition, subdued consumer sentiment and discount fatigue”.
David Jones chief executive John Dixon blamed the costs involved in turning around a business that he said had been “in a form of managed decline” prior to being acquired by the South African Woolworths company in 2014.
Department stores emerged in Britain and France in the 19th century. They introduced several innovations, including set prices rather than haggling, a move away from credit to cash sales, off-the-rack clothing, improvements in stock control and a high-turnover sales model.
In the second half of the century, large-scale, purpose-built stores that stocked goods meeting a full range of needs for the home, as well as ready-made clothing, were constructed in major cities across Europe, North America and Australasia. These included Harrods in London, Le Bon Marché in Paris, Macy’s and Alexander Turney Stewart’s “Marble Palace” in New York and David Jones in Australia.
These stores reached their height in the interwar years, and despite setbacks during the Great Depression and World War II, dominated consumer imaginations and urban skylines until the late-1950s.
Their retail preeminence was based on urban environments built around public transport. The rise of the automobile posed an existential threat. The great city stores that survived longest in Australia – David Jones, Myer and Grace Bros – were those that embraced suburban expansion through shopping centre development.
Firms that were once household names – Farmers, Anthony Hordern & Sons, Foy & Gibson, Buckley & Nunn and a host of others – have all disappeared.
However, by developing and taking tenancies in shopping centres – an innovation that sustained and prolonged their lifespan – department stores created a powerful competitor. With more efficient, specialist retailers serving as its departments, what is a shopping centre but a department store on a larger scale?
Until the 1970s, department stores were also able to buy customer loyalty by offering exclusive credit provisions. This occurred through store cards that customers could use to make purchases at that store and its branches.
The bankcard rollout, beginning in 1974, and the bank credit cards that followed, helped to democratise credit provision, freeing customers to shop wherever they liked.
Discounters and category killers
The next existential threat emerged from a retail innovation that took the United States by storm in the 1950s: discount department stores that used the self-service supermarket retail model to sell department store merchandise.
In Australia, Coles saw an expansion opportunity and launched Kmart in 1969. Myer, observing the impact of discount department stores on traditional department stores in the US, rolled out Target. Woolworths lagged, but introduced Big W in the mid-1970s.
Over time, these stores took a heavy toll on the sales of traditional department stores. Figures provided by Urbis show that in 1974, traditional department stores accounted for 27% of sales of department store-type merchandise such as home furnishings, apparel and cosmetics, with discount stores making just 2% of such sales. By 1991, the figures were 12% and 11% respectively.
Since then, both have had trouble competing, not only with each other, but with “category killers” that took the self-service model and applied it to specific categories of goods.
This allowed them to achieve efficiencies of scale and brand recognition, while providing deep product ranges, specialist service and cheap prices. Think Toys-R-Us, Rebel, Lincraft, The Good Guys, Harvey Norman, Freedom and Officeworks.
You can still buy toys in Myer, but who does? You can buy a fridge if you are happy to pay a significant premium. This is niche retailing – a complete inversion of the mass-market retail principles that underpinned the original department store business model.
International fast fashion is just the latest category killer for a product segment sold by department stores. If department stores lose fashion, they are gone. And the retailers they are competing with – Zara, H&M, Uniqlo and others – operate on a scale that eclipses national department store chains.
Department stores may well extend their lifespan. There are plenty of pundits offering solutions, both here and overseas.
Nevertheless, it’s likely Australia will see further market rationalisation. Three chains are likely to be left: David Jones or Myer at the top, Myer or Target in the middle, and Kmart at the bottom. This is not definitive – innovative management can turn around a brand within a few years as Guy Russo did with Kmart recently and Paul Simons did with Woolworths in the late-1980s.
But rationalisation will continue. Time has brought department stores back to the pack. The weakest will merge or fall.
Domestic animals are rarely associated with Antarctica. However, before non-native species (bar humans) were excluded from the continent in the 1990s, many travelled to the far south. These animals included not only the obvious sledge dogs, but also ponies, sheep, pigs, hamsters, hedgehogs and a goat. Perhaps the most curious case occurred in 1933, when US Admiral Richard E. Byrd’s second Antarctic expedition took with it three Guernsey cows.
The cows, named Klondike Gay Nira, Deerfoot Guernsey Maid and Foremost Southern Girl, plus a bull calf born en route, spent over a year in a working dairy on the Ross Ice Shelf. They returned home to the US in 1935 to considerable celebrity.
Keeping the animals healthy in Antarctica took a lot of doing – not least, hauling the materials for a barn, a huge amount of feed and a milking machine across the ocean and then the ice. What could have possessed Byrd to take cows to the icy south?
The answer we suggest in our recently published paper is multi-layered and ultimately points to Antarctica’s complex geopolitical history.
Solving the “milk problem”
The cows’ ostensible purpose was to solve the expedition’s so-called “milk problem”. By the 1930s, fresh milk had become such an icon of health and vigour that it was easy to claim it was needed for the expeditioners’ well-being. Just as important, however, were the symbolic associations of fresh milk with purity, wholesomeness and US national identity.
Powdered or malted milk could have achieved the same nutritional results. Previous expeditions, including those of Ernest Shackleton and Roald Amundsen, had survived just fine with such products. What’s more, William Horlick of Horlick’s Malted Milk sponsored Byrd’s second Antarctic expedition; the seaplane Byrd used was named for this benefactor.
So if fresh milk was not actually a health requirement, and other forms were readily available, why go to the trouble of lugging three cows and their accoutrements across the ice?
The cows represented a first, and Byrd well knew that “firsts” in the polar regions translated into media coverage. The expedition was privately funded, and Byrd was adept at attracting media attention and hence sponsorship. His backers expected a return, whether in the form of photographs of their product on the ice or mentions in the regular radio updates by the expedition.
The novelty value that the cows brought to the expedition was a valuable asset in its own right, but Byrd hedged his bets by including a pregnant cow – Klondike was due to give birth just as the expedition ship sailed across the Antarctic Circle. The calf, named “Iceberg”, was a media darling and became better known than the expeditioners themselves.
The celebrity attached to the cows helped the expedition remain in the headlines throughout its time in Antarctica, and they received an enthusiastic welcome upon its return. Although the unfortunate Klondike, suffering from frostbite, had to be put down mid-expedition, her companions made it home in good condition. They were feted on their return, meeting politicians in Washington, enjoying “hay cocktails” at fancy hotels, and making the front page of The New York Times.
It would be easy, then, to conclude that the real reason Byrd took cows south was for the publicity he knew they would generate, but his interest in the animals may also have had a more politically motivated layer.
A third reason for taking cows to Antarctica relates to the geopolitics of the period and the resonances the cows had with colonial settlement. By the 1930s several nations had claimed sectors of Antarctica. Byrd wanted the US to make its own claim, but this was not as straightforward as just planting a flag on the ice.
According to the Hughes Doctrine, a claim had to be based on settlement, not just discovery. But how do you show settlement of a continent covered in ice? In this context, symbolic gestures such as running a post office – or farming livestock – are useful.
Domestic animals have long been used as colonial agents, and cattle in particular were a key component of settler colonialism in frontier America. The image of the explorer-hero Byrd, descended from one of the First Families of Virginia, bringing cows to a new land and successfully farming them evoked this history.
The cows’ presence in Antarctica helped symbolically to turn the expedition base – not coincidentally named “Little America” – into a frontier town. While the US did not end up making a claim to any sector of Antarctica, the polar dairy represented a novel way of demonstrating national interest in the frozen continent.
The Antarctic cows are not just a quirky story from the depths of history. As well as producing milk, they had promotional and geopolitical functions. On an ice continent, settlement is performed rather than enacted, and even Guernsey cows can be more than they first seem.
Half buried in the sand, uprooted stalks of kelp are like splashes of dark blood against the white quartzite, ground fine as talc. In the translucent shallows, tendrils of kelp flounce lazily as the water gradually turns to turquoise then a deep Prussian blue at the horizon. Behind the crescent of beach, matted tentacles of spongy pigface disguise accumulated detritus of crayfish, oyster, abalone and scallop shells, rubbish middens thousands of years in the making.
Known as Recherche Bay, this exquisite table-shaped body of water in the southeast corner of Tasmania was named by the French explorer Bruni D’Entrecasteaux who rested his ships Recherche and Esperance here in April and May 1792. Before the French arrived, this place was an important ritual site for the Nuenonne people, who journeyed in bark canoes from Bruny Island to meet with the Needwondee and Ninine people, who travelled overland from the west. For millennia they made this trip: the same seasonal migration; the same ritual feast. Not any more. Not since Ria Warrawah was loosed among them.
Wooredy, the last elder of the Nuenonne, saw it with his own eyes. In the cosmology of the original Tasmanians, Wooredy explained, Ria Warrawah was the intangible force of evil that could infest all things. Since the beginning of time, Ria Warrawah was held in check by the great ancestor who lived in the sky, maintaining the world in precarious balance until two avatars of evil fashioned as clouds pulling small islands floated into this very bay. As a small boy he had been transfixed by the sight of the French ships floating in from the ocean, and disgorging onto the land strange creatures just like the returned dead who had been drained of colour by the rigours of their journey. He watched as they walked about to collect water and make a fearsome sound with a stick that spat fire before returning to their floating islands.
He never saw those ships again, but when he was a young man on a hunting trip to the northern tip of Bruny Island, Wooredy observed two more such apparitions of evil float into the river estuary on the mainland opposite. This time the dead men came ashore and remained there, cutting down the trees to build huts and disturbing the ground all about. Plenty more of them arrived. And the Nuenonne began to die.
Thirty years after he watched the ships Lady Nelson and Ocean enter the estuary of the Derwent River, Wooredy was still hunting on his traditional country. He was by then a renowned warrior in his mid-forties who went about naked and wore his hair in the traditional fashion – long greased ringlets coloured with ochre that fell over his eyes like a mop. Wooredy was a cleverman, so knowledgeable in ritual and healing that the white men who came to his island called him the Doctor. Even he proved no match for the epidemic illness that between April and December of 1829 swept away nearly everyone of his clan.
Wooredy was not the last of the Nuenonne. That terrible distinction belonged to his second wife, Truganini, a woman whose name is vaguely familiar to most Australians, having achieved undesired celebrity as “the last of her race”.
An irresistible force
For most of my adult life I have been compelled by the story of Wooredy and Truganini, people who lived through a psychological and cultural transition more extreme than most human imagination could conjure. Both were witness and participant in a process of apocalyptic destruction without parallel in modern colonial history. Their experience has invariably been told through the prism of regretful colonial imperative, a rueful backward glance at the tragic collateral damage of inexorable historical forces. That is not a narrative I wish to perpetuate. Wooredy and Truganini compel my attention and emotional engagement because it is to them I owe a charmed existence in the temperate paradise where I now live and where my family has lived for generations.
My great-great-grandfather was fresh off the boat from England in 1829 when he was handed an unencumbered free land grant of over a thousand hectares of Nuenonne hunting grounds. On this land he prospered and put down deep roots, while the traditional owners were repaid with exile, anguish and despair.
Richard Pybus may have been the first white man granted freehold title to a large part of Bruny Island, but other grant holders followed soon enough. Next came George Augustus Robinson, an ambitious tradesman and self-styled missionary who threw over his successful business as a builder to become “conciliator” of the Indigenous Tasmanians. He had lofty ambitions that he could teach these ancient people to shuck off their savage ways and become good Christian serfs.
My ancestor’s neighbour was a most problematic fellow. Tempting though it is for me to despise the man, I remain immensely grateful for his voluminous daily journals that have given me a glimpse into the lived experience of Wooredy and Truganini, who were his close companions for 12 years as guides and intermediaries in the audacious project of conciliation that he called “the friendly mission”.
Heaven only knows what sort of excursion Wooredy and Truganini thought they had embarked upon on 29 January 1830 when Robinson took them from their island to sail to Recherche Bay for an overland trek to the west coast. Since the beginning of time the Nuenonne had taken this journey in their bark canoes, while nomadic treks through the southwest were part of the timeless, seasonal pattern of their traditional life. Such a journey encompassed return, a completion, in accordance with the natural cycles of the environment. A journey for the purpose of reaching a destination was entirely new. Not to return would have been unthinkable.
For more than 40 years, Wooredy had made trips to and from his island and knew Recherche Bay held the malevolent spirit of Ria Warrawah, embodied in a carved tree that was left by the French visitors. The day after their arrival, while hunting he came across the decayed body of a woman that showed no sign of violence. Ria Warrawah had caught her, he was sure of it. When the body was identified as a Ninine woman on a visit from the west coast who had become ill and been abandoned to die alone, Robinson was dismayed that his Tasmanian companions were strangely unmoved by this apparent callousness. It was yet another display of their belief “that no human means can avert the doom to which they are consigned”.
This stubborn fatalism about the irresistible force of Ria Warrawah deeply rankled him, even though Wooredy had given him a potent lesson in the awesome power of Ria Warrawah as they were sailing to this tranquil bay. During the trip Wooredy identified all the land that passed before his eyes as the country of three interconnected clans – the Mellukerdee of the Huon River, the Lyluequonny of Southport and the Needwondee of Cox’s Bight – all of them gone within the span of Wooredy’s adult life. This land was empty, he explained. Nobody left.
Plunging into the wild
Mid-morning on 3 February 1830, Robinson set out with his Tasmanian guides as well as a handful of convict retainers to walk overland to the west coast. The sun was shining and he estimated the distance to Port Davey to be about 60 miles, which would take them about three days. Truganini had relatives among the Ninine people of Port Davey and was anxious to get going but Wooredy was not so keen, displaying an inherent hostility toward the toogee – his collective name for people from the west coast – that Robinson found disturbing. It was an enmity he shared with the six other Tasmanian men in the party who were aliens in this country where they did not know the language or customs.
The steady, reliable Wooredy was considered by Robinson to be his “loyal and trusted companion”, and next he looked to the “respectful and compliant” Kickerterpoller, whose command of English and knowledge of European customs made him an ideal negotiator in Robinson’s eyes. This young man was from the Paredarererme clan from Oyster Bay, stolen from his people when he was about nine and given to a settler as a farmhand. As a youth he ran away to join in a guerrilla war before being captured in 1824 when he became a guide for the roving parties.
Kickerterpoller was very familiar with this kind of expedition and knew only too well the coercive, violent ways of white men. Although the mission was not a paramilitary organisation like the roving parties, and no one was openly armed, the convicts all carried guns and the brace of pistols Robinson had hidden in his knapsack told him it was not so friendly. Suspicion aside, Kickerterpoller had reason to cleave to Robinson, at least in the short term. Instead of being confined in a foetid gaol, the Tasmanians were at large in empty country where they could hunt freely. And no one was shooting at them.
No white man had ever attempted an overland route to the west coast, and Robinson knew nothing of the territory before him. Among the colonists, an enduring perception had taken hold that the southwest was a terrible place, a geographical extension of the inhuman horrors of the penal settlement in Macquarie Harbour. Everyone knew the stories of convicts driven beyond endurance by the cruelties of the penal system who had escaped into the hinterland never to be seen again. One convict bolter who survived his encounter with this terrible land was sustained throughout his ordeal by eating the companions he murdered. If the rigours of this hellish environment could drive a Christian white man to cannibal depravity, why would any white man willingly set foot upon it?
George Augustus Robinson was no ordinary white man. He had a hankering to venture into the heart of darkness and immerse himself in the challenges offered by the vast wilderness of the new world. He would reason to himself that his object in plunging into the wild was to shine the light of God into the darkness, while his wholehearted embrace of untamed nature revealed a passion for elemental experience much at odds with his evangelical posturing. All along the rugged way, his steps were driven by a voracious ambition to be feted and admired by the settler elite who had showered derision upon his enterprise. He was determined to return to their small world as a conquering hero.
Walking in single file, with the convicts bringing up the rear, the party followed the creek westward for a mile or so until they reached a flat plain that stretched for many miles, promising easy walking. To everyone’s dismay, they almost immediately sank into tepid water that rose to their calves. The pretty olive- and rust-coloured grasses that stretched as far as their eyes could see were growing in a porous layer of peat that sat on a hard quartzite base, trapping the voluminous rainfall into a watery bog. For hours the party pulled their legs through marshland that at times sucked them down to their knees. Reaching higher ground, they were only slightly less dismayed to find an almost impenetrable belt of thick eucalypt scrub.
Just after dawn next day they located “the native track” that led to the south coast. The track had not been used for many months, and in places was completely swallowed up by rainforest – which meant clambering over fallen trees that were slippery with moss, sometimes crawling through on hands and knees, then a steep descent down a cliff face where almost every step caused a cascade of small boulders. After much slipping and stumbling they finally reached the shore, where they made camp just as huge heavy drops of rain began to fall; the rain persisted all through the night.
At sunrise, greatly disheartened and drenched to the bone, the expedition set off once more, climbing up and over rugged country covered with dense forest, punctuated by huge outcrops of barren rock with jagged edges sharp as knives. When they reached the coast they were sweating profusely under the baking sunshine as they walked for several hours along a wide arc of squeaky, shifting sand pounded by heavy surf. Lagging a mile or two behind Robinson and his guides, the burdened convicts stumbled and cursed. That night, camped at the bottom of a deep coastal ravine, Robinson was very apprehensive. They had covered no more than 20 miles, and supplies were running dangerously low. There were no people around to render assistance. Along the way they had passed many bark huts of the Needwondee, all deserted. Wooredy explained these people were snatched away by Ria Warrawah.
The fourth day involved negotiating a passage across a daunting mountain range that consisted of a series of polished quartz summits. Much of the time they progressed on hands and knees, clinging onto the wiry tufts of grass or pitiful, wind-stunted trees. After persevering all day in this unforgiving terrain without any food, the guides were at the point of total exhaustion. Truganini could barely walk. Kickerterpoller was no longer compliant, boldly remonstrating that this was not the way locals travelled. Even a roving party that moved through cleared country on level ground did not go at such a pace.
The indefatigable Wooredy was the only one not prostrate with exhaustion. Scanning the ragged, precipitous coastline, his sharp eyes located the supply schooner lying offshore in a bay about six miles ahead. White men called this place Louisa Bay, but Wooredy knew it to be where the creator spirit Droemerdeener fell from the sky into the sea. Like Recherche Bay, it was once a ritual meeting place for all the clans of the south-east, and it held extensive shell middens and hidden rock paintings. Here was where his father and grandfather built the sturdy canoes they took to distant Maatsuyker Island to hunt for seals. There was no more hunting for seals on Maatsuyker. In a few short years the seal colony had been wiped out by the same rapacious white men who had stolen so many of the Nuenonne women.
Re-energised by the prospect of food, Robinson followed his guides in a headlong scramble down the mountainside, reaching Louisa Bay by late afternoon. Two hours later the shattered convicts arrived. Watching Truganini gleefully diving for crayfish, he ruefully acknowledged how perilously close they had come to starvation. The rigours of the journey convinced him that he would not survive the trip to Port Davey without reliance on Indigenous food supplies and local knowledge of the bush. He would have to defer to their way of doing things.
A hideous irony
For the next six weeks Robinson kept to the meandering, leisurely pace of the Tasmanians, for whom travel was subordinate to the requirements of hunting and gathering. He was growing increasingly frustrated at his failure to make contact with the elusive Ninine. Although evidence of their fires and their grass-covered huts was plentiful, the people kept well out of sight. Truganini knew how to find her relatives, but was in no hurry. Slyly deflecting Robinson’s pursuit, she spent her time diving for crayfish, oyster and abalone or collecting small wild plums, sweet red berries and edible roots. The men went hunting for wallaby, wild duck and an elusive animal somewhat bigger than a dog, with distinctive stripes on its back. It was a kind of hyena, Robinson thought.
As the food became more plentiful, the terrain grew more difficult. Moving further westward toward Bathurst Harbour meant pushing into mountainous country covered with almost horizontal forest. Beset by mizzling rain that never let up, they were forced to crawl along precipices or wade for miles through thigh-high water. The brutal terrain and perpetual rain made the experience excruciatingly uncomfortable, yet Robinson found it utterly exhilarating.
Robinson was sticking close to his guides, sleeping around their fires and sharing their provisions of abalone, crayfish and fresh wallaby meat, while the scornful convicts made camp a considerable distance away and spurned the Tasmanians’ fresh food in favour of their Christian food of spoiled potatoes and salted meat. Nor did they want any part of the heathen singing and dancing that went on every night at the Tasmanians’ camp, with Robinson as a fascinated participant. He listened attentively as Wooredy told of the exploits of the creator spirits who made man from the kangaroo, writing up copious notes in his journal.
As the stories were sung with a repeated, chanted chorus, Robinson cleverly inserted himself into these nightly rituals by joining in the chanting. And he played his flute, which was a great hit. The Tasmanians were all having a fine time. After years of terror and harassment they were back in the bush, reviving a traditional way of life that revolved around hunting and ritual. And Mister Robinson was there to make sure the surly white men with guns were kept a safe distance.
So began a system of mutual support and protection between Robinson and his Tasmanian guides that for Wooredy and Truganini lasted 12 years. They might not have properly comprehended Robinson’s intentions, but they understood that their relationship with him had undergone a profound change since leaving Louisa Bay. In contrast to his earlier behaviour, where his efforts had been to make them like himself, in the wilderness it seemed as if he was in the process of becoming one of them.
Wooredy took the lead in an overt effort to induct Robinson into the Tasmanians’ way of life, leading the nightly ritual re-enactments of how animal spirits formed the world, how they left their recognisable mark on the landscape and how they emerged in the form of man and other species to inhabit that landscape. In Wooredy’s spellbinding stories, and in their song and dance, the Tasmanians asserted the palpable reality of their world, as opposed to Robinson’s abstract talk of God, heaven and hell.
This reciprocal relationship between Robinson and his Tasmanian guides had all the elements of tragedy. In his detailed accounts of their interactions, Robinson revealed a genuine interest in Tasmanian culture and an affectionate regard for the people. He slept with them, sang with them, hunted with them, learnt their language and marvelled at their mental and physical adaptation to the natural world. The hideous irony was that despite the intense pleasure he took in this elemental experience, which caused his impoverished puritan spirit to soar, Robinson sought to ingratiate himself with the Tasmanians and secure their trust so he could use them to entice the remaining Indigenous population into his custody.
Fancying himself as an ethnographer, he was also making a study of the curious ways of the primitive Tasmanians in the wild for the book he intended to publish. His journal entries offer not a glimmer of awareness that his travel companions might think they were in a relationship of mutual obligation.
Robinson could invest his companions with fundamental human feelings of sadness and pleasure, even affection and loyalty, but to grant them complex reasoning and intricate social relationships would have destroyed the whole rationale of his activity. The idea that Wooredy and Truganini might have regarded themselves as equal partners in his enterprise would never have entered his head.
In the middle of March the party reached the vast waterway of Bathurst Harbour. They had been walking for six weeks without making contact. The inhabitants of the southwest proved no more accommodating than the savage landscape, “fleeing before my approach as the clouds flee before a tempest”, Robinson wrote with heavy exasperation. It was at Bathurst Harbour that one of the guides spotted a flag fluttering on the shore, causing Robinson to experience a surge of expectation. The flag was revealed to be a pathetic, desperate signal planted by three escaped convicts from the penitentiary at Sarah Island, many miles to the north. Their bleached skeletons, still wearing tatters of government-issue clothing, were an unsettling reminder of how inhospitable this place could be for white intruders.
Squatting on the ground to register this grim find, Wooredy suddenly pointed to smoke rising in the distant hills. The sight of smoke set Robinson’s heart racing all over again – at last the Ninine were in sight. Wooredy and Truganini set off in hot pursuit, and in the following days they made contact with the Ninine time and time again, but could persuade only two young women to come with them to meet Mister Robinson. The rest of the group simply melted away into the bush. These two women were entertained with the baubles Robinson gave to them and were also utterly beguiled by the sound of his flute, but it took days to persuade them to take him to their hiding place.
Pushing through tough scrub, Robinson followed the two women for a very long way, until they reached a hidden clearing. After several loud hoots, ten naked women emerged, with six children in tow, followed a little later by ten men, all of them standing over six feet tall, naked and carrying spears, with dead wallaby thrown over their shoulders. Wooredy told how he had walked all day to meet with them and how Robinson was constantly calling out gozee, meaning “make haste”, which caused great mirth. They kept repeating “gozee” to Robinson, then collapsing into gleeful laughter. Cautiously they sniffed at the biscuit he offered, before handing it back, then they amused themselves stroking and prodding his pale skin and meticulously examining the blue coat he was wearing.
These ten families made an impressive group, with everyone in excellent health and high spirits. This jocular band agreed to accompany Robinson back to his camp, laughing and shouting all along the way, until they breasted the hill above Kelly’s Basin. Suddenly they stopped in their tracks and fell silent. Coming toward them were a group of white men in a boat.
Robinson was livid with anger at the curious convicts who had disobeyed his order to stay out of sight. Knowing he had no hope of inducing the Ninine to take another step, he went alone to his camp. Early next morning he anxiously climbed the same hill and was distressed to see that the Ninine had slipped away. Wooredy and Truganini followed their tracks for the next two weeks, being led in a game of hide-and-seek, making sporadic contact with the Ninine, only to have them disappear at whim.
Palpably frustrated by his failure to effect “conciliation” with the local population, Robinson was equally perplexed by the attitude of his guides. He was alarmed when the Tasmanian men told him they could round up the Ninine for him if only he would give them his pistols. Alternatively, his convict retainers advised that alcohol would be the most effective weapon, explaining “it would only be necessary to make them drunk and you could take them anywhere”.
Robinson expected this kind of response from convicts, which is why he kept them far away from any possible contact, and he was alert to potential antagonism from men of other language groups, but it was beyond his comprehension that Wooredy should want to capture a people to whom he was closely related. Robinson began to suspect his loyal and trusted companion could be causing the extreme wariness of the Ninine, especially when he heard Truganini warn them that her husband “did not like toogee”.
It was a genuine shock to Robinson to realise that all his expedition team thought the purpose of their travail in this rugged, wet and wind-ravaged landscape was to capture the inhabitants. No one appeared to understand him when he reiterated that his friendly mission was merely to gain the confidence of the west-coast clans. Taking captives was never his intention, he insisted, oblivious as always to the implicit message he was giving. His Tasmanian guides were already captives. Captivity was the new order in which they lived and it was apparent to them that even the white men who carried the supplies were captives.
To what end had Robinson marched them across the island, his bemused companions might have wondered, if not capture and removal? What other motivation could there be for such an insane expedition through this barely penetrable wilderness?
I was driving down a main street in Canberra’s north a few weeks ago when a young boy fired a few shots at me from the back of his father’s bike. I didn’t see the gun at first. It was black and camouflaged in the shadow of an overhanging oak tree. But as I approached from behind, the bike rolled into the sunlight.
I blinked in surprise; people don’t have guns in Canberra. Why did this child have a gun?
The boy, no older than four, had seen me now. He turned the barrel towards me, twisting awkwardly in his seat. I saw his lips move before I heard him. “Bang, bang, bang,” he yelled as he pulled the trigger, invisible bullets flying towards my windshield.
As I drove out of view a few minutes later, I was confronted by my disapproval at this boy’s behaviour. Children have long played at war, but when did it become a public nuisance that I could reasonably expect to avoid?
It seems to me that my response reflected something important about the entitlement that adults have assumed over public space in this country.
Over the last 100 or so years, children’s play has increasingly been moved off the streets and into fenced backyards, schoolyards, nurseries and playgrounds. Instead of sharing the privileges of the streets, children have suffered restrictions of movement and been prohibited from certain play activities in the urban space.
Losing the battle for the streets
Playing on the streets has always been an activity of negotiated boundaries.
Simon Sleight’s research shows that, before the outbreak of the First World War, children not only played a central role in the construction of the urban space, but also contested existing adult street cultures and found avenues to assert their agency.
Nevertheless, in the late 19th century and early 20th century the efforts of social critics to reimagine Australian young people as victims of urbanisation and social degradation increasingly forced children out of view – in some places more than others.
Concerns over the dangers of the streets intensified in the post-WWI period as the rise of car-related accidents fuelled debates over children’s safety. As a Nowra council member observed in 1924, boys playing on the streets was “really dangerous”.
The situation became so dire in Sydney in 1933 that the New South Wales lands minister, Ernest Buttenshaw, supported Leichhardt Council’s proposal to turn Balmain’s old burial ground into a recreational area for children, despite the state government’s previous lack of enthusiasm for the plan. It was essential, he said, that children should be kept from playing on the streets.
Young people continued to assert their shared ownership of the streets throughout the mid-20th century, but their resistance increasingly came at a cost: innocent games of marbles, football and billycarts ended with fatal collisions with cars, trams and trucks.
In 1951, Senior Constable Smith reported that of the 28 children under the age of six who died in traffic accidents in NSW the previous year, most had been playing on the streets.
Figures like these presented a powerful argument for increased restrictions of children’s mobility. City children felt the transformation of the streetscape far more keenly than many of their rural counterparts. Even today, street cricket and barefoot scootering are common in some small towns and suburbs.
Anxieties over war toys and militarised play
During the mid-20th century, anxieties over the traffic-related risks of street play also opened up space for education do-gooders and social theorists to voice their opinions about the civil and social benefits of play. Certain types of play, they argued, were especially dangerous for children.
Militarised play, of course, was at the top of their list. War toys “are making our children used to war”, declared Ruby Rich of the NSW section of the International Peace Council in 1937. Urging a boycott of war toys at a public meeting in Newcastle Town Hall, Rich said that these were teaching children “to love the things that may have killed their fathers in the last war”.
There have been times, however, when militarised play has been tolerated, even commended, in the public space.
In 1915, Melbourne boys between the ages of four and ten, armed with bamboo guns and wooden spears, were reported playing at soldiers in “local beauty spots”. The activity finally reached its “limit” when a group of boys dug up the lawns of Queen’s Park in Essendon to build a trench in anticipation of the German advance. One local newspaper declared it unfortunate that they had damaged the lawns, because their bravery was “worthy of Australians”.
Debates over militarised play and war toys still have currency in Australia. In 2015, an image of a child holding a replica AK-47 rifle, his finger on the trigger, near Sydney’s Martin Place sparked calls for a complete ban on toy guns. The fact that a terrorist attack had recently taken place at the nearby Lindt Cafe only heightened public outrage over the incident.
When the ABC surveyed children about their views, there was widespread agreement that although toy guns shouldn’t be banned, they should only be played with at home. “I think toy guns should be allowed but not in public,” Pierce said. Jacob agreed, concluding that “our friend’s house is the best place to use them”.
Such responses reflect the way many children have internalised adult-centric street cultures.
For young boys to dig a trench and enact a foreign invasion in a Melbourne park, or to march around the streets of Hobart dressed as soldiers armed for war as they did in 1900, would today likely be considered a dangerous public nuisance.
The rise of the adult streetscape
Over the past century, urban streets have increasingly become thoroughfares, transient places of movement where all kinds of rules and codes of behaviour have made them less friendly to children and their imaginations.
The intense disapproval that I felt at being the target of an imaginary drive-by shooting in Canberra last month captures something of the shifting traditions of street play and ownership in Australia. The prerogative that my adult self has assumed over the streets, as well as the cultural sensibilities that have arisen surrounding particular types of play, led me to condemn the boy’s behaviour.
It was not that his behaviour was dangerous, though it might well have been; it was that it threatened the comfort I had come to expect of Canberra’s streets.
Soon after it became a British colony, New Zealand began shipping the worst of its offenders across the Tasman Sea. Between 1843 and 1853, an eclectic mix of more than 110 soldiers, sailors, Māori, civilians and convict absconders from the Australian penal colonies were transported from New Zealand to Van Diemen’s Land.
This little-known chapter of history happened for several reasons. The colonists wanted to cleanse their land of thieves, vagrants and murderers and deal with Māori opposition to colonisation. Transporting fighting men like Hōhepa Te Umuroa, Te Kūmete, Te Waretiti, Matiu Tikiahi and Te Rāhui for life to Van Diemen’s Land was meant to subdue Māori resistance.
Transportation was also used to punish redcoats (the British soldiers sent to guard the colony and fight opposing Māori), who deserted their regiments or otherwise misbehaved. Some soldiers were so terrified of Māori warriors that they took off when faced with the enemy.
Early colonial New Zealand had no room for reprobates. Idealised as a new sort of colony for gentlefolk and free labourers, New Zealand aspired to be a utopia, and its colonists brutally suppressed challenges to that dream. On 4 November 1841, the colony’s first governor, William Hobson, named Van Diemen’s Land as the site to which its prisoners would be sent. The first boatload arrived in Hobart in 1843 and included William Phelps Pickering, one of the few white-collar criminals transported across the Tasman. Pickering later lived as a gentleman after returning home.
In 1840s Van Diemen’s Land, convict labourers were sent to probation stations before being hired out. Many men transported from New Zealand were sent down the Tasman Peninsula, where labourers were needed at the time.
Ironically, those eventually allocated to masters or mistresses in larger centres like Hobart or Launceston would have enjoyed more developed living conditions than New Zealand’s fledgling townships. In those days, Auckland’s main street was rather muddy. Early colonial buildings were often constructed by Māori from local materials.
At least 51 redcoats were shipped to the penal island. Some committed crimes after being discharged from the military. But many faced charges related to desertion. Four of the six soldier convicts who arrived in Van Diemen’s Land in June 1847 had been court-martialled in Auckland the previous winter for “deserting in the vicinity of hostile natives”.
As Irish soldier convict Michael Tobin explained, the deserters had been returned to the colonists by “friendly natives”; that is, Māori who were loyal to the Crown during the New Zealand Wars. Perhaps as a form of insurance, Tobin had also struck Captain Armstrong, his superior. Several other soldiers also used violence against a superior – it was bound to ensure a sentence of transportation, removing them from the theatre of war.
Irish Catholic soldier Richard Shea, for instance, was a private in the 99th Regiment who used his firelock to strike his lieutenant while on parade. This earned him a passage on the Castor to Van Diemen’s Land. His three military companions on the vessel, William Lane, George Morris and John Bailey, all claimed to have been taken by Māori north of Auckland and kept prisoner for four months. But surviving records reveal that their military overlords thought that the three had instead deserted to join the ranks of a rebel chief.
In 1846, NZ governor George Grey proclaimed martial law across the Wellington region. When several Māori fighters were eventually captured and handed over to colonists by the Crown’s Indigenous allies, they were tried by court martial at Porirua, north of Wellington.
After being found guilty of charges that included being in open rebellion against Queen and country, five were sentenced to transportation for life in Van Diemen’s Land. The traditionally clothed Māori attracted a lot of attention in Hobart, where colonists loudly disapproved of their New Zealand neighbours’ treatment of Indigenous people. This is ironic given the Tasmanians’ own near-genocidal war against Aboriginal people.
Grey had wanted the Māori warriors sent to Norfolk Island or Port Arthur and hoped they would write letters to their allies at home describing how harshly they were being treated. Instead, they were initially held in Hobart, where they were visited by media and other well-wishers. Colonial artist John Skinner Prout painted translucent watercolour portraits of them. Each of the fighters used pencil to sign his name to his likeness. William Duke created a portrait of Te Umuroa in oils.
Hobartians were worried that the Māori could become contaminated through contact with other convicts. Arrangements were made to send them to Maria Island, off Van Diemen’s Land’s east coast, where they could live separately from the other convicts.
John Jennings Imrie, a man who previously lived in New Zealand and knew some Māori language, became their overseer. Their lives in captivity were as gentle as possible and involved Bible study, vegetable gardening, nature walks and hunting.
Following lobbying from Tasmanian colonists and a pardon from Britain, four of the men, Te Kūmete, Te Waretiti, Matiu Tikiahi, Te Rāhui, were sent home in 1848. Te Umuroa died in custody at the Maria Island probation station in July 1847. It was not until 1988 that his remains were repatriated to New Zealand.
The drive to reduce crime by imposing exemplary sentences saw dozens of working-class men transported to Van Diemen’s Land. One such fellow was James Beckett, a sausage-seller transported for seven years for theft. The only woman sent from New Zealand, Margaret Reardon, was sentenced to seven years’ transportation for perjuring herself while trying to protect her partner (and possibly herself) from murder charges. After being found guilty of murdering Lieutenant Robert Snow on Auckland’s North Shore in 1847, Reardon’s former lover Joseph Burns became, the following year, the first white man judicially executed in New Zealand.
At one stage, Reardon was sent to the Female Factory at Cascades on Hobart’s outskirts to be punished for a transgression. Eventually, she remarried and moved to Victoria where she died in old age.
In 1853, transportation to Van Diemen’s Land formally ended. New Zealand then had to upgrade its flimsy gaols so criminals could be punished within its own borders.
When Americans think of being at war, they might think of images of their fellow citizens suffering.
We count the dead and wounded. We follow veterans on their difficult journey of recovery from physical injuries and post-traumatic stress. We watch families grieve and mourn their dead.
But it was not always this way.
In fact, newspapers during Vietnam and earlier wars gave little space to portraying individual American service members. Journalists almost never spoke with grieving relatives. I learned this by researching depictions of American war dead in newspapers and textbooks.
Today, as the U.S. again escalates its 16-year war in Afghanistan, it is important to understand how Vietnam set a pattern for finding honor in inconclusive or lost wars.
Anonymous Vietnam War dead
I found that from 1965 to 1975, The New York Times mentioned the names of only 726 of the 58,220 American military personnel killed in Vietnam. Reading through every New York Times article from those years with the word “Vietnam” in it, I found biographical information was included about only 16 dead service members, and photos of 14.
There are just five references to the reactions of the families of the dead, and only two articles mention the suffering of injured American service members. Two other articles discuss the funerals or burials of the dead. This restrained coverage is far different from that of The New York Times or any other media outlet during the Afghanistan and Iraq wars.
The U.S. military encouraged this change. As the Vietnam War dragged on there were mounting casualties, ever less prospect of victory and ever more reports of atrocities committed by American service members. In response, U.S. commanders searched for new ways to find honor in their troops’ struggles.
One way the military changed was in how it honored its members through medals. Medals have always been used by officers to reward and identify behaviors they want their troops to emulate. Before Vietnam, the Medal of Honor – the highest military award given by the U.S. – usually went to those who lost or risked their lives by going on the offensive to kill enemy fighters. But during Vietnam, I found, the criteria for the Medal of Honor changed. More and more, those who served were recognized for defensive acts that saved the lives of fellow American troops, rather than for killing communist fighters.
Toward the end of the war and in all wars since, nearly all Medals of Honor were given for actions that got fellow American service members home alive, rather than helping win a war.
This shift echoed changes in the broader American culture of the 1960s and 1970s – a shift toward celebrating individual autonomy and self-expression. As a growing fraction of Americans achieved a level of wealth unprecedented in world history and unparalleled elsewhere in the world, claims that people deserved emotional fulfillment at school and work became increasingly salient.
Another way the military adjusted its approach was to loosen its grip on discipline. The military responded to insubordination within its ranks by allowing expressions of dissent. This aligned the military with the culture of individual expression in the civilian world from which its volunteers and draftees came. Civilians saw this new attitude in news photos of service members in Vietnam wearing buttons saying “Love” or “Ambushed at Credibility Gap.” This celebration of the individual, even in a disciplined military, made the life of each service member seem even more precious, and the effort to save such lives ever more praiseworthy.
Troops’ families also became a focus of attention in two ways.
First, the military replaced the practice of sending telegrams to dead service members’ survivors with visits from casualty assistance calls officers who delivered the news in person. This practice has continued in every war since.
Second, prisoners of war became objects of repeated attention from President Richard Nixon. Nixon used POWs as props to attack – unfairly, in my view – the antiwar movement as insufficiently concerned with American troops. Journalists spoke with the prisoners’ wives and children, bringing attention for the first time to the emotional suffering of service members’ families.
The military’s focus on individual service members in the late years of Vietnam has created a permanent legacy. Since Vietnam, Americans’ tolerance for casualties has sharply declined. A majority of Americans turned against the Vietnam War only when the number of U.S. dead exceeded 20,000. In Iraq it took just 2,000 dead for a majority of Americans to oppose the war.
The U.S. now fights wars in ways designed to minimize casualties and avoid any troops being taken prisoner. Such casualty avoidance, through the use of high altitude bombing, drones and heavily armored vehicles, increases civilian casualties. It also limits interaction between civilians and American troops – making it more difficult to win the support of locals in places like Iraq and Afghanistan.
Vietnam did not make Americans into pacifists, but it did make U.S. civilians far more concerned with the well-being and lives of their country’s troops. At the same time, the end of the draft and shift to an all-volunteer force required the U.S. military to treat its recruits with greater respect. These factors ensure military service members will continue to be honored most highly for protecting each other’s lives, even when those actions occur during lost or inconclusive wars like Afghanistan and Iraq.
Editor’s Note: This piece has been updated to reflect the correct number of troops who died in the Vietnam War – 58,220, not 58,267.
Flailing devil-horn brows; cross-eyed glare; hooked nose; unkempt beard; angular cheekbones; reckless hair; and hands bound in the dark corners of canvas. That’s John Brown, as depicted by Ole Peter Hansen Balling in earthy oil-paint tones, circa 1872.
It was my fourth visit to the National Portrait Gallery in Washington DC. Airy, lush green olive trees, showery water fountains, golden shards of light; it beat the stuffy DC summer streets and the even stuffier Library of Congress newspaper reading room. But on this occasion, I paid particular attention to the opening line of the box of text beneath Balling’s portrait:
There were those who noted a touch of insanity in abolitionist John Brown …
It reminded me of a display I had seen at the Gettysburg Battlefield Museum just a few weeks earlier. A bold, capital-lettered, mega-font question was emblazoned on the wall, next to a rusty pike and a picture of Brown:
JOHN BROWN. MARTYR OR MADMAN?
And here I was, back at the gallery, staring at the same man, asking myself that same question, like thousands before me.
Born in Torrington, Connecticut in 1800, Brown remains, over a century and a half after his death, one of the most fiercely debated and contested figures in 19th-century American history.
On the evening of October 16, 1859, just months before the American Civil War fully ignited, Brown led a band of raiders into the small town of Harper’s Ferry, Virginia, in a bid to instigate a slave rebellion. Brown’s plan was to seize federal ammunition supplies and arm slaves with rifles, pikes and other weapons in order to strike fear into slave-holding Virginians, and catalyse further revolts in the South.
Greatly outnumbered by local militia and government marines, he was swiftly captured and sentenced to hang, which he did on December 2, 1859.
A symbolic man
While some abolitionists immediately labelled Brown a heroic martyr, others more cautiously warned against his violent approach. Southern newspapers, on the other hand, expressed disgust at how this violent madman could ever be deemed heroic.
Since the 1860s, Brown has been a symbolic cultural resource that interest groups have drawn upon to define, explain, or galvanise a course of action or belief. Depending on one’s point of view, he has variously been claimed as a heroic martyr for African Americans, one of the greatest Americans of all time, a cold-blooded killer and even America’s first terrorist. But is there a historical “truth” to whether he was actually (partisan bias aside) madman or martyr?
The very idea of martyrdom tends to proliferate during periods of social change and historical action. Martyr stories are also marked by personal quests, violence, institutional execution, and dramatic final actions that heroically demonstrate a commitment to a cause with disregard for one’s own life.
Brown’s violent raid at Harper’s Ferry at the dawn of the Civil War, his theologically infused commitment to ending slavery, and his institutional hanging fit perfectly into these historical patterns of socio-religious martyrdom. So why the “madman” moniker?
The Oxford English Dictionary defines the term “madman” as:
A man who is insane; a lunatic. Also more generally (also hyperbolically): a person who behaves like a lunatic, a wildly foolish person.
Problematically, the first part of the definition – “insane” – connotes a mentally ill man, unable to fully control his physical and mental faculties.
But Brown was committed to his final act, and recognised violence, imprisonment and sacrifice as a forum for abolitionism. Consequently, it might be argued that his actions suggest a form of heightened (rather than lack of) self-control, something you’d expect of a martyr.
However, the second part – to behave like a “lunatic” or “wildly foolish” person – more aptly describes Brown’s personality. There is certainly a case for considering Brown’s final act at Harper’s Ferry to be “wildly foolish”. Even his most famous supporters such as Frederick Douglass described it as cold-blooded, if well intended.
Even if a present-day medical, neurobiological, or psychological analysis of Brown were possible, his actions would surely be considered outside the realms of what psychologists call a healthy “clinical population”. That is, his class of behaviours stretched beyond the limits – psychological, mental, physical – of the normative masses. This raises the bigger question: is it not a streak of “madness” that always makes a martyr?
So the question of whether Brown was madman or martyr is ultimately futile: Brown was, and will continue to be, both.
The Second Amendment is one of the most frequently cited provisions in the American Constitution, but also one of the most poorly understood.
The 27 words that constitute the Second Amendment seem to baffle modern Americans on both the left and right.
Ironically, those on both ends of our contemporary political spectrum cast the Second Amendment as a barrier to robust gun regulation. Gun rights supporters – mostly, but not exclusively, on the right – seem to believe that the Second Amendment prohibits many forms of gun regulation. On the left, frustration with the lack of progress on modern gun control leads to periodic calls for the amendment’s repeal.
Both of these beliefs ignore an irrefutable historical truth. The framers and adopters of the Second Amendment were generally ardent supporters of the idea of well-regulated liberty. Without strong governments and effective laws, they believed, liberty inevitably degenerated into licentiousness and eventually anarchy. Diligent students of history, particularly Roman history, the Federalists who wrote the Constitution realized that tyranny more often resulted from anarchy than from strong government.
Consider these five categories of gun laws that the Founders endorsed.
#1: Registration

Today American gun rights advocates typically oppose any form of registration – even though such schemes are common in every other industrial democracy – and typically argue that registration violates the Second Amendment. This claim is also hard to square with the history of the nation’s founding. All of the colonies – apart from Quaker-dominated Pennsylvania, the one colony in which religious pacifists blocked the creation of a militia – enrolled local citizens, white men between the ages of 16 and 60, in state-regulated militias. The colonies and then the newly independent states kept track of the privately owned weapons required for militia service. Men could be fined if they reported to a muster without a well-maintained weapon in working condition.
#2: Public carry

The American colonies inherited a variety of restrictions that evolved under English common law. In 18th-century England, armed travel was limited to a few well-defined occasions, such as assisting justices of the peace and constables. Members of the upper classes had a limited exception allowing them to travel with arms. Concealable weapons such as handguns were subject to even more stringent restrictions. The city of London banned public carry of these weapons entirely.
The American Revolution did not sweep away English common law. In fact, most colonies adopted common law as it had been interpreted in the colonies prior to independence, including the ban on traveling armed in populated areas. Thus, there was no general right of armed travel when the Second Amendment was adopted, and certainly no right to travel with concealed weapons. Such a right first emerged in the United States in the slave South decades after the Second Amendment was adopted. The market revolution of the early 19th century made cheap and reliable handguns readily available. Southern murder rates soared as a result.
In other parts of the nation, the traditional English restrictions on traveling armed persisted with one important change. American law recognized an exception to this prohibition for individuals who had a good cause to fear an imminent threat. Nonetheless, by the end of the century, prohibiting public carry was the legal norm, not the exception.
#3: Stand-your-ground laws
Under traditional English common law, one had a duty to retreat, not to stand one’s ground. Deadly force was justified only if no other alternative was possible: one had to retreat, until retreat was no longer possible, before killing an aggressor.
The use of deadly force was justified only in the home, where retreat was not required under the so-called castle doctrine, or the idea that “a man’s home is his castle.” A more aggressive view of the right of self-defense in public, standing your ground, emerged slowly in the decades after the Civil War.
#4: Safe storage laws
Although some gun rights advocates attempt to demonize government power, it is important to recognize that one of the most important rights citizens enjoy is the freedom to elect representatives who can enact laws to promote health and public safety. This is the foundation for the idea of ordered liberty. The regulation of gun powder and firearms arises from an exercise of this basic liberty.
In 1786, Boston acted on this legal principle, prohibiting the storage of a loaded firearm in any domestic dwelling in the city. Guns had to be kept unloaded, a practice that made sense since the black powder used in firearms in this period was corrosive. Loaded guns also posed a particular hazard in cases of fire because they might discharge and injure innocent bystanders and those fighting fires.
#5: Loyalty oaths
One of the most common claims one hears in the modern Second Amendment debate is the assertion that the Founders included this provision in the Constitution to make possible a right of revolution. But this claim, too, rests on a serious misunderstanding of the role the right to bear arms played in American constitutional theory.
Gun regulation and gun ownership have always existed side by side in American history. The Second Amendment poses no obstacle to enacting sensible gun laws. The failure to do so is not the Constitution’s fault; it is ours.
The recent easing of the public sector pay cap suggests that the government is beginning to respond to widespread concerns about the social and economic costs of austerity. Yet despite this turn, the proposed rises remain below inflation in real terms. And the need for continued austerity is still justified as being “fair” both to those who must pay for wage increases and to those who will receive them.
Despite increasing opposition, austerity remains a potent force in politics today. This should not surprise us. The modern narrative of austerity has a long cultural history, which we can trace from medieval religious writers to 20th century philosophers.
Part of austerity’s appeal is that it justifies present suffering through the promise of future prosperity. Whatever the arguments against austerity from economists past and present, the huge cost to public services is somehow seen as a price worth paying. Philip Hammond, chancellor of the exchequer, insists that “we must hold our nerve … and maintain our focus resolutely on the prizes that are so nearly within reach”.
This language is telling. It is part of an ongoing narrative about how restraint and self-denial are good for you. This perceived moral value is not without precedent. Historically, there have been numerous cultural manifestations of austerity that shed light on its enduring appeal and the rhetoric associated with it.
Austerity is closely related to the ancient concept of asceticism, the art of abstinence practised by Greek and Roman philosophers, continued by medieval religious writers, and made famous by the theorist Max Weber in his 1922 book Economy and Society. Asceticism has many definitions, usually equating a simple life to a moral one. It is often seen as religious, an ideology based on the belief that present self-denial will enable future liberation from want.
Biblical scholar Richard Valantasis puts this in very positive terms, calling asceticism the “dream of being a better person” in his book on the subject, The Making of the Self. But Weber extends the religious and philosophical dimensions of asceticism to economics when he argues that capitalism is inherently ascetic, suggesting that it thrives through self-restraint and hard work.
Weber equates asceticism and rationality; austerity, he says, is both sensible and logical, and it provides the individual with inward fulfilment. Thus, when governments pursue austerity policies and accuse their opponents of being selfish and wasteful, they draw on a cultural narrative that views self-denial as ethically, morally, and even spiritually, correct.
This is certainly the language that former chancellor of the exchequer George Osborne used in June 2010 when austerity was first introduced in the UK. His emergency budget valorised austerity as moral:
It pays for the past. And it plans for the future. It supports a strong enterprise-led recovery. It rewards work … Yes, it is tough; but it is also fair.
Promise and purpose
The idea that austerity is “tough” but good for you echoes ascetic ideals clearly. Asceticism is a formative process as it shapes an individual through hard work (in Weber’s view) and gruelling self-denial (in the view of medieval writers). The fourth-century bishop Athanasius of Alexandria – a father of the Christian church – characterised the moral life as one of renunciation and suffering. He also praised discipline and labour as virtues that will lead to pleasing God, and ultimately to the rewards of heaven.
The story of present suffering leading to future prosperity therefore weaves concerns about one’s current struggles into a grander narrative of purpose. It gives an unstable life meaning through discipline, and according to the cultural critic Geoffrey Galt Harpham, leads to understanding of oneself, one’s community, and one’s place in the world.
These ascetic ideals remain imbued in Western cultural thinking and suggest why the narrative of modern economic austerity has stuck for so long. Austerity provides a sense of purpose, of striving for achievement, and of self-control. This is evident in the way that austerity is sold to the public – hence Hammond’s comment:
After seven long and tough years, the high-wage, high-growth economy for which we strive is tantalisingly close to being within our grasp. It would be easy to take our foot off the pedal. But instead we must hold our nerve.
By using the language of shared experience, shared struggle, and shared results, austerians attempt to construct a collective identity that unites people in their vision. The fact that austerity affects people in drastically different ways is secondary to creating the sense that we are striving for a common good. In the Middle Ages it was promoted to give spiritual meaning to physical deprivation. Today it does the same for economic hardship.
There is nothing wrong with the ideals of asceticism per se. Self-control and self-restraint are admirable qualities and have been praised throughout history. The problem is when these qualities are invoked on a national scale to justify economic self-harm.
The Conservatives’ loss of their majority in the most recent election suggests that those experiencing austerity might be beginning to turn against it. But those for whom austerity provides a powerful sense of rational order, a coherent narrative that makes constancy out of instability, and an economic purpose with the allure of morality, are unwilling to abandon it.
The narrative of austerity resonates strongly because of its history. We now require a powerful counter-narrative to promote the positive benefits of investing in public services and communities.
Twelve thousand years ago everybody lived by hunting and gathering. But by 5,000 years ago most people lived as farmers.
This brief period marked the biggest shift in human history, with unparalleled changes in diet, culture and technology, as well as social, economic and political organisation, and even the patterns of disease people suffered.
While there were upsides and downsides to the invention of agriculture, was it the greatest blunder in human history? Three decades ago Jared Diamond thought so, but was he right?
Agriculture developed worldwide within a single and narrow window of time: between about 12,000 and 5,000 years ago. Yet it wasn’t invented just once: as far as we know, it originated independently at least seven times, and perhaps as many as 11 times.
Farming was invented in places like the Fertile Crescent of the Middle East, the Yangzi and Yellow River Basins of China, the New Guinea highlands, in the Eastern USA, Central Mexico and South America, and in sub-Saharan Africa.
And while its impacts were tremendous for people living in places like the Middle East or China, its impacts would have been very different for the early farmers of New Guinea.
The reasons why people took up farming in the first place remain elusive, but dramatic changes in the planet’s climate during the last Ice Age — from around 20,000 years ago until 11,600 years ago — seem to have played a major role in its beginnings.
The invention of agriculture thousands of years ago led to the domestication of today’s major food crops like wheat, rice, barley, millet and maize, legumes like lentils and beans, sweet potato and taro, and animals like sheep, cattle, goats, pigs, alpacas and chickens.
It also dramatically increased the human carrying capacity of the planet. But in the process the environment was dramatically transformed. What started as modest clearings gave way to fields, with forests felled and vast tracts of land turned over to growing crops and raising animals.
In most places the health of early farmers was much poorer than that of their hunter-gatherer ancestors, because of the narrower range of foods they consumed and widespread dietary deficiencies.
At archaeological sites like Abu Hureyra in Syria, for example, the changes in diet accompanying the move away from hunting and gathering are clearly recorded. The diet of Abu Hureyra’s occupants dropped from more than 150 wild plants consumed as hunter-gatherers to just a handful of crops as farmers.
In the Americas, where maize was domesticated and heavily relied upon as a staple crop, iron absorption was consequently low, dramatically increasing the incidence of anaemia. A rice-based diet, the main staple of early farmers in southern China, was deficient in protein and inhibited vitamin A absorption.
There was a sudden increase in the number of human settlements, signalling a marked shift in population. While maternal and infant mortality increased, female fertility rose with farming, the fuel in the engine of population growth.
The planet had supported roughly 8 million people when we were all hunter-gatherers. But the population exploded with the invention of agriculture, climbing to 100 million people by 5,000 years ago, and reaching 7 billion people today.
People began to build permanently occupied settlements covering more than ten hectares – the size of ten rugby fields. At archaeological sites like Çatalhöyük in Turkey, early towns housed up to ten thousand people in rectangular stone houses entered through doors in their roofs.
By way of comparison, traditional hunting and gathering communities were small, perhaps up to 50 or 60 people.
Crowded conditions in these new settlements, human waste, animal handling and pest species attracted to them led to increased illness and the rapid spread of infectious disease.
Today, around 75% of infectious diseases suffered by humans are zoonoses, ones obtained from, or more often shared with, domestic animals. Some common examples include influenza, the common cold, various parasites like tapeworms, and highly infectious diseases that killed millions of people in the past, such as bubonic plague, tuberculosis, typhoid and measles.
In response, natural selection dramatically sculpted the genome of these early farmers. Genes for immunity are over-represented among those showing evidence of natural selection, and most of the changes can be timed to the adoption of farming. Geneticists suggest that 85% of the disease-causing gene variants among contemporary populations arose alongside the rise and spread of agriculture.
In the past, humans could tolerate lactose only during childhood, but with the domestication of dairy cows natural selection provided northern European farmers, and pastoralist populations in Africa and West Asia, with the lactase gene. Almost completely absent elsewhere in the world, it allowed adults to tolerate lactose for the first time.
Starch consumption is also a feature of agricultural societies and some hunter-gatherers living in arid environments. The amylase genes, which increase people’s ability to digest starch in their diet, were also subject to strong natural selection and increased dramatically in number with the advent of farming.
Another surprising change seen in the skeletons of early farmers is a smaller skull, especially the bones of the face. Palaeolithic hunter-gatherers had larger skulls due to their more mobile and active lifestyle, including a diet that required much more chewing.
Smaller faces affected oral health because human teeth didn’t reduce proportionately to the smaller jaw, so dental crowding ensued. This led to increased dental disease along with extra cavities from a starchy diet.
Living in densely populated villages and towns created, for the first time in human history, private living spaces where people no longer shared their food or possessions with their community.
These changes dramatically shaped people’s attitudes to material goods and wealth. Prestige items became highly sought after as hallmarks of power. And with larger populations came growing social and economic complexity and inequality and, naturally, increasing warfare.
Inequalities of wealth and status cemented the rise of hierarchical societies — first chiefdoms then hereditary lineages which ruled over the rapidly growing human settlements.
Eventually they expanded to form large cities, and then empires, with vast areas of land taken by force with armies under the control of emperors or kings and queens.
This inherited power was the foundation of the ‘great’ civilisations that developed across the ancient world and into the modern era with its colonial legacies that are still very much with us today.
No doubt the bad well and truly outweighs all the good that came from the invention of farming all those millennia ago. Jared Diamond was right: the invention of agriculture was without doubt the biggest blunder in human history. But we’re stuck with it, and with so many mouths to feed today we have to make it work better than ever. For the future of humankind and the planet.
Darren Curnoe, Associate Professor and Chief Investigator, ARC Centre of Excellence for Australian Biodiversity and Heritage, University of New South Wales, UNSW
13. I do set my bow in the cloud, and it shall be for a token of a covenant between me and the earth.
14. And it shall come to pass, when I bring a cloud over the earth, that the bow shall be seen in the cloud:
15. And I will remember my covenant, which is between me and you and every living creature of all flesh; and the waters shall no more become a flood to destroy all flesh.
16. And the bow shall be in the cloud; and I will look upon it, that I may remember the everlasting covenant between God and every living creature of all flesh that is upon the earth.