
What Malcolm Turnbull might have learned from Alfred Deakin


In jettisoning Alfred Deakin, the Liberals made a great mistake and showed the thinness of their historical memory.
National Library of Australia

Judith Brett, La Trobe University

Australia’s federal Liberal Party began not with Robert Menzies in 1945, but with Alfred Deakin’s Commonwealth Liberal Party in 1909, and before that with his Liberal Protectionists.

As a leadership party, the Liberals have always needed heroes. But in the 1980s, as Liberals embraced deregulation, they turned against Deakin and the policies he championed.

In his brilliantly succinct description of the Australian Settlement, Paul Kelly identified the core policies of the early Commonwealth with Deakin, and compulsory arbitration and the basic wage with his Liberal colleague, Henry Bournes Higgins.

In the eyes of the Liberal Party, as it rehabilitated the free-trade legacy of New South Wales Liberal premier George Reid, Deakin’s key sins were his support for protection and for state paternalism. Reid is not a well-known figure, so this left the Liberals with only Robert Menzies for their hero, although he has now been joined by John Howard.

In jettisoning Deakin, the Liberals made a great mistake and showed the thinness of their historical memory. The party and its traditions did not begin with Menzies, but stretched back to the nation-building of the new Commonwealth, and into the optimism and democratic energies of the 19th-century settlers.

Indeed, Deakin was one of Menzies’ heroes. The Menzies family came from Ballarat, where Deakin was the local member, and Menzies’ Cornish miner grandfather was a great fan of Deakin.

Accepting his papers at the National Library of Australia just before his retirement, Menzies described Deakin as “a remarkable man” who laid down Australia’s foundational policies. It must be remembered that in 1965, Menzies supported all these policies the Liberals were later to discard.

When it came to choosing a name for the new non-Labor party being formed from the wreckage of the United Australia Party, it was to the name of Deakin’s party that Menzies turned, so that the party would be identified as “a progressive party, willing to make experiments, in no sense reactionary”.

Alfred Deakin in England, 1907.
National Library of Australia

This is a direct invocation of Deakin and his rejection of those he called “the obstructionists”, the conservatives and nay-sayers, who put their energies into blocking progressive policies rather than pursuing positive initiatives of their own.

In June this year, Turnbull quoted these words of Menzies in his struggle with the conservatives of the party. Clearly Turnbull wants to be a strong leader of a progressive party, rather than the front man for a shambolic do-nothing government. He does have some superficial resemblances to Deakin: he is super-smart, urbane, charming and a smooth talker who looks like a leader. But as we all now know, he lacks substance.

When I first began thinking about this piece, I was going to call it “What Malcolm Turnbull could learn from Alfred Deakin”. But I fear it may now be too late for him to save his government, so the piece might be more accurately called “What Malcolm Turnbull might have learned from Alfred Deakin”.

First, he might have learned to act with the courage of his convictions.

Deakin too was sometimes accused of lacking substance. He was not only a stirring platform orator, but he was quick with words in debate, and could shift positions seamlessly when the need arose. But he had core political commitments from which he never wavered. The need for a tariff to protect Australia’s manufacturers and so provide employment and living wages for Australian workers was one.

One may now disagree with this policy, but there was never any doubt that Deakin would fight for it.

Federation was another. In the early 1890s, after the collapse of the land boom and the bank crashes, Deakin thought of leaving politics altogether. What kept him there was the cause of federation, and he did everything he could to bring it about.

He addressed hundreds of meetings, persuading Victoria’s majoritarian democrats that all would be wrecked if they did not compromise with the smaller states over the composition of the Senate.

Deakin had a dramatic sense of history. He knew that historical opportunities were fleeting, that the moment could pass and history move on, as it did for Australian republicans when they were outwitted by Howard in 1999.

In March 1898, the prospects for federation were not good. The politicians had finalised the Constitution that was to be put to a referendum of the people later in the year, but there was strong opposition in NSW, and its premier, George Reid, was ambivalent.

Alfred Deakin at Point Lonsdale front beach, 1910.
Brookes family and Deakin University library

In Victoria, David Syme and The Age were hostile and threatening to campaign for a “No” vote. If the referendum were lost in NSW and Victoria, federation would not be achieved.

Knowing this, Deakin made a passionate appeal to the men of the Australian Natives Association, who were holding their annual conference in Bendigo. Delivered without notes, this was the supreme oratorical feat of Deakin’s life, and it turned the tide in Victoria. Although there were still hurdles to clear, Deakin’s speech saved the federation.

The second lesson Turnbull could have learnt is to put the interests of the nation ahead of the interests of the party and the management of its internal differences.

Deakin always put his conception of the national interest before considerations of party politics or personal advantage. And he fiercely protected his independence.

He too was faced with the challenges of minority government, but it is inconceivable that he would have made a secret deal with a coalition partner to win office. Or that he would have abandoned core beliefs, such as the need for action on climate change, just to hold on to power.

As the Commonwealth’s first attorney-general, and three times prime minister, Deakin had a clear set of goals: from the legislation to establish the machinery of the new government and the fight to persuade a parsimonious parliament to establish the High Court, to laying the foundations for independent defence and, within the confines of imperial foreign policy, establishing the outlines of Australia’s international personality.

Party discipline and party identification were looser in the early 20th century than they were to become as Labor’s superior organisation and electoral strength forced itself on its opponents.

But as the contemporary major parties fray at the edges, and their core identities hollow out, Australians are crying out for leaders with Deakin’s clear policy commitments, and his skills in compromise and negotiation.

Had Turnbull had the courage to crash through or crash on the differences within his party on the causes we know he believes in, he too might have become a great leader and an Australian hero.


Judith Brett’s new book The Enigmatic Mr Deakin is published by Text.

Judith Brett, Emeritus Professor of Politics, La Trobe University

This article was originally published on The Conversation. Read the original article.


A short history of the office



In the seventeenth century lawyers, civil servants and other new professionals began to work from offices in Amsterdam, London and Paris.
British Museum/Flickr

Agustin Chevez, Swinburne University of Technology and DJ Huppatz, Swinburne University of Technology

For centuries, people have been getting up and joining a daily commute, or retreating to a room, to work. The office has become inseparable from work.

Its history illustrates not only how our work has changed but also how work’s physical spaces respond to cultural, technological and social forces.

The origins of the modern office lie with large-scale organisations such as governments, trading companies and religious orders that required written records or documentation. Medieval monks, for example, worked in quiet spaces designed specifically for sedentary activities such as copying and studying manuscripts. As depicted in Botticelli’s St Augustine in His Cell, these early “workstations” comprised a desk, chair and storage shelves.

Sandro Botticelli’s St Augustin dans son cabinet de travail, or St Augustine at Work.
Wikimedia Commons

Another of Botticelli’s paintings of St Augustine at work is now in Florence’s Uffizi Gallery. This building was originally constructed as the central administrative building of the Medici mercantile empire in 1560.

It was an early version of the modern corporate office. It was both a workplace and a visible statement of prestige and power.

But such spaces were rare in medieval times, as most people worked from home. In Home: The Short History of an Idea, Witold Rybczynski argues that the seventeenth century represented a turning point.

Lawyers, civil servants and other new professionals began to work from offices in Amsterdam, London and Paris. This led to a cultural distinction between the office, associated with work, and the home, associated with comfort, privacy and intimacy.

Despite these early offices, working from home continued. In the nineteenth century, banking dynasties such as the Rothschilds and Barings operated from luxurious homes so as to make clients feel at ease. And, even after the office was well established in the 1960s, Hugh Hefner famously ran his Playboy empire from a giant circular bed in a bedroom of his Chicago apartment.

A police station office in the 1970s.
Dave Conner/Flickr, CC BY

But these were exceptions to the general rule. Over the course of the nineteenth and twentieth centuries, increasingly specialised office designs – from the office towers of Chicago and New York to the post-war suburban corporate campuses – reinforced a distinction between work and home.

Managing the office

Various management theories also had a profound impact on the office. As Gideon Haigh put it in The Office: A Hardworking History, the office was “an activity long before it was a place”.

Work was shaped by social and cultural expectations even before the modern office existed. Monasteries, for example, introduced timekeeping that imposed strict discipline on monks’ daily routines.

Later, modern theorists understood the office as a factory-like environment. Inspired by Frank Gilbreth’s time-and-motion studies of bricklayers and Frederick Taylor’s Principles of Scientific Management, William Henry Leffingwell’s 1917 book, Scientific Office Management, depicted work as a series of tasks that could be rationalised, standardised and scientifically calculated into an efficient production regime. Even his concessions to the office environment, such as flowers, were intended to increase productivity.

Technology in the office

Changes in technology also influenced the office. In the nineteenth and early twentieth centuries, Morse’s telegraph, Bell’s telephone and Edison’s dictating machine revolutionised both concepts of work and office design. Telecommunications meant offices could be separate from factories and warehouses, separating white-collar and blue-collar workers. Ironically, while these new technologies suggested the possibility of a distributed workforce, in practice American offices in particular became more centralised.

The IBM Selectric typewriter marked a change in office technology.
Christine Mahler/Flickr, CC BY-NC-SA

In 1964, when IBM introduced a magnetic-card recording device into a Selectric typewriter, the future of the office, and our expectations of it, changed forever. This early word processor could store information; it marked the start of computer-based work, and of early fears of a jobless society driven by automation.

Now digital maturity seems to be signalling the end of the office. With online connectivity, more people than ever can potentially work from home.

But some of the same organisations that promoted and enabled the idea of work “anywhere, anytime” – Yahoo and IBM, for example – have cancelled work-from-home policies to bring employees back to bricks-and-mortar offices.

Why return to the office?

Anthropological research on how we interact with each other and how physical proximity increases interactions highlights the importance of being together in a physical space. The office is an important factor in communicating the necessary cues of leadership, not to mention enabling collaboration and communication.

Although employers might be calling their employees back to the physical space of the office again, its boundaries are changing. For example, at recent “chip parties”, employees have celebrated receiving radio-frequency identification implants that enable their employers to monitor them. In the future, the office may be embedded under our skin.

While this might seem strange to us, it is probably no stranger than the idea of making multiple people sit in cubicles to work would have seemed to a fifteenth-century craftsman. The office of the future may be as familiar as home, or even our neighbour’s kitchen table, but only time will tell.

Agustin Chevez, Adjunct Research Fellow, Centre For Design Innovation, Swinburne University of Technology and DJ Huppatz, Senior Lecturer, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.


Powerful and ignored: the history of the electric drill in Australia


Tom Lee, University of Technology Sydney and Berto Pandolfo, University of Technology Sydney

Portable electric drills didn’t always look like oversized handguns.

Before Alonzo G. Decker and Samuel D. Black intervened in the 1910s, the machines typically required the use of both hands. The two men, founders of the eponymous American company Black & Decker, developed a portable electric drill that incorporated a pistol grip and trigger switch, apparently inspired by Samuel Colt’s pistol.

We are documenting a collection of more than 50 portable electric drills made roughly between 1930 and 1980.

Seen as part of a history of technology, they have a lot to teach us about function and form, masculine values and the history of Australian craft.




The collection also represents an important chapter in Australian manufacturing, and includes drills produced by local companies such as Sher, KBC and Lightburn that have since disappeared. It also features models made by Black & Decker, which once had manufacturing operations in Australia.

The CP2 manufactured by Black & Decker in Croydon, Victoria. There is evidence of this model being on the market from 1963 to 1966, although we suspect it was available earlier and for much longer.
Berto Pandolfo, Author provided

Design historians and collectors have paid little attention to the electric drill. It’s seen as an object of work, unlike domestic items such as the tea kettle, which can be statements of taste and luxury.

But the device deserves our attention. It’s considered the first portable electric power tool, and arguably helped to democratise the industry, putting construction in the hands of everyone from labourers to hobbyists.

The electric drill in Australia

Australia once played a significant role in producing the portable electric drill.

Ken Bowes & Co. Ltd, known as KBC, was a South Australian manufacturing company founded in 1936. Although it produced domestic appliances such as the bean slicer, the die-casting of military components such as ammunition parts (shell and bomb noses) and tank attack guns kept the company busy during World War II.

It appears that KBC entered the hardware market in 1948 with its first portable electric drill, designed for the cabinet maker and general handyman. The body of the drill was made from die-cast zinc alloy and it had a unique removable front plate on the handle to allow the user easy access to the connection terminals.

KBC drill and label (note the lack of integration between handle and body), circa 1950s.
Berto Pandolfo, Author provided

In 1956, Black & Decker established an Australian manufacturing plant in Croydon, Victoria, where drills such as the CP2 were manufactured.

Between 1960 and 1982, many power tool brands had a media presence. KBC sponsored a radio program called, appropriately enough, That’s The Drill. Wolf power tools were awarded as prizes on the television program Pick-A-Box.

Black & Decker ran advertisements that appeared during popular television programs and used endorsements by sporting celebrities such as cricketer Dennis Lillee.

While the popularity of portable power drills has endured, the manufacture of these objects in Australia more or less vanished by the end of the 20th century.

Why we value some objects and not others

The portable electric drill has been poorly documented by designers, historians and museums.

Obvious repositories for their collection, such as museums of technology or innovation, are increasingly challenged by space and funding pressures. Apart from a few token examples, many everyday objects have not managed to establish a museum presence.

The Museum of Applied Arts and Sciences in Sydney holds at least two vintage portable electric drills: one a Desoutter, made in England, and another of unknown origin. Museums Victoria has one example of a Black & Decker electric drill from the 1960s in its digital archive.

The crude utility of the portable drill is part of the reason why it has escaped much academic scrutiny.

The Black & Decker U-500 drill: the first drill to be completely manufactured in Australia, at the Croydon factory in Victoria.
Berto Pandolfo, Author provided

Design studies and collections tend to focus on luxury objects such as Ferrari sports cars and Rolex wristwatches. Even kitchen and home appliances get more attention, especially those designs associated with high-end companies such as Alessi and designers such as Dieter Rams and Jasper Morrison.

By contrast, the electric drill remains a B-grade object. It is a stock weapon in horror films, although even there it lacks the status reserved for the more sublimely threatening implements of violence such as swords, spears and guns.

The case for the drill

Hard yakka and aesthetics have not typically been happy bedfellows. However, labour and its associated objects can provide a compelling look at contemporary life.

Like the laptop computer, the shape of which is tied to the “macho mystique” of the briefcase, the pistol form of the portable drill seems to be significantly influenced by ideas of power and masculinity.

The symbolic association with the pistol is also practical, and would no doubt have eased the burden for those early users struggling with the device’s weight.




A recent turn towards the everyday as a site for design anthropology will hopefully shift focus towards inconspicuous yet important technologies like portable electric drills.

These objects are part of a rich history that will be forgotten if institutions focus exclusively on luxury items, big name designers and cultures of display and ornament.

Even our most anonymous objects are sources of cultural expression, and they should not be overlooked.

Tom Lee, Lecturer, Faculty of Design and Architecture Building, University of Technology Sydney and Berto Pandolfo, Director Industrial Design, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.


What happened to the French army after Dunkirk



French POWs being led away from the battlefield in May 1940.
Wikimedia Commons, CC BY-SA

Nina Wardleworth, University of Leeds

The evacuation of the British Expeditionary Force (BEF) in May 1940 from Dunkirk by a flotilla of small ships has entered British folklore. Dunkirk, a new action film by director Christopher Nolan, depicts the events from land, sea and air and has revived awe for the plucky courage of those involved.

But the story of the French army after Dunkirk is altogether less glorious and, perhaps because of that, less widely remembered. Of the 340,000 Allied soldiers evacuated by boat from Dunkirk, 123,000 were French – but thousands more were not rescued and were taken prisoner by the Germans.

French media coverage of the premiere of Nolan’s film has presented the events as a British story in which French soldiers were involved, not a shared wartime narrative.

Operation Dynamo (the code name for the Dunkirk evacuation) took place between May 26 and June 4, 1940. The Germans entered Paris on June 14, but fighting continued in the east of France until June 24. General Charles de Gaulle made his now famous radio broadcast, calling on the French public not to accept defeat, on the BBC on June 18 from London, but very few of his compatriots are likely to have heard it on that date.

It is estimated that between 50,000 and 90,000 soldiers of the French army were killed in the fighting of May and June 1940. In addition to the casualties, 1.8m French soldiers, from metropolitan France and across the French empire, were captured during the Battle of France and made prisoners of war (POWs).

That early summer of 1940 in France was also marked by a mass exodus. At least six million civilians took to the roads to escape the advancing German troops, with frightening World War I stereotypes of German brutality at the forefront of their minds. They moved south and west through France, although most returned home following the June 22 armistice with Nazi Germany.

Such mass population movement both helped and hindered the French army. It made moving men and equipment much more difficult on crowded roads and railways. However, it allowed ordinary soldiers who could procure civilian clothes to slip away from their units and rejoin their families.

Colonial troops massacred

The French army of 1940 included soldiers from across its empire in north, west and central Africa, the French West Indies and Indochina. These troops found it more difficult to disappear into the crowds. The German army carried out numerous massacres of west and central African troops in eastern France, separating them from their white officers and then shooting them.

There were 120,000 colonial prisoners of war captured during the Battle of France. They were housed in different camps from their white, metropolitan French counterparts, all on French soil and French run, because of Nazi racial fears of them mixing with German civilians.

Colonial POWs from the French empire under guard by German soldiers, June 1940.
RaBoe/Wikipedia

French POWs were sent to camps in Germany where they were quickly set to work on farms, in industry, mines and on the railways, to replace German men away fighting. The POWs lived and worked alongside the German population, leading to both tensions and friendships. The fate of these POWs became central to the propaganda of the French collaborationist government, based in Vichy.

Numerous government programmes tried to encourage young French men and women to sign up for work in Germany in exchange for the return of a POW to France. But most prisoners – about one million – only returned to France following the end of the war in May 1945. They were often greeted by widespread indifference, sometimes even hostility, because of their supposed links and sympathies to the Vichy regime. In reality, they were no more pro-Vichy than many other parts of French society.

A difficult history for France

The very swift German victory in May and June 1940, and the humiliating armistice that followed, meant that post-war French society and the state sought to minimise and forget the defeat, preferring to concentrate on more glorious stories of the Resistance and the Free French. There was an unsuccessful campaign in the French press in 2015 for a state commemorative event and memorial to honour the war dead from France and its then empire, whom the campaign labelled “the first Resistance fighters”. Former French president François Hollande increased the number of state commemorative events for key moments from France’s 20th-century history, but still ignored the events of 1940.

Despite official silences, the fighting of the summer of 1940 has been the subject of French novels and films ever since. Robert Merle’s 1949 novel Weekend at Dunkirk was adapted into a successful feature film, with an audience of three million on its release in 1964. The protagonist, Julien, is a French soldier desperate to make it onto one of the boats of the British evacuation in a town shattered by bombing. Claude Simon’s 1960 novel, The Flanders Road, painted a picture of an outdated French army, ground down by months of a phoney war, fighting against a much better equipped, more modern German enemy.

For French POWs, Dunkirk and those battles of May and June 1940 marked the beginning of five years of humiliation and hardship, before many returned to a country that wanted to forget them and their fighting experiences.

Nina Wardleworth, Teaching Fellow in French, University of Leeds

This article was originally published on The Conversation. Read the original article.


Australia: A History of Massacres


The link below is to an article that takes a look at the history of Aboriginal massacres in colonial Australia.

For more visit:
https://www.theguardian.com/australia-news/2017/jul/05/map-of-massacres-of-indigenous-people-reveal-untold-history-of-australia-painted-in-blood


North Korean POWs seeking last chance to return home after decades in exile



At the United Nations’ prisoner-of-war camp at Pusan, North Korean and Chinese prisoners are assembled in one of the camp compounds.
Wikimedia/Larry Gahn/US State Department, CC BY-ND

Hea-Jin Park, Victoria University of Wellington

More than six decades after the Korean War, a small group of North Korean prisoners of war who made a new life in South America may get a chance to return home as part of a documentary film.

Last weekend marked the anniversary of the start of the last major war on the Korean peninsula. The 1950–1953 Korean War or, in the words of Tessa Morris-Suzuki, the great “hot war” within the Cold War, started when North Korean troops crossed the arbitrarily established 38th parallel and forced their way south. The United Nations Command (UNC), composed of forces from 16 nations, including Australia and New Zealand, joined the South Korean military effort to halt the North Korean advance.

As the war unfolded, both sides soon faced the complicated task of handling prisoners of war (POWs), whose numbers were rapidly expanding. The UNC established several POW camps around South Korea, with the largest on Geoje-do (Geoje Island). It is said the camp was a little city on the island, where around 170,000 North Korean and Chinese POWs waited, uneasy and fearful.

Negotiating the POWs’ fate

The POWs’ repatriation was a point of fierce debate in the armistice negotiations that started a year after the outbreak of the war. The UNC position was to allow North Korean POWs to decide between staying in the south or returning to the north, while North Korea insisted on the return of all POWs.

The Neutral Nations Repatriation Commission (NNRC), with India as umpire, chairman and executive agent, supervised the repatriation of POWs from both sides. Statistics show that under operations Little Switch and Big Switch, around 83,000 POWs were eventually repatriated to the north, while around 22,000 preferred to remain in the south.

There were, however, 88 POWs — 76 North Korean and 12 Chinese — who declined either option and went to India instead, and then later to Argentina and Brazil.

Decades later, Korean filmmaker Cho Kyeong-duk is trying to preserve their memories in a documentary that reverses their trip, taking them from South America back home to North Korea.

Stripped of their weapons, North Korean prisoners line up in Seoul on Oct. 10, 1950.
Frank Noel/flickr, CC BY-ND

New start a world away

In 2007, I met one of the surviving North Korean POWs who lives in Buenos Aires, Argentina. Kim Kwan-ok was born and raised in Pyongyang. He was 21 years old when the South Korean Army captured him in North Chungcheong province and transferred him to the UN POW camp on Geoje-do. Upon the ceasefire in 1953, Kim decided he could not return to the north, as he feared for his life, yet he could not stay in the south either, because it was not his homeland.

Finding himself without a family, relatives or friends, he decided to leave Korea and restart life elsewhere. Kim remembered sobbing endlessly as the Astoria, the ship that took him and other POWs to India, slowly departed Incheon harbour on 9 February 1954. At that point, he thought his connection to his motherland was truly over.

While in Madras (now Chennai), Kim learnt poultry farming, took a course in photography and practised some sports. Although “free”, he remembered that there was not much to do for the POWs in India. Yet the issue that troubled them more than boredom was their uncertain future.

As the wait became longer, the POWs grew anxious and one day they all marched to remind the authorities of their existence — only to be confronted by guards.

Eventually, a few POWs decided to settle in India. Others returned to North Korea and three went to South Korea. According to Kim, however, most wished to emigrate to the United States. When the option became unlikely, many chose Mexico instead, hoping to remigrate to the US at a later date.

Unfortunately, Mexico declined their request, but Brazil and Argentina agreed to accept Korean POWs. Almost two years after their arrival in India, 55 North Korean POWs embarked for Brazil to start life anew, and in the next year or so 12 more followed to Argentina.

When the then stateless Kim arrived in Argentina, all he possessed was his youth. With the help of a local Catholic organisation, he found shelter and a job, slowly making his way through a new life.

Consequences of war

When the first South Korean immigrants arrived in Argentina almost a decade later, Kim was at the port to welcome them and helped them get settled. He even served as the first president of the Korean Association in Argentina.

A few other North Korean POWs, especially those in Brazil, took a similar initiative, even when South Korean newcomers tagged them as “the prisoners” or “the communists”. Yet many POWs preferred to quietly blend into local society and slowly disappear from the eyes and memories of all. They wanted to get away from the trauma of the war and the atrocities witnessed at the Geoje-do POW camp. The POWs sought to live free of ideologies and prejudices.

Whether or not the POW participants of this project complete their return home, it is a reminder that the human consequences of any war are carried in the hearts and memories of the people who fought, wherever they end up living.

Hea-Jin Park, Postdoctoral Fellow in Asian Studies, Victoria University of Wellington

This article was originally published on The Conversation. Read the original article.


When image trumps ideology: How JFK created the template for the modern presidency



President John F. Kennedy watches as planes conduct anti-sub operations during maneuvers off the North Carolina coast in April 1962.
Associated Press

Steven Watts, University of Missouri-Columbia

Even at John F. Kennedy’s centennial on May 29, 2017, the 35th president remains an enigma. We still struggle to come to a clear consensus about a leader frozen in time – a man who, in our mind’s eye, is forever young and vigorous, cool and witty.

While historians have portrayed him as everything from a nascent social justice warrior to a proto-Reaganite, his political record actually offers little insight into his legacy. A standard “Cold War liberal,” he endorsed the basic tenets of the New Deal at home and projected a stern, anti-Communist foreign policy. In fact, from an ideological standpoint, he differed little from countless other elected officials in the moderate wing of the Democratic Party or the liberal wing of the Republican Party.

Much greater understanding comes from adopting an altogether different strategy: approaching Kennedy as a cultural figure. From the beginning of his career, JFK’s appeal was always more about image than ideology, more about the emotions he channeled than the policies he advanced.

Generating an enthusiasm more akin to that of a popular entertainer than a candidate for national office, he was arguably America’s first “modern” president. Many subsequent presidents would follow the template he created, from Republicans Ronald Reagan and Donald Trump to Democrats Bill Clinton and Barack Obama.

A cultural icon

JFK pioneered the modern notion of the president as celebrity. The scion of a wealthy family, he became a national figure as a young congressman, known for his good looks, high-society diversions and status as an “eligible bachelor.”

He hobnobbed with Hollywood actors such as Frank Sinatra and Tony Curtis, hung out with models and befriended singers. He became a fixture in the big national magazines – Life, Look, Time, The Saturday Evening Post – which were more interested in his personal life than his political positions.

Later, Ronald Reagan, the movie actor turned politician, and Donald Trump, the tabloid fixture and star of “The Apprentice,” would translate their celebrity impulses into electoral success. Meanwhile, the saxophone-playing Bill Clinton and the smooth, “no drama” Obama – ever at ease on the talk show circuit – teased out variations of the celebrity role on the Democratic stage.

After Kennedy, it was the candidate with the most celebrity appeal who often triumphed in the presidential sweepstakes.

A master of the media

Kennedy also forged a new path with his skillful utilization of media technology. With his movie-star good looks, understated wit and graceful demeanor, he was a perfect fit for the new medium of television.

He was applauded for his televised speeches at the 1956 Democratic convention, and he later prevailed in the famous television debates of the 1960 presidential election. His televised presidential press conferences became media works of art as he deftly answered complex questions, handled reporters with aplomb and laced his responses with wit, quoting literary figures like the Frenchwoman Madame de Staël.

Two decades later, Reagan proved equally adept with television, using his acting skills to convey an earnest patriotism, while the lip-biting Clinton projected the natural empathy and communication skills of a born politician. Obama’s eloquence before the cameras became legendary, while he also became an early adopter of social media to reach and organize his followers.

Trump, of course, emerged from a background in reality television and adroitly employed Twitter to circumvent a hostile media establishment, generate attention and reach his followers.

The vigorous male

Finally, JFK reshaped public leadership by exuding a powerful, masculine ideal. As I explore in my book, “JFK and the Masculine Mystique: Sex and Power on the New Frontier,” he emerged in a postwar era colored by mounting concern over the degeneration of the American male. Some blamed the shifting labor market for turning men from independent, manual laborers into corpulent, desk-bound drones within sprawling bureaucracies. Others blamed suburban abundance for transforming men into diaper-changing denizens of the easy chair and backyard barbecue. And many thought that the advancement of women in the workplace would emasculate their male coworkers.

John F. Kennedy smokes a cigar and reads The New York Times on his boat off the coast of Hyannis Port.
U.S. National Archives and Records Administration

Enter Jack Kennedy, who promised a bracing revival of American manhood as youthful and vigorous, cool and sophisticated.

In his famous “New Frontier” speech, he announced that “young men are coming to power – men who are not bound by the traditions of the past – young men who can cast off the old slogans and delusions and suspicions.”

In a Sports Illustrated article titled “The Soft American,” he advocated a national physical fitness crusade. He endorsed a tough-minded realism to shape the counterinsurgency strategies that were deployed to combat Communism, and he embraced the buccaneering style of the CIA and the Green Berets. He championed the Mercury Seven astronauts as sturdy, courageous males who ventured out to conquer the new frontier of space.

JFK’s successors adopted many of these same masculine themes. Reagan positioned himself as a manly, tough-minded alternative to a weak, vacillating Jimmy Carter. Clinton presented himself as a pragmatic, assertive, virile young man whose hardscrabble road to success contrasted with the privileged, preppy George H.W. Bush. Obama impressed voters as a vigorous, athletic young man who scrimmaged with college basketball teams – a contrast to the cranky, geriatric John McCain and a stiff, pampered Mitt Romney.

More recently, of course, Trump’s outlandish masculinity appealed to many traditionalists unsettled by a wave of gender confusion, women in combat, weeping millennial “snowflakes” and declining numbers of physically challenging manufacturing jobs in the country’s post-industrial economy. No matter how crudely, the theatrically male businessman promised a remedy.

So as we look back at John F. Kennedy a century after his birth, it seems ever clearer that he ascended the national stage as our first modern president. Removed from an American political tradition of grassroots electioneering, sober-minded experience and bourgeois morality, this youthful, charismatic leader reflected a new political atmosphere that favored celebrity appeal, media savvy and masculine vigor. He was the first American president whose place in the cultural imagination dwarfed his political positions and policies.

Just as style made the man with Kennedy, it also remade the American presidency. It continues to do so today.

Steven Watts, Professor of History, University of Missouri-Columbia

This article was originally published on The Conversation. Read the original article.


Pickett’s Charge: What modern mathematics teaches us about Civil War battle



A Civil War re-enactment at Gettysburg, Pa., on the 150th anniversary of the battle in 2013.
(AP Photo/Matt Rourke)

Michael J. Armstrong, Brock University

The Battle of Gettysburg was a turning point in the American Civil War, and Gen. George Pickett’s infantry charge on July 3, 1863, was the battle’s climax. Had the Confederate Army won, it could have continued its invasion of Union territory. Instead, the charge was repelled with heavy losses. This forced the Confederates to retreat south and end their summer campaign.

Pickett’s Charge consequently became known as the Confederate “high water mark.” Countless books and movies tell its story. Tourists visit the battlefield, re-enactors refight the battle and Civil War roundtable groups discuss it. It still reverberates in ongoing American controversies over statues of Confederate leaders, Confederate flags and civil rights.

Workers prepare to take down the statue of Robert E. Lee, former general of the Confederacy, in New Orleans in May.
(AP Photo/Gerald Herbert)

Why did the charge fail? Could it have worked if the commanders had made different decisions? Did the Confederate infantry pull back too soon? Should Gen. Robert E. Lee have put more soldiers into the charge? What if his staff had supplied more ammunition for the preceding artillery barrage? Was Gen. George Meade overly cautious in deploying his Union Army?

Politicians and generals began debating those questions as soon as the battle ended. Historians and history buffs continue to do so today.

Data from conflict used to build model

That debate was the starting point for research I conducted with military historian Steven Sondergren at Norwich University. (A grant from Fulbright Canada funded my stay at Norwich.) We used computer software to build a mathematical model of the charge. The model estimated the casualties and survivors on each side, given their starting strengths.

We used data from the actual conflict to calibrate the model’s equations. This ensured they initially recreated the historical results. We then adjusted the equations to represent changes in the charge, to see how those affected the outcome. This allowed us to experiment mathematically with several different alternatives.
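The article does not reproduce the model’s equations, but attrition models of this kind are often built on Lanchester’s aimed-fire (“square law”) equations, in which each side’s casualty rate is proportional to the opposing side’s remaining strength. The short Python sketch below shows that approach under that assumption; the troop strengths and effectiveness coefficients are illustrative placeholders, not the calibrated values from the study.

```python
# A minimal sketch of a Lanchester aimed-fire ("square law") attrition model:
#   dR/dt = -b * B(t)   (Red's losses are proportional to Blue's strength)
#   dB/dt = -r * R(t)   (Blue's losses are proportional to Red's strength)
# All numbers below are illustrative placeholders, NOT the calibrated
# values from the study described in this article.

def simulate_battle(red, blue, red_coeff, blue_coeff, dt=0.01, t_max=50.0):
    """Step the coupled equations forward with simple Euler integration.

    red, blue             -- starting troop strengths
    red_coeff, blue_coeff -- effectiveness: casualties a side inflicts
                             per opposing soldier per unit of time
    Returns both sides' survivors once one side is wiped out
    (or when t_max is reached).
    """
    t = 0.0
    while t < t_max and red > 0 and blue > 0:
        red_losses = blue_coeff * blue * dt   # inflicted by Blue on Red
        blue_losses = red_coeff * red * dt    # inflicted by Red on Blue
        red = max(red - red_losses, 0.0)
        blue = max(blue - blue_losses, 0.0)
        t += dt
    return round(red), round(blue)

if __name__ == "__main__":
    # Base case: roughly 10,000 attackers against a smaller but more
    # effective entrenched defending force.
    print(simulate_battle(red=10500, blue=6000, red_coeff=0.05, blue_coeff=0.20))
    # What-if: commit reserve brigades to the attack and compare survivors.
    print(simulate_battle(red=15000, blue=6000, red_coeff=0.05, blue_coeff=0.20))
```

Rerunning the simulation with different starting strengths or coefficients, as in the second call above, is the mathematical analogue of the what-if experiments described in the rest of this section.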

The first factor we examined was the Confederate retreat. About half the charging infantry had become casualties before the rest pulled back. Should they have kept fighting instead? If they had, our model calculated that they all would have become casualties too. By contrast, the defending Union soldiers would have suffered only slightly higher losses. The charge simply didn’t include enough Confederate soldiers to win. They were wise to retreat when they did.

We next evaluated how many soldiers the Confederate charge would have needed to succeed. Lee put nine infantry brigades, more than 10,000 men, in the charge. He kept five more brigades back in reserve. If he had put most of those reserves into the charge, our model estimated it would have captured the Union position. But then Lee would have had insufficient fresh troops left to take advantage of that success.

Ammunition ran out

We also looked at the Confederate artillery barrage. Contrary to plans, their cannons ran short of ammunition due to a mix-up with their supply wagons. If their generals had better coordinated those supplies, the cannons could have fired twice as much. Our model calculated that this improved barrage would have been like adding one more infantry brigade to the charge. That is, the supply mix-up hurt the Confederate attack, but was not decisive by itself.

Finally, we considered the Union Army. After the battle, critics complained that Meade had focused too much on preparing his defences. This made it harder to launch a counter-attack later. However, our model estimated that if he had put even one fewer infantry brigade in his defensive line, the Confederate charge probably would have succeeded. This suggests Meade was correct to emphasize his defence.

The stuffed head of Gen. George Meade’s horse, Old Baldy, hangs in a case at the Civil War and Underground Railroad Museum in Philadelphia.
(AP Photo/Justin Maxon)

Pickett’s Charge was not the only controversial part of Gettysburg. Two days earlier, Confederate Gen. Richard Ewell decided against attacking Union soldiers on Culp’s Hill. He instead waited for his infantry and artillery reinforcements. By the time they arrived, however, it was too late to attack the hill.

Was Ewell’s Gettysburg decision actually wise?

Ewell was on the receiving end of a lot of criticism for missing that opportunity. Capturing the hill would have given the Confederates a much stronger position on the battlefield. However, a failed attack could have crippled Ewell’s units. Either result could have altered the rest of the battle.

A study at the U.S. Military Academy used a more complex computer simulation to estimate the outcome if Ewell had attacked. The simulation indicated that an assault using only his existing infantry would have failed with heavy casualties. By contrast, an assault that also included his later-arriving artillery would have succeeded. Thus, Ewell made a wise decision for his situation.

Both of these Gettysburg studies used mathematics and computers to address historical questions. This blend of science and humanities revealed insights that neither specialty could have uncovered on its own.

That interdisciplinary approach is characteristic of “digital humanities” research more broadly. In some of that research, scholars use software to analyze conventional movies and books. Other researchers study digital media, such as computer games and blogs, where the software instead supports the creative process.

Michael J. Armstrong, Associate professor of operations research, Brock University

This article was originally published on The Conversation. Read the original article.


Gold Rush Victoria was as wasteful as we are today



Gold Rush garbage.
S.Hayes. Artefact is part of Heritage Victoria’s collection.

Sarah Hayes, La Trobe University

Australians are some of the biggest producers of waste in the world. Our wasteful ways and “throwaway” culture are firmly entrenched, and we have a hard time curbing our habits.

To understand why, we might turn our attention to the great social and economic transformation that occurred after the discovery of gold (by Europeans) in Victoria in 1851. Archaeological excavations across Melbourne have uncovered masses of rubbish dating back to the Gold Rush era of the 1850s and 1860s.

Artefacts recovered from sites within Melbourne show that the city’s Gold Rush era occupants were incredibly wasteful. You might think that 150 years ago, Victorians would have been thrifty and mended their belongings or sold them on secondhand. But the evidence suggests otherwise.

Working-class people living in Melbourne’s CBD were throwing out so much stuff that the weekly rubbish collections couldn’t manage all their trash. Residents were stockpiling rubbish under floorboards, in hidden corners of the backyard or digging holes specifically for it.

Cesspits (old-fashioned long-drop toilets) were closed across the city in the early 1870s, leaving large empty holes in the ground. Residents took the opportunity to fill them with their surplus rubbish. Many of these rubbish dumps remain under current city buildings and have been found and recorded in cultural heritage management excavations.

Excavation of a cesspit in Little Lonsdale Street.
Green Heritage Compliance and Research

There were also larger rubbish dumps. At Viewbank homestead, on the outskirts of Melbourne, the tip was so big that archaeologists ran out of time to excavate it. Excavations at the Carlton Gardens have also uncovered a substantial amount of household rubbish dumped in the area by opportunistic city residents and night cart men.

Analysing the contents of all these rubbish dumps, it’s clear that people were discarding dinner sets and replacing them with more fashionable designs, buying and chucking out junk jewellery, and throwing out glass bottles in vast numbers in spite of industrial-scale local recycling operations. Sound familiar? They were even using “disposable” clay pipes, a Gold Rush era equivalent of our disposable coffee cups.

This plate was part of a large set discarded in the tip at Viewbank Homestead, likely because it was no longer in fashion.
S.Hayes. Artefact is part of Heritage Victoria’s collection.

Another surprising find was a rubbish pit dug in the backyard of a draper shop and filled with piles of seemingly perfectly good clothes and shoes. Perhaps they had gone out of fashion? Excess, it seems, is in Melbourne’s bones.

You are what you own

The discovery of gold brought a massive increase in population, new wealth, unprecedented access to a global network of consumer goods and great opportunities for social mobility. No one could be sure of your social background in the chaos of this rapid change. The old working, middle and upper class hierarchy became less relevant and it was possible to move up the social ladder.

How, then, did people communicate their status? Through stuff. Cultural capital refers to how people play the “culture game”: their accent, their clothes, their possessions, their manners, their interests. The argument goes that status is determined by the expression of cultural values and particular behaviours rather than wealth alone.

Dress circle boxes, Queens Theatre: Lucky Diggers in Melbourne, 1853.
S.T.Gill. State Library of Victoria.

Everyday choices of consumer goods became powerful in carving out a new position and a better life in the new city. Your home, your furniture, your tableware, your drinking glasses, your clothes, all became vital markers of your place in society. You were no longer constrained by your situation of birth.

Melbourne society was reinvented, and a new, much larger and more diverse middle class emerged: one with a new system for determining status, based largely on what its members bought.

Why do we buy and why can’t we stop?

As a globalised world grapples with fast fashion, fast consumerism and a throwaway culture, with massive landfills and climate change, the question of why we consume is more important than ever.

You might want to consume and waste less. But old habits die hard and it’s important to understand why we consume before we are able to make significant changes to our wasteful habits.

Social mobility might not have the currency that it did in the Gold Rush era, but we are still purchasing to communicate something. What we buy announces our position in the world and our values. Our possessions place us within one group and distance us from another, just as they did in the Gold Rush era.

As the slow movement, anti-consumerism and concerns over sustainability gather pace, a new brand of cultural capital may emerge. A cheap polyester jumper and a disposable coffee cup may become signs of inappropriate excess. A minimal wardrobe of ethically produced clothes and a reusable coffee cup could become the ultimate markers of status.

Sarah Hayes, Research Fellow in Archaeology and History, La Trobe University

This article was originally published on The Conversation. Read the original article.


Æthelflæd: the Anglo-Saxon iron lady



Æthelflæd.

Philip Morgan, Keele University; Andrew Sargent, Keele University; Charles Insley, University of Manchester, and Morn Capper, University of Chester

The UK now has a female prime minister and Elizabeth II has been queen for more than six decades, but few would associate Anglo-Saxon England with powerful women. Nearly 1,100 years ago, however, Æthelflæd, “Lady of the Mercians”, died in Tamworth as one of the most powerful political figures in tenth-century Britain.

Although she has faded from English history, and is often seen as a bit-part player in the story of the making of England, Æthelflæd was in fact a hugely important figure before her death in 918, aged around 50. Indeed, the uncontested succession of her daughter, Ælfwynn, as Mercia’s leader was a move of successful female powerplay not matched until the coronation of Elizabeth I after the death of her half-sister Mary in 1558. So, while Bernard Cornwell’s novels and the BBC series The Last Kingdom are cavalier with the historical facts, perhaps they are right to give Æthelflæd a major role.

Æthelflæd was born in the early 870s. Her father, Alfred “the Great”, had become King of the West Saxons in 871, while her mother, Ealhswith, may have been from Mercian royal kindred. At the time, Anglo-Saxon “England” was made up of a series of smaller kingdoms, including Wessex in the south, Mercia in the Midlands and Northumbria in the far north. All faced encroachment by Viking forces that were growing in strength and ambition, as outlined in Charles Insley’s article The Strange End of the Mercian Kingdom and in Mercia and the Making of England by Ian Walker.

Æthelflæd spent most of her life in the Kingdom of Mercia married to its de facto ruler, Æthelred. Mercia had seen some dark days by the time of her marriage. In the eighth and early ninth centuries, the Mercian kings had had good cause to consider themselves the most powerful rulers in southern Britain. But by the 870s, the kingdom had suffered dramatically from the Viking assaults which had swept across England.

One king, Burgred, had fled to Rome, and his successor, Ceolwulf II, was seen as a mere puppet by the West-Saxon compiler of the Anglo-Saxon Chronicle and disappeared between 878 and 883. Soon, the East Midlands were ruled by Scandinavians – what became known as the “Danelaw” – and so the kingdom ruled by Æthelflæd and Æthelred was by then just the western rump of the old Mercia.

Nevertheless, Æthelflæd and Æthelred together engaged in massive rebuilding projects at Gloucester, Worcester, Stafford and Chester, overseeing the refounding of churches, new relic collections and saints’ cults. Famously, in 909, the relics of the seventh-century saint Oswald were moved from Bardney, deep in Scandinavian-controlled Lincolnshire, to a new church at Gloucester. Perhaps appropriately for a couple facing the Vikings, Æthelflæd and her husband had a great attachment to the saint, a warrior king and Christian martyr. Æthelred was buried alongside Oswald in 911, and Æthelflæd joined him seven years later.

Powerplay and politics

At the time, Æthelred and Æthelflæd did not call themselves king or queen, nor do the official documents or coins refer to them as such. Instead, they used the title “Lord/Lady of the Mercians”, because Alfred had extended his authority over Mercia and styled himself “King of the Anglo-Saxons”.

But they acted like rulers. Æthelflæd, with her husband and her brother Edward the Elder, King of the Anglo-Saxons, launched a series of military campaigns in the early tenth century. These brought all of England south of the Humber and Mersey river under Anglo-Saxon control and rolled up the Scandinavian lordships which had been established in the East Midlands and East Anglia.

Vikings: bane of Anglo-Saxon England.
Shutterstock

These advances were backed up by an energetic programme of fortification, with burhs (fortified towns) built in places such as Bridgnorth, Runcorn, Chester and Manchester.

But while she called herself a “lady”, outsiders, especially the Welsh and Irish, saw Æthelflæd as a “queen”, and she surely wasn’t just her husband’s subservient wife. She was Alfred the Great’s daughter, and the role Mercia and the Mercians would play in the kingdom of the Anglo-Saxons was at stake.

A potent widow

But Æthelflæd really came into her own following her husband’s death in 911, although it seems he had been in poor health for the best part of the previous decade. The Mercian Register in the Anglo-Saxon Chronicle certainly celebrates her deeds from 910 onwards.

In 915, she campaigned successfully against the Welsh and their major kings, and in England she began to expand her kingdom further. In 917-18, her army took control of Viking-occupied Derby and Leicester, and just before her death, the “people of York” – that is, the Scandinavian lords of southern Northumbria – also agreed to submit to her.

For a brief moment, she had authority not just over her own territory in Mercia, but over the Welsh, the Scandinavian East Midlands and possibly part of Northumbria, making her perhaps one of the three most important rulers in mainland Britain – the others being her brother Edward, King of the Anglo-Saxons, and Constantine II mac Áeda, King of the Scots.

This made her a major political actor in her own right, but also a respected and feared figure. Even more remarkably, she passed her authority on to her daughter, Ælfwynn, who was around 30 when her mother died. The rule of Ælfwynn in Mercia, which attracts virtually no comment at all from historians, lasted about six months before her uncle Edward launched a coup d’état, deprived her of all authority and took her into Wessex.

Æthelflæd’s legacy is enigmatic, wrapped up in the “making of England”. But she was a ruler of consequence in an era defined by male authority. Indeed, her project to rebuild the kingdom of Mercia and the Mercians might have placed midland England at the heart of later history.

Philip Morgan, Senior Lecturer, Keele University; Andrew Sargent, Lecturer in Medieval History, Keele University; Charles Insley, Senior lecturer, University of Manchester, and Morn Capper, Lecturer in Archaeological Heritage, University of Chester

This article was originally published on The Conversation. Read the original article.

