Tag Archives: history

History of the two-day weekend offers lessons for today’s calls for a four-day week



The leisure industry led one of many campaigns to free people from working on Saturday afternoons.

Brad Beaven, University of Portsmouth

The idea of reducing the working week from an average of five days to four is gaining traction around the world. Businesses and politicians have been considering a switch to fewer, but more productive hours spent working. But the idea has also been derided.

As a historian of leisure, I am struck by a number of parallels between debates today and those that took place in the 19th century, when the weekend as we now know it was first introduced. Having Saturdays as well as Sundays off work is actually a relatively modern phenomenon.

Throughout the 19th century, government legislation reduced working hours in factories and prescribed regular breaks. But the weekend did not simply arise from government legislation – it was shaped by a combination of campaigns. Some were led by half-day holiday movements, others by trade unions, commercial leisure companies and employers themselves. The formation of the weekend in Britain was a piecemeal and uneven affair that had to overcome unofficial popular traditions that punctured the working week during the 19th century.

‘Saint Monday’

For much of the 19th century, for example, skilled artisan workers adopted their own work rhythms as they often hired workshop space and were responsible for producing items for their buyer on a weekly basis. This gave rise to the practice of “Saint Monday”. While Saint Monday mimicked the religious Saint Day holidays, it was in fact an entirely secular practice, instigated by workers to provide an extended break in the working week.

They worked intensively from Tuesday to finish products by Saturday night, enjoyed Sunday as a legitimate holiday, and then took Monday off to recover from the excesses of Saturday night and Sunday. By the mid-19th century, Saint Monday was a popular institution in British society. So much so that commercial leisure venues – like music halls, theatres and singing saloons – staged events on this unofficial holiday.

The Victorian period spawned a number of music halls, such as Canterbury Hall in London.
People Play

Workers in the early factory system also adopted the tradition of Saint Monday, despite manufacturers consistently opposing the practice, as it hurt productivity. But workers had a religious devotion to the unofficial holiday, which made it difficult for masters to break the habit. It continued to thrive into the 1870s and 1880s.

Nonetheless, religious bodies and trade unions were keen to instil a more formal holiday in the working week. Religious bodies argued that a break on Saturday would improve working class “mental and moral culture”. For example, in 1862 Reverend George Heaviside captured the optimistic tone of many religious leaders when, writing in the Coventry Herald newspaper, he claimed a weekend would allow for a refreshed workforce and greater attendance at church on Sundays.

Trade unions, meanwhile, wanted to secure a more formalised break in the working week that did not rely on custom. Indeed, the creation of the weekend is still cited as a proud achievement in trade union history.

In 1842 a campaign group called the Early Closing Association was formed. It lobbied government to keep Saturday afternoon free for worker leisure in return for a full day’s work on Monday. The association established branches in key manufacturing towns and its membership was drawn from local civic elites, manufacturers and the clergy. Employers were encouraged to establish half-day Saturdays as the Early Closing Association argued it would foster a sober and industrious workforce.

Half-day Saturdays were seen as a way to improve productivity.
Shutterstock

Trade unions and workers’ temperance groups also saw the half-day Saturday as a vehicle to advance working class respectability. It was hoped workers would shun drunkenness and brutal sports like cock fighting, which had traditionally been associated with Saint Monday.

For these campaigners, Saturday afternoon was singled out as the day in which the working classes could enjoy “rational recreation”, a form of leisure designed to draw the worker from the public house and into elevating and educational pursuits. For example, in Birmingham during 1850s, the association wrote in the Daily News newspaper that Saturday afternoons would benefit men and women who could:

Take a trip into the country, or those who take delight in gardening, or any other pursuit which requires daylight, could usefully employ their half Saturday, instead of working on the Sabbath; or they could employ their time in mental or physical improvements.

Business opportunity

Across the country a burgeoning leisure industry saw the new half-day Saturday as a business opportunity. Train operators embraced the idea, charging reduced fares for day-trippers to the countryside on Saturday afternoons. With increasing numbers of employers adopting the half-day Saturday, theatres and music halls also switched their star entertainment from a Monday to Saturday afternoon.

Perhaps the most influential leisure activity to help forge the modern week was the decision to stage football matches on Saturday afternoon. The “Football Craze”, as it was called, took off in the 1890s, just as the new working week was beginning to take shape. So Saturday afternoon became a very attractive holiday for workers, as it facilitated cheap excursions and exciting new forms of leisure.

The well-attended 1901 FA Cup final.
Wikimedia Commons

The adoption of the modern weekend was neither swift nor uniform as, ultimately, the decision for a factory to adopt the half-day Saturday rested with the manufacturer. Campaigns for an established weekend had begun in the 1840s, but the practice did not gain widespread adoption for another 50 years.

By the end of the 19th century, there was an irresistible pull towards marking out Saturday afternoon and Sunday as the weekend. While they had their different reasons, employers, religious groups, commercial leisure and workers all came to see Saturday afternoon as an advantageous break in the working week.

This laid the groundwork for the full 48-hour weekend as we now know it – although this was only established in the 1930s. Once again, it was embraced by employers who found that the full Saturday and Sunday break reduced absenteeism and improved efficiency.

Brad Beaven, Professor of Social and Cultural History, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.


History repeats itself. That’s bad news for the 2020s



When there are too many elites in a society, competition for power makes existing problems worse.
Francisco Goya / Wikimedia

David Baker, Macquarie University

What will happen in the 2020s? If history is any guide (and there’s good reason to think it is), the outlook isn’t great.

Here are some big-picture predictions: stagnant real wages, faltering standard of living for the lower and middle classes, worsening wealth inequality, more riots and uprisings, ongoing political polarisation, more elites competing for limited positions of power, and elites co-opting radical movements.

Thanks to globalisation, all this won’t just happen in one country but in the majority of countries in the world. We will also see geopolitical realignment, dividing the world into new alliances and blocs.

There is also a low to moderate chance of a “trigger event” – a shock like an environmental crisis, plague, or economic meltdown – that will kick off a period of extreme violence. And there is a much lower chance we will see a technological breakthrough on par with the industrial revolution that can ease the pressure in the 2020s and reverse the trends above.

These aren’t just guesses. They are predictions made with the tools of cliodynamics, which uses dozens of case studies of civilisations over the past 5,000 years to look for mathematical patterns in human history.




Read more:
Cliodynamics: can science decode the laws of history?


Cycles of growth and decline

One area where cliodynamics has borne fruit is “demographic-structural theory”, which explains common cycles of prosperity and decline.

Here’s an example of a full cycle, taken from Roman history. After the Second Punic War ended in 201 BCE, the Roman republic enjoyed a period of extreme growth and prosperity. There was a relatively small divide between the richest and poorest, and fewer members of the elite.

As the population grew, smallholders had to sell off their farms. Land coalesced into larger plantations run by elites mostly with slave labour. Elite numbers ballooned, wealth inequality became extreme, the common people felt pinched, and numerous wealthy people found themselves shut out of power.

The assassination of Julius Caesar was a key event in the decline of the Roman republic.
Jean-Léon Gérôme

The rich resisted calls for land reform, and eventually the elites split into two factions called the Optimates and the Populares. The following century involved slave revolts and two massive civil wars.

Stability only returned when Augustus defeated all other rivals in 30 BCE – and ended the republic, making himself emperor. So began a new cycle of growth.

Booms and busts

Demographic-structural theory looks at things like the economic and political strength of the state, the ages and wages of the population, and the size and wealth of the elite to diagnose a society’s health – and work out where it’s heading.

Historically, some things we see today are bad signs: shrinking real wages, a growing gap between the richest and the poorest, rising numbers of wealthy and influential people who are becoming more competitive and factionalised.

Another bad sign is if previous generations witnessed periods of growth and plenty. It might mean that your society is about to hit a wall – unless a great deal of innovation and good policy relieves the pressure once again.

We are living in an unprecedented period of global growth. History says it won’t last.
SRC / IGBP / F Pharand Deschenes

Since 1945, the modern global system has experienced a period of growth unprecedented in human history, often referred to as the “Great Acceleration”. Yet in country after country today, we see stagnant wages, rising inequality, and wealthy elites jousting for control.

Historically, periods of strain and “elite overpopulation” are followed by a crisis (environmental or economic), which is in turn followed by years of sociopolitical instability and violence.

Elite competition makes crises worse

Factional warring after a disaster in a top-heavy society makes things much worse. It can keep the population low for decades after the initial catastrophe, and may only end when elites are exhausted or killed off.

This underlying cycle fed the Wars of the Roses between the Lancastrians and Yorkists in 15th century England, the struggle between the Optimates and Populares in the Roman Republic, and countless other conflicts in history.




Read more:
Computer simulations reveal war drove the rise of civilisations


In a period of growth and expansion these dynastic, political, and religious animosities would be less pronounced – as there is more of everything to go around – but in a period of decline they become incendiary.

In different regions and time periods, the factions vary widely, but the ideological merits or faults of any particular faction have literally no bearing on the pattern.

We always massacre each other on the downward side of a cycle. Remember that fact as we embark on the pattern again in the 2020s, and you find yourself becoming blindingly angry while watching the news or reading what someone said on Twitter.

A connected world

Because the world’s societies and economies are more unified than ever before, the increasing political division we see in Australia or the United States also manifests itself around the world.

Violence between the Bharatiya Janata Party (BJP) and Trinamool Congress in Bengal, political polarisation in Brazil following the election of Jair Bolsonaro, and less public conflicts within China’s ruling party are all part of a global trend.

Trigger events

We can expect this decline to continue steadily in the next decade, unless a trigger event kicks off a crisis and a long period – perhaps decades – of extreme violence.

Here’s a dramatic historical example: in the 12th century, Europe’s population was growing and living standards were rising. The late 13th century ushered in a period of strain. Then the Great Famine of 1315–17 set off a time of strife and increasing violence. Next came an even bigger disaster, the Black Death of 1347–51.

After these two trigger events, elites fighting over the wreckage led to a century of slaughter across Europe.

From my own studies, these “depression phases” kill an average of 20% of the population. On a global scale, today, that would mean 1.6 to 1.7 billion people dead.

There is, of course, only a low to moderate probability that such a trigger event will occur in the 2020s. It may happen decades later. But the kindling for such a conflagration is already being laid.




Read more:
Big gods came after the rise of civilisations, not before, finds study using huge historical database


Technology to the rescue?

One thing that could reverse this cycle would be a major technological breakthrough. Innovation has temporarily warded off decline in the past.

In mid-11th century Europe, for example, new land-clearing and agricultural methods allowed a dramatic increase in production which led to relative prosperity and stability in the 12th century. Or in the mid-17th century, high-yield crops from the Americas raised carrying capacities in some parts of China.

In our current situation, something like nuclear fusion – which could provide abundant, cheap, clean energy – might change the situation drastically.

The probability of this occurring in the 2020s is low. Nevertheless, innovation remains our best hope, and the sooner it happens the better.

This could be a guiding policy for public and private investment in the 2020s. It is a time for generous funding, monumental projects, and bold ventures to lift humanity out of a potential abyss.

Sunlit uplands of the distant future

If you look far enough ahead, our prospects become brighter.
Shutterstock

Cheer up. All is not lost. The further we project into the future the brighter human prospects become again, as great advances in technology do occur on a long enough timescale.

Given the acceleration of the frequency of such advances over the past 5,000 years of history, we can expect something profound on the scale of the invention of agriculture or the advent of heavy industry to occur within the next 100 years.

That is why humanity’s task in the 2020s – and much of the 21st century – is simply to survive it.

David Baker, Lecturer in Big History, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The History of Alcohol



Mormons and money: An unorthodox and messy history of church finances



There was something fishy about this $3 bill.
Everett Historical/Shutterstock.com

John Turner, George Mason University

The Church of Jesus Christ of Latter-day Saints has allegedly amassed US$100 billion in purportedly charitable assets since 1997 without ever giving any money away – a possible breach of federal tax laws.

This estimate of the size of its investment vehicle known as Ensign Peak Advisors became public knowledge when David A. Nielsen, a former employee and a member of the church, blew the whistle.

Together with his twin brother Lars, a former church member, Nielsen gave the Internal Revenue Service evidence he claims proves the church mishandled funds.

According to the Nielsens, Ensign Peak Advisors has invested the church’s annual surplus member contributions to build up a $100 billion portfolio. But the Nielsens say they could find no evidence that Ensign Peak Advisors spent a dime of this money for religious, charitable, educational or other “public” purposes as IRS rules require under most circumstances. They also allege that it diverted tax-exempt funds to finance some for-profit projects, which could also violate IRS rules banning such transactions in some situations.

If the IRS determines that the investment fund failed to act as a charity even though it benefited from tax breaks, it might find that Ensign Peak Advisors broke tax laws. If that happens, and the IRS collects back taxes, David Nielsen could receive a cut as a reward.

If the numbers are accurate, Ensign is the nation’s largest charitable endowment, with as much money as Harvard University and the Bill and Melinda Gates Foundation have at their disposal, combined, if not more.

Church leaders deny that they have violated any laws that regulate tax-exempt institutions. The church “complies with all applicable law governing our donations, investments, taxes and reserves,” said the three-member council headed by church president Russell M. Nelson.

From my vantage point as a historian of Mormonism, this news marks a new twist on an old story. For nearly two centuries, the church has conducted its finances in ways that defy the expectations Americans have for religious organizations.

Lars Nielsen, brother of whistleblower David Nielsen, explains how Ensign Peak Advisors allegedly operates.

A church-owned ‘anti-bank’

Consider what happened in the summer of 1837, when the fledgling church teetered on the brink of collapse.

At the time, Joseph Smith and many church members lived in Kirtland, a small town in northeastern Ohio. The Smith family had moved there in the early 1830s, seeking a safer gathering place for church members in the face of persecution in New York state.

Joseph Smith’s followers built this temple in Kirtland, Ohio before most of them moved westward.
Library of Congress

Smith and his followers began building a temple in Kirtland. The Saints dedicated their temple in 1836, but the project left Smith and others deep in debt. Like many communities in antebellum America, Mormon Kirtland was land-rich and cash-poor. A lack of hard currency hampered commerce.

Smith and his associates decided to start their own bank to solve their financial woes. The circulation of bank notes, they thought, would boost Kirtland’s economic prospects and make it easier for church leaders to satisfy their creditors.

Lots of currency

The idea of Mormon leaders printing their own money wasn’t as crazy as it sounds in 2019. The United States still lacked a uniform currency. A host of institutions of varying integrity – chartered banks, unchartered banks, other businesses and even counterfeiting rings – issued notes whose acceptance depended on the confidence of citizens who might accept or refuse them.

Mormon leaders bought engraving plates for printing bank notes and asked the Ohio state legislature to charter their bank. The Mormon proposal went nowhere in the legislature.

Joseph Smith: Latter-day Saints movement founder and, for a time, currency creator.
AP Photo/Douglas C. Pizac

At this point, church leaders took a more fateful and dubious step.

They had collected money from investors and had already begun printing notes of the “Kirtland Safety Society Bank.” Instead of shutting down the operation when the charter failed to come through, they doubled down. Worried about the legal risk of running an unchartered bank, church leaders altered the notes to read “anti-Banking-Co.”

A brief boom

For a while, all went well. “Kirtland bills are as safe as gold,” one church member wrote in January 1837. The town enjoyed a short-lived boom.

Soon, however, the anti-bank proved anything but safe. Non-Mormons questioned the society’s ability to redeem its notes, and church leaders could not keep it afloat. The Kirtland Safety Society’s struggles were not unusual. Scores of banks, including some of the nation’s largest, failed in what became the Panic of 1837. Real estate speculators lost their fortunes, and workers lost their jobs.

What made Kirtland different was the bank’s ownership. Many church members lost not only confidence in the society’s banknotes, but faith in the prophet who had signed them.

The crisis divided the church. At one point that summer, church members wielding pistols and bowie knives fought with each other in the temple. Smith and one of his top associates were convicted of issuing banknotes without a charter and fined $1,000 each. They soon fled the courts and their creditors, taking refuge with fellow church members in Missouri.

After anti-Mormon mobs forced the Latter-day Saints out of Missouri and then Illinois, Smith’s successor, Brigham Young, led thousands of church members to what became the Utah Territory.

From a railroad to a shopping mall

The church has never stopped blending commerce and religion.

In the late 1860s, Mormons built the Utah Central Railroad, which connected Salt Lake City with Ogden – a stop along the transcontinental railroad. Church leaders controlled the railway until 1878, when Union Pacific bought it.

Beginning in 1868, the church also operated the Zion’s Cooperative Mercantile Institution, a department store designed to put the squeeze on non-Mormon businesses.

The church sold the store in 1999, but in many ways its commercial interests have become more grandiose since its frontier days of railroading and retailing.

In 2003, the church’s for-profit real estate division purchased the land on which the store had stood. Nine years later, the estimated $1.5 billion City Creek Center development opened to the public, including a glitzy mall.

The Mormon Church’s commercial real estate arm built the lavish City Creek Center shopping mall in Salt Lake City.
AP Photo/Rick Bowmer

At the time, church officials asserted that they had not used any tithing money on the City Creek project. The church explains that tithing – the contribution of 10% of its 16 million members’ annual income – is for the construction and maintenance of church buildings, local congregational activities and the church’s educational programs. The church’s for-profit divisions handle commercial projects, including real estate and publishing.

The Nielsen brothers allege that Ensign Peak Advisors diverted $1.4 billion in tithing funds to pay for the development, a possible violation of the IRS rules that govern tax-exempt institutions.

It is impossible to confirm the accusation without greater transparency on the part of the church, which has told Religion Unplugged, a nonprofit media outlet, that it “does not provide information about specific transactions or financial decisions.”

According to Samuel Brunson, a tax law professor, the church was more open about its ledger sheet and business arrangements during the first half of the 20th century.

Then, in the mid- to late 1950s, it lost approximately $10 million in municipal bond investments. The resulting embarrassment was one factor in the church’s decision to become less forthcoming about its finances.

In this respect, the church is not unique. U.S. laws do not require churches to disclose their financial information in much detail. While some churches do so voluntarily, others – including the Catholic Church – keep their financial and commercial interests shrouded from public view.

Saving for a ‘rainy decade’

It remains to be seen whether Ensign Peak Advisors is going to become the subject of IRS investigations.

There are, of course, ethical and moral questions in addition to legal ones. For example, should the church amass so much money? And might the church use more of its excess funds and investment gains for humanitarian purposes or to make the tuition at church-owned Brigham Young University even more affordable?

What’s also at stake is confidence in the church’s leaders. Sen. Mitt Romney, the Republican Party’s 2012 presidential nominee and the nation’s most politically influential Mormon, professed to be “happy that they’ve not only saved for a rainy day, but for a rainy decade.”

Romney’s perspective makes some historical sense, given that the most obvious problem in Kirtland, Ohio, was that Joseph Smith’s financial stewardship was decidedly unwise. At least today’s church leaders earn good returns on their investments.


John Turner, Professor of American Religion, George Mason University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


History of the French Language



A History of Christmas



Friday essay: a short, sharp history of the bayonet



A British Pattern 1907 bayonet with leather scabbard.
Wikimedia Commons

Peter Monteath, Flinders University

Even the sound of a bayonet could be frightening. The audible whetting of blades in the enemy’s trenches could puncture a night’s rest with premonitions of steely death. The sight of gleaming blades, too, turned the stomach of many a soldier. For all the sheer, witless terror it could produce in those who heard, saw and perhaps felt its cold steel, there was no weapon more visceral than the bayonet.

It might have been a moment of inspired panic that brought the bayonet into existence. The bearer of a musket – maybe a soldier, maybe a hunter – having fired his weapon and missed his target, found himself at the mercy of a fast-approaching assailant.

With no time to reload, he plunged the handle of a dagger into the muzzle, converting it from firearm to elongated knife or pike. Perhaps he had missed his target altogether and expected to be assaulted at any moment, or perhaps his wounded quarry had disappeared into a thicket and needed to be chased at speed.

As time was of the essence, it could not be squandered in the cumbersome act of reloading. Shoved snugly inside the muzzle of a firearm, even a short dagger could deliver a lethal strike.

From its first use somewhere in southwestern France sometime in the first half of the 17th century, the genius of the invention spread far and wide. History has it that the first acknowledged military use of the bayonet was at Ypres in 1647. It also reveals that, for all its genius, the days of the “plug bayonet” were numbered. While the wooden handle was plugged in the musket, the weapon could not be fired. Worse than that, over-vigorous use might damage the barrel, or the blade might break while wedged firmly inside.

A Russian grenadier with bayonet in 1732.
Wikimedia Commons

Over time, ways were found to attach blades to the outside of barrels, whether running alongside, on top or beneath them. The blades could be short and dagger-like. Or they could be as long as swords, so that when attached to long-barrelled weapons they could deliver their bearer the advantage of reach. In cross-section, they might be broad and thin like a carving knife, round like a stiletto, or star-shaped.

In their countless variations, bayonets appeared on many a battlefield in Europe and other parts of the world, until in the last decades of the 19th century they appeared to have met their match. The American Civil War and the Franco–Prussian War seemed to teach one incontrovertible lesson – that advances in military technology had rendered the humble bayonet obsolete. In the face of machine-gun fire or a bombardment of artillery, the infantryman with a fixed bayonet might never see his killer, let alone plunge the cold steel into him.

Yet while machine-guns, mortars and artillery might serve to mow down the serried ranks of the enemy or blow them apart, ultimately even positions strewn with corpses had to be occupied and claimed. It remained the infantrymen’s vital role to make contested territory their own. If the very sight of fixed bayonets did not persuade any surviving defenders to surrender, then the bayonets might still have work to do.

The War for the Union, 1862 – A Bayonet Charge (Harper’s Weekly, Vol. VII)
Wikimedia Commons

A 20th century revival

The 20th century proved that declarations of the bayonet’s demise had been premature. It remained standard issue for infantrymen all over the world, even if its shape and use varied.

A German bayonet from the first world war.
Wikimedia Commons

The Russians clung fanatically to their faith in the socket bayonet. The Japanese reintroduced a sword bayonet in 1897, inspired by a French weapon. Where stealth was of the essence, as it was in night attacks in the Russo–Japanese War, the bayonet delivered silent death. Americans, too, insisted that their infantry carry long bayonet blades – an intimidating 40 centimetres – on their belts, ready to be fixed when the need arose. In time and with experience, though, the Germans opted for shorter knife bayonets of 25 or 30 centimetres.

In Britain, and all her Dominions, the so-called “Pattern 1907” bayonet was preferred. Over the centuries, the fundamentals of the bayonet had barely changed, and the Pattern, too, consisted of a blade, a guard with crosspiece and muzzle ring, and a wooden hilt. Along much of the length of the blade ran a groove, a fuller. It reduced the weight of the weapon and also allowed air to pass into the wound, making it easier to extract the blade.

While most of the standard weapons of the British Empire’s armies were manufactured in Britain, Australia, like India, manufactured its own Pattern 1907 bayonets in both wars.

In the first world war they were made in a factory in Lithgow, while those from the second world war were stamped with 13 (for Orange Arsenal) or 14 (for Munitions Australia). The wooden grips were stamped with “SLAZ”, an abbreviation of their British maker, Slazenger, which had been active in the sporting goods business since the 1880s.

Kept normally in a scabbard attached to the soldier’s belt, when fixed to the standard-issue Short Magazine Lee Enfield rifle, the Pattern 1907 extended the soldier’s reach by more than 40 centimetres.

Australian soldiers guard the jetty in Bowen during world war one.
Wikimedia Commons

Australia’s willing killers

Bayonets were standard equipment in the first world war, even as the accelerated development of military technology enforced the trend to mechanised, industrial killing. Australians earned themselves a reputation for using their bayonets with relish. Well trained and drilled in their use, they plunged, parried and stabbed with great vigour at Gallipoli and on the Western Front. The Australians, as the historian Bill Gammage has put it:

by reputation and probably in fact, were among the most willing to kill. They had an uncomplicated attitude towards the Hun, conditioned largely by propaganda and hardly at all by contact, and they hated him with a loathing paralleled, at least in the British Army, only by some other colonial troops. Accordingly many killed their opponents brutally, savagely, and unnecessarily.

Australian infantry in the trenches with bayonets during World War One.
Frank Hurley/Wikimedia Commons

It was not only the Germans who became acquainted with the Pattern 1907. At Gallipoli Albert Jacka won Australia’s first VC of the war by shooting five Turks and bayonetting two others. Another Australian, Nigel Ellsworth, noted that in advance of a night attack on Turkish lines:

one can’t buy a place in the main firing trench, and men are known to have refused for their positions during the fighting. They stand up in the trenches & yell out ‘Come on, we’ll give you Allah’ & … let some Turks actually get into our Trenches then tickle them up with the bayonet.

‘Steel has an unearthly terror’

Archie Barwick, a farmer from New South Wales, spoke of being transported into a state of “mad intoxication” when he took to the Turks with fixed bayonet.

I can recollect driving the bayonet into the body of one fellow quite clearly, & he fell right at my feet & when I drew the bayonet out, the blood spurted from his body.

A New Zealand officer writing home from Gallipoli claimed that the Turks “redoubled” their fire over the New Zealanders’ positions at night. It was “the one hope of deterring the dreaded bayonets of our men … steel has an unearthly terror for them”.

In a similar vein, another Australian wrote boastfully to his family of the short work he made of Germans:

They get it too right where the chicken gets the axe … I … will fix a few more before I have finished. It’s good sport father, when the bayonet goes in their eyes bulge out like a prawns.

If there was a danger in the over-zealous use of the bayonet, it was that the weapon might be driven so far and firmly into the opponent’s body that it was difficult to extract it. The Queenslander Hugh Knyvett recalled a case where a fellow Australian drove his bayonet through a German and into a hardwood beam, from which it could not be withdrawn. The blade had to be released from the rifle, “leaving the German stuck up there as a souvenir of his visit”.

By the latter stages of the first world war, the Australians’ skill had manifested in the use of a particularly lethal movement with the bayonet known as the “throat jab”.

It is well illustrated in William Longstaff’s iconic painting Night Attack by 13th Brigade at Villers-Bretonneux, which shows an Australian holding aloft his Lee Enfield, bayonet attached, and thrusting it into a German’s exposed throat.

Night attack by 13th Brigade at Villers-Bretonneux.
Australian War Memorial

In recalling his own role in that battle in the night from 24 April to Anzac Day, Walter Downing wrote:

Bayonets passed with ease through grey-clad bodies, and were withdrawn with a sucking noise … Many had tallies of twenty and thirty and more, all killed with the bayonet, or bullet, or bomb. Some found chances in the slaughter to light cigarettes, then continued the killing.

Still, in reality the bayonet’s role in the first world war was more prominent in the telling than on the battlefield. Sober analysis showed that the vast majority of deaths and casualties were put down to machine-guns and artillery. As for the Australians themselves, more than half of those admitted to field hospitals in France suffered injuries from shells and shell-shock, and more than a third from bullets. The combined tally from bombs, grenades and bayonets was just over 2%.

The fear of cold steel

After the war, even former combatants voiced their awareness of the bayonet’s shortcomings. It might have been helpful for certain mundane tasks like opening tins, chopping firewood or perhaps roasting meat over a fire, but in a charge across open land in the sights of German machine-gunners, it was at best an unwelcome burden.

In close quarters, too, it had its drawbacks. Fixed in readiness to the end of a Lee Enfield and lugged along a trench, its most likely victim was a comrade in arms, who might receive a prod to the buttocks or a poke in the eye.

A Pattern 1907 bayonet with hooked quillon.
Australian War Memorial

Nonetheless, by 1939, the bayonet still had its place in every army. The true value of the bayonet was in the soldier’s mind, not at the end of his rifle.

That was true in two ways. While the greatest threat to the 20th century soldier was the bomb or the bullet delivered anonymously from afar, the most animating of fears was that of “cold steel” inserted into his body in a mortal duel, the most intimate form of combat death.

The most feared weapons in war are not necessarily the most dangerous. One reason why field hospitals counted relatively few casualties caused by bayonet wounds may well have been that many a soldier turned and ran before taking his chances against a surging line of men, bayonets glistening, and in all likelihood adorning their advance with the kinds of cries or yells designed to curdle blood.

In those circumstances, only in the rarest cases would bayonet steel clash with steel. Unlike the arrival of the bullet or the shell, the bayonet’s advent was seen, possibly heard, and with judicious retreat was probably avoidable. As one soldier of the second world war put it, “If I was that close to a Jerry, where we could use bayonets, one of us would have already surrendered!”

More crucial, though, than the psychological effect of the bayonet on the enemy was its impact on the men who wielded it. To take the lives of fellow human beings required not just weapons, but a mentality that tolerated the act of killing and even facilitated it.

In this war, as in the last, at military training schools across the world, instructor sergeants taught their charges to lunge, thrust and parry. Bayonets in hand, recruits were exhorted to plunge their weapons into swinging sacks of sawdust or bags of straw, aiming for those parts marked as weak and vulnerable.

British soldiers practising with bayonets in the first world war.
Wikimedia Commons

To ramp up the level of realism, some British recruits practised “in abattoirs, with warm animal blood thrown in their faces as they plunged home their bayonets”.

Confidence in the use of the bayonet, it was believed, would give infantry the courage to advance from their positions and confront the enemy directly. They developed what some called “the spirit of the bayonet”, l’esprit de la baïonnette. More crudely, it was a “lust for blood”. Although the statistics insisted it was unlikely that the bayonet would be the cause of death, it was crucial because it engendered in its bearer the desire to advance and to kill.

A mental reflex

Ideally, then, the effect of such training was not just the acquisition of strength and skills akin to those of a fencer or swordsman. It was the development of a mental reflex perhaps best understood as the form of associative learning that psychologists term “classical conditioning”.

Just as Pavlov’s dog was conditioned to salivate on the appearance of a metronome – an artefact the dog had been trained to associate with the presentation of food – so in the mind of the infantryman the command to fix bayonets would trigger a hyper-aggressive state.

At that point it might even have seemed to the soldier that all agency had shifted to his bayonet, which would tug him into wild acts of violence, as if he had “no choice but to go along with its spirit”. As one infantryman put it, the “shining things leap from the scabbards and flash in the light … They seem alive and joyous; they turn us into fiends, thirsty for slaughter.”

If any soldiers in the second world war were entitled to the view that the march of military technology had rendered the bayonet obsolete, it was the parachutists and mountain troops Hitler sent to invade the island of Crete in May 1941.

Superbly trained and equipped, they had proved to themselves and the world that warfare had entered a new era. Germany’s armed forces, the Wehrmacht, had demonstrated that in the modern age, death could be delivered anonymously and at a distance, above all from the skies. The age of intimate killing was over.

The Australian army’s rising sun badge.
Wikimedia Commons

Or so it seemed. In Crete they were to confront Australians and New Zealanders who, like their fathers, were deeply familiar with the spirit of the bayonet. On the upturned brims of their slouch hats, the Australians displayed their allegiance to a powerful tradition in the form of the Rising Sun badge, a semi-circle of glistening bayonets radiating from a crown.

Like the Anzacs of the Great War, the Anzacs of 1941 were well trained in the use of the Pattern 1907 – they could lunge and stab with all the skill and deadliness of their forebears. When the order was given to fix bayonets, these Anzacs of 1941, too, would be expected to spill blood.

NB: Bayonets have been used in charges as recently as the Falklands War, the Second Gulf War and the war in Afghanistan. In many parts of the world to this day, training for infantrymen introduces them to the “spirit of the bayonet”.

This is an edited extract from Battle on 42nd Street – War in Crete and the Anzacs’ bloody last stand by Peter Monteath (NewSouth Books).

Peter Monteath, Flinders University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The Zoroastrians



The History of Canada



The History of France


