Tag Archives: health

Scarabs, phalluses, evil eyes — how ancient amulets tried to ward off disease



An Egyptian winged scarab amulet (circa 1070–945 BC).

Marguerite Johnson, University of Newcastle

Throughout antiquity, from the Mediterranean to Egypt and today’s Middle East, people believed that misfortunes, including accidents, diseases and sometimes even death, were caused by external forces.

Whether these forces were gods or other kinds of supernatural power (such as a daimon), people — regardless of faith — sought magical means of protection against them.

While medicine and science were not absent in antiquity, they competed with entrenched systems of magic and the widespread recourse to it. People consulted professional magicians and also practised their own forms of folk magic.




Read more:
Spells, charms, erotic dolls: love magic in the ancient Mediterranean


The word “amulet” possibly derives from the Latin “amoliri”, meaning “to drive away” or “to avert”. Amulets were believed to possess inherent magical qualities. These qualities could be naturally intrinsic (such as the properties of a particular stone) or imbued artificially with the assistance of a spell.

Not surprisingly, the use of amulets was an integral part of life. From jewellery and embellishments on buildings to papyri inscribed with spells, and even garden ornaments, they were deemed effective forms of protection.

Amulets have been around for thousands of years. Amber pendants from Denmark’s Mesolithic age (10,000-8,000 BC) seem to have been worn as a form of generic protection.

Jewellery and ornaments referencing the figure of the scarab beetle were also popular all-purpose amulets in Egypt, dating from the beginning of the Middle Kingdom (2000 BC).

A solar scarab pendant from the tomb of Tutankhamen.
Wikimedia Commons



Read more:
Michelle Obama’s necklace and the power of political jewellery — from suffragettes to a secretary of state


Two of the most common symbols of protection are the eye and the phallus. One or both amulet designs appear in many contexts, providing protection of the body (in the form of jewellery), a building (as plaques on exterior walls), a tomb (as an inscribed motif), and even a baby’s crib (as a mobile or crib ornament).

In Greece and the Middle East, for example, the evil eye has a history stretching back thousands of years. Today the image adorns the streets, buildings and even trees of villages.

A tree adorned with the evil eye symbol in a Turkish village.
Marguerite Johnson

The magic behind the evil eye is based on the belief that malevolence can be directed towards an individual through a nasty glare. Accordingly, a “fake” eye, or evil eye, absorbs the malicious intention in place of the target’s eye.

Wind chimes

Greek ‘herm’ (circa sixth century BC).

The phallus was a form of magical protection in ancient Greece and Rome. The Greek sculpture known in English as a “herm” functioned as apotropaic magic (used to fend off evil). Such artefacts, featuring a head and torso atop a squared pillar — often in the shape of a phallus and, if not, definitely featuring a phallus — were used as boundary markers to keep trespassers out.

The implicit threat is that of rape; come near a space that is not your own, and you may suffer the consequences. This threat was intended to be interpreted metaphorically; namely, a violation of another’s property would entail some form of punishment from the supernatural realm.

The phallus amulet was also popular in ancient Italian magic. In Pompeii, archaeologists have uncovered wind chimes known as tintinnabula (from tintinnabulum, meaning “little bell”). These were hung in gardens and took the form of a phallus adorned with bells.

This phallic shape, often morphing into bawdy forms, presented the same warning as the herm statues in Greece. However, the comic shapes in combination with the tinkling of bells also revealed a belief in the protective power of sound. Laughing was believed to ward off evil forces, as was the sound of chimes.

Tintinnabulum from Pompeii (circa first century AD).
Author provided

One scholarly view of magic is that it functions as the last recourse for the desperate or dispossessed. In this sense, it presents as a hopeful action, interpreted by some modern commentators as a form of psychological release from stress or a sense of powerlessness.

Contemporary ‘magical thinking’

In the context of “magical thinking”, amulets may be dismissed by critical thinkers of all persuasions, but they remain in use throughout the world.

Often, though not always, combined with science and common sense, amulets have made a resurgence during the COVID-19 pandemic. They are as diverse as ever, coming in all shapes and sizes, and are promoted by politicians, religious leaders and social influencers.

A traditional form of adornment and protection in Javanese culture, now popular with tourists, “burnt root” bracelets, known as “akar bahar”, have been sold by community shamans. Indonesia’s Agriculture Minister Syahrul Yasin Limpo, meanwhile, has promoted an aromatherapy necklace containing a eucalyptus potion touted as a preventative against COVID (useless in terms of science but perhaps less dangerous than hydroxychloroquine).

This necklace prompts the question: where does alternative medicine end and magic begin? It is not a new question, since there has been an intersection between magical lore and medical knowledge for thousands of years.




Read more:
A murky cauldron – modern witchcraft and the spell on Trump


In Babylon, circa 2000-1600 BC, a condition known as “kuràrum disease” (identified as ringworm, whose symptoms include facial pustules) was treated by both magicians and doctors. And in one text there is a “healer” who appears to perform the role of magician and doctor simultaneously.

Other ancient cultures also practised medical magic through amulets. In Greece, magicians prescribed amulets to heal the wandering womb, a condition whereby the womb was believed to dislodge and travel throughout a woman’s body, thus causing hysteria.

These amulets could take the form of jewellery on which a spell was inscribed. Amulets were also used to prevent pregnancy, as evidenced in a recipe written in Greek from around the second century BC, which instructed women to: “take a bean with a bug inside it and fasten it to yourself as an amulet.”

In a contemporary religious context, written amulets replace spells with prayers. In Thailand, for example, Phisutthi Rattanaphon, an Abbot at Wat Theraplai Temple in Suphan Buri, has issued people with orange paper inscribed with protective words and pictures.

Designed to ward off COVID-19, the papers represent the crossover between magic and religion, a paradigm as entrenched as the blurring of magic and medicine in numerous historical and cultural contexts. Thankfully, face masks and hand sanitiser are also available at the temple.

Marguerite Johnson, Professor of Classics, University of Newcastle

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Coronavirus vaccine: lessons from the 19th-century smallpox anti-vaxxer movement



Edward Jenner, the English physician and scientist who pioneered the smallpox vaccine, sees off the anti-vaccinators.
Wikimedia/Wellcome Collection

Steven King, Nottingham Trent University

There is hope a coronavirus vaccine might be ready by the end of the year. But for it to eliminate COVID-19, a critical mass of people must be vaccinated. And if the protective benefits of a COVID-19 vaccine fall off rapidly (as seems to happen with naturally acquired antibodies), maintaining immunity will require multiple vaccinations. So unless people keep renewing their jabs, the critical mass will decline quickly.

How will politicians ensure critical mass and renewal? For UK prime minister Boris Johnson (who labels those who oppose vaccination as “nuts”) and others, vaccination is a matter of duty. There is a logical case (we know people who have died or suffered badly from COVID-19) and a moral case (to protect others if not yourself).

Yet anti-vaccination sentiment, centred on the right of citizens not to act, is already clear. A recent poll of 2,000 people across the UK found that 14% would refuse to take a vaccine.

With some citizens set to exercise that right not to act, compulsory vaccination cannot be (and has not been) ruled out. The history of other vaccination programmes, particularly the first truly national campaign against smallpox, shows how difficult the balancing of rights and duties will be.

A disappearing act

The invention of vaccination at the turn of the 19th century created a new national imperative for the UK to combat endemic smallpox. The risk of dying from smallpox for those who contracted it was substantially higher than that for COVID-19 today. Survivors gained immunity but often at the cost of physical scarring and long-term health problems.

Vaccination and subsequent elimination should have been a no-brainer. Yet local and regional outbreaks persisted across the 19th century.

Governments of this period assumed (sometimes incorrectly) that the middle-classes would realise the value of vaccination. The poor and marginal were different. For them, mass compulsory vaccination awaited.

The result was an explosive atmosphere. Rumours of deaths after vaccination and of the rounding up of the poor like animals generated a sustained popular backlash, with some organising under the umbrella of the National Anti-Vaccination League.

An attack on smallpox vaccination and the Royal College of Physicians’ advocation of it, 1812.
Wikimedia/Wellcome Collection

Yet even after vaccination became compulsory in 1853, there were many ways in which, by accident or design, ordinary citizens avoided the jab. Some people simply disappeared from the records or failed to appear when asked. Those most prone to doing so (those in crowded households or immigrants, for example) were also the groups most susceptible to disease.

Census data consistently undercounts the national population. Undercounting in the 1800s may have missed around 10% of some communities. Even for the 2011 census, around 6.1% of the population is believed to have been missed. Achieving vaccination critical mass is difficult where you do not know the true size of the mass and the most vulnerable are the least detectable.

The poor also “clogged up” the vaccination system. Sometimes they agreed to participate and then did not turn up, a common feature of systems of compulsion with no ultimate sanction. On other occasions, as at Keighley in 1882, people went further, sending anonymous hate mail in an attempt to disrupt the work of local vaccinators.

Fight for their rights

Taking advantage of local tensions was also a useful avoidance technique. “Smallpox riots” in the face of attempts at crude compulsion were frequent and sustained.

Sometimes organised by local agitators, and sometimes spurred on by instances of children dying after vaccination, such unrest varied on a spectrum from small and localised to community-wide and sustained. Riots at Ipswich, Henley, Leicester and Newcastle were particularly notable.

Nor should we forget that vaccination opponents spread rumours about and caricatured vaccines and vaccinators, undermining the credibility of the system in the public imagination. These included one cartoon from the 1880s in which helpless children are shovelled into the mouth of a diseased cow while, at the other end, a doctor portrayed as the devil incarnate shovels dead children excreted by the cow into a cart bound for mass graves.

In July 2020 public figures stand accused of using Twitter to the same effect for COVID-19 vaccination.

Children are fed to a disease-ridden cow creature, representing vaccination.
Wikimedia/Wellcome Collection

Most forcefully, while politicians used the law in order to force vaccination, the law could also be turned against them. Penalties against parents for failing to vaccinate children, introduced in 1853 and strengthened in 1867, were routinely ignored by courts. Compulsory child vaccination was removed in 1898 and the freedom to refuse introduced.

Long-standing opposition to vaccination by some scientists as well as ordinary people crystallised in 1885 with a huge demonstration at Leicester (ironically the recent focus of a British local lockdown). This and ongoing smaller protests across the country forced the government to introduce a Royal Commission to reflect on the whole question of compulsion. The verdict ultimately fell on the side of the rights of the individual.

It is not hard to imagine the 2021 human rights case in which a court must decide on the balance between the legal and collective duty of citizens to get vaccinated against COVID-19 and the individual right to choose.

Our political and medical elites believe that people will accept moral responsibility: “get vaccinated”. Yet little thought has gone into how a mass vaccination programme works.

We will see some of the lessons of 20th-century vaccination schemes repeated, with public information campaigns and elements of coercion via vaccination programmes in schools and care homes. Nonetheless, the lack of serious credence given to anti-vaccination “nuts” and the resistance that a vaccination programme may generate feels oh so 19th-century.

Steven King, Professor of Economic and Social History, Nottingham Trent University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Border closures, identity and political tensions: how Australia’s past pandemics shape our COVID-19 response


Susan Moloney, Griffith University and Kim Moloney, Murdoch University

Tensions over border closures are in the news again, now that states are gradually lifting travel restrictions for all except Victorians.

Prime Minister Scott Morrison says singling out Victorians is an overreaction to Melbourne’s coronavirus spike, urging the states “to get some perspective”.

Federal-state tensions over border closures and other pandemic quarantine measures are not new, and not limited to the COVID-19 pandemic.

Our new research shows such measures are entwined in our history and tied to Australia’s identity as a nation. We also show how our experiences during past pandemics guide the plans we now use, and alter, to control the coronavirus.




Read more:
National and state leaders may not always agree, but this hasn’t hindered our coronavirus response


Bubonic plague, federation and national identity

In early 1900, bubonic plague broke out just months before federation, introduced by infected rats on ships.

When a new vaccine became available, the New South Wales government planned to inoculate just front-line workers.

Journalists called for a broader inoculation campaign and the government soon faced a “melee” in which:

…men fought, women fainted and the offices [of the Board of Health] were damaged.

Patients and contacts were quarantined at the North Head Quarantine Station. Affected suburbs were quarantined and sanitation commenced.

The health board openly criticised the government for its handling of the quarantine measures, laying the groundwork for quarantine policy in the newly federated Australia.

Quarantine then became essential to a vision of Australia as an island nation where “island” stood for immunity and where non-Australians were viewed as “diseased”.

Public health is mentioned twice in the Australian constitution. Section 51(ix) gives parliament the power to quarantine, and section 69 requires states and territories to transfer quarantine services to the Commonwealth.

The Quarantine Act was later merged to form the Immigration Restriction Act, with quarantine influencing immigration policy.

Ports then became centres of immigration, trade, biopolitics and biosecurity.

Spanish flu sparked border disputes too

In 1918, at the onset of the Spanish flu, quarantine policy included border closures, quarantine camps (for people stuck at borders) and school closures. These measures initially controlled widespread outbreaks in Australia.

However, Victoria quibbled over whether NSW had accurately diagnosed this as an influenza pandemic. Queensland closed its borders, despite only the Commonwealth having the legal powers to do so.




Read more:
This isn’t the first global pandemic, and it won’t be the last. Here’s what we’ve learned from 4 others throughout history


When World War I ended, many returning soldiers broke quarantine. Quarantine measures were not coordinated at the Commonwealth level; states and territories each went their own way.

Quarantine camps, like this one at Wallangarra in Queensland, were set up during the Spanish flu pandemic.
Aussie~mobs/Public Domain/Flickr

There were different policies about state border closures, quarantine camps, mask wearing, school closures and public gatherings. Infection spread and hospitals were overwhelmed.

The legacy? The states and territories ceded quarantine control to the Commonwealth. And in 1921, the Commonwealth created its own health department.

The 1990s brought new threats

Over the next seven decades, Australia linked quarantine surveillance to national survival. Its focus shifted from human health to biosecurity and the protection of Australia’s flora, fauna and agriculture.

In the 1990s, new human threats emerged. Avian influenza in 1997 led the federal government to recognise Australia may be ill-prepared to face a pandemic. By 1999 Australia had its first influenza pandemic plan.




Read more:
Today’s disease names are less catchy, but also less likely to cause stigma


In 2003, severe acute respiratory syndrome (or SARS) emerged in China and Hong Kong. Australia responded by discouraging nonessential travel and started health screening incoming passengers.

The next threat, the 2004 H5N1 avian influenza outbreak, was a dry run for future responses. This resulted in the 2008 Australian Health Management Plan for Pandemic Influenza, which included border control and social isolation measures.

Which brings us to today

While lessons learned from past pandemics are with us today, we’ve seen changes to policy mid-pandemic. March saw the formation of the National Cabinet to endorse and coordinate actions across the nation.

Uncertainty over border control continues, especially surrounding the potential for cruise and live-export ships to import coronavirus infections.




Read more:
Coronavirus has seriously tested our border security. Have we learned from our mistakes?


Then there are border closures between states and territories, creating tensions and a potential high court challenge.

Border quibbles between states and territories will likely continue in this and future pandemics due to geographical, epidemiological and political differences.

Australia’s success as a nation during COVID-19 is partly due to its quarantine policy being so closely tied to its island nature, and to lessons learned from previous pandemics.

Lessons learnt from handling COVID-19 will also strengthen future pandemic responses and hopefully will make them more coordinated.




Read more:
4 ways Australia’s coronavirus response was a triumph, and 4 ways it fell short




Susan Moloney, Associate Professor, Paediatrics, Griffith University and Kim Moloney, Senior Lecturer in Global Public Administration and Public Policy, Murdoch University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Coronavirus is taking English pubs back in time



A tapster delivers a frothing tankard to seated alehouse customers in this 1824 etching.
British Museum, CC BY-NC-SA

James Brown, University of Sheffield

The announcement by Boris Johnson, the UK prime minister, that pubs in England will be allowed to resume trading from July 4 was greeted with rousing cheers from some. But having a pint in the pandemic era will be slightly different. While two-metre social distancing rules are being relaxed to one metre to ensure economic viability for publicans, pubs will, where practical, be restricted to “table service” to maintain the safety of customers and staff.

Standing at the bar is one of the most cherished rituals of the British pub experience – and many people are worried that the new rules could be the beginning of the end of a tradition that dates back centuries. Except, it doesn’t – the bar as we now know it is of relatively recent vintage and, in many respects, the new regulations are returning us to the practices of a much earlier era.

Before the 19th century, propping up the bar would have been an unfamiliar concept in England’s dense network of alehouses, taverns and inns. Alehouses and taverns in particular were seldom purpose-built, but were instead ordinary dwelling houses made over for commercial hospitality. Only their pictorial signboards and a few items of additional furniture distinguished them from surrounding houses. In particular, there was no bar in the modern sense of a fixed counter over which alcohol could be purchased and served.


Check out: Intoxicating spaces


Instead, beverages were ferried directly to seated customers from barrels and bottles in cellars and store rooms by the host and, in larger establishments, drawers, pot-boys, tapsters and waiters. The layout of Margaret Bowker’s large Manchester alehouse in 1641 is typical: chairs, stools and tables were distributed across the hall, parlours, and chambers, while drink was stored in “hogsheads”, “barrels”, and “rundlets” in her cellar.

Five customers receive table service from a tapster in this woodcut illustration from a late 17th-century ballad.
English Broadside Ballad Archive

The bar as we know it didn’t emerge organically from these arrangements, but rather from the introduction of a new commodity in the 18th century: gin. Originally imported from the Netherlands, and distilled domestically in large quantities from the later decades of the 17th century, gin found a mass market in the 1700s that gave rise to the specialised gin or dram shop. Found mainly in London – especially in districts such as the East End and south of the river – these establishments introduced an innovation: a large counter that traversed their width.

Along with a lack of seating, this maximised serving and standing space and encouraged low-value but high-volume turnover from a predominantly poor clientele. The flamboyant gin palaces of the later 18th and early 19th century – described by caricaturist and temperance enthusiast George Cruikshank as “gaudy, gold be-plastered temples” – retained the bar, along with other features drawn from the retail sector such as plate-glass windows, gas lighting, elaborate wrought iron and mahogany fittings, and displays of bottles and glasses. While originally regarded as alien to local drinking cultures, by the 1830s these architectural elements started making their way into all English pubs, with the bar literally front and centre.

An 1808 aquatint after Thomas Rowlandson, showing human and canine customers standing at the bar in a gin shop.
Metropolitan Museum of Art

As architectural historian Mark Girouard has pointed out, the adoption of the bar was a “revolutionary innovation” – a “time-and-motion breakthrough” that transformed the relationship between customers and staff. It brought unprecedented efficiencies that were especially important in the expanding and industrialising cities of the early 1800s.

In particular, a fixed counter with taps, cocks and pumps connected to spirit casks and beer barrels was more efficient than employees scurrying between cellars, storerooms and drinking areas. This was especially the case for “off-sales” – customers purchasing drinks to take home – which had always been a large component of the drinks trade and still accounted for an estimated one-third of takings into the 19th century.

An 1833 lithograph depicting an ‘obliging bar-maid’ using a beer engine.
Wellcome Collection, CC BY-NC

Posterity has paid little attention to the armies of service staff who kept the world of the tavern spinning on its axis before the age of the bar. But they are occasionally glimpsed in historical sources – such as Margaret Sephton, who was “drawing beer” at Widow Knee’s Chester alehouse in 1629, when she gave evidence about a theft of linen. While skilled – one tapster at a Chester tavern styled himself rather grandly in 1640 as a “drawer and sommelier of wine” – drink work was poorly paid. Staff were often paid in kind with food and lodgings and the work was usually undertaken by people who were young, poor, or new to the community.

The lack of a bar made the job especially challenging. It was physically demanding – in 1665 a young tapster at a Cheshire alehouse described how during her shift she was “called to and fro in the house and to other company”, testifying to the constant back and forth. The fact that drinks were not poured in front of patrons made staff more vulnerable to accusations of adulteration and short measure – sometimes with good reason – and close physical proximity to customers when serving and collecting payment meant such disputes could more readily turn violent. For female employees, the absence of the insulating layer of material and space later provided by the bar meant they were much more exposed to sexual abuse from male patrons.

What can the historical record teach proprietors of any newly bar-less pubs? There are, of course, modern advantages such as apps and other digital tools – plus the example of European and North American establishments, where table service was never fully displaced. But there are practical lessons to be learned from the past all the same. Publicans today might streamline the range of drinks on offer and encourage the use of jugs for refills. Landlords could develop careful zoning for their staff – in larger alehouses and taverns tapsters were allocated specific booths and rooms. Most importantly, they need to establish and enforce clear rules about behaviour towards staff – especially in terms of physical contact. Better to have premodern pubs than no pubs at all, after all.

James Brown, Research Associate & Project Manager (UK), University of Sheffield

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Comets, omens and fear: understanding plague in the Middle Ages



A comet depicted in medieval times in the Bayeux tapestry.
Bayeux Museum, Author provided

Marilina Cesario, Queen’s University Belfast and Francis Leneghan, University of Oxford

On August 30 2019, a comet from outside our solar system was observed by amateur astronomer Gennady Borisov at the MARGO observatory in Crimea. This was only the second time an interstellar comet had ever been recorded. Comet 2I/Borisov, or C/2019 Q4, as it is now known, made its closest approach to the sun on December 8 2019, roughly coinciding with the first recorded human cases of COVID-19.

While we know that this is merely coincidence, in medieval times authorities regarded natural phenomena such as comets and eclipses as portents of natural disasters, including plagues.

One of the most learned men of the early Middle Ages was the Venerable Bede, an Anglo-Saxon monk who lived in Northumbria in the late seventh and early eighth centuries. In chapter 25 of his scientific treatise, De natura rerum (On the Nature of Things), he describes comets as “stars with flames like hair. They are born suddenly, portending a change of royal power or plague or wars or winds or heat”.

Plagues and natural phenomena

Outbreaks of the bubonic plague were recorded long before the Black Death of the 14th century. In the sixth century, a plague spread from Egypt to Europe and lingered for the next 200 years. At the end of the seventh century, the Irish scholar Adomnán, Abbot of Iona, wrote in book 42 of his Life of St Columba of “the great mortality which twice in our time has ravaged a large part of the world”. The effects of this plague were so severe in England that, according to Bede, the kingdom of Essex reverted to paganism.

The Anglo-Saxon Chronicle records that in 664 “the sun grew dark, and in this year came to the island of Britain a great plague among men (‘micel man cwealm’ in Anglo-Saxon)”. The year 664 held great significance for the English and Irish churches: a great meeting (or synod) was held at Whitby in Northumbria to decide whether the English church should follow the Irish or Roman system for calculating the date of Easter. By describing the occurrence of an eclipse and plague in the same year as the synod, Bede makes this important event in the English Church more memorable and meaningful.

In the Middle Ages, comets like 2019’s C/2019 Q4 signalled a calamitous event on earth to come.
NASA, ESA & D. Jewitt (UCLA), CC BY

Plague and medieval religion

In the Middle Ages, occurrences like plague and disease were thought of as expressions of God’s will. In the Bible, God uses natural phenomena to punish humankind for sin. In the Book of Revelation 6:8, for example, pestilence is described as one of the signs of Judgement Day. Medieval scholars were aware that some plagues and diseases were spread through the air, as explained by the seventh-century scholar Isidore of Seville in chapter 39 of his De natura rerum (On the Nature of Things):

Pestilence is a disease spreading widely and infecting by its contagion whatever it touches. When plague (‘plaga’) smites the earth because of mankind’s sins, then from some cause, that is, either the force of drought or of heat or an excess of rain, the air is corrupted.

Bede based his On the Nature of Things on this work by Isidore. In a discussion of plague in the Old English version of Bede’s Ecclesiastical History we find a reference to the “an-fleoga”, meaning something like “the one who flies” or “solitary flier”. This same idea of airborne disease is a feature of Anglo-Saxon medicine. One example comes from an Old English poem we call a metrical charm, which combines ancient Germanic folklore with Christian prayer and ritual. In the Nine Herbs Charm, the charmer addresses each herb individually and invokes its power over disease:

This is against poison, and this is against the one who flies,

this is against the loathsome one that travels throughout the land …

if any poison come flying from the east,

or any come from the north,

or any from the west over the nations of men,

Christ stood over the disease of every kind.

As well as fearing plague, medieval scholars attempted to pinpoint its origins and carefully recorded its occurrence and effects. Like us, they used whatever means they could to protect themselves from disease. But it is clear medieval chroniclers presented historical events as part of a divine plan for humankind by linking them with natural phenomena like plagues and comets.

Marilina Cesario, Senior Lecturer, School of Arts, English and Languages, Queen’s University Belfast and Francis Leneghan, Associate Professor of Old English, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Uprisings after pandemics have happened before – just look at the English Peasant Revolt of 1381



In this 1470 illustration, the radical priest John Ball galvanizes the rebels.
The British Library

Susan Wade, Keene State College

As a professor of medieval Europe, I’ve taught the bubonic plague, and how it contributed to the English Peasant Revolt of 1381. Now that America is experiencing widespread unrest in the midst of its own pandemic, I see some interesting similarities to the 14th-century uprising.

The death of George Floyd has sparked protests fueled by a combination of brutal policing, a pandemic that has led to the loss of millions of jobs and centuries of racial discrimination and economic inequality.

“Where people are broke, and there doesn’t appear to be any assistance, there’s no leadership, there’s no clarity about what is going to happen, this creates the conditions for anger, rage, desperation and hopelessness,” African American studies scholar Keeanga-Yamahtta Taylor told The New York Times.

Medieval England may seem far removed from modern America. And sure, American workers aren’t tied to employers by feudal bonds, which forced peasants to work for their landowners. Yet the Peasant Revolt was also a reaction brought on by centuries of oppression of society’s lowest tiers.

And like today, the majority of wealth was held by an elite class that comprised about 1% of the population. When a deadly disease started to spread, the most vulnerable and powerless were asked to pick up the most slack, while continuing to face economic hardship. The country’s leaders refused to listen.

Eventually, the peasants decided to fight back.

Clamoring for higher wages

Surviving letters and treatises express feelings of fear, grief and loss; the death tolls from the 14th-century plague were catastrophic, and it’s estimated that between one-third and one-half of the European population died during its first outbreak.

The massive loss of life created an immense labor shortage. Records from England describe untilled fields, vacant villages and untended livestock roaming an empty countryside.

The English laborers who survived understood their newfound value and began to press for higher wages. Some peasants even began to seek more lucrative employment by leaving feudal tenancy, no longer feeling bound to work for their landowning overlords.

Rather than accede to the demands, King Edward III did just the opposite: In 1349, he froze wages at pre-plague levels and imprisoned any reaper, mower or other workman in service to an estate who left his employment without cause. These ordinances ensured that elite landowners would retain their wealth.

Edward III enacted successive laws intended to ensure laborers wouldn’t increase their earning power. As England weathered subsequent outbreaks of the plague, and as labor shortages continued, workers started to clamor for change.

Enough is enough

The nominal reason for the Peasant Revolt was the announcement of a third poll tax in 15 years. Because poll taxes are a flat tax levied on every individual, they affect the poor far more than the wealthy. But similar to the protests that have erupted in the wake of Floyd’s death, the Peasant Revolt was really the result of dashed expectations and class tensions that had been simmering for more than 30 years.

Things finally came to a head in June 1381, when, by medieval estimates, 30,000 rural laborers stormed into London demanding to see the king. The cohort was led by a former yeoman soldier named Wat Tyler and an itinerant, radical preacher named John Ball.

Ball was sympathetic to the Lollards, a Christian sect deemed heretical by Rome. The Lollards called for the dissolution of the sacraments and for the Bible to be translated into English from Latin, which would make the sacred text equally accessible to everyone, diminishing the interpretive role of the clergy. Ball wanted to take things even further and apply the ideas of the Lollards to all of English society. In short, Ball called for a complete overturning of the class system. He preached that since all of humanity constituted the children of Adam and Eve, the nobility could not prove they were of higher status than the peasants who worked for them.

With the help of sympathetic laborers in London, the peasants gained entry to the city and attacked and set fire to the Palace of Savoy, which belonged to the Duke of Lancaster. Next they stormed the Tower of London, where they killed several prominent clerics, including the archbishop of Canterbury.

A bait and switch

To quell the violence, Edward’s successor, the 14-year-old Richard II, met the irate peasants just outside of London. He presented them with a sealed charter declaring that all men and their heirs would be “of free condition”, which meant that the feudal bonds that held them in service to landowners would be lifted.

The Peasant Revolt was one of Richard II’s first tests.
Westminster Abbey

While the rebels were initially satisfied with this charter, things didn’t end well for them. When the group met with Richard the next day, whether by mistake or intent, Wat Tyler was killed by one of Richard’s men, John Standish. The rest of the peasants dispersed or fled, depending on the report of the medieval chronicler.

For the authorities, this was their chance to pounce. They sent judges into the countryside of Kent to find, punish and, in some cases, execute those who were found guilty of leading the uprising. They apprehended John Ball and he was drawn and quartered. On Sept. 29, 1381, Richard II and Parliament declared the charter freeing the peasants of their feudal tenancy null and void. The vast wealth gap between the lowest and highest tiers of society remained.

American low-wage laborers obviously have rights and freedoms that medieval peasants lacked. However, these workers are often tied to their jobs because they cannot afford even a brief loss of income.

The meager benefits some essential workers gained during the pandemic are already being stripped away. Amazon recently ended the additional US$2 per hour in hazard pay it had been paying workers and announced plans to fire workers who don’t return to work for fear of contracting COVID-19. Meanwhile, between mid-March and mid-May, Amazon CEO Jeff Bezos added $34.6 billion to his wealth.

It appears that the economic disparities of 21st-century capitalism – where the richest 1% now own more than half of the world’s wealth – are beginning to resemble those of 14th-century Europe.

When income inequalities become so jarring, and when these inequalities are based in long-term oppression, perhaps the sort of unrest we’re seeing on the streets in 2020 is inevitable.


Susan Wade, Associate Professor of History, Keene State College

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Australia and the Spanish Flu/COVID-19 Pandemics



Lockdowns, second waves and burn outs. Spanish flu’s clues about how coronavirus might play out in Australia



National Museum of Australia

Jeff Kildea, UNSW

In a remarkable coincidence, the first media reports about Spanish flu and COVID-19 in Australia both occurred on January 25 – exactly 101 years apart.

This is not the only similarity between the two pandemics.

Although history does not repeat, it rhymes. The story of how Australia – and in particular the NSW government – handled Spanish flu in 1919 provides some clues about how COVID-19 might play out here in 2020.


Spanish flu arrives

Australia’s first case of Spanish flu was likely admitted to hospital in Melbourne on January 9 1919, though it was not diagnosed as such at the time. Ten days later, there were 50 to 100 cases.

Commonwealth and Victorian health authorities initially believed the outbreak was a local variety of influenza prevalent in late 1918.

Consequently, Victoria delayed until January 28 notifying the Commonwealth, as required by a 1918 federal-state agreement designed to coordinate state responses.




Read more:
Fleas to flu to coronavirus: how ‘death ships’ spread disease through the ages


Meanwhile, travellers from Melbourne had carried the disease to NSW. On January 25, Sydney’s newspapers reported that a returned soldier from Melbourne was in hospital at Randwick with suspected pneumonic influenza.

Shutdown circa 1919: libraries, theatres, churches close

The NSW government quickly imposed restrictions on the population when Spanish flu first arrived.
National Library of Australia

Acting quickly, in late January, the NSW government ordered “everyone shall wear a mask,” while all libraries, schools, churches, theatres, public halls, and places of indoor public entertainment in metropolitan Sydney were told to close.

It also imposed restrictions on travel from Victoria in breach of the federal-state agreement.

Thereafter, each state went its own way and the Commonwealth, with few powers and little money compared with today, effectively left them to it.

Generally, the restrictions were received with little demur. But inconsistencies led to complaints, especially from churches and the owners of theatres and racecourses.

People were allowed to ride in crowded public transport to thronged beaches. But masked churchgoers, observing physical distancing, were forbidden to assemble outside for worship.

Later, crowds of spectators would be permitted to watch football matches while racecourses were closed.

Spanish flu subsides

Nevertheless, NSW’s prompt and thorough application of restrictions initially proved successful.

During February, Sydney’s hospital admissions were only 139, while total deaths across the state were 15. By contrast, Victoria, which had taken three weeks before introducing more limited restrictions, recorded 489 deaths.

At the end of February, NSW lifted most restrictions.

Even so, the state government did not escape a political attack. The Labor opposition accused it of overreacting and imposing unnecessary economic and social burdens on people. It was particularly critical that the order requiring mask-wearing was not limited to confined spaces, such as public transport.

There was also debate about the usefulness of closing schools, especially in the metropolitan area.

But then it returns

In mid-March, new cases began to rise. Chastened by the criticism of its earlier measures, the government delayed reimposing restrictions until early April, allowing the virus to take hold.

This led The Catholic Press to declare

the Ministry fiddled for popularity while the country was threatened with this terrible pestilence.

Sydney’s hospital capacity was exceeded and the state’s death toll for April totalled 1,395. Then the numbers began falling again. After ten weeks the epidemic seemed to have run its course, but as May turned to June, new cases appeared.

The resurgence came with a virulence surpassing the worst days of April. This time, notwithstanding a mounting death toll, the NSW cabinet decided against reinstating restrictions, but urged people to impose their own restraints.

The government goes for “burn out”

After two unsuccessful attempts to defeat the epidemic – at great social and economic cost – the government decided to let it take its course.

It hoped the public now realised the gravity of the danger, and that warning people to avoid the chances of infection would be sufficient. The Sydney Morning Herald concurred, declaring

there is a stage at which governmental responsibility for the public health ends.

The second wave’s peak arrived in the first week of July, with 850 deaths across NSW and 2,400 for the month. Sydney’s hospital capacity again was exceeded. Then, as in April, the numbers began to decline. In August the epidemic was officially declared over.

Cases continued intermittently for months, but by October, admissions and deaths were in single figures. Like its predecessor, the second wave lasted ten weeks. But this time the epidemic did not return.




Read more:
How Australia’s response to the Spanish flu of 1919 sounds warnings on dealing with coronavirus


More than 12,000 Australians had died.

While Victoria had suffered badly early on compared to NSW, in the end, NSW had more deaths than Victoria – about 6,000 compared to 3,500. The NSW government’s decision not to restore restrictions saw the epidemic “burn out”, but at a terrible cost in lives.

That decision did not cause a ripple of objection. At the NSW state elections in March 1920, Spanish flu was not even a campaign issue.

The lessons of 1919

In many ways we have learned the lessons of 1919.

We have better federal-state coordination, sophisticated testing and contact tracing, staged lifting of restrictions and improved knowledge of virology.

Australia’s response to coronavirus has seen sophisticated testing and contact tracing.
Dean Lewis/AAP

But in other ways we have not learned the lessons.

Despite our increased medical knowledge, we are struggling to find a vaccine and effective treatments. And we are debating the same issues – to mask or not, to close schools or not.

Meanwhile, inconsistencies and mixed messaging undermine confidence that restrictions are necessary.

Yet, we are still to face the most difficult question of all.

The Spanish flu demonstrated that a suppression strategy requires rounds of restrictions and relaxations. And that these involve significant social and economic costs.

With the federal and state governments’ current suppression strategies we are already seeing signs of social and economic stress, and this is just round one.

Would Australians today tolerate a “burn out”?

The Spanish flu experience also showed that a “burn out” strategy is costly in lives – nowadays it would be measured in tens of thousands. Would Australians today abide such an outcome as people did in 1919?

It is not as if Australians back then were more trusting of their political leaders than we are today. In fact, in the wake of the wartime split in the Labor Party and shifting political allegiances, respect for political leaders was at a low ebb in Australia.

Australians today may not tolerate the large numbers of deaths we saw in 1919.
James Gourley/AAP

A more likely explanation is that people then were prepared to tolerate a death toll that Australians today would find unacceptable. People in 1919 were much more familiar with death from infectious diseases.

Also, they had just emerged from a world war in which 60,000 Australians had died. These days the death of a single soldier in combat prompts national mourning.

Yet, in the absence of an effective vaccine, governments may end up facing a “Sophie’s Choice”: is the community willing and able to sustain repeated and costly disruptions in order to defeat this epidemic or, as the NSW cabinet decided in 1919, is it better to let it run its course notwithstanding the cost in lives?




Read more:
Coronavirus is a ‘sliding doors’ moment. What we do now could change Earth’s trajectory




Jeff Kildea, Adjunct Professor Irish Studies, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Before epidemiologists began modelling disease, it was the job of astrologers



Women, representing nature, argue the influence of the zodiac with scholars in this undated 17th century engraving.
Wellcome Collection, CC BY

Michelle Pfeffer, The University of Queensland

The internet is awash with comparisons between life during COVID-19 and life during the Bubonic plague. The two have many similarities, from the spread of misinformation and the tracking of mortality figures, to the ubiquity of the question “when will it end?”

But there are, of course, crucial differences between the two. Today, when looking for information on the incidence, distribution, and likely outcome of the pandemic, we turn to epidemiologists and infectious disease models. During the Bubonic plague, people turned to astrologers.

Exploring the role played by astrologers in past epidemics reminds us that although astrology has been debunked, it was integral to the development of medicine and public health.

The flu, written in the stars

Before germ theory, the Scientific Revolution and then the Age of Enlightenment, it was common for medical practitioners to use astrological techniques in their everyday practice.

Hans Holbein’s Danse Macabre woodcut (1523-25).
Wikimedia

Compared to the simplistic horoscopes in today’s magazines, premodern astrology was a complex field based on detailed astronomical calculations. Astrologers were respected health authorities who were taught at the finest universities throughout Europe, and hired to treat princes and dukes.

Astrology provided physicians with a naturalistic explanation for the onset and course of disease. They believed the movements of the celestial bodies, in relation to each other and the signs of the Zodiac, governed events on earth. Horoscopes mapped the heavens, allowing physicians to draw conclusions about the onset, severity, and duration of illness.

The impact of astrology on the history of medicine can still be seen today. The term “influenza” was derived from the idea that respiratory disease was a product of the influence of the stars.




Read more:
Altered mind this morning? Hehe, just blame the planets


Public health and plague

Astrologers were seen as important authorities for the health of communities as well as individuals. They offered public health advice in annual almanacs, which were some of the most widely read literature in the premodern world.

Almanacs provided readers with tables for astrological events for the coming year, as well as advice on farming, political events, and the weather.

The publications were also important disseminators of medical knowledge. They explained basic medical principles and suggested remedies. They made prognostications about national health, using astrology to predict when an influx of venereal disease or plague was likely to arise.

These public health predictions were often based on the astrological theory of conjunctions. According to this theory, when certain planets seem to approach each other in the sky from our perspective on earth, great socio-cultural events are bound to occur.

When Bubonic plague hit France in 1348, the King asked the physicians at the University of Paris to account for its origins. Their answer was that the plague was caused by a conjunction of Saturn, Mars, and Jupiter.




Read more:
How medieval writers struggled to make sense of the Black Death


Predictions from above

Astrological accounts of plague remained popular into the 17th century. In this period, astrology was increasingly attacked as superstitious, so some astrologers tried to set their field on a more scientific grounding.

In an effort to make astrology more scientific, the English astrologer John Gadbury produced one of the earliest epidemiological studies of disease.

In London’s Deliverance Predicted (1655), Gadbury claimed his contemporaries couldn’t explain when plagues would arrive, or how long they’d last.

Gadbury proposed that if planets caused plagues, then planets also stopped plagues. Studying astrological events would therefore allow one to predict the course of an epidemic.

He gathered data from the previous four great London plagues (in 1593, 1603, 1625, and 1636), scouring the Bills of Mortality for weekly plague death rates, and compiling A Table shewing the Increase and Abatement of the Plague. Gadbury also used planetary tables to locate the planets’ positions throughout the epidemics. He then compared his data sets, looking for correlations.
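
For modern readers, the shape of Gadbury's exercise is easy to recognise. The minimal Python sketch below (using invented, purely illustrative numbers, not figures from the Bills of Mortality or Gadbury's planetary tables) shows the basic move behind his table: lining up a weekly mortality series against a numeric covariate and checking how strongly the two track each other.

```python
# A minimal sketch of Gadbury-style pattern hunting, in modern terms.
# All numbers below are invented for illustration only.
from statistics import correlation  # available in Python 3.10+

# Hypothetical weekly plague deaths over an eight-week stretch.
weekly_deaths = [12, 30, 85, 210, 450, 380, 160, 60]

# A made-up 0-10 score standing in for how "astrologically significant"
# the position of Mars was judged to be in each of those weeks.
mars_significance = [1, 3, 6, 9, 10, 8, 4, 2]

r = correlation(weekly_deaths, mars_significance)
print(f"Pearson r = {r:.2f}")
# A high r here is exactly the kind of pattern Gadbury read as causation,
# with no control for confounders such as the changing seasons.
```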

Gadbury found a correlation between intensity of plague and the positions of Mars and Venus. Plague deaths increased sharply in July 1593, at which point Mars had moved into an astrologically significant position. Deaths then abated in September, when Venus’s position became more significant. Gadbury concluded that the movement of “the fiery Planet Mars” was the origin of pestilence and the “cause of its raging”, while the influence of the “friendly” Venus helped abate it.

Gadbury then applied his findings to the pestilence plaguing London at the time. He was able to correlate the beginnings of the plague in late 1664 and its growing intensity in June 1665 with recent astrological events.

He predicted the upcoming movement of Venus in August would see a fall in plague deaths. Then the movement of Mars in September would make the plague deadlier, but the movements of Venus in October, November, and December would halt the death rate.

The black death in London, circa 1665. Creator unknown.

Looking for patterns

Unfortunately for Gadbury, plague deaths increased dramatically in August. However, he was right in predicting a peak in September followed by a steep decrease at the end of the year. If Gadbury had accounted for other correlates – such as the coming of winter – his study might have been received more favourably.

The medical advice in Gadbury’s book certainly doesn’t stand up today. He argued the plague was not contagious, and that isolating at home only caused more deaths. Yet his attempt to find correlations with fluctuating mortality rates offers an early example of what we now call epidemiology.

While we may discredit Gadbury’s astrological assumptions, examples such as this illustrate the important role astrology played in the history of medicine, paving the way for naturalistic explanations of infectious disease.

Michelle Pfeffer, Postdoctoral Research Fellow in History, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Stay alert, infodemic, Black Death: the fascinating origins of pandemic terms



Shutterstock

Simon Horobin, University of Oxford

Language always tells a story. As COVID-19 shakes the world, many of the words we’re using to describe it originated during earlier calamities – and have colourful tales behind them.

In the Middle Ages, for example, fast-spreading infectious diseases were known as plagues – as in the Bubonic plague, named for the characteristic swellings (or buboes) that appear in the groin or armpit. With its origins in the Latin word plaga meaning “stroke” or “wound”, plague came to refer to a wider scourge through its use to describe the ten plagues suffered by the Egyptians in the biblical book of Exodus.




Read more:
Lost in translation: five common English phrases you may be using incorrectly


An alternative term, pestilence, derives from Latin pestis (“plague”), which is also the origin of French peste, the title of the 1947 novel by Albert Camus (La Peste, or The Plague) which has soared up the bestseller charts in recent weeks. Latin pestis also gives us pest, now used to describe animals that destroy crops, or any general nuisance or irritant. Indeed, the bacterium that causes Bubonic plague is called Yersinia pestis.

The bacterium Yersinia pestis, which causes Bubonic plague.
Shutterstock

The Bubonic plague outbreak of the 14th century was also known as the Great Mortality or the Great Death. The Black Death, which is now most widely used to describe that catastrophe, is, in fact, a 17th-century translation of a Danish name for the disease: “Den Sorte Død”.

Snake venom, the original ‘virus’

The later plagues of the 17th century led to the coining of the word epidemic. This came from a Greek word meaning “prevalent”, from epi “upon” and demos “people”. The more severe pandemic is so called because it affects everyone (from Greek pan “all”).

A more recent coinage, infodemic, a blend of info and epidemic, was introduced in 2003 to refer to the deluge of misinformation and fake news that accompanied the outbreak of SARS (an acronym formed from the initial letters of “severe acute respiratory syndrome”).

The 17th-century equivalent of social distancing was “avoiding someone like the plague”. According to Samuel Pepys’s account of the outbreak that ravaged London in 1665, infected houses were marked with a red cross and had the words “Lord have mercy upon us” inscribed on the doors. Best to avoid properties so marked.

The current pandemic, COVID-19, is a contracted form of Coronavirus disease 2019. The term for this genus of viruses was coined in 1968 and referred to their appearance under the microscope, which reveals a distinctive halo or crown (Latin corona). Virus comes from a Latin word meaning “poison”, first used in English to describe a snake’s venom.

The word vaccine comes from the Latin ‘vacca’, meaning ‘cow’.
Shutterstock

The race to find a vaccine has focused on the team at Oxford University’s Jenner Institute, named for Edward Jenner (1749-1823). It was his discovery that contact with cowpox resulted in milkmaids becoming immune to the more severe strain found in smallpox. This discovery is behind the term vaccine (from the Latin vacca “cow”) which gives individuals immunity (originally a term certifying exemption from public service). Inoculation was initially a horticultural term describing the grafting of a bud into a plant: from Latin oculus, meaning “bud” as well as “eye” (as in binoculars “having two eyes”).

Although we are currently adjusting to social distancing as part of the “new normal”, the term itself has been around since the 1950s. It was initially coined by sociologists to describe individuals or groups deliberately adopting a policy of social or emotional detachment.




Read more:
A history of English … in five words


Its use to refer to a strategy for limiting the spread of a disease goes back to the early 2000s, with reference to outbreaks of flu. Flu is a shortening of influenza, adopted into English from Italian following a major outbreak which began in Italy in 1743. Although it is often called the Spanish flu, the strain that triggered the pandemic of 1918 most likely began elsewhere, although its origins are uncertain. Its name derives from a particularly severe outbreak in Spain.

To the watchtower

Self-isolation, the measure of protection which involves deliberately cutting oneself off from others, is first recorded in the 1830s – isolate goes back to the Latin insulatus “insulated”, from insula “island”. An extended mode of isolation, known as quarantine, is from the Italian quarantina referring to “40 days”. The specific period derives from its original use to refer to the period of fasting in the wilderness undertaken by Jesus in the Christian gospels.

Lockdown, the most extreme form of social containment, in which citizens must remain in their homes at all times, comes from its use in prisons to describe a period of extended confinement following a disturbance.

Many governments have recently announced a gradual easing of restrictions and a call for citizens to “stay alert”. While some have expressed confusion over this message, for etymologists the required response is perfectly clear: we should all take to the nearest tall building, since alert is from the Italian all’erta “to the watchtower”.

Simon Horobin, Professor of English Language and Literature, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

