Category Archives: article

Huge find of silver coins provides new clues to turbulent times after Norman Conquest of England


Tom Licence, University of East Anglia

With their metal detectors and spades, “detectorists” are a common sight in the British countryside. When their equipment bleeps, they start to dig in the hope of finding something old and valuable. They are often seen as figures of fun – indeed, the BBC broadcasts a comedy series about a pair of such amateur archaeologists that has a cult following. But part-time treasure hunters do much of the heavy lifting when it comes to discovering antiquities buried in fields across the UK.

Two such detectorists, Lisa Grace and Adam Staples, recently uncovered a haul of more than 2,000 silver coins in Somerset in the south-west of England, dating back to the turbulent period following the Norman conquest of England in 1066.

In the years after William of Normandy defeated Harold II and took the throne, the Norman invaders were confronted by frequent rebellion. They responded by planting castles to subdue the population. The coin hoard found in the Chew Valley in Somerset dates from the years of unrest when William was establishing himself on the throne.

One of the largest hoards ever recovered from the years around 1066, it includes more than 1,000 coins minted in Harold’s name and a similar number in William’s. Harold had been king for only ten months at the time of his defeat and death in battle, so all the coins of Harold date from no earlier than January 1066. Some may have been minted in his name after his death, as a desperate measure by survivors to hold the regime together in the two months that elapsed between the Battle of Hastings and William’s coronation. Funds were very important at moments when the succession to the throne lay in doubt.

It is certain at any rate that whoever concealed the hoard was a person of high rank, probably one of the nobility – a circle of no more than 150 landed aristocrats, many of whom were related. A coin hoard of this size may have been intended to pay for an army. But we can only guess whose army, or whether the hoarder was a supporter or opponent of the Norman regime.

Rivals for the English throne: William of Normandy (left) watches as Harold Godwinson apparently swears fealty.
Bayeux Tapestry

Historians have long disputed whether Harold succeeded to the throne with the approval of his predecessor and brother-in-law, the childless Edward the Confessor, or seized the throne in haste to prevent it falling to another candidate. The strongest claimants in the latter camp were Edward the Confessor’s great-nephew Edgar and William of Normandy, his second cousin, who argued that Edward had promised the throne to him.

Money and power

Coin evidence assists in this debate by showing the extent to which Harold was able to control mints up and down the country. Regimes which had only a shaky hold on power were unable to control all the mints, some of which struck coins in the names of their rivals. This happened in the early years of Harold I’s regime (1035-7), when mints in southern England struck coins in the name of his rival Harthacnut.

In the case of Harold II, though his legitimacy was in doubt, his control of the mints suggests a strong hold on power from the outset. Indeed the hoard is likely to provide specimens of coins minted at unrecorded mints and by previously unknown moneyers.

Historians also debate the extent to which the invasion of 1066 disrupted the operations of the Anglo-Saxon state. The presence in the hoard of a large sample of coins issued by William at the start of his reign will help shed new light on the era.

The portrait, design and text on William’s coins, moreover, reveal how he wanted his subjects to see him. A coin is not only a unit of currency – it is a tool of propaganda. Harold’s coins, ironically, bore the legend “PAX” (peace). It was a signal of his aspirations on becoming king.

The haul included coins minted by William the Conqueror (left) and Harold II.
Pippa Pearce/Trustees of the British Museum

Today Harold’s coins are keenly sought by collectors, being rare and evocative of our nation’s story. Hoarded coins are often in fresh condition and each should command a high market value.

Rewarding hobby

Since the advent of the hobby of metal detecting in the 1970s, most hoards and single finds have been located by detectorists. Their painstaking efforts have resulted in the discovery of great treasures of recent years, including the Staffordshire Hoard and the Winfarthing pendant.

On most outings, detectorists find little or nothing. Most spend years in the hobby and never find a hoard. Thanks to a system of recording in place since the launching of the Portable Antiquities Scheme, more and more of their discoveries are now being reported.

The law requires that all finds of treasure be reported to the coroner within 14 days of discovery, or of the finder’s realisation that the find might be treasure as defined by the Treasure Act of 1996. Under the Act, treasure includes any item of precious metal more than 300 years old, any two or more gold or silver coins, a group of base metal coins, and any associated artefacts, such as a pot in which coins were buried.

All reported treasure items are entered in the online database of the Portable Antiquities Scheme. Their details are thereby captured for the nation, even if the finds are often returned to the finder. No hoard of Norman Conquest coins on the scale of the Chew Valley hoard has come to light for many years.

It is a reminder that the passions of hobbyists frequently turn up great benefits for everyone. And it is also a reminder of England’s turbulent past.

Tom Licence, Professor of Medieval History and Consumer Culture, School of History, University of East Anglia

This article is republished from The Conversation under a Creative Commons license. Read the original article.


NZ was first to grant women the vote in 1893, but then took 26 years to let them stand for parliament



After winning the right to vote in 1893, New Zealand’s suffragists kept up the battle, but the unity found in rallying around the major cause had receded.
Jim Henderson/Wikimedia Commons, CC BY-ND

Katie Pickles

Today marks 126 years since the passing of the much celebrated 1893 Electoral Act, which made New Zealand the first country in the world to grant women the right to vote.

But it would be another 26 years before the often twinned step of allowing women to stand for parliament followed. On October 29, it will be a century since the passing of the 1919 Women’s Parliamentary Rights Act, which opened the way for women to enter politics.

Women’s suffrage and women’s right to stand for parliament are natural companions, two sides of the same coin. It would be fair to assume both happened at the same time.

Early women’s suffrage bills included women standing for parliament. But, in the hope of success, the right was omitted from the third and successful 1893 bill. Suffragists didn’t want to risk women standing for parliament sinking the bill.

The leader of the suffrage movement, Kate Sheppard, reluctantly accepted the omission and expected that the right would follow soon afterwards. But that didn’t happen.




Read more:
Why New Zealand was the first country where women won the right to vote


Post-vote agitation

After women won suffrage, agitation for several egalitarian causes, including women in parliament, continued. The Women’s Christian Temperance Union (WCTU) and, from 1896, the National Council of Women (NCW) both called for the bar to be removed.

Women including Kate Sheppard, Margaret Sievwright, Stella Henderson and Sarah Saunders Page kept up the battle. But the unity found in rallying around the major women’s suffrage cause was lacking and the heady and energetic climate of 1893 had receded.

From 1894 to 1900, sympathetic male politicians from across the political spectrum presented eight separate bills. Supportive conservatives emphasised the “unique maternal influence” that women would bring to parliament. Conservative MP Alfred Newman argued that New Zealand must retain its world-leading reputation for social legislation, but he downplayed the significance. He predicted that even if women were allowed to stand for parliament, few would be interested and even fewer would be elected.

Left-leaning supportive MPs George Russell and Tommy Taylor saw the matter as one of extending women’s rights and the next logical step towards societal equality. But contemplating women in the House was a step too far and all attempts failed.

Enduring prejudice

The failure in the pre-war years was largely because any support for women in parliament was outweighed by enduring prejudice against their direct participation in politics.

At the beginning of the new century, Prime Minister Richard Seddon was well aware of public opinion being either indifferent to or against women in parliament. A new generation of women with professional careers who might stand for parliament, if allowed, comprised a small minority.

Much to the chagrin of supporters, New Zealand began to lag behind other countries. Australia simultaneously granted women the right to vote and stand for parliament in 1902 at the federal level, with the exception of Aboriginal women in some states.

Women in Finland were able to both vote and stand for election from 1906, as part of reforms following unrest. In 1907, 19 women were elected to the new Finnish parliament.

The game changer: the first world war

Importantly, during the first world war, women’s status improved rapidly and this overrode previous prejudices. Women became essential and valued citizens in the war effort. Most contributed from their homes, volunteering their domestic skills, while increasing numbers entered the public sphere as nurses, factory and public sector workers.

Ellen Melville became an Auckland city councillor in 1913. Ada Wells was elected to the Christchurch City Council in 1917. Women proved their worth in keeping the home fires burning while men were away fighting.

In 1918, British women, with some conditions, were enfranchised and allowed to stand for parliament. Canada’s federal government also gave most of its women both the right to vote and stand for parliament.




Read more:
100 years since women won the right to be MPs – what it was like for the pioneers


Late in 1918, MP James McCombs, the New Zealand Labour Party’s first president and long-time supporter of women’s rights, opportunistically included women standing for parliament in a legislative council amendment bill. It was unsuccessful, mostly due to technicalities, and Prime Minister Bill Massey promised to pursue the matter.

Disappointed feminist advocate Jessie Mackay pointed to women’s service during the war and the recent influenza epidemic and shamed New Zealand for failing to keep up with international developments.

Women’s wartime work, renewed feminist activism and male parliamentary support combined to make the 1919 act a foregone conclusion. Introducing the bill, Massey said he did not doubt it would pass because it was important to keep up with Britain. The opposition leader, Joseph Ward, thought war had changed what was due to women, and Labour Party leader Harry Holland pushed women’s role as moral citizens.

The Legislative Council (upper house) held out and women had to wait until 1941 for the right to be appointed there. It took until 1933 for the first woman, Elizabeth McCombs, to be elected to parliament. The belief that a woman’s place was in the home and not parliament, the bastion of masculine power, endured.

Between 1935 and 1975, only 14 women were elected to parliament, compared to 298 men. It was not until the advent of a second wave of feminism and the introduction of proportional representation in 1996 that numbers of women in the house began to increase.

Katie Pickles, Professor of History at the University of Canterbury and current Royal Society of New Zealand Te Apārangi James Cook Research Fellow

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Who won the war? We did, says everyone



Winston Churchill, Joseph Stalin and Franklin D Roosevelt at the Yalta Conference, 1945.
Wikipedia

Nick Chater, Warwick Business School, University of Warwick

Ask any of the few remaining World War II veterans what they did during the war and you’re likely to get a humble answer. But ask the person on the street how important their country’s contribution to the war effort was and you’ll probably hear something far less modest. A new study suggests people from Germany, Russia, the UK and the US on average all think their own country shouldered more than half the burden of fighting World War II.

Our national collective memories seem to be deceiving us, and this is part of a far more general pattern. Aside from those veterans who have no desire to revel in the horrors of war, we may have a general psychological tendency to believe our contributions are more significant than they really are.

You can see this in even the most mundane of tasks. Unloading the dishwasher can be a perennial source of family irritation. I suspect that I’m doing more than my fair share. The trouble is that so does everybody else. Each of us can think: “The sheer injustice! I’m overworked and under-appreciated.”

But we can’t all be right. This strange magnification of our own efforts seems to be ubiquitous. In business, sport or entertainment, it’s all too easy for each participant to think that their own special stardust is the real reason their company, team or show was a hit.

It works for nations, too. A study last year, led by US memory researcher Henry Roediger III, asked people from 35 countries for the percentage contribution their own nation has made to world history. A dispassionate judge would, of course, assign percentages that add up to no more than 100% (and, indeed, considerably less, given the 160 or so countries left out). In fact, the self-rating percentages add up to over 1,000%, with citizens of India, Russia and the UK each suspecting on average that their own nations had more than half the responsibility for world progress.

A sceptic might note that “contributing to world history” is a rather nebulous idea, which each nation can interpret to its advantage. (The Italians, at 40%, might focus on the Romans and the Renaissance, for example.) But what about our responsibility for specific world events? The latest study from Roediger’s lab addresses the question of national contributions to World War II.

The researchers surveyed people from eight former Allied countries (Australia, Canada, China, France, New Zealand, Russia/USSR, the UK and the US) and three former Axis powers (Germany, Italy and Japan). As might be expected, people from the winning Allied side ranked their own countries highly, and the average percentage responses added up to 309%. Citizens of the UK, US and Russia all believed their countries had contributed more than 50% of the war effort and were more than 50% responsible for victory.

World War II deaths by country. How would you work out which country contributed the most?
Dna-Dennis/Wikimedia Commons

You might suspect that the losing Axis powers, whose historical record is inextricably tied to the immeasurable human suffering of the war, might not be so proud. As former US president John F Kennedy said (echoing the Roman historian Tacitus): “Victory has a hundred fathers and defeat is an orphan.” Perhaps the results for the Allied countries just reflect a general human tendency to claim credit for positive achievements. Yet citizens of the three Axis powers also over-claim shares of the war effort (totalling 140%). Rather than minimising their own contribution, even defeated nations seem to overstate their role.

Why? The simplest explanation is that we piece together answers to questions, of whatever kind, by weaving together whatever relevant snippets of information we can bring to mind. And the snippets of information that come to mind will depend on the information we’ve been exposed to through our education and cultural environment. Citizens of each nation learn a lot more about their country’s own war effort than those of other countries. These “home nation” memories spring to mind, and a biased evaluation is the inevitable result.

So there may not be inherent “psychological nationalism” in play here. And nothing special about collective, rather than individual, memory either. We simply improvise answers, perhaps as honestly as possible, based on what our memory provides – and our memory, inevitably, magnifies our own (or our nation’s) efforts.

How do you calculate real responsibility?

A note of caution is in order. Assigning responsibilities for past events baffles not just everyday citizens, but academic philosophers. Imagine a whodunit in which two hopeful murderers put lethal doses of cyanide into Lady Fotherington’s coffee. Each might say: “It’s not my fault – she would have died anyway.” Is each only “half” to blame, and hence due a reduced sentence? Or are they both 100% culpable? This poisoning is a simple matter compared with the tangled causes of military victory and defeat. So it is not entirely clear what even counts as over- or under-estimating our responsibilities because responsibilities are so difficult to assess.

Still, the tendency to overplay our own and our nation’s role in just about anything seems all too plausible. We see history through a magnifying glass that is pointing directly at ourselves. We learn the most about the story of our own nation. So our home nation’s efforts and contributions inevitably spring readily to mind (military and civilian deaths, key battles, advances in technology and so on). The efforts and contributions of other nations are sensed more dimly, and often not at all.

And the magnifying glass over our efforts is pervasive in daily life. I can find myself thinking irritably, as I unload the dishwasher, “Well, I don’t even remember the last time you did this!” But of course not. Not because you didn’t do it, but because I wasn’t there.

Nick Chater, Professor of Behavioural Science, Warwick Business School, University of Warwick

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How clay helped shape colonial Sydney



A large bowl or pan thought to have been made in Sydney by the potter Thomas Ball between 1801 and 1823.
Courtesy of Casey & Lowe, photo by Russell Workman

Nick Pitt, UNSW

In April 2020, Australia will mark 250 years since James Cook sailed into Kamay (later known as Botany Bay) on the Endeavour, kicking off a series of events that resulted in the British arriving and staying uninvited first at Warrane (Sydney Cove) in 1788, and later at numerous locations across the continent.

Indigenous sovereignty was never ceded, and as a nation we are still grappling with the consequences of these actions of 231 years ago. Although we often focus on the large-scale impact of British settlers – the diseases my ancestors brought, the violence they committed – we are less good at seeing the small and unwitting ways that settlers participated in British colonialism. One such story emerges when we track the history of an unlikely cultural object – clay from Sydney.

In April 1770, Joseph Banks – the gentleman botanist on James Cook’s first voyage – recorded in his journal how the traditional owners of Botany Bay painted their bodies with broad strokes of white ochre, which he compared to the cross-belt of British soldiers.




Read more:
How Captain Cook became a contested national symbol


Eighteen years later, Arthur Phillip, Governor of New South Wales, sent Banks a box full of this white ochre – he’d read the published journal and suspected Banks would be interested. The ochre was a fine white clay and Phillip wondered whether it would be useful for manufacturing pottery.

Once in Britain, this sample of clay took on a life of its own, passed between scientists across Europe. Josiah Wedgwood – Banks’ go-to expert on all things clay-related – tested a sample and described it as “an excellent material for pottery”. He had his team of skilled craftspeople make a limited number of small medallions using this Sydney clay.

These medallions depict an allegory according to the classical fashion of the time. A standing figure represents “Hope” (shown with an anchor) instructing three bowing figures – “Peace” (holding an olive branch), “Art” (with an artist’s palette) and “Labour” (with a sledgehammer).

The Sydney Cove medallion.
State Library of NSW

A cornucopia lies at their feet, representing the abundance that these qualities could produce in a society, while in the background a ship, town and fort suggest a flourishing urban settlement supported by trade.

This little ceramic disc made out of Sydney clay represented tangible evidence of how the new colony could flourish with “industry” – the right combination of knowledge, skills and effort. Yet notably absent from this vision of the new colony was any representation of Aboriginal people.

The back of the Sydney Cove medallion.
State Library of NSW

For something only a little larger than a 50 cent piece, this medallion had a long legacy in colonial NSW. It was reproduced on the front page of The Voyage of Governor Phillip to Botany Bay – one of the first accounts of the fledgling colony. Later it was adapted for the Great Seal of New South Wales – attached to convict pardons and land grants.

Later still, a version formed the first masthead of the Sydney Gazette – the first newspaper in the colony. The ideas behind the medallion gained even wider circulation in the colony. As historian of science Lindy Orthia has argued, the Sydney Gazette was a place where various schemes for improving manufacturing and farming were regularly discussed.

The first Great Seal of New South Wales as used on a land title deed.
State Library of NSW

We can see the impact of these ideas by looking at what colonists themselves did with the clay. Although the first examples of Sydney-made pottery were unglazed and fragile, by the first decades of the 19th century, the quality had improved.

Over the last 30 years, archaeologists have found examples of Sydney-made pottery across Sydney and Parramatta on sites dating from the 1800s to 1820s.

Commonly called “lead-glazed pottery”, this material ranges from larger basins and pans, to more refined, decorated items, including chamber pots, bowls, plates, cups and saucers. Although basic, it clearly was based on British forms. The discovery of the former site of a potter’s workshop in 2008 confirmed this material was made locally.

It has been found on sites ranging from the Governor’s residence on the corner of Bridge and Phillip Street, Sydney, to former convict huts in Parramatta, alongside imported British earthenware and Chinese export porcelain. Visitors to the fledgling colony commented on this pottery as evidence of its growth and development.

Examples of Sydney-made pottery found at an archaeological site at 15 Macquarie Street, Parramatta.
Courtesy of Casey & Lowe, photo by Russell Workman

Sydney-made pottery helped colonists maintain different aspects of “civilised” behaviour. When imported tableware was expensive, local pottery allowed convicts living outside of barracks and other poorer settlers to use ceramic plates and cups, rather than cheaper wooden items.

Locally-made pots were also used to cook stews over a fire. Stews not only continued the established food practices of their British and Irish homes, but also conformed to contemporary ideas of a good, nourishing diet.




Read more:
Why archaeology is so much more than just digging


These practices around food would have distinguished colonists from the local Aboriginal people. In the coastal area around Sydney, locals tended to roast meat and vegetables, and to eat some fish and smaller birds or animals after only burning off their scales, feathers and fur.

George Thompson, a visiting ship’s gunner who had a low opinion of most things in the colony, thought that eating half-roasted fish was evidence of “a lazy indolent people”.

As historian Penny Russell has discussed, eating “half-cooked” food became a well-worn trope in the 19th century, frequently repeated as evidence of the supposed lack of civilisation by Aboriginal people. By contrast, as the historian and curator Blake Singley has suggested, European cooking methods frequently became a way that native plants and animals could be “civilised” and incorporated into settler diets.

The colonists’ use of Sydney clay helped to distinguish their notion of civilisation from Aboriginal culture, and so implicitly helped to justify the dispossession of Aboriginal people. The story of this clay demonstrates how quickly colonists’ focus could shift away from Aboriginal people: although Aboriginal use of white ochre continued to be recorded by colonists and visitors, Sydney clay primarily came to be seen as the material of a skilled European craft.

Through the use of local pottery, ordinary settlers could participate in this civilising program, replicating the culture of their homeland. These small, everyday actions helped create a vision of Sydney that excluded Aboriginal people – despite the fact that they have continued to live in and around Sydney since 1788.

Nick Pitt, PhD candidate, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Explainer: from bloodthirsty beast to saccharine symbol – the history and origins of the unicorn


Domenichino’s A Virgin with a Unicorn. Artists of the Middle Ages believed the unicorn could only be captured by a virgin.
Wikimedia Commons

Jenny Davis Barnett, The University of Queensland

The unicorn is an enduring image in contemporary society: a symbol of cuteness, magic, and children’s birthday parties.

But while you might dismiss this one-horned creature as just a product for Instagram celebrities and five-year-old girls, we can trace the lineage of the unicorn from the 4th century BCE. It evolved from a bloodthirsty monster, to a tranquil animal bringing peace and serenity (which can only be captured by virgins), to a symbol of God and Christ.

These days the term unicorn can refer to a privately held start-up company valued at over US$1 billion, a single female interested in meeting other couples, or the characters in My Little Pony.

Over the centuries, the meaning and imagery of the unicorn has shifted and persisted. But how did we get here?

Ferocious beasts and where to create them

The earliest written account of the unicorn comes from the text Indica (398 BCE), by Greek physician Ctesias, where he described beasts in India as large as horses with one horn on the forehead.

Ctesias was most likely describing the Indian Rhinoceros. The unicorn horn, he wrote, was a panacea for those who drink from it regularly.

A contemporary interpretation of the once ferocious beast.
Hachette

In the first century CE, claiming to quote Ctesias, the Roman naturalist Pliny (Natural History, 77 CE) wrote that the unicorn was the fiercest animal in India, with the body of a horse, the head of a stag, the feet of an elephant, the tail of a boar, and a single horn projecting from the forehead.

Pliny also embellished the animal’s description by adding a trait that became extremely significant to society in the Middle Ages: it was impossible to capture the animal alive.

Just over a century later, the second-century CE Roman scholar Aelian compiled a book about animals based on Pliny. In his On the Nature of Animals, Aelian wrote that the unicorn grows gentle towards the chosen female during mating season.

The unicorn’s tender disposition when near the female became a highly symbolic trait for authors and artists of the Middle Ages, who believed it could only be captured by a virgin.

Despite the authoritative texts of the Greeks and Romans, the unicorn remained mostly unknown in the centuries leading up to the Middle Ages. For the public to become familiar with it, the creature had to come out of the library and develop a role in everyday events and popular culture: ie a role in Christianity.

Lost in translation

It was in the third century BCE that the unicorn entered religious texts – although only by accident.

Between 300 and 200 BCE, a group of 70 scholars gathered to create the first translation of the Hebrew Old Testament into Koine Greek. Although the Hebrew term for unicorn is Had-Keren (one horn), in the text commonly known as the Septuagint (seventy) the scholars made an error when translating the Hebrew term Re’em (ox) from Psalms, rendering it as monokeros. In effect, they changed the word “ox” to “unicorn.”

The unicorn’s inclusion in a text of such magnitude laid the foundation for an obsession with the creature that thrived in both literary and visual arts from the earliest dates of the Middle Ages and continues to the modern day.

By the 12th century, the one-horned animal came to be associated with the allegory provided in the Physiologus, a collection of moralised beast tales on which many medieval bestiaries are based. One of the most widely read books in the Middle Ages, the Physiologus often identifies Christ with the unicorn.

The Rochester Bestiary (c late 1200s) draws on Physiologus to represent the unicorn as the spirit of Jesus.
Wikimedia Commons

The illustrations that accompany textual references to the unicorn in the Bible and medieval bestiaries often showed the allegorical representation rather than the literal.

The modern unicorn.
mlp.wikia.com

So instead of images depicting Christ as a man, the artists drew horses and goats with one large horn protruding from their heads. In this medieval legend, the fanciful myth of the one-horned animal became the foundation of the unicorn image that circulated throughout Europe.

Contemporary images of the unicorn have changed very little since the medieval era. The creature in The Lady and the Unicorn tapestries in the Cluny museum in Paris, symbolising various overlapping meanings including chastity and heraldic animals, looks a lot like the My Little Pony characters Rarity and Princess Celestia.




Read more:
Explainer: the symbolism of The Lady and the Unicorn tapestry cycle


Imagery of the unicorn persisted sporadically in literature, film and television through the 20th century, but the 2010s saw interest boom.

The modern Instagram star

Social media helped lure the magical creature into quotidian life – the one-horned horse looks great as a Facebook emoji and surrounded by rainbows on Instagram. National Unicorn Day (April 9) was first observed in 2015.

Searches for “unicorns” reached an all-time high in April 2017, the same month Starbucks introduced the colour- and taste-changing Unicorn Frappuccino, sparking a trend in adding glitter and rainbow colours to any food or beverage.

Now, the unicorn is marketed to children and adults alike on coffee mugs, keychains, stuffed animals and T-shirts. In secular contemporary culture it has become an LGBTI+ icon: a symbol of hope, something “uncatchable.”

The contemporary unicorn is a far cry from Ctesias’ beasts. Social media platforms like Instagram encourage us to project an idealised version of our life: the unicorn is a perfect symbol for this ideal.

If the last decade is anything to go by, its intrigue will only continue to grow.

Jenny Davis Barnett, Academic in French, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.


How ancient seafarers and their dogs helped a humble louse conquer the world



Male (left) and female Heterodoxus spiniger from Borneo.
Natural History Museum, London, Author provided

Loukas Koungoulos, University of Sydney

This is the story of how a parasitic, skin-chewing insect came to conquer the world.

For more than a century, scientists have been puzzled as to how an obscure louse native to Australia came to be found on dogs across the world. Heterodoxus spiniger evolved to live in the fur of the agile wallaby.

Despite little evidence to back the idea, many researchers believed it was linked to people from Asia bringing the dingo to Australia in ancient times. Perhaps people later took dingoes infested with this parasite back home, where it spread to local dogs, and onwards from there.




Read more:
Dingoes do bark: why most dingo facts you think you know are wrong


But when we approached the question again using the most up-to-date information, my colleague Peter Contos and I came up with a completely different explanation – one that better fits what we know of ancient migration and trade in the Asia-Pacific region.

As we report in the journal Environmental Archaeology, this louse probably originated not in Australia but in New Guinea, an island with a long history of intimate connection with seafaring Asian cultures.

Louse on the loose

H. spiniger is a tiny louse that lives on mammals around the world, mostly dogs. Using its clawed legs to hang on, it bites and chews at the skin and hair of its hosts to draw the blood on which it feeds.

As all its closest relatives are specialised parasites of marsupials, mostly other wallabies, logic suggested that H. spiniger must have evolved within Australia. It also seemed logical it would have spread first to the dingo, Australia’s native dog.

Our first task was to figure out just how far away from Australia it had spread; this would inform the likely pathways by which it could have travelled to the wider world.

We looked at museum collections, entomological surveys, and veterinary research reports to generate a map of its worldwide distribution.

Global distribution of H. spiniger.
Koungoulos and Contos, Author provided

The specimens we found, collected from the late 19th century to the present day, showed that this species is found on every continent except Europe and Antarctica.

But in Australia, we couldn’t find a single verifiable instance of the parasite living on dingoes. The only cases were from agile wallabies and domestic dogs.

That meant the prevailing wisdom had been wrong, and we had to look elsewhere for the origins of H. spiniger.

Don’t blame the dingoes.
Blanka Berankova/Shutterstock

Where did it really come from?

Although marsupials are best known from Australia, they are also found in other parts of the surrounding region. The agile wallaby is also native to the island of New Guinea, which was once joined with Australia.

Dogs have also been in New Guinea for at least as long as the dingo has been in Australia. Traditionally, dogs were kept in Papuan villages, and were used to hunt game, including wallabies.

It came as little surprise, then, that we found H. spiniger on both agile wallabies and native dogs in New Guinea – and only a few decades after the first ever identification of the species.

So here was a more likely place in which the first transfer from wallaby to dog took place. But who took them out of New Guinea and into the wider world?

Austronesian voyagers

New Guinea was first colonised by humans around the same time as Australia. But since then it has had notably stronger connections with the outside world than Australia, reaching back millennia before the European colonisation of Australia in 1788.

Around 4,000 years ago, agriculturalists known as Austronesians sailed out of Taiwan to settle several archipelagos in Oceania. With them they brought domestic species of plants and animals, including dogs.

By 3,000 years ago, at the latest, they reached New Guinea. We suggest this was the crucial moment when dogs first picked up H. spiniger.

In the ensuing centuries, Austronesians went on to settle much of Indonesia, the Philippines, Melanesia and Polynesia, and coastal sections of mainland Southeast Asia.

They even settled as far as Madagascar, suggesting their voyages probably took them around the rim of the Indian Ocean, along the margins of India and the Middle East.

Dogs accompanying the migrants probably helped spread the louse, which is found almost everywhere they went.

This spans an enormous distance – from Hawaii to Madagascar – a testament to the ancient Austronesians’ supreme seafaring skills.

New directions

Our research suggests how the parasite first got around the world, but not precisely when. Its journey probably progressed at different times in different places.

The Austronesian diaspora established trade routes between the places they settled, some of which spanned impressive distances across several island groups.




Read more:
How to get to Australia … more than 50,000 years ago


Later, foreign traders connected these communities with greater Asia and Africa. And in modern times, dogs continue to be transported as desirable goods themselves.

Trade and contact has probably led to further, possibly ongoing, dispersal of H. spiniger.

Unfortunately there are no archaeological examples that could demonstrate the louse’s early presence outside New Guinea, because this species prefers hot, humid environments, where insect remains rarely survive.

A genetic approach is a better way forward. A start would be testing specimens from different parts of the world, to see when different regional populations – if they exist – branched off from one another.

This is particularly important in tracking its spread to the Americas, which likely occurred in recent centuries alongside European colonisation.

This research will help us further understand how migration, contact and trade unfolded in the prehistoric Asia-Pacific region, and how it affected the animal species – including the humblest of parasites – we see there today.


This paper would not have been possible without the contributions of Peter Contos, the work of volunteers on the Natural History Museum’s Boopidae of Australasia digitisation project, and the contributions of the public to Wikipedia Creative Commons, for which we are grateful.

Loukas Koungoulos, PhD Candidate, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The past stinks: a brief history of smells and social spaces



‘Living Made Easy: Revolving hat’, a satirical print with a hat supporting a spy glass, an ear trumpet, a cigar, a pair of glasses, and a scent box, 1830, London.
Wellcome Images CCBY, CC BY-SA

William Tullett, Anglia Ruskin University

A sunny afternoon in Paris. An intrepid TV presenter is making his way through the streets asking passersby to smell a bottle he has in his hand. When they smell it they react with disgust. One woman even spits on the floor as a marker of her distaste. What is in the bottle? It holds, we are told, the “pong de paris”, a composition designed to smell like an 18th-century Parisian street.

The interpretation of past scents that we are given on the television, perhaps influenced by Patrick Süskind’s pungent novel Perfume, is frequently dominated by offence.

It’s a view found not just on TV but in museums. In England, York’s Jorvik Viking Centre, Hampton Court Palace, and the Museum of Oxfordshire have all integrated smells into their exhibits.

The one smell that unites these attempts at re-odorising the past: toilets. Viking toilets, a Georgian water closet, and the highly urinous and faecal smell of a Victorian street, all included in the above examples, thread the needle of disgust from the medieval to the modern.

The consequence of such depictions is to portray the past as an odorous prelude, with foul-smelling trades and poor sanitation, to the clean and pleasant land of modernity.

Phew, what a pong

Suggesting that people who are not “us” stink has a long history. It is applied to our forebears just as often as it is to other countries, peoples or cultures. It is no accident that “Filthy Cities”, an English television program, highlighted the stink of 18th-century France – even in the 18th century the English had associated the French, their absolutist Catholic enemies, with the stink of garlic.

The toilet-training narrative is a simple and seductive story about “our” conquest of stench. But the “pong de paris” misses the point. Too busy turning the past into a circus of disgust for modern noses, it fails to ask how it smelt to those who lived there. New historical work reveals a more complex story about past scents.

A careful examination of the records of urban government, sanitation, and medicine reveals that 18th-century English city-dwellers were not particularly bothered by unsanitary scents. This was partly because people adapted to the smells around them quickly, to the extent that they failed to notice their presence.

But, thanks to 18th-century scientific studies of air and gases, many Georgians also recognised that bad smells were not as dangerous as had previously been thought. In his home laboratory, the polymath Joseph Priestley experimented on mice, while others used scientific instruments to measure the purity of the air on streets and in bedrooms. The conclusion was simple: smell was not a reliable indicator of danger.

Scientist and social reformer Edwin Chadwick famously claimed in 1846 that “all smell… is disease”. But smell had a much more complex place in miasma theory – the idea that diseases were caused by poisonous airs – than has often been assumed. In fact, by the time cholera began to work its morbid magic in the 1830s, a larger number of medical writers held that smell was not a carrier of sickness-inducing atmospheres.

Smells tend to end up in the archive, recorded in the sources historians use, for one of two reasons: either they are unusual (normally offensive) or people decide to pay special attention to them. One scent that appeared in the diaries, letters, magazines, and literature of 18th-century England, however, was tobacco smoke. The 18th century saw the rise of new anxieties about personal space. A preoccupation with politeness in public places would prove a problem for pipe smokers.

On the left a fashionable cigar smoker and on the right a rather less fashionable pipe-smoker, c.1805.
Own collection

Getting sniffy about tobacco

Tobacco had become popular in England during the 17th century. But, by the mid-18th century, qualms began to be raised. Women were said to abhor the smell of tobacco smoke. A satirical poem told the story of a wife who had banned her husband from smoking, only to allow its resumption – she realised that going cold turkey had made him impotent.

New sociable venues proliferated in towns and cities, with the growth of provincial theatres, assembly rooms, and pleasure gardens.
In these sociable spaces, a correspondent to The Monthly Magazine noted in 1798, “smoaking [sic] was a vulgar, beastly, unfashionable, vile thing” and “would not be suffered in any genteel part of the world”. Tobacco smoking was left to alehouses, smoking clubs and private masculine spaces.

Clouds of smoke invaded people’s personal space, subjecting them to atmospheres that were not of their own choosing. Instead, fashionable 18th-century nicotine addicts turned to snuff. Despite the grunting, hawking and spitting it encouraged, snuff could be consumed without enveloping those around you in a cloud of sour smoke.

The 18th century gave birth to modern debates about smoking and public space that are still with us today. The fact that the smell of tobacco smoke stains the archives of the period, metaphorically of course, is a testament to the new ideas of personal space that were developing within it.

William Tullett, Lecturer in History, Anglia Ruskin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Hidden women of history: Eleanor Anne Ormerod, the self-taught agricultural entomologist who tasted a live newt



Wikimedia Commons

Tanya Latty, University of Sydney

In this series, we look at under-acknowledged women through the ages.

Insects have always been intimately connected with agriculture. Pest insects can cause tremendous damage, while helpful insects like pollinators and predators provide free services. The relatively young field of agricultural entomology uses knowledge of insect ecology and behaviour to help farmers protect their crops.

One of the most influential agricultural entomologists in history was an insatiably curious and fiercely independent woman named Eleanor Anne Ormerod. Although she lacked formal scientific training, Ormerod would eventually be hailed as the “Protectress of British Agriculture”.

Eleanor was born in 1828 to a wealthy British family. She did not attend school and was instead tutored by her mother on subjects thought to increase her marriageability: languages, drawing and music.

Like most modern entomologists, Eleanor’s interest in insects started when she was a child. In her autobiography, she tells of how she once spent hours observing water bugs swimming in a small glass. When one of the insects was injured, it was immediately consumed by the others.

Shocked, Eleanor hurried to tell her father about what she had seen but he dismissed her observations. Eleanor writes that while her family tolerated her interest in science, they were not particularly supportive of it.

Securing an advantageous marriage was supposed to be the primary goal of wealthy young women in Eleanor’s day. But her father was reclusive and disliked socialising; as a result, the family didn’t have the social connections needed to secure marriages for the children. Of Ormerod’s three sisters, none would marry.

Ormerod as a young woman.
Wikimedia Commons

The Ormerod daughters were relatively fortunate; their father gave them enough money to live comfortably for the rest of their lives. Their status as wealthy unmarried women gave them the freedom to pursue their interests free from domestic responsibilities and the demands of husbands or fathers. For Eleanor, this meant time to indulge her scientific curiosity.

Foaming at the mouth

Ormerod’s first scientific publication was about the poisonous secretions of the Triton newt. After testing the poison’s effects on an unfortunate cat, she decided to test it on herself by putting the tail of a live newt into her mouth. The unpleasant effects – which included foaming at the mouth, oral convulsions and a headache – were all carefully described in her paper.

A Triton newt.
Wikimedia Commons

Ormerod’s first foray into agricultural entomology came in 1868, when the Royal Horticultural Society asked for help creating a collection of insects both helpful and harmful to British agriculture. She enthusiastically answered the call and spent the next decade collecting and identifying insects on the society’s behalf.

In the process, she developed specialist skills in insect identification, behaviour and ecology.




Read more:
Hidden women of history: Flos Greig, Australia’s first female lawyer and early innovator


During her insect-collecting trips, Ormerod spoke with farmers who told her of their many and varied pest problems. She realised that farmers were in need of science-based advice for protecting their crops from insect pests.

Yet most professional entomologists of the time were focused on the collection and classification of insects; they had little interest in applying their knowledge to agriculture. Ormerod decided to fill the vacant role of “agricultural entomologist” herself.

In 1877, Ormerod self-published the first of what was to become a series of 22 annual reports that provided guidelines for the control of insect pests in a variety of crops. Each pest was described in detail including particulars of its appearance, behaviour and ecology. The reports were aimed at farmers and were written in an easy-to-read style.

An early form of crowdsourcing

Ormerod wanted to create a resource that would help farmers all over Britain. She quickly realised this task would require more information than she could possibly collect on her own. So Ormerod turned to an early version of crowdsourcing to obtain data.

She circulated questionnaires throughout the countryside asking farmers about the pests they observed, and the pest control remedies they had tried.

Whenever possible, she conducted experiments or made observations to confirm information she received from her network of farmers. Each of her reports combined her own work with that of the farmers and labourers she corresponded with. The resulting reports cemented Ormerod’s reputation.

Ormerod was invited to give lectures at colleges and institutes throughout Britain. She lent her expertise to pest problems in places as far afield as New Zealand, the West Indies and South Africa.

In recognition of her service, she was awarded an honorary law degree from the University of Edinburgh in 1900 – the first woman in the university’s history to receive the honour. Such was her fame that acclaimed author Virginia Woolf later wrote a fictionalised account of Ormerod’s life called Miss Ormerod.

Virginia Woolf wrote a book about Ormerod.
Wikimedia Commons

While she undoubtedly contributed to the rise of agricultural entomology as a scientific field, Ormerod’s legacy is complicated by her vocal support of a dangerous insecticide known as Paris Green. Paris Green was an arsenic-derived compound initially used as a paint (hence the name).

Although Paris Green was used extensively in North America, it was relatively unheard of in Britain. Ormerod made it her mission to introduce this new advance to British farmers. So strongly did she believe in its crop-saving power, she joked about wanting the words, “She brought Paris Green to Britain,” engraved on her tombstone.

A tin of Paris Green paint.
Wikimedia Commons

Unfortunately, Paris Green is a “broad spectrum” insecticide that kills most insects, including pollinators and predators. The loss of predators in the crop ecosystem gives free rein to pests, creating a vicious cycle of dependence on chemical insecticides.

Paris Green also has serious human health impacts, some of which were recognised even in Ormerod’s day. The fact that arsenic was a common ingredient in all manner of products – including medicines – may partly explain why Ormerod seems to have underestimated the danger of Paris Green to human and environmental health.

Ormerod’s steadfast promotion of Paris Green seems naïve in retrospect. But the late 1800s was a time of tremendous optimism about the power of science to solve the world’s problems.

Paris Green and other insecticides allowed farmers to cheaply and effectively protect their crops – and thus their livelihoods. In fact, less than 50 years after Ormerod’s death, chemist Paul Müller won a Nobel Prize for discovering the insecticidal properties of the infamous (and environmentally catastrophic) DDT. When viewed in light of the “pesticide optimism” of her time, Ormerod’s enthusiasm about Paris Green is easier to understand.

Interestingly, Ormerod wasn’t just an insecticide evangelist. Her reports gave recommendations for a variety of pest control methods such as the use of exclusion nets and the manual removal of pests. These and other environmentally friendly techniques now form the core of modern “integrated pest management”, the gold standard for effective and sustainable pest control.

Eleanor Ormerod was devoted to the cause of protecting agriculture at a time when few “serious” entomologists were interested in applying their knowledge to agriculture. She recognised that progress in agricultural entomology could only happen when entomologists worked in close partnership with farmers.

She continued working and lecturing to within weeks of her death in 1901; in all of her years of service, she was never paid.

Tanya Latty, Senior Lecturer, School of Life and Environmental Sciences, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


A History of Faber & Faber


The link below is to an article that takes a look at the history of Faber & Faber, the publishing house.

For more visit:
https://www.newyorker.com/books/page-turner/the-unlikely-history-of-faber-and-faber


A Brief History of the Lobotomy


The link below is to an article that takes a brief look at the history of the lobotomy.

For more visit:
https://lithub.com/a-brief-and-awful-history-of-the-lobotomy/

