
Iran’s cultural heritage reflects the grandeur and beauty of the golden age of the Persian empire



The ancient Persian symbol of victory in Persepolis, capital of the ancient Achaemenid kingdom in Iran.
Delbars via Shutterstock

Eve MacDonald, Cardiff University

It’s simply not possible to do justice to the value of Iran’s cultural heritage – it’s a rich and noble history that has had a fundamental impact on the world through art, architecture and poetry, and in science, technology, medicine, philosophy and engineering.

The Iranian people are intensely aware of – and rightly proud of – their Persian heritage. The archaeological legacy left by the civilisations of ancient and medieval Iran extends from the Mediterranean Sea to India and ranges across four millennia, from the Bronze Age (3rd millennium BC) to the glorious age of classical Islam and the magnificent medieval cities of Isfahan and Shiraz that thrived in the 9th-12th centuries AD, and beyond.

The Persian Empire in 490 BC.
Department of History, United States Military Academy, West Point

The direct legacy of the ancient Iranians can be found across the Middle East, the Caucasus and Turkey, the Arabian Peninsula and Egypt and Turkmenistan, Uzbekistan, Afghanistan, India and Pakistan.

In the 6th century BC, Iran was home to the first world empire. The Achaemenids ruled a multicultural superpower that stretched to Egypt and Asia Minor in the west and India and Pakistan in the east. They were the power by which all other ancient empires measured themselves. Their cultural homeland was in the Fars province of modern Iran. The word Persian – the name for the Iranian people – derives from Pars, the home region of the Achaemenids.

Some of the richest and most beautiful of Iran’s archaeological and historical heritage remains there. This includes Pasargadae, the first Achaemenid dynastic capital, where King Cyrus (c. 590-529 BC) laid down the foundations of law and the first declaration of universal rights while ruling over a vast array of citizens and cultures.

Relief sculpture from Persepolis: one of the immortals perhaps?
Angela Meier via Shutterstock

Nearby is the magnificent site of Persepolis, the great palace of the Achaemenid kings and hub of government and administration. Architecturally stunning, it is decorated with relief sculptures that still today leave a visitor in awe.

Seleucid and Parthian Iran

When the Achaemenids fell to the armies of Alexander the Great in the 4th century BC, what followed was great upheaval and also one of the most extraordinary moments in human history. The mixing of Persian and eastern Mediterranean cultures created the Hellenistic Age. The Macedonian King Seleucus (died 281 BC) and his Persian wife Apame ruled a hybrid kingdom that mixed Greek, Persian, Jewish, Bactrian, Armenian, Sogdian and Aramaean cultures and religions.

With new cities, religions and cultures, this melting pot encouraged the rise of a thriving connectivity that linked urban centres in Iran, Iraq, Afghanistan, Turkmenistan and Syria – where many of the Hellenistic sites, such as Apamea, have been devastated in recent years by war and looting. The great city of Seleucia-on-Tigris/Ctesiphon, just south of Baghdad on the Tigris river in modern Iraq, became the western capital and centre for learning, culture and power for a thousand years.

Hellenistic rulers gave way to Parthian kings in the 2nd century BC, and the region was ruled by the Arsacid dynasty, whose homeland, around Nisa, lay in the northern region of the Iranian world. The Parthian Empire witnessed growing connectivity between east and west and increasing traffic along the silk routes. Parthian control of this trade led to conflict with the Romans, who reached east to grasp some of the resulting spoils.

Facade of Mosque of Sheikh Lotfollah in Isfahan city.
Fotokon via Shutterstock

It was also a time of religious transition that witnessed not only the rise of Buddhism, but also a thriving Zoroastrian religion that intersected with Judaism and developing Christianity. In the biblical story of the birth of Christ, who were the three kings – the Magi with their gifts for Jesus – but Persian priests from Iran, astronomers following the comet, coming to the side of the child messiah?

The Sasanians

The last great ancient kingdom of the Iranians was the Sasanian empire, based around a dynasty that rose out of the final years of Arsacid rule in the 3rd century AD. The Sasanians ruled a massive geopolitical entity from AD 224 to 651. They were builders of cities and frontiers across the empire, including the enormous Gorgan wall. This frontier wall stretched 195km from the Caspian Sea to the mountains in Turkmenistan and was built in the 5th century AD to protect the Iranian agricultural heartland from northern invaders like the Huns.

The line of the Gorgan Wall and fort viewed from aerial photograph
Alchetron, Author provided

The wall is a fired-brick engineering marvel with a complex network of water canals running the whole length. It once stood across the plain with more than 30 forts manned by tens of thousands of soldiers.

The Sasanians were the final pre-Islamic dynasty of Iran. In the 7th century AD the armies of the Rashidun caliphs conquered the Sasanian empire, bringing with them Islam and absorbing much of the culture and ideas of the ancient Iranian world. This fusion led to a flowering of early medieval Islam. Of the 22 cultural heritage sites in Iran recognised by UNESCO, the 9th-century Masjed-e Jāmé in Isfahan is one of the most stunningly beautiful and stylistically influential mosques ever built.

This was a thriving period of scientific, artistic and literary output, rich with poetry that told of the ancient Iranian past in medieval courts where bards sang of great deeds. These stories, we now believe, reached the far west of Europe in the early medieval period, possibly through the crusades – emphasising the long reach of the cultures of ancient and medieval Iran.

Iranian cultural heritage has no one geographic or cultural home; its roots belong to all of us and speak of the vast influence that the Iranians have had on the creation of the world we live in today. Iran’s past could never be wiped off the cultural map of the world, for it is embedded in our very humanity.

Eve MacDonald, Lecturer in Ancient History, Cardiff University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


History of the two-day weekend offers lessons for today’s calls for a four-day week



The leisure industry led one of many campaigns to free people from working on Saturday afternoons.

Brad Beaven, University of Portsmouth

The idea of reducing the working week from an average of five days to four is gaining traction around the world. Businesses and politicians have been considering a switch to fewer, but more productive hours spent working. But the idea has also been derided.

As a historian of leisure, it strikes me that there are a number of parallels between debates today and those that took place in the 19th century when the weekend as we now know it was first introduced. Having Saturdays as well as Sundays off work is actually a relatively modern phenomenon.

Throughout the 19th century, government legislation reduced working hours in factories and prescribed regular breaks. But the weekend did not simply arise from government legislation – it was shaped by a combination of campaigns. Some were led by half-day holiday movements, others by trade unions, commercial leisure companies and employers themselves. The formation of the weekend in Britain was a piecemeal and uneven affair that had to overcome unofficial popular traditions that punctuated the working week during the 19th century.

‘Saint Monday’

For much of the 19th century, for example, skilled artisan workers adopted their own work rhythms as they often hired workshop space and were responsible for producing items for their buyer on a weekly basis. This gave rise to the practice of “Saint Monday”. While Saint Monday mimicked the religious Saint Day holidays, it was in fact an entirely secular practice, instigated by workers to provide an extended break in the working week.

They worked intensively from Tuesday to finish products by Saturday night, so that they could enjoy Sunday as a legitimate holiday – but they also took Monday off to recover from the excesses of Saturday night and Sunday. By the mid-19th century, Saint Monday was a popular institution in British society. So much so that commercial leisure – like music halls, theatres and singing saloons – staged events on this unofficial holiday.

The Victorian period spawned a number of music halls, such as Canterbury Hall in London.
People Play

Workers in the early factory system also adopted the tradition of Saint Monday, despite manufacturers consistently opposing the practice, as it hurt productivity. But workers had a religious devotion to the unofficial holiday, which made it difficult for masters to break the habit. It continued to thrive into the 1870s and 1880s.

Nonetheless, religious bodies and trade unions were keen to instil a more formal holiday in the working week. Religious bodies argued that a break on Saturday would improve working class “mental and moral culture”. For example, in 1862 Reverend George Heaviside captured the optimistic tone of many religious leaders when, writing in the Coventry Herald newspaper, he claimed a weekend would allow for a refreshed workforce and greater attendance at church on Sundays.

Trade unions, meanwhile, wanted to secure a more formalised break in the working week that did not rely on custom. Indeed, the creation of the weekend is still cited as a proud achievement in trade union history.

In 1842 a campaign group called the Early Closing Association was formed. It lobbied government to keep Saturday afternoon free for worker leisure in return for a full day’s work on Monday. The association established branches in key manufacturing towns and its membership was drawn from local civic elites, manufacturers and the clergy. Employers were encouraged to establish half-day Saturdays as the Early Closing Association argued it would foster a sober and industrious workforce.

Half-day Saturdays were seen as a way to improve productivity.
Shutterstock

Trade unions and workers’ temperance groups also saw the half-day Saturday as a vehicle to advance working-class respectability. It was hoped workers would shun drunkenness and brutal sports like cock fighting, which had traditionally been associated with Saint Monday.

For these campaigners, Saturday afternoon was singled out as the day on which the working classes could enjoy “rational recreation”, a form of leisure designed to draw the worker from the public house and into elevating and educational pursuits. For example, in Birmingham during the 1850s, the association wrote in the Daily News newspaper that Saturday afternoons would benefit men and women who could:

Take a trip into the country, or those who take delight in gardening, or any other pursuit which requires daylight, could usefully employ their half Saturday, instead of working on the Sabbath; or they could employ their time in mental or physical improvements.

Business opportunity

Across the country a burgeoning leisure industry saw the new half-day Saturday as a business opportunity. Train operators embraced the idea, charging reduced fares for day-trippers to the countryside on Saturday afternoons. With increasing numbers of employers adopting the half-day Saturday, theatres and music halls also switched their star entertainment from a Monday to Saturday afternoon.

Perhaps the most influential leisure activity to help forge the modern week was the decision to stage football matches on Saturday afternoon. The “Football Craze”, as it was called, took off in the 1890s, just as the new working week was beginning to take shape. So Saturday afternoons became a very attractive holiday for workers, as they facilitated cheap excursions and exciting new forms of leisure.

The well-attended 1901 FA Cup final.
Wikimedia Commons

The adoption of the modern weekend was neither swift nor uniform as, ultimately, the decision for a factory to adopt the half-day Saturday rested with the manufacturer. Campaigns for an established weekend had begun in the 1840s, but the practice did not gain widespread adoption for another 50 years.

By the end of the 19th century, there was an irresistible pull towards marking out Saturday afternoon and Sunday as the weekend. While they had their different reasons, employers, religious groups, commercial leisure and workers all came to see Saturday afternoon as an advantageous break in the working week.

This laid the groundwork for the full 48-hour weekend as we now know it – although this was only established in the 1930s. Once again, it was embraced by employers who found that the full Saturday and Sunday break reduced absenteeism and improved efficiency.

Brad Beaven, Professor of Social and Cultural History, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why are there seven days in a week?



Your calendar dates back to Babylonian times.
Aleksandra Pikalova/Shutterstock.com

Kristin Heineman, Colorado State University

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Why are there seven days in a week? – Henry E., age 8, Somerville, Massachusetts


Waiting for the weekend can often seem unbearable, a whole six days between Saturdays. Having seven days in a week has been the case for a very long time, and so people don’t often stop to ask why.

Most of our time reckoning is due to the movements of the planets, Moon and stars. Our day is equal to one full rotation of the Earth around its axis. Our year is a revolution of the Earth around the Sun, which takes 365 and ¼ days – which is why we add an extra day in February every four years, for a leap year.

But the week and the month are a bit trickier. The phases of the Moon do not exactly coincide with the solar calendar. A Moon cycle is 27 days and seven hours long, and there are roughly 13 such cycles in each solar year.

Some of the earliest civilizations observed the cosmos and recorded the movements of planets, the Sun and Moon. The Babylonians, who lived in modern-day Iraq, were astute observers and interpreters of the heavens, and it is largely thanks to them that our weeks are seven days long.

The reason they adopted the number seven was that they observed seven celestial bodies – the Sun, the Moon, Mercury, Venus, Mars, Jupiter and Saturn. So, that number held particular significance to them.

Other civilizations chose other numbers – like the Egyptians, whose week was 10 days long; or the Romans, whose week lasted eight.

Some of the earliest civilizations recorded the movements of planets, the Sun and Moon.
Andrey Prokhorov/Shutterstock.com

The Babylonians divided their lunar months into seven-day weeks, with the final day of the week holding particular religious significance. The 28-day month, or a complete cycle of the Moon, is a bit too large a period of time to manage effectively, and so the Babylonians divided their months into four equal parts of seven.

The number seven is not especially well-suited to coincide with the solar year, or even the months, so it did create a few inconsistencies.
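To see where those inconsistencies come from, here is the arithmetic as a minimal Python sketch, using the approximate figures quoted above:

solar_year = 365.25        # days for one orbit of the Sun
moon_cycle = 27 + 7 / 24   # the Moon cycle: 27 days and seven hours

print(solar_year / 7)           # ~52.18: a year is not a whole number of weeks
print(solar_year / moon_cycle)  # ~13.4: nor a whole number of Moon cycles
print(moon_cycle / 7)           # ~3.9: a Moon cycle is not exactly four weeks

None of the divisions comes out whole, which is why a seven-day week slowly drifts against both the Moon and the Sun.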

However, the Babylonians were such a dominant culture in the Near East, especially in the sixth and seventh centuries B.C., that this, and many of their other notions of time – such as a 60-minute hour – persisted.

The seven-day week spread throughout the Near East. It was adopted by the Jews, who had been captives of the Babylonians at the height of that civilization’s power. Other cultures in the surrounding areas got on board with the seven-day week, including the Persian empire and the Greeks.

Centuries later, when Alexander the Great began to spread Greek culture throughout the Near East as far as India, the concept of the seven-day week spread as well. Scholars think that perhaps India later introduced the seven-day week to China.

Finally, once the Romans began to conquer the territory influenced by Alexander the Great, they too eventually shifted to the seven-day week. It was Emperor Constantine who decreed that the seven-day week was the official Roman week and made Sunday a public holiday in A.D. 321.

The weekend was not adopted until modern times in the 20th century. Although there have been some recent attempts to change the seven-day week, it has been around for so long that it seems like it is here to stay.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.


This article has been updated to correct the details on Earth’s revolution around the Sun.

Kristin Heineman, Instructor in History, Colorado State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Hidden women of history: Catherine Hay Thomson, the Australian undercover journalist who went inside asylums and hospitals



Catherine Hay Thomson went undercover as an assistant nurse for her series on conditions at Melbourne Hospital.
A. J. Campbell Collection/National Library of Australia

Kerrie Davies, UNSW and Willa McDonald, Macquarie University

In this series, we look at under-acknowledged women through the ages.

In 1886, a year before American journalist Nellie Bly feigned insanity to enter an asylum in New York and became a household name, Catherine Hay Thomson arrived at the entrance of Kew Asylum in Melbourne on “a hot grey morning with a lowering sky”.

Hay Thomson’s two-part article, The Female Side of Kew Asylum, for The Argus newspaper revealed the conditions women endured in Melbourne’s public institutions.

Her articles were controversial, engaging and empathetic – and most likely the first known pieces by an Australian female undercover journalist.

A ‘female vagabond’

Hay Thomson was accused of being a spy by Kew Asylum’s supervising doctor. The Bulletin called her “the female vagabond”, a reference to Melbourne’s famed undercover reporter of a decade earlier, Julian Thomas. But she was not after notoriety.

Unlike Bly and her ambitious contemporaries who turned to “stunt journalism” to escape the boredom of the women’s pages – one of the few avenues open to women newspaper writers – Hay Thomson was initially a teacher and ran schools with her mother in Melbourne and Ballarat.

Hay Thomson, standing centre with her mother and pupils at their Ballarat school, was a teacher before she became a journalist.
Ballarat Grammar Archives/Museum Victoria

In 1876, she became one of the first female students to sit for the matriculation exam at Melbourne University, though women weren’t allowed to study at the university until 1880.

Going undercover

Hay Thomson’s series for The Argus began in March 1886 with a piece entitled The Inner Life of the Melbourne Hospital. She secured work as an assistant nurse at Melbourne Hospital (now The Royal Melbourne Hospital) which was under scrutiny for high running costs and an abnormally high patient death rate.

Doctors at Melbourne Hospital in the mid 1880s did not wash their hands between patients, wrote Catherine Hay Thomson.
State Library of Victoria

Her articles increased the pressure. She observed that the assistant nurses were untrained, worked largely as cleaners for poor pay in unsanitary conditions, slept in overcrowded dormitories and survived on the same food as the patients, which she described in stomach-turning detail.

The hospital linen was dirty, she reported; dinner tins and jugs were washed in the patients’ bathroom, where poultices were also made; and doctors did not wash their hands between patients.

Writing about a young woman caring for her dying friend, a 21-year-old impoverished single mother, Hay Thomson observed them “clinging together through all fortunes” and added that “no man can say that friendship between women is an impossibility”.

The Argus editorial called for the setting up of a “ladies’ committee” to oversee the cooking and cleaning. Formal nursing training was introduced in Victoria three years later.

Kew Asylum

Hay Thomson’s next series, about women’s treatment in the Kew Asylum, was published in March and April 1886.

Her articles predate Ten Days in a Madhouse written by Nellie Bly (born Elizabeth Cochran) for Joseph Pulitzer’s New York World.

While working in the asylum for a fortnight, Hay Thomson witnessed overcrowding, understaffing, a lack of training, and a need for women physicians. Most of all, the reporter saw that many in the asylum suffered from institutionalisation rather than illness.

Kew Asylum around the time Catherine Hay Thomson went undercover there.
Charles Rudd/State Library of Victoria

She described “the girl with the lovely hair” who endured chronic ear pain and was believed to be delusional. The writer countered “her pain is most probably real”.

Observing another patient, Hay Thomson wrote:

She requires to be guarded – saved from herself; but at the same time, she requires treatment … I have no hesitation in saying that the kind of treatment she needs is unattainable in Kew Asylum.

The day before the first asylum article was published, Hay Thomson gave evidence to the final sitting of Victoria’s Royal Commission on Asylums for the Insane and Inebriate, pre-empting what was to come in The Argus. Among the Commission’s final recommendations was that a new governing board should supervise appointments and training and appoint “lady physicians” for the female wards.

Suffer the little children

In May 1886, An Infant Asylum written “by a Visitor” was published. The institution was a place where mothers – unwed and impoverished – could reside until their babies were weaned and later adopted out.

Hay Thomson reserved her harshest criticism for the absent fathers:

These women … have to bear the burden unaided, all the weight of shame, remorse, and toil, [while] the other partner in the sin goes scot free.

For another article, Among the Blind: Victorian Asylum and School, she worked as an assistant needlewoman and called for talented music students at the school to be allowed to sit exams.

In A Penitent’s Life in the Magdalen Asylum, Hay Thomson supported nuns’ efforts to help women at the Abbotsford Convent, most of whom were not residents because they were “fallen”, she explained, but for reasons including alcoholism, old age and destitution.

Suffrage and leadership

Hay Thomson helped found the Austral Salon of Women, Literature and the Arts in January 1890 and the National Council of Women of Victoria. Both organisations are still celebrating and campaigning for women.

Throughout, she continued writing, becoming Table Talk magazine’s music and social critic.

In 1899 she became editor of The Sun: An Australian Journal for the Home and Society, which she bought with Evelyn Gough. Hay Thomson also gave a series of lectures titled Women in Politics.

A Melbourne hotel maintains that Hay Thomson’s private residence was secretly on the fourth floor of Collins Street’s Rialto building around this time.

Home and back

After selling The Sun, Hay Thomson returned to her birth city, Glasgow, Scotland, and to a precarious freelance career for English magazines such as Cassell’s.

Despite her own declining fortunes, she brought attention to writer and friend Grace Jennings Carmichael’s three young sons, who had been stranded in a Northampton poorhouse for six years following their mother’s death from pneumonia. After Hay Thomson’s article in The Argus, the Victorian government granted them free passage home.

Hay Thomson eschewed the conformity of marriage but tied the knot back in Melbourne in 1918, aged 72. The wedding, at the Women Writers’ Club, to Thomas Floyd Legge culminated “a romance of forty years ago”. Mrs Legge, as she became, died in Cheltenham in 1928, only nine years later.

Kerrie Davies, Lecturer, School of the Arts & Media, UNSW and Willa McDonald, Senior Lecturer, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Broadcast turns 100: from the Hindenburg disaster to the Hottest 100, here’s how radio shaped the world



The famous Hindenburg tragedy was heard around the world via recorded radio journalism.
Wiki Commons, CC BY

Peter Hoar, Auckland University of Technology

Eighty-one years ago, a broadcast of Orson Welles’s War of the Worlds supposedly caused mass hysteria in America, as listeners thought martians had invaded New Jersey.

There are varying accounts of the controversial incident, and it remains a topic of fascination, even today.

Back when Welles’s fictional martians attacked, broadcast radio was considered a state-of-the-art technology.

And since the first transatlantic radio signal was transmitted in 1901 by Guglielmo Marconi, radio has continually transformed the way we communicate.

Dots and dashes

Before Marconi, German physicist Heinrich Hertz discovered and transmitted the first radio waves in 1886. Other individuals later developed technologies that could send radio waves across the seas.

At the start of the 20th century, Marconi’s system dominated radio wave-based media. Radio was called “wireless telegraphy” as it was considered a telegraph without the wires, and did what telegraphs had done globally since 1844.

Messages were sent in Morse code as dots and dashes from one point to another via radio waves. At the time, receiving radio required specialists to translate the dots and dashes into words.




Read more:
Nazis pressed ham radio hobbyists to serve the Third Reich – but surviving came at a price


The more refined technology underpinning broadcast radio was developed during the first world war, with “broadcast” referring to the use of radio waves to transmit audio from one point to many listeners.

This year, organised broadcast radio turns 100. These days it’s considered a basic technology, but that may be why it remains such a vital medium.

SOS: the Titanic sinks

By 1912, radio was used to run economies, empires and armed forces.

Its importance for shipping was obvious – battleships, merchant ships and passenger ships were all equipped with it. People had faith in technological progress and radio provided proof of how modern machines benefited humans.

However, the sinking of the Titanic that year caused a crisis in the world’s relationship with technology, by revealing its fallibility. Not even the newest technologies such as radio could avoid disaster.

A replica of the radio room on the Titanic. One of the first SOS messages in history came from the ship.
Wiki Commons

Some argue radio use may have increased the ship’s death toll, as the Titanic’s radio was outdated and wasn’t intended to be used in an emergency. There were also accusations that amateur “ham radio” operators had hogged the bandwidth, adding to an already confusing and dire situation.

Nonetheless, the Titanic’s SOS signal managed to reach another ship, which led to the rescue of hundreds of passengers. Radio remains the go-to medium when disasters strike.

Making masts and networks

Broadcast radio got traction in the early 1920s and spread like a virus. Governments, companies and consumers started investing in the amazing new technology that brought the sounds of the world into the home.

Huge networks of transmitting towers and radio stations popped up across continents, and factories churned out millions of radio receivers to meet demand.

Some countries started major public broadcasting networks, including the BBC.




Read more:
NPR is still expanding the range of what authority sounds like after 50 years


Radio stations sought ways around regulations and, by the mid 1930s, some broadcasters were operating stations that generated up to 500,000 watts.

One Mexican station, XERA, could be heard in New Zealand.

Hearing the Hindenburg

On May 6, 1937, journalist Herbert Morrison was experimenting with recording news bulletins for radio when the Hindenburg airship burst into flames.

His famous commentary, “Oh the humanity”, is often mistaken for a live broadcast, but it was actually a recording.

Recording technologies such as transcription discs, and later magnetic tape and digital storage, revolutionised radio.

Broadcasts could now be stored and heard repeatedly at different places instead of disappearing into the ether.

Transistors and FM

In 1953 radios got smaller, as the first all-transistor radio was built.

A 1960 ad for a pocket sized Motorola transistor radio.
Wiki Commons

Transistor circuits replaced valves and made radios very cheap and portable.

Along with being portable, radio sound quality improved after the rise of FM broadcasting in the 1960s. While both FM and AM are effective ways to modulate carrier waves, FM (frequency modulation) offers better audio quality and less noise compared to AM (amplitude modulation).
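As a rough illustration of the two schemes – a toy Python sketch (it assumes NumPy, and every frequency here is an arbitrary illustrative choice, not a broadcast standard) – the same tone can be modulated both ways:

import numpy as np

fs = 48_000                            # sample rate, samples per second
t = np.arange(0, 0.02, 1 / fs)         # 20 milliseconds of time
carrier_hz = 10_000                    # carrier frequency
message = np.sin(2 * np.pi * 440 * t)  # a 440 Hz tone to transmit

# AM: the message rides on the carrier's amplitude, so amplitude noise
# picked up in transit lands directly in the recovered audio.
am = (1 + 0.5 * message) * np.cos(2 * np.pi * carrier_hz * t)

# FM: the message rides on the carrier's instantaneous frequency, so a
# receiver can clip amplitude noise away - one reason FM sounds cleaner.
deviation_hz = 1_000  # frequency swing per unit of message amplitude
phase = 2 * np.pi * (carrier_hz * t + deviation_hz * np.cumsum(message) / fs)
fm = np.cos(phase)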

Music on FM radio sounded as good as on a home stereo. Rock and roll and the revolutionary changes of the 1960s started to spread via the medium.

AM radio was reserved for talkback, news and sport.

Beeps in space

In 1957, radio experienced lift-off when the USSR launched the world’s first satellite.

Sputnik 1 didn’t do much other than broadcast a regular “beep” sound by radio.

But this still shocked the world, especially the USA, which didn’t think the USSR was so technologically advanced.

Sputnik’s beeps were propaganda heard all round the world, and they heralded the age of space exploration.

The launch of Sputnik 1 started the global space race.

Today, radio is still used to communicate with astronauts and robots in space.

Radio astronomy, which uses radio waves, has also revealed a lot about the universe to astronomers.

Digital, and beyond

Meanwhile on Earth, radio stations continue to use the internet to extend their reach beyond that of analogue technologies.

Social media helps broadcasters generate and spread content, and digital editing tools have boosted the possibilities of what can be done with podcasts and radio documentaries.




Read more:
Radio as a form of struggle: scenes from late colonial Angola


The radio industry has learnt to use digital plenitude to the max, with broadcasters building archives and producing an endless flood of material beyond what they broadcast.

This year marks a century of organised broadcast radio around the world.

Media such as movies, television, the internet and podcasts were expected to sound its death knell. But radio embraces new technology. It survives, and advances.

Peter Hoar, Senior Lecturer, School of Communications Studies, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


History repeats itself. That’s bad news for the 2020s



When there are too many elites in a society, competition for power makes existing problems worse.
Francisco Goya / Wikimedia

David Baker, Macquarie University

What will happen in the 2020s? If history is any guide (and there’s good reason to think it is), the outlook isn’t great.

Here are some big-picture predictions: stagnant real wages, faltering standard of living for the lower and middle classes, worsening wealth inequality, more riots and uprisings, ongoing political polarisation, more elites competing for limited positions of power, and elites co-opting radical movements.

Thanks to globalisation, all this won’t just happen in one country but in the majority of countries in the world. We will also see geopolitical realignment, dividing the world into new alliances and blocs.

There is also a low to moderate chance of a “trigger event” – a shock like an environmental crisis, plague, or economic meltdown – that will kick off a period of extreme violence. And there is a much lower chance we will see a technological breakthrough on par with the industrial revolution that can ease the pressure in the 2020s and reverse the trends above.

These aren’t just guesses. They are predictions made with the tools of cliodynamics, which uses dozens of case studies of civilisations over the past 5,000 years to look for mathematical patterns in human history.




Read more:
Cliodynamics: can science decode the laws of history?


Cycles of growth and decline

One area where cliodynamics has borne fruit is “demographic-structural theory”, which explains common cycles of prosperity and decline.

Here’s an example of a full cycle, taken from Roman history. After the second Punic war ended in 201 BCE, the Roman republic enjoyed a period of extreme growth and prosperity. There was a relatively small divide between the richest and poorest, and relatively few elites.

As the population grew, smallholders had to sell off their farms. Land coalesced into larger plantations run by elites mostly with slave labour. Elite numbers ballooned, wealth inequality became extreme, the common people felt pinched, and numerous wealthy people found themselves shut out of power.

The assassination of Julius Caesar was a key event in the decline of the Roman republic.
Jean-Leon Gerome

The rich resisted calls for land reform, and eventually the elites split into two factions called the Optimates and the Populares. The following century involved slave revolts and two massive civil wars.

Stability only returned when Augustus defeated all other rivals in 30 BCE – and ended the republic, making himself emperor. So began a new cycle of growth.

Booms and busts

Demographic-structural theory looks at things like the economic and political strength of the state, the ages and wages of the population, and the size and wealth of the elite to diagnose a society’s health – and work out where it’s heading.
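The “working out” is done with explicit mathematical models. The loop below is only a caricature – a toy Python sketch, not one of cliodynamics’ published models, and every coefficient in it is an arbitrary assumption – but it illustrates the kind of feedback the theory tracks:

# Caricature of a demographic-structural feedback loop (illustrative only).
population, elites, instability = 0.5, 0.05, 0.0

for year in range(201):
    wages = max(0.0, 1.0 - population)       # an oversupplied labour pool depresses wages
    population += 0.02 * population * wages  # growth slows as living standards fall
    elites += 0.05 * elites * (0.5 - wages)  # cheap labour swells and enriches the elite
    instability = 0.9 * instability + elites * (1.0 - wages)  # poor commons, rival elites
    if year % 50 == 0:
        print(f"year {year}: wages {wages:.2f}, elites {elites:.3f}, instability {instability:.2f}")

Falling wages, multiplying elites and accumulating instability feed on one another – the combination the theory reads as a warning sign.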

Historically, some things we see today are bad signs: shrinking real wages, a growing gap between the richest and the poorest, rising numbers of wealthy and influential people who are becoming more competitive and factionalised.

Another bad sign is if previous generations witnessed periods of growth and plenty. It might mean that your society is about to hit a wall – unless a great deal of innovation and good policy relieves the pressure once again.

We are living in an unprecedented period of global growth. History says it won’t last.
SRC / IGBP / F Pharand Deschenes

Since 1945, the modern global system has experienced a period of growth unprecedented in human history – a period often referred to as the “Great Acceleration”. Yet in country after country today, we see stagnant wages, rising inequality, and wealthy elites jousting for control.

Historically, periods of strain and “elite overpopulation” are followed by a crisis (environmental or economic), which is in turn followed by years of sociopolitical instability and violence.

Elite competition makes crises worse

Factional warring after a disaster in a top-heavy society makes things much worse. It can keep the population low for decades after the initial catastrophe, and may only end when elites are exhausted or killed off.

This underlying cycle fed the Wars of the Roses between the Lancastrians and Yorkists in 15th-century England, the struggle between the Optimates and Populares in the Roman Republic, and countless other conflicts in history.




Read more:
Computer simulations reveal war drove the rise of civilisations


In a period of growth and expansion, these dynastic, political and religious animosities are less pronounced – as there is more of everything to go around – but in a period of decline they become incendiary.

In different regions and time periods, the factions vary widely, but the ideological merits or faults of any particular faction have literally no bearing on the pattern.

We always massacre each other on the downward side of a cycle. Remember that fact as we embark on the pattern again in the 2020s, and you find yourself becoming blindingly angry while watching the news or reading what someone said on Twitter.

A connected world

Because the world’s societies and economies are more unified than ever before, the increasing political division we see in Australia or the United States also manifests itself around the world.

Violence between the Bharatiya Janata Party (BJP) and Trinamool Congress in Bengal, political polarisation in Brazil following the election of Jair Bolsonaro, and less public conflicts within China’s ruling party are all part of a global trend.

Trigger events

We can expect this decline to continue steadily in the next decade, unless a trigger event kicks off a crisis and a long period – perhaps decades – of extreme violence.

Here’s a dramatic historical example: in the 12th century, Europe’s population was growing and living standards were rising. The late 13th century ushered in a period of strain. Then the Great Famine of 1315–17 set off a time of strife and increasing violence. Next came an even bigger disaster, the Black Death of 1347–51.

After these two trigger events, elites fighting over the wreckage led to a century of slaughter across Europe.

From my own studies, these “depression phases” kill an average of 20% of the population. On a global scale, today, that would mean 1.6 to 1.7 billion people dead.

There is, of course, only a low to moderate probability that such a trigger event will occur in the 2020s. It may happen decades later. But the kindling for such a conflagration is already being laid.




Read more:
Big gods came after the rise of civilisations, not before, finds study using huge historical database


Technology to the rescue?

One thing that could reverse this cycle would be a major technological breakthrough. Innovation has temporarily warded off decline in the past.

In mid-11th century Europe, for example, new land-clearing and agricultural methods allowed a dramatic increase in production which led to relative prosperity and stability in the 12th century. Or in the mid-17th century, high-yield crops from the Americas raised carrying capacities in some parts of China.

In our current situation, something like nuclear fusion – which could provide abundant, cheap, clean energy – might change the situation drastically.

The probability of this occurring in the 2020s is low. Nevertheless, innovation remains our best hope, and the sooner it happens the better.

This could be a guiding policy for public and private investment in the 2020s. It is a time for generous funding, monumental projects, and bold ventures to lift humanity out of a potential abyss.

Sunlit uplands of the distant future

If you look far enough ahead, our prospects become brighter.
Shutterstock

Cheer up. All is not lost. The further we project into the future the brighter human prospects become again, as great advances in technology do occur on a long enough timescale.

Given the acceleration of the frequency of such advances over the past 5,000 years of history, we can expect something profound on the scale of the invention of agriculture or the advent of heavy industry to occur within the next 100 years.

That is why humanity’s task in the 2020s – and for much of the 21st century – is simply to survive.

David Baker, Lecturer in Big History, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Mormons and money: An unorthodox and messy history of church finances



There was something fishy about this $3 bill.
Everett Historical/Shutterstock.com

John Turner, George Mason University

The Church of Jesus Christ of Latter-day Saints has allegedly amassed US$100 billion in purportedly charitable assets since 1997 without ever giving any money away – a possible breach of federal tax laws.

This estimate of the size of its investment vehicle known as Ensign Peak Advisors became public knowledge when David A. Nielsen, a former employee and a member of the church, blew the whistle.

Together with his twin brother Lars, a former church member, Nielsen gave the Internal Revenue Service evidence he claims proves the church mishandled funds.

According to the Nielsens, Ensign Peak Advisors has invested the church’s annual surplus member contributions to build up a $100 billion portfolio. But the Nielsens say they could find no evidence that Ensign Peak Advisors spent a dime of this money for religious, charitable, educational or other “public” purposes as IRS rules require under most circumstances. They also allege that it diverted tax-exempt funds to finance some for-profit projects, which could also violate IRS rules banning such transactions in some situations.

If the IRS determines that the investment fund failed to act as a charity even though it benefited from tax breaks, it might find that Ensign Peak Advisors broke tax laws. If that happens, and the IRS collects back taxes, David Nielsen could receive a cut as a reward.

If the numbers are accurate, Ensign is the nation’s largest charitable endowment, with as much money as Harvard University and the Bill and Melinda Gates Foundation have at their disposal, combined, if not more.

Church leaders deny that they have violated any laws that regulate tax-exempt institutions. The church “complies with all applicable law governing our donations, investments, taxes and reserves,” said the three-member council headed by church president Russell M. Nelson.

From my vantage point as a historian of Mormonism, this news marks a new twist on an old story. For nearly two centuries, the church has conducted its finances in ways that defy the expectations Americans have for religious organizations.

Lars Nielsen, brother of whistleblower David Nielsen, explains how Ensign Peak Advisors allegedly operates.

A church-owned ‘anti-bank’

Consider what happened in the summer of 1837, when the fledgling church teetered on the brink of collapse.

At the time, Joseph Smith and many church members lived in Kirtland, a small town in northeastern Ohio. The Smith family had moved there in the early 1830s, seeking a safer gathering place for church members in the face of persecution in New York state.

Joseph Smith’s followers built this temple in Kirtland, Ohio before most of them moved westward.
Library of Congress

Smith and his followers began building a temple in Kirtland. The Saints dedicated their temple in 1836, but the project left Smith and others deep in debt. Like many communities in antebellum America, Mormon Kirtland was land-rich and cash-poor. A lack of hard currency hampered commerce.

Smith and his associates decided to start their own bank to solve their financial woes. The circulation of bank notes, they thought, would boost Kirtland’s economic prospects and make it easier for church leaders to satisfy their creditors.

Lots of currency

The idea of Mormon leaders printing their own money wasn’t as crazy as it sounds in 2019. The United States still lacked a uniform currency. A host of institutions of varying integrity – chartered banks, unchartered banks, other businesses and even counterfeiting rings – issued notes whose acceptance depended on the confidence of citizens who might accept or refuse them.

Mormon leaders bought engraving plates for printing bank notes and asked the Ohio state legislature to charter their bank. The Mormon proposal went nowhere in the legislature.

Joseph Smith: Latter-day Saints movement founder and, for a time, currency creator.
AP Photo/Douglas C. Pizac

At this point, church leaders took a more fateful and dubious step.

They had collected money from investors and had already begun printing notes of the “Kirtland Safety Society Bank.” Instead of shutting down the operation when the charter failed to come through, they doubled down. Worried about the legal risk of running an unchartered bank, church leaders altered the notes to read “anti-Banking-Co.”

A brief boom

For a while, all went well. “Kirtland bills are as safe as gold,” one church member wrote in January 1837. The town enjoyed a short-lived boom.

Soon, however, the anti-bank proved anything but safe. Non-Mormons questioned the society’s ability to redeem its notes, and church leaders could not keep it afloat. The Kirtland Safety Society’s struggles were not unusual. Scores of banks, including some of the nation’s largest, failed in what became the Panic of 1837. Real estate speculators lost their fortunes, and workers lost their jobs.

What made Kirtland different was the bank’s ownership. Many church members lost not only confidence in the society’s banknotes, but faith in the prophet who had signed them.

The crisis divided the church. At one point that summer, church members wielding pistols and bowie knives fought with each other in the temple. Smith and one of his top associates were convicted of issuing banknotes without a charter and fined $1,000 each. They soon fled the courts and their creditors, taking refuge with fellow church members in Missouri.

After anti-Mormon mobs forced the Latter-day Saints out of Missouri and then Illinois, Smith’s successor, Brigham Young, led thousands of church members to what became the Utah Territory.

From a railroad to a shopping mall

The church has never stopped blending commerce and religion.

In the late 1860s, Mormons built the Utah Central Railroad, which connected Salt Lake City with Ogden – a stop along the transcontinental railroad. Church leaders controlled the railway until 1878, when Union Pacific bought it.

Beginning in 1868, the church also operated the Zion’s Cooperative Mercantile Institution, a department store designed to put the squeeze on non-Mormon businesses.

The church sold the store in 1999, but in many ways its commercial interests have become more grandiose since its frontier days of railroading and retailing.

In 2003, the church’s for-profit real estate division purchased the land on which the store had stood. Nine years later, the estimated $1.5 billion City Creek Center development opened to the public, including a glitzy mall.

The Mormon Church’s commercial real estate arm built the lavish City Creek Center shopping mall in Salt Lake City.
AP Photo/Rick Bowmer

At the time, church officials asserted that they had not used any tithing money on the City Creek project. The church explains that tithing – the contribution of 10% of its 16 million members’ annual income – is for the construction and maintenance of church buildings, local congregational activities and the church’s educational programs. The church’s for-profit divisions handle commercial projects, including real estate and publishing.

The Nielsen brothers allege that Ensign Peak Advisors diverted $1.4 billion in tithing funds to pay for the development, a possible violation of the IRS rules that govern tax-exempt institutions.

It is impossible to confirm the accusation without greater transparency on the part of the church, which has told Religion Unplugged, a nonprofit media outlet, that it “does not provide information about specific transactions or financial decisions.”

According to Samuel Brunson, a tax law professor, the church was more open about its ledger sheet and business arrangements during the first half of the 20th century.

Then, in the mid- to late 1950s, it lost approximately $10 million in municipal bond investments. The resulting embarrassment was one factor in the church’s decision to become less forthcoming about its finances.

In this respect, the church is not unique. U.S. laws do not require churches to disclose their financial information in much detail. While some churches do so voluntarily, others – including the Catholic Church – keep their financial and commercial interests shrouded from public view.

Saving for a ‘rainy decade’

It remains to be seen whether Ensign Peak Advisors is going to become the subject of IRS investigations.

There are, of course, ethical and moral questions in addition to legal ones. For example, should the church amass so much money? And might the church use more of its excess funds and investment gains for humanitarian purposes or to make the tuition at church-owned Brigham Young University even more affordable?

What’s also at stake is confidence in the church’s leaders. Sen. Mitt Romney, the Republican Party’s 2012 presidential nominee and the nation’s most politically influential Mormon, professed to be “happy that they’ve not only saved for a rainy day, but for a rainy decade.”

Romney’s perspective makes some historical sense, given that the most obvious problem in Kirtland, Ohio, was that Joseph Smith’s financial stewardship was decidedly unwise. At least today’s church leaders earn good returns on their investments.


John Turner, Professor of American Religion, George Mason University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Hidden women of history: Wauba Debar, an Indigenous swimmer from Tasmania who saved her captors



Though her brave acts were acknowledged after her death, Wauba Debar’s grave was later robbed in the name of “science”.
Tirin/Wikimedia, CC BY-SA

Megan Stronach, University of Technology Sydney and Daryl Adair, University of Technology Sydney

Aboriginal and Torres Strait Islander readers are advised this article contains images and names of deceased people.

Aboriginal women and girls in lutruwita (Tasmania or Van Diemen’s Land) were superb swimmers and divers.

For eons, the palawa women of lutruwita had productive relationships with the sea and were expert hunters. Scant knowledge remains of these women, yet we can find fleeting glimpses of their aquatic skills.

Wauba Debar of Oyster Bay’s Paredarerme tribe was stolen as a teenager to become, according to Edward Cotton (a Quaker who settled on Tasmania’s East Coast), “a sealer’s slave and paramour”.

Servitude and rescue

Foreign sealers arrived on the Tasmanian coastlines in the late 18th century. The ensuing fur trade nearly destroyed the seal populations of Tasmania within two decades.

At the same time, life became extremely difficult for the female palawa population.

Slavery was still legal in the British Empire, and so the profitability of the sealing industry was underpinned by the servitude of palawa women.

Sporadic raids by sealers, known as “gin-raiding”, rendered the coastlines a place of constant danger for female palawa.

Pêche des sauvages du Cap de Diemen (Natives preparing a meal from the sea). Drawn by Jean Piron in 1817. Engraving by Jacques Louis Copia.
National Library of Australia



Read more:
Noted works: The Black War


Little is known of Wauba Debar other than tales of a daring rescue at sea. Though variations to her story can be found, it most frequently details her long swim and lifesaving efforts in stormy conditions. As one version tells it:

The boat went under; the two men were poor swimmers, and looked set to drown beneath the mountainous grey waves. Wauba could have left them to drown, and swim ashore on her own. But she didn’t.

First, she pulled her husband under her arm — the man who had first captured her — and dragged him back to shore, more than a kilometre away. Wauba next swam back out to the other man, and brought him in as well. The two sealers coughed and spluttered on the Bicheno beach, but they did not die. Wauba had saved them.

Death at sea

Sadly, no one was there to rescue Wauba when she needed it. Her demise, during a sealing trip, was at the hands of Europeans.

According to a sailor’s account to Cotton, Wauba was one of the “gins” captured to take along on a whaleboat sailing from Hobart to the Straits Islands (Furneaux Group) as “expert hunters, fishers, and divers, as in most barbarous tribes, the slaves of the men”.

The sailing party camped at Wineglass Bay but woke to find the women and dogs had vanished. A group set off to pursue those who’d taken them. In his 1893 account, Cotton speculated in The Mercury newspaper on the likely cause of her death:

Wauba Debar had, I suppose, been captured in like manner … and possibly died of injuries sustained in the capture, which no doubt was not done very tenderly.

The crew interred Wauba at Bicheno, and marked her grave by a slab of wood with details inscribed.

Accounts differ as to when this actually took place. In 1893, elderly Bicheno residents said Wauba was buried 10 years before the date on the headstone, placing her death around 1822.

However, in his diary entry on 24 January 1816, Captain James Kelly described how he hauled up in Waub’s Boat Harbour due to the heavy afternoon swell. Considering the area was already named after her, it can be concluded that Wauba was likely buried before 1816.

Cotton’s report imagined her burial:

Wauba Debar did not live to be a mother of the tribe of half-bred sealers of the Straits, which became a sort of city of refuge for bushrangers in aftertime … But she, poor soul, was buried decently, perchance reverently, and I suppose other of the captured sisters would be present by the graveside on the shores of that silent nook near the beached boat.

Here lies Wauba

Wauba’s reputation was such that in 1855 the public of Bicheno decided to commemorate her by erecting a railing, headstone, and footstone (paid for by public subscription) at her grave, with “Waub” carved into it.

John Allen, who had been granted land nearby, donated ten shillings towards the cost of the gravestone – notwithstanding his involvement in a massacre at Milton Farm, Great Swanport, 30 years earlier.

The inscription reads:

Here lies Wauba Debar. A Female Aborigine of Van Diemen’s Land. Died June 1832. Aged 40 Years. This Stone is Erected by a few of her white friends.

Whether prompted by a sense of loss, guilt, or admiration, the community memorialised Wauba, and by extension, the original inhabitants of the land.

Yet by the late 1800s, European demand for Aboriginal physical remains for “scientific investigation” was high. In 1893, the Tasmanian Museum and Art Gallery was determined to procure the remains of Wauba.

Waub’s Bay, Bicheno, is named after Wauba Debar.
Shutterstock

The prevailing ethnological theories held that the study of Australian Aboriginal people, and particularly Indigenous Tasmanians, would reveal much about the earliest stages of human development and its progress.

Wauba’s remains were exhumed, put into a box labelled “Native Currants”, and dispatched to Hobart.

The locals were outraged. An editorial in the Tasmanian Mail newspaper condemned the act as “a pure case of body snatching for the purposes of gain, and nothing else” that “the name of Science is outraged at being connected with”.

Snowdrops bloom

Wauba’s memorial is the only known gravestone erected to a Tasmanian Aboriginal person during the 19th century, and she is the only palawa woman known to have been buried and commemorated by non-Indigenous locals.

In 2014, Olympic swimmer and Bicheno resident Shane Gould dedicated a fundraising swim to Wauba Debar’s swimming abilities and memory.

The European-styled memorial serves as a reminder of the more turbulent interactions between the two peoples that shaped Tasmania’s history from the 1800s onwards.

Wauba’s empty grave is Tasmania’s smallest State Reserve. Her remains were returned to the Tasmanian Indigenous community in 1985. Snowdrops are said to bloom around the grave every spring.

Megan Stronach, Post Doctoral Research Fellow, University of Technology Sydney and Daryl Adair, Associate Professor of Sport Management, University of Technology Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Knights Templar: still loved by conspiracy theorists 900 years on


Conny Skogberg via Shutterstock

Patrick Masters, University of Portsmouth

On Christmas Day, 1119, the king of Jerusalem, Baldwin II, persuaded a group of French knights led by Hugh de Payens to save their souls by protecting pilgrims travelling the Holy Land. And so the Order of the Knights Templar was formed.

This revolutionary order of knights lived as monks and took vows of poverty and chastity, but these were monks with a difference – they would take up arms as knights to protect the civilians using the dangerous roads of the newly conquered Kingdom of Jerusalem. From these humble beginnings, the order would grow to become one of the premier Christian military forces of the Crusades.

Over the next 900 years, these warrior monks would become associated with the Holy Grail, the Freemasons and the occult. But are any of these associations true, or are they just baseless myth?

The Crusades ended in 1291 after the Christian capital of Acre fell to the Mameluke forces of Egypt and the Templars found themselves redundant. Despite their wealth and European holdings, their reason for existence had been to wage war in defence of the Holy Land.

But the French king Philip IV was in debt to the Templar order and, with the Holy Land lost, he capitalised on their vulnerability and had the Templars arrested in France on Friday October 13, 1307, in a dawn raid on their Paris Temple and residences. In 1312, the order was abolished by papal decree and in 1314 the last grand master, Jacques de Molay, was burned at the stake in Paris with three other Templars. With the order destroyed, any surviving former members joined other orders or monasteries.

Execution of Jacques de Molay in Paris, March 1314.
Giovanni Villani, Nuova Cronica – ms. Chigiano L VIII 296 – Biblioteca Vaticana

Despite the arrests and the charges of heresy laid against the order, a document known as the Chinon Parchment, discovered in the Vatican’s archives in 2001, shows that the Templars were in fact exonerated by the Catholic Church in 1312. But, despite clearing them of heresy, Pope Clement V still ordered that they be disbanded.

Appropriation of a legend

The suppression of the Templars meant that there was nobody left to safeguard their legacy. Since then, the order has been appropriated by other organisations – most notably by 18th-century Freemasons, who claimed the Templars as ancestors, and, more recently, by right-wing extremist groups such as the Knights Templar-UK and the mass-murdering terrorist Anders Behring Breivik.

Read more:
Alt-right claims to march in step with the Knights Templar – this is fake history

The Knights Templar’s association with Freemasonry is not so much a myth as a marketing campaign by 18th-century Freemasons to appeal to the aristocracy. As historian Frank Sanello explained in his 2003 book, The Knights Templars: God’s Warriors, the Devil’s Bankers, it was Andrew Ramsey, a senior French Freemason of the era, who first made the link between the Freemasons and the Crusader knights.

But he originally claimed the Freemasons were descended from the crusading Order of the Knights Hospitaller. Of course, the Hospitallers were still operational, unlike the Knights Templar, so Ramsey quickly revised his claim, making the Templars the Freemasons’ crusading ancestors.

The Knights Templar had actually been mythologised in popular culture as early as the 13th century, in the Grail epic Parzival by the German knight and poet Wolfram von Eschenbach, which casts them as guardians of the Grail. After the order’s sudden fall, these warrior monks became associated with conspiracies and the occult.

For some, mystery still surrounds the fate of the Templar fortune (in reality seized by Philip IV, with the majority of their property redistributed to the Hospitallers) and the Templar confessions (extracted under torture) to worshipping an idol dubbed Baphomet. The link between the Templars and the occult resurfaced in the 16th century in Henry Agrippa’s book De Occulta Philosophia.

Modern-day myth

Modern fiction continues to draw on these widespread mysteries and fanciful theories. The mythical associations are key themes in many popular works of fiction, such as Dan Brown’s The Da Vinci Code, in which the Templars guard the Grail. The Templar myth has also found its way into video games through the globally successful Assassin’s Creed franchise, in which the player must assassinate villainous Templars.

Nine centuries after they were formed, the Templars remain the most iconic and infamous order of knights from the Crusades. The Templar legacy has grown beyond their medieval military role and the name has become synonymous with the occult, conspiracies, the Holy Grail and the Freemasons. But these are all false narratives – fantastical, but misleading.

The real legacy of the Templars remains with the Portuguese order of knights, the Ordem dos Cavaleiros de Nosso Senhor Jesus Cristo (Order of the Knights of Our Lord Jesus Christ). This order was created by King Diniz in 1319 with papal permission, owing to the prominent role the Templars had played in establishing the kingdom of Portugal. The new knighthood even moved into the Templars’ former headquarters at Tomar.

For historian Michael Haag, this new order “was the Templars under another name” – but it pledged obedience to the king of Portugal rather than, like its Templar predecessors, to the pope.

And so the essence of the Templars’ successors still exists today as a Portuguese order of merit for outstanding service – and the Templar myth continues to provide a rich source of inspiration for artistic endeavours.

Patrick Masters, Lecturer in Film Studies, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Vikings didn’t just murder monks and pillage monasteries – they helped spread Christianity too


Christians?
Shutterstock

Caitlin Ellis, University of Oxford

Vikings are often seen as heathen marauders mercilessly targeting Christian churches and killing defenceless monks. But this is only part of their story. The Vikings played a key role in spreading Christianity, too.

Norse mythology has long captured the popular imagination and many today have heard stories about the pagan gods, particularly Odin, Thor and Loki, recently reimagined in Marvel’s comic books and movies. Some now even follow reconstructed versions of these beliefs, known as Ásatrú (the religion of the Aesir).

Our main source for this mythology, the Prose Edda, was written by a 13th-century Christian, the Icelandic politician Snorri Sturluson. Scandinavia converted to Christianity later than many parts of Europe, but this process is still an important part of the Vikings’ real story. Indeed, there are fascinating works of Norse literature with a Christian theme, including sagas of bishops and saints.

It would be wrong to minimise Viking violence, but raiding – hit and run attacks for plunder – in the medieval period was not confined to these Scandinavian seafarers. The Irish annals, such as the Annals of Ulster, record far more attacks by Irishmen on other Irishmen, including the raiding and burning of churches, than attacks by Scandinavians.

An ideological clash is one suggested cause of the “Viking Age”. This line of thinking suggests that pagan Scandinavians sought to avenge Christian attacks, such as the Frankish emperor Charlemagne’s invasion of Saxony from 772AD to 804AD. This 30-year conflict involved forced mass baptism, the death penalty for “heathen practices” and included the execution of 4,500 Saxon rebels at Verden in 782AD.

It seems more likely, however, that Christian monasteries were initially targeted because they were poorly defended and contained portable wealth in the form of metalwork and people. Settling in richer Christian lands also offered better prospects for some than remaining in resource-poor Scandinavia.

The rise of Christianity

The conversion of Scandinavia was gradual, with Christian missionaries preaching there intermittently from the eighth century onwards. While there was some resistance, Christianity and Norse paganism were not always fundamentally opposed. A 10th-century soapstone mould from Trendgården in Jutland, Denmark, allowed metal Thor’s hammer amulets to be cast alongside crosses. The same craftsman clearly catered for both pagans and Christians.

The first Scandinavian king to be converted was the Danish exile Harald Klak. He was baptised in 826AD with the Carolingian emperor Louis the Pious as his sponsor, in exchange for imperial support for an (albeit unsuccessful) attempt to regain his throne.

Guthrum, a king from the Viking Great Army which attacked England in the ninth century, was also baptised as part of his agreement following defeat by the West Saxon king Alfred “the Great” in 878AD. Indeed, coming into contact with Christian kingdoms which were more politically centralised arguably led to greater unification of the Scandinavian realms.

Read more:
Viking homes were stranger than fiction: portals to the dead, magical artefacts and ‘slaves’

One of the most significant turning points in the Christianisation of Scandinavia was the conversion of the Danish king Harald Bluetooth in the 960s. Bluetooth technology is named after Harald because he united disparate parts of Denmark, while the technology unites communication devices.

Harald proudly proclaimed on the now iconic Jelling stone, an impressive monument with a runic inscription, that he “made the Danes Christian”. And this connection between kingship and Christianity continued.

Norway was converted largely due to two of its kings: Olaf Tryggvason and Olaf Haraldsson. The latter was canonised shortly after his death in battle in 1030AD, becoming Scandinavia’s first native saint.

Future Norwegian kings benefited from their association with Olaf Haraldsson, who became Norway’s patron saint. Other royal Scandinavian saints would follow, notably Erik of Sweden and Knud the Holy of Denmark. The Norse earldom of Orkney also produced a martyr from its ruling family: St Magnus, who was killed in around 1116 in a dynastic squabble.

The 2018 Danish Eurovision entry (Rasmussen’s song Higher Ground) portrays Magnus as a pacifist Viking who refuses to fight. Saga sources do suggest that Magnus refused on one occasion to raid with the Norwegian king and fled from the fleet, but his career was not without violence.

Scandinavians who settled abroad in Christian lands were also converted to the dominant religion. While Scandinavian settlers initially buried their dead in traditional pagan ways, they soon adopted the customs of those living around them. And their settlements became part of the political and cultural makeup of their host societies.

Read more:
Viking migration left a lasting legacy on Ireland’s population

Some of the most celebrated pieces of medieval Irish ecclesiastical art were likely made by Hiberno-Scandinavian craftsmen from Viking-founded towns like Dublin. These objects also feature stylistic elements which had spread from the Scandinavian homelands.

For example, the 11th-century Clonmacnoise crozier is decorated in the Scandinavian art style of Ringerike, with snake-like animals in figure-of-eight patterns. Clonmacnoise in County Offaly, associated with the sixth-century St Ciaran, is one of Ireland’s oldest and most important ecclesiastical sites. And the ancestors of these craftsmen might have been the very raiders who had attacked Irish churches.

Soldiers of God

Even Scandinavian settlers in the remote islands of the North Atlantic joined the European mainstream with some enthusiasm. Partly due to pressure from Norway, Iceland officially converted to Christianity in the year 1000. Following consultation at the national assembly (the Alþing), it was decided that the country would convert but that some pagan practices would still be tolerated.

The settlements on Greenland eventually failed in the 14th and 15th centuries, but even when the inhabitants were starving they still devoted precious resources to importing luxury goods for the church, including wine and vestments.

Scandinavians also joined the Crusades; now they were the Christians attacking the so-called heathens. The Norwegian king Sigurd “Jerusalem-farer” – named for his visit to the Holy Land – was, in fact, the first European king to participate in the Crusades personally, making a journey from 1108 to 1111, a short while after the First Crusade culminated in the Christian reoccupation of Jerusalem in 1099.

Crusading was, after all, not so different from Viking raiding, but this time the killing and looting had Christian backing. Instead of an afterlife of feasting in Valhalla as a reward for dying in battle, those who died on Crusade would go straight to Heaven.

Indeed, the Viking world was as much populated by missionary kings, bishops and saints as it was by raiders, gods and giants.

Caitlin Ellis, Stipendiary Lecturer in Medieval History, University of Oxford

This article is republished from The Conversation under a Creative Commons license. Read the original article.

