Tag Archives: history

The History of Modern Yemen



History of the Germans



History of Scotland



History of Scandinavia



WD-40


The link below is to an article that takes a look at the history of WD-40.

For more visit:
https://www.lifehacker.com.au/2020/02/the-history-of-wd-40-is-stranger-than-you-think/


A brief history of black names, from Perlie to Latasha



Black names have changed over the centuries.
fizkes/Shutterstock.com

Trevon Logan, The Ohio State University

Most people recognize that there are first names given almost exclusively by black Americans to their children, such as Jamal and Latasha.

While these names have long been fodder for comedians and social commentary, many have assumed that distinctively black names are a modern phenomenon. My research shows that’s not true.

Long before there were Jamal and Latasha, there were Booker and Perlie. The names have changed, but my colleagues and I traced the use of distinctive black names to the earliest history of the United States.

As scholars of history, demographics and economics, we found that there is nothing new about black names.

A 2012 ‘Key & Peele’ sketch poked fun at historically black names.

Black names aren’t new

Many scholars believe that distinctively black names emerged from the civil rights movement, perhaps attributable to the Black Power movement and the later black cultural movement of the 1990s as a way to affirm and embrace black culture. Before this time, the argument goes, blacks and whites had similar naming patterns.

Historical evidence does not support this belief.

Until a few years ago, the story of black names depended almost exclusively on data from the 1960s onward. New data, such as digitized census records and newly available birth and death records from historical periods, allow us to analyze the history of black names in more detail.

We used federal census records and death certificates from the late 1800s in Illinois, Alabama and North Carolina to see if there were names that were held almost exclusively by blacks and not whites in the past. We found that there were indeed.

For example, in the 1920 census, 99% of all men with the first name of Booker were black, as were 80% of all men named Perlie or its variations. We found that the fraction of blacks holding a distinctively black name in the early 1900s is comparable to the fraction holding a distinctively black name at the end of the 20th century, around 3%.

What were the black names back then?

We were interested to learn that the black names of the late 1800s and early 1900s are not the same black names that we recognize today.

The historical names that stand out are largely biblical such as Elijah, Isaac, Isaiah, Moses and Abraham, and names that seem to designate empowerment such as Prince, King and Freeman.

These names are quite different from black names today such as Tyrone, Darnell and Kareem, which grew in popularity during the civil rights movement.

Once we knew black names were used long before the civil rights era, we wondered how black names emerged and what they represented. To find out, we turned to the antebellum era – the time before the Civil War – to see if the historical black names existed before the emancipation of slaves.

Since the census didn’t record the names of enslaved Africans, we searched records of names from slave markets and ship manifests.

Using these new data sources, we found that names like Alonzo, Israel, Presley and Titus were popular both before and after emancipation among blacks. We also found that roughly 3% of black Americans had black names in the antebellum period – about the same percentage as did in the period after the Civil War.

But what was most striking is the trend over time during enslavement. We found that the share of black Americans with black names increased over the antebellum era while the share of white Americans with these same names declined, from more than 3% at the time of the American Revolution to less than 1% by 1860.

By the eve of the Civil War, the racial naming pattern we found for the late 1800s was an entrenched feature in the U.S.

Company E of the Fourth U.S. Colored Infantry during the Civil War.
Everett Historical/Shutterstock.com

Why is this important?

Black names tell us something about the development of black culture, and the steps whites were taking to distance themselves from it.

Scholars of African American cultural history, such as Lawrence W. Levine, Herbert Gutman and Ralph Ellison, have long held that the development of African American culture involves both family and social ties among people from various ethnic groups in the African diaspora.

In other words, people from various parts of Africa came together to form black culture as we recognize it today. One way of passing that culture on is through given names, since surnames were stolen during enslavement.

How this culture developed and persisted in a chattel slavery system is a unique historical development. As enslavement continued through the 1800s, African American culture included naming practices that were national in scope by the time of emancipation, and intimately related to the slave trade.

Since none of these black names are of African origin, they are a distinct African American cultural practice which began during enslavement in the U.S.

As the country continues to grapple with the wide-ranging effects of enslavement in the nation’s history, we cannot – and should not – forget that enslavement played a critical role in the development of black culture as we understand it today.


Trevon Logan, Hazel C. Youngberg Distinguished Professor of Economics, The Ohio State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Tennis: a smashing history of how rackets shaped the game



Shutterstock/nd3000

Thomas Allen, Manchester Metropolitan University

The start of the Australian Open, the first tennis grand slam of the year, signals detailed discussions of metrics such as points won, serve speeds and shot placement. While many of these performance metrics can, of course, be attributed to the player, we should also consider the important role played by the racket.

Tennis is an old sport with a rich history of technological development in equipment. Wimbledon, the oldest tennis tournament, was founded in 1877, and the first Australian Open was held in 1905. Through the application of advanced engineering, the tennis racket has changed considerably since these early competitions, as detailed in a recent research article and summarised in the video below.

Photos of tennis rackets through time.

Early tennis rackets borrowed their design from the older sport of real tennis, an early racket sport dating back to around the 16th century and played by the rich and elite. They were made of wood, with long handles and small lopsided heads, which made it easier for the player to bring the hitting surface close to the ground to hit the typically low bouncing balls of real tennis. These soon disappeared as tennis developed as a sport in its own right. Symmetrical racket frames were becoming commonplace by the time of the first Australian Open.

1870s lopsided racket.
Image provided by author

Most manufacturers continued to make their rackets from wood until the 1960s, with few other design developments seen. Some early tennis racket manufacturers did produce metal frames to try and overcome the issue of wood warping due to humidity, but these were unsuccessful.

Not only does metal offer less damping than wood, meaning the player feels harsher vibrations if they mishit the ball, but the metal frame often damaged the natural gut strings at the point of contact. The Dayton Steel Racket Corporation attempted the use of more durable metal strings, but these affected the felt cover on the ball and were prone to rusting.

A technology boom

The start of the open era in 1968, when professionals and amateurs began competing together for cash prizes, was probably a key driver behind the rapid development of tennis rackets seen around this period. During the 1960s wooden rackets were still the most common, but fibre-reinforced composite materials such as fibreglass started to appear as a reinforcement on wooden frames, like the Challenge Power by Slazenger and the Kramer Cup by Wilson.

By the 1970s, racket engineers were experimenting with a range of materials, such as wood, fibre-reinforced composites, aluminium and steel. A key racket from this period was the Classic by Prince, based on a 1976 patent from Howard Head. The Classic was made of aluminium, which allowed for a much larger head than its wooden predecessors and made it easier to hit the ball. Plastic grommets were used to overcome the issue of string (now synthetic) damage experienced with earlier metal rackets.

Classic by Prince.
Image provided by author

The Classic set the foundations for the modern tennis racket, with most of its successors featuring large heads. Indeed, the International Tennis Federation began limiting racket size in 1981, so technological developments would not change the nature of the game.

Since the 1980s, high-end tennis rackets have been made from fibre-reinforced composite materials, such as fibreglass, carbon fibre and aramid (strong synthetic fibres). The advantage of these composite materials over wood and metal is their high stiffness and low density, combined with manufacturing versatility. Composites provide the racket engineer with more freedom over parameters such as the shape, mass distribution and stiffness of the racket, as they can control the placement of different materials around the frame.

While wooden rackets had small, solid cross sections, composite rackets have large, hollow cross sections to give high stiffness and low mass. The increased design freedom offered by composites was demonstrated with the introduction of “widebody” rackets, such as the Profile by Wilson, in the late 1980s. Widebody rackets have larger cross sections around the centre of the frame than the handle and tip, to give higher stiffness in the region of maximum bending.

Player-racket interaction

The higher stiffness of composite rackets means that they lose less energy to vibrations upon impact, so the player can hit the ball faster. However, there may be an increased risk of overuse injury to the arm when using a high-stiffness racket with a large head. A lightweight modern racket with a lower swingweight (moment of inertia about the handle) is also easier to wield, and players tend to swing lighter rackets faster during strokes.

Despite the higher swing speed achieved with a lighter racket, ball speeds tend to remain similar as the increased racket speed is counteracted by the reduction in striking mass. There is most likely an optimum racket for each player, rather than a one-size-fits-all solution, and player preference is an important consideration. Customisation techniques and player monitoring using sensor and camera systems are likely to play an important role in the future of tennis racket design.

Modern composite tennis rackets are made using labour-intensive processes that are not very environmentally friendly. We may see racket manufacturers exploring more sustainable materials, such as recycled and natural fibre composites, and more automated manufacturing techniques like additive manufacturing. We might also monitor how a player swings a racket using a sensor, and then manufacture a customised racket optimised to their playing style.

The development of the tennis on display at the Australian Open has been bound up with the evolving design of the racket. Researchers have calculated that a player could serve the ball around 17.5% faster using a modern racket than with those used by the first players in the 1870s. No doubt we will see further advances in racket design shape the sport into the future.

Thomas Allen, Senior Lecturer, Department of Engineering, Manchester Metropolitan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The erotic theatre of the pool edge: a short history of female swimwear



Women at Brisbane’s Oasis Swimming Pool, January 1950.
Brisbane City Council, CC BY-NC

Lydia Edwards, Edith Cowan University

Human beings have a surprisingly long relationship with the concept of swimwear. After all, the first heated swimming pool is believed to have been built by Gaius Maecenas of Rome in the 1st century BC.

Before the early 1800s, it was relatively common to swim either nude or simply in your underwear. When communal swimming baths became more popular and prevalent in the mid-19th century, decorum demanded men and women cover their modesty with garments made specially for the purpose. Women covered up with cotton or wool bathing dresses, drawers, and sometimes even stockings.

A women’s swimsuit from the 1870s.
The Metropolitan Museum of Art

Though they seem ungainly today, these impractical garments must have been liberating for women used to corsets and long, hampering skirts worn over multiple petticoats. By their very nature these “swimming suits” also threatened entrenched ideas around feminine activity (or lack thereof), perhaps suggesting women who swam energetically could no longer be considered “the weaker sex”.

Nonetheless, modesty presided above all else during this period, and it wasn’t until women began to swim competitively that change began.

A scandalous arrest

Water, particularly the beach, has been described by fashion scholars Harold Koda and Richard Martin as the “great proscenium of twentieth-century dress” – a statement that encourages us to rethink the importance of swimwear in our everyday dress and lifestyles.

In 1907, Australian swimmer Annette Kellerman was arrested on Revere Beach, Massachusetts, for wearing a one-piece bathing suit in public. This garment was a sporting necessity, and fellow athletes successfully championed a skirtless, sleeveless one-piece for the 1912 Olympics.

Annette Kellerman demonstrating her diving skills at Adelaide’s Glenelg baths, 1905.
State Library of South Australia

Kellerman’s incredible figure was admired as much as her actions were berated, and she was known to strip down to her bathing costume in all-female public lectures, proving a healthy lifestyle (rather than a corset) was to thank for her silhouette.

“If more girls would swim and dance and care for athletics”, she commented in 1910, “instead of rushing into matrimony as the only joy in the world, there’d be fewer divorces”.

The new one-piece contributed hugely to what has been described as the “erotic theatre” of the pool edge: swimwear is an item of both form and function, and so the pool or sea is an acceptable space to bare all.

Itsy Bitsy Teenie Weenie

The introduction of elastic yarn in the 1930s created a fabric that clung to the body and enabled risqué designs.

The influence of the Hollywood starlet, lying immaculate (and dry) by a sparkling pool, sowed the seed of the idea that swimwear need have nothing to do with exercise. It could instead suggest leisure and luxury: the embodiment of a society now used to annual holidays.

The 1940s introduced what we now recognise as the bikini, and the 50s saw iconic portrayals of swimsuits worn by the likes of Esther Williams and Marilyn Monroe.

A young Marilyn Monroe.
Wikimedia Commons

The swinging 60s opened with Brian Hyland’s Itsy Bitsy Teenie Weenie Yellow Polka Dot Bikini, and further promotion through the Bond franchise firmly cemented the bikini’s erotic prowess.

Soon, swimwear’s eroticism was being used by some to promote ideals of gender equality and acceptance.

In 1964, Austrian-American designer Rudi Gernreich introduced his notorious “Monokini”, a bathing suit featuring two skinny straps just grazing the breasts.

Gernreich hoped the suit would challenge existing prudishness and shame around the nude female body. His plan backfired. From its birth, the press described the monokini as controversial – and, although it sold well, it never became conventional swimwear.

The 1970s and 80s welcomed fashionable suits and bikinis with less internal structuring, fitting the silhouette of the decade. Fashionable first and practical second, they could still withstand a certain amount of sun, sand and chlorine.

A protest symbol

Swimming, fashion, and baring all are not mutually exclusive.

“Rashies” or “rash guards” (so-called because they protect the wearer from rashes and sunburn) are long-sleeved shirts that originated as surfwear. In countries like Australia, with a prominent beach culture and harsh weather, the garment has grown in popularity.

In 2004, Australian designer Aheda Zanetti, inspired by the increasing presence of Muslim women in Australian sports (especially swimming), created the “burkini”. Acting as a kind of lightweight wetsuit, the garment covers the entire body and comes in a variety of styles and colours.

The burkini in action.
Aheda Zanetti

The style came under intense scrutiny in 2016 when several French municipalities banned the burkini in line with the country’s secular laws (France had banned the wearing of the burqa and niqab in public in 2011).

It doesn’t seem to matter whether women’s swimsuits bare all or cover all: those wearing them will still be judged. But much as the shift from bulky dresses to lean one-pieces opened up new opportunities for women in the water, this latest suit also makes the beach lifestyle more accessible, with wearers remaining both cool and UV-protected.

With our “house on fire”, as Greta Thunberg eloquently put it, we may be seeing more swimsuit innovation heading our way as a matter of necessity.

Lydia Edwards, Fashion historian, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


History of the two-day weekend offers lessons for today’s calls for a four-day week



The leisure industry led one of many campaigns to free people from working on Saturday afternoons.

Brad Beaven, University of Portsmouth

The idea of reducing the working week from an average of five days to four is gaining traction around the world. Businesses and politicians have been considering a switch to fewer, but more productive hours spent working. But the idea has also been derided.

As a historian of leisure, it strikes me that there are a number of parallels between debates today and those that took place in the 19th century when the weekend as we now know it was first introduced. Having Saturdays as well as Sundays off work is actually a relatively modern phenomenon.

Throughout the 19th century, government legislation reduced working hours in factories and prescribed regular breaks. But the weekend did not simply arise from government legislation – it was shaped by a combination of campaigns. Some were led by half-day holiday movements, others by trade unions, commercial leisure companies and employers themselves. The formation of the weekend in Britain was a piecemeal and uneven affair that had to overcome unofficial popular traditions that punctuated the working week during the 19th century.

‘Saint Monday’

For much of the 19th century, for example, skilled artisan workers adopted their own work rhythms as they often hired workshop space and were responsible for producing items for their buyer on a weekly basis. This gave rise to the practice of “Saint Monday”. While Saint Monday mimicked the religious Saint Day holidays, it was in fact an entirely secular practice, instigated by workers to provide an extended break in the working week.

They worked intensively from Tuesday to finish products by Saturday night, so they could enjoy Sunday as a legitimate holiday, and also took Monday off to recover from the weekend’s excesses. By the mid-19th century, Saint Monday was a popular institution in British society. So much so that commercial leisure – like music halls, theatres and singing saloons – staged events on this unofficial holiday.

The Victorian period spawned a number of music halls, such as Canterbury Hall in London.
People Play

Workers in the early factory system also adopted the tradition of Saint Monday, despite manufacturers consistently opposing the practice, as it hurt productivity. But workers had a religious devotion to the unofficial holiday, which made it difficult for masters to break the habit. It continued to thrive into the 1870s and 1880s.

Nonetheless, religious bodies and trade unions were keen to instil a more formal holiday in the working week. Religious bodies argued that a break on Saturday would improve working class “mental and moral culture”. For example, in 1862 Reverend George Heaviside captured the optimistic tone of many religious leaders when, writing in the Coventry Herald newspaper, he claimed a weekend would allow for a refreshed workforce and greater attendance at church on Sundays.

Trade unions, meanwhile, wanted to secure a more formalised break in the working week that did not rely on custom. Indeed, the creation of the weekend is still cited as a proud achievement in trade union history.

In 1842 a campaign group called the Early Closing Association was formed. It lobbied government to keep Saturday afternoon free for worker leisure in return for a full day’s work on Monday. The association established branches in key manufacturing towns and its membership was drawn from local civic elites, manufacturers and the clergy. Employers were encouraged to establish half-day Saturdays as the Early Closing Association argued it would foster a sober and industrious workforce.

Half-day Saturdays were seen as a way to improve productivity.
Shutterstock

Trade unions and workers’ temperance groups also saw the half-day Saturday as a vehicle to advance working-class respectability. It was hoped workers would shun drunkenness and brutal sports like cock fighting, which had traditionally been associated with Saint Monday.

For these campaigners, Saturday afternoon was singled out as the day on which the working classes could enjoy “rational recreation”, a form of leisure designed to draw the worker from the public house and into elevating and educational pursuits. For example, in Birmingham during the 1850s, the association wrote in the Daily News newspaper that Saturday afternoons would benefit men and women who could:

Take a trip into the country, or those who take delight in gardening, or any other pursuit which requires daylight, could usefully employ their half Saturday, instead of working on the Sabbath; or they could employ their time in mental or physical improvements.

Business opportunity

Across the country a burgeoning leisure industry saw the new half-day Saturday as a business opportunity. Train operators embraced the idea, charging reduced fares for day-trippers to the countryside on Saturday afternoons. With increasing numbers of employers adopting the half-day Saturday, theatres and music halls also switched their star entertainment from a Monday to Saturday afternoon.

Perhaps the most influential leisure activity to help forge the modern week was the decision to stage football matches on Saturday afternoon. The “Football Craze”, as it was called, took off in the 1890s, just as the new working week was beginning to take shape. So Saturday afternoons became a very attractive holiday for workers, as it facilitated cheap excursions and new exciting forms of leisure.

The well-attended 1901 FA Cup final.
Wikimedia Commons

The adoption of the modern weekend was neither swift nor uniform as, ultimately, the decision for a factory to adopt the half-day Saturday rested with the manufacturer. Campaigns for an established weekend had begun in the 1840s but it did not gain widespread adoption for another 50 years.

By the end of the 19th century, there was an irresistible pull towards marking out Saturday afternoon and Sunday as the weekend. While they had their different reasons, employers, religious groups, commercial leisure and workers all came to see Saturday afternoon as an advantageous break in the working week.

This laid the groundwork for the full 48-hour weekend as we now know it – although this was only established in the 1930s. Once again, it was embraced by employers who found that the full Saturday and Sunday break reduced absenteeism and improved efficiency.

Brad Beaven, Professor of Social and Cultural History, University of Portsmouth

This article is republished from The Conversation under a Creative Commons license. Read the original article.


History repeats itself. That’s bad news for the 2020s



When there are too many elites in a society, competition for power makes existing problems worse.
Francisco Goya / Wikimedia

David Baker, Macquarie University

What will happen in the 2020s? If history is any guide (and there’s good reason to think it is), the outlook isn’t great.

Here are some big-picture predictions: stagnant real wages, faltering standard of living for the lower and middle classes, worsening wealth inequality, more riots and uprisings, ongoing political polarisation, more elites competing for limited positions of power, and elites co-opting radical movements.

Thanks to globalisation, all this won’t just happen in one country but in the majority of countries in the world. We will also see geopolitical realignment, dividing the world into new alliances and blocs.

There is also a low to moderate chance of a “trigger event” – a shock like an environmental crisis, plague, or economic meltdown – that will kick off a period of extreme violence. And there is a much lower chance we will see a technological breakthrough on par with the industrial revolution that can ease the pressure in the 2020s and reverse the trends above.

These aren’t just guesses. They are predictions made with the tools of cliodynamics, which uses dozens of case studies of civilisations over the past 5,000 years to look for mathematical patterns in human history.




Read more:
Cliodynamics: can science decode the laws of history?


Cycles of growth and decline

One area where cliodynamics has borne fruit is “demographic-structural theory”, which explains common cycles of prosperity and decline.

Here’s an example of a full cycle, taken from Roman history. After the Second Punic War ended in 201 BCE, the Roman republic enjoyed a period of extreme growth and prosperity. There was a relatively small divide between the richest and poorest, and relatively few elites.

As the population grew, smallholders had to sell off their farms. Land coalesced into larger plantations run by elites mostly with slave labour. Elite numbers ballooned, wealth inequality became extreme, the common people felt pinched, and numerous wealthy people found themselves shut out of power.

The assassination of Julius Caesar was a key event in the decline of the Roman republic.
Jean-Leon Gerome

The rich resisted calls for land reform, and eventually the elites split into two factions called the Optimates and the Populares. The following century involved slave revolts and two massive civil wars.

Stability only returned when Augustus defeated all other rivals in 30 BCE – and ended the republic, making himself emperor. So began a new cycle of growth.

Booms and busts

Demographic-structural theory looks at things like the economic and political strength of the state, the ages and wages of the population, and the size and wealth of the elite to diagnose a society’s health – and work out where it’s heading.

Historically, some things we see today are bad signs: shrinking real wages, a growing gap between the richest and the poorest, rising numbers of wealthy and influential people who are becoming more competitive and factionalised.

Another bad sign is if previous generations witnessed periods of growth and plenty. It might mean that your society is about to hit a wall – unless a great deal of innovation and good policy relieves the pressure once again.

We are living in an unprecedented period of global growth. History says it won’t last.
SRC / IGBP / F Pharand Deschenes

Since 1945, the modern global system has experienced a period of growth unprecedented in human history, often referred to as the “Great Acceleration”. Yet in country after country today, we see stagnant wages, rising inequality, and wealthy elites jousting for control.

Historically, periods of strain and “elite overpopulation” are followed by a crisis (environmental or economic), which is in turn followed by years of sociopolitical instability and violence.

Elite competition makes crises worse

Factional warring after a disaster in a top-heavy society makes things much worse. It can keep the population low for decades after the initial catastrophe, and may only end when elites are exhausted or killed off.

This underlying cycle fed the Wars of the Roses between the Lancastrians and Yorkists in 15th century England, the struggle between the Optimates and Populares in the Roman Republic, and countless other conflicts in history.




Read more:
Computer simulations reveal war drove the rise of civilisations


In a period of growth and expansion these dynastic, political, and religious animosities would be less pronounced – as there is more of everything to go around – but in a period of decline they become incendiary.

In different regions and time periods, the factions vary widely, but the ideological merits or faults of any particular faction have literally no bearing on the pattern.

We always massacre each other on the downward side of a cycle. Remember that fact as we embark on the pattern again in the 2020s, when you find yourself becoming blindingly angry while watching the news or reading what someone said on Twitter.

A connected world

Because the world’s societies and economies are more unified than ever before, the increasing political division we see in Australia or the United States also manifests itself around the world.

Violence between the Bharatiya Janata Party (BJP) and Trinamool Congress in Bengal, political polarisation in Brazil following the election of Jair Bolsonaro, and less visible conflicts within China’s ruling party are all part of a global trend.

Trigger events

We can expect this decline to continue steadily in the next decade, unless a trigger event kicks off a crisis and a long period – perhaps decades – of extreme violence.

Here’s a dramatic historical example: in the 12th century, Europe’s population was growing and living standards were rising. The late 13th century ushered in a period of strain. Then the Great Famine of 1315–17 set off a time of strife and increasing violence. Next came an even bigger disaster, the Black Death of 1347–51.

After these two trigger events, elites fighting over the wreckage led to a century of slaughter across Europe.

From my own studies, these “depression phases” kill an average of 20% of the population. On a global scale, today, that would mean 1.6 to 1.7 billion people dead.

There is, of course, only a low to moderate probability that such a trigger event will occur in the 2020s. It may happen decades later. But the kindling for such a conflagration is already being laid.




Read more:
Big gods came after the rise of civilisations, not before, finds study using huge historical database


Technology to the rescue?

One thing that could reverse this cycle would be a major technological breakthrough. Innovation has temporarily warded off decline in the past.

In mid-11th century Europe, for example, new land-clearing and agricultural methods allowed a dramatic increase in production which led to relative prosperity and stability in the 12th century. Or in the mid-17th century, high-yield crops from the Americas raised carrying capacities in some parts of China.

In our current situation, something like nuclear fusion – which could provide abundant, cheap, clean energy – might change the situation drastically.

The probability of this occurring in the 2020s is low. Nevertheless, innovation remains our best hope, and the sooner it happens the better.

This could be a guiding policy for public and private investment in the 2020s. It is a time for generous funding, monumental projects, and bold ventures to lift humanity out of a potential abyss.

Sunlit uplands of the distant future

If you look far enough ahead, our prospects become brighter.
Shutterstock

Cheer up. All is not lost. The further we project into the future the brighter human prospects become again, as great advances in technology do occur on a long enough timescale.

Given the acceleration of the frequency of such advances over the past 5,000 years of history, we can expect something profound on the scale of the invention of agriculture or the advent of heavy industry to occur within the next 100 years.

That is why humanity’s task in the 2020s – and much of the 21st century – is simply to survive.

David Baker, Lecturer in Big History, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

