Category Archives: United Kingdom

A genealogy of the term British reveals its imperial history – and a Brexit paradox




Richard Paton, Battle of Barfleur via Wikimedia Commons

Mark A Hutchinson, University of York

The genealogy of the term British reveals a fragile and contested historical identity – something Brexit has thrown into stark relief.

In the 17th century, being British only had meaning as a colonial identity, when it was used to denote the projection of English and Scottish interests overseas. When the term was used within the geographical confines of Great Britain – and later in Great Britain and Ireland – its common use was in reference to the British government or the British constitution.

Understanding the genealogy of the term British can help make sense of the lack of consensus which has emerged over Brexit. After all, the British empire no longer exists and the British government is instead managing a declining British presence worldwide. Alongside the devolution of powers within the UK, it’s unclear what the term British is now meant to describe.

The Irish context

While the term British had a medieval heritage, a modern genealogy of the term British began in the early 17th century. With the accession of James I of England (who was James VI of Scotland) to the English throne in 1603, the crowns of Scotland and England were united in one person. This recalled the ancient idea of a British monarchy, recounted by the 12th-century chronicler Geoffrey of Monmouth, who had described a distant past when there had been kings of Britain.

Scotland and England, however, remained separate kingdoms until the Act of Union of 1707 and so the idea of a united “British identity” had little traction within the geographical confines of Great Britain in this period.

Instead, in those records which still exist of material published in Great Britain and its dependencies up to 1800, the term British was mostly used in relation to Ireland in the first half of the 17th century.

It was with the flight of the Gaelic earls from Ulster in 1607, which opened the way for plantation by Scottish and English settlers in the north of Ireland, that the first truly British policy emerged. The Scots were co-opted into the long-running English involvement with Ireland, justified by the idea of “civilising” the Irish. Crucially, it was the collective actions of the English and the Scots outside their home nations which gave meaning to the term “British”.

A 1610 pamphlet listed the “Conditions to be observed by the Brittish Undertakers of the Escheated Lands in Ulster”, while a 1618 pamphlet restated the terms under which “Brittish undertakers” had received land.

Even with the English Civil Wars in the 1640s, British continued to be used in relation to Ireland, rather than in reference to the internal dynamics of Great Britain. A massacre of Protestants in Ireland, for example, was reported on in 1646 as “Cruelties exercised in Ireland upon the Brittish Protestants.”

An imperial project

The 1707 Articles of Union.
Parliament of England, via Wikimedia Commons

A similar pattern can be found from the late 17th century well into the 19th. The 1707 Act of Union of England and Scotland created the United Kingdom of Great Britain. The debates which surrounded the union were complex, but an important strand concerned the need to project British imperial power in order to counterbalance other European trading nations.

From the 1690s onwards, different pamphlets referred to “British plantations” overseas and later “British seamen”, which suggests that a growing imperial identity helped underpin political union at home.

Figures such as Edmund Burke (1729-97), an Irishman embedded in English domestic politics, expressed the growing complexity of the term British. Burke wrote about “British navigation” and “British trade”, which he argued could, under the right circumstances, benefit the sister kingdom of Ireland. He also famously wrote about the benefits of the “British constitution”. By this stage of the 18th century, Burke was also referring to the “British nation”. Nevertheless, the propensity remained for the term British to denote imperial expansion – as well as the shared institutional structures of the United Kingdom.

The imperial logic of the term British can also be found in the circumstances which underlay the 1801 Act of Union of Great Britain and Ireland, and more importantly Catholic emancipation in 1829, which gave Catholics the right to sit in the British parliament and hold most public offices. Even after 1829, within the confines of Great Britain and Ireland, Irish Catholics continued to be viewed with suspicion – as “disloyal” to the crown – and they were very aware of their separate identity. But the crucial role played by Irish Catholics in the British army overseas, where they embraced a British identity outside Ireland, made increasingly untenable those arguments which had continued against Catholic emancipation.

The British Empire at the end of the 19th century.
Cambridge University Library via Wikimedia Commons

Searching for a British identity

With the decline of empire, and the rise of nationalism in Ireland at the beginning of the 20th century, the imperial traction of the term British began, slowly, to diminish. The awkward emptiness of the term British is neatly expressed in the “Order of the British Empire”, which was created to honour those who had acted in service and defence of the British empire during World War I, but now somehow honours those who have contributed to the life of the United Kingdom. The disappearance of the British empire after World War II underlines the strangeness of talking about an OBE.




Read more:
Empire 2.0 and Brexiteers’ ‘swashbuckling’ vision of Britain will raise hackles around the world


The genealogy of the term British therefore points to an inherent problem with the Brexit project. British, by its very definition, is an imperial term, not a national one – but there is no longer an empire. Speaking of a British outlook invokes a demand for a global presence. British was also meant to refer to a functional constitutional settlement which, in its idealised form, protected the interests of the different nations of the UK. Devolution, and a divergence in those interests, has placed the constitutional settlement under severe strain.

With Brexit, despite an empty imperial nostalgia so neatly encapsulated by the promise of an “Empire 2.0” after the UK leaves the EU, the term British has lost even more of its meaning. Now, more than ever, the country needs to decide what it wants the term British to mean.

Mark A Hutchinson, Research Fellow in Politics, University of York

This article is republished from The Conversation under a Creative Commons license. Read the original article.


250 years after Captain Cook’s arrival, we still can’t be sure how many Māori lived in Aotearoa at the time



Captain James Cook sailed the Endeavour off New Zealand’s east coast in 1769.
from Wikimedia Commons, CC BY-ND

Simon Chapple, Victoria University of Wellington

Two hundred and fifty years ago this year, James Cook’s ship the Endeavour arrived off the eastern coast of New Zealand. The circumnavigation that followed marked the beginning of sustained European contact with the indigenous population and, eventually, of mass British immigration from 1840.

One important question historians are trying to answer is how many Māori lived in Aotearoa at the time of Cook’s arrival. This question goes to the heart of the negative impacts of European contact on the size and health of the 19th-century Māori population, which subsequently bottomed out in the 1890s at just over 40,000 people.

The conventional wisdom is that there were about 100,000 Māori alive in 1769, living on 268,000 square kilometres of temperate Aotearoa. This is a much lower population density (0.37 people per square kilometre) than densities achieved on tropical and much smaller Pacific Islands.

Several tropical Pacific islands sustained contact-era population densities an order of magnitude higher than this.

In conjunction with later 19th-century census figures, the conventional wisdom implies that European contact and colonisation following Cook’s arrival was much less devastating for the indigenous population of Aotearoa than for many other Pacific islands.

Three approaches have been used to support the estimate of 100,000 Māori. Unfortunately, none bears any serious weight.




Read more:
As we celebrate the rediscovery of the Endeavour let’s acknowledge its complicated legacy


The Cook population estimate

The 100,000-strong estimate of the contact-era Māori population is often attributed to Cook. However, it never received his seal of approval, and it was not made in 1769.

It was published in a 1778 book written by Johann Forster, the naturalist on Cook’s second expedition of 1772-1775. Forster’s estimate is a guess, innocent of method. He suggests 100,000 Māori as a round figure at the lower end of likelihood. His direct observation of Māori was brief, in the lightly populated South Island, far from major northern Māori population centres.

Later visitors had greater direct knowledge of the populous coastal northern parts of New Zealand. They also made population estimates. Some were guesses like Forster’s. Others were based on a rough method. Their estimates range from 130,000 (by early British trader Joel Polack) to over 500,000 Māori (by French explorer Dumont D’Urville), both referring to the 1820s. The wide range further emphasises the lack of information in Forster’s guess.

A map of the east coast on New Zealand’s North Island, drawn by Captain James Cook.
from Wikimedia Commons, CC BY-ND

Working backwards from the 1858 census

A second method takes the figure from the first New Zealand-wide Māori population census of 1858, of about 60,000 people. It works this number backwards over 89 years to 1769, making assumptions about the rate of annual population decline between 1769 and 1858.

There is a good quantitative estimate for the rate of decline back from 1858 to 1844, taken from a Waikato longitudinal census. But there is nothing solid for the period before 1844.

To overcome the absence of numbers, an apparently better documented and very low average annual rate of decline of the Moriori people of the Chatham Islands of 0.4% between 1791 and 1835 has been applied to New Zealand. However, the estimated rate is calculated from wrong numbers for both the 1791 and 1835 Moriori populations. In fact, there is no contemporary 1791 estimate of the Moriori population from which to calculate a meaningful rate of quantitative decline to 1835.
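The arithmetic of this back-projection shows how heavily the result leans on the assumed rate of decline. A minimal sketch in Python – the 0.4% figure is the disputed Moriori-derived rate discussed above, while the 1% alternative is purely illustrative:

```python
def back_project(pop_1858, annual_decline, years=89):
    """Back-project the 1858 Maori census count to 1769,
    assuming a constant annual rate of population decline."""
    return pop_1858 / (1 - annual_decline) ** years

# Disputed Moriori-derived rate of 0.4% per year -> roughly 86,000 in 1769
low = back_project(60_000, 0.004)

# A purely illustrative 1% per year -> roughly 147,000 in 1769
high = back_project(60_000, 0.01)
```

Even a modest change in the assumed rate moves the 1769 estimate by tens of thousands of people, which is why the weakness of the pre-1844 evidence matters so much.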

The qualitative conclusion of low population decline is based on two propositions. The first is that prior to the 1850s, imported European diseases were localised to a few coastal areas. The second is that the impact of warfare on populations over the first half of the 19th century was minimal. What is the evidence for these propositions? The answer is not much in either case.

Historical evidence suggests that there were indeed widespread epidemics in New Zealand prior to the 1850s. For example, there is evidence of a great epidemic around 1808, possibly some form of enteric fever or influenza, which killed many people across the North Island and the top of the South Island. Other high-mortality diseases known to be present in New Zealand pre-1840 and readily transmittable internally include syphilis and tuberculosis.

The estimates of how many Māori died directly and indirectly on account of warfare over the 1769 to 1840 period lack a coherent method. They are weak on definitions of what they count. They cover varying or indeterminate periods. Where they can be made roughly comparable, the numbers arrived at are wildly different, with estimates of deaths ranging from 300 to 2000 people on average annually. In other words, the impact of warfare on population decline could have been quite small or quite large. We simply don’t know.

Overall, Hawaiian archaeologist Patrick Kirch’s conclusion on the validity of this method for estimating other contact-era Pacific populations is also applicable to New Zealand. It is a largely circular exercise in assuming what needs to be proven.

Waka paddles, as described in Joseph Banks’ journal in 1769. From New Zealand drawings made in the countries visited by Captain Cook in his First Voyage.
from Wikimedia Commons, CC BY-ND

Predicting population from settlement

The third method used to estimate 100,000 Māori predicts the population forward from first arrival in New Zealand. Prediction requires a minimum of three parameters. These are the arrival date of Māori in New Zealand, the size of the founding population and the prehistoric population growth rate to 1769.

The current consensus is that voyagers from Eastern Polynesia arrived in New Zealand between 1230 and 1280 AD and then became known as Māori. However, even a 50-year difference in arrival dates can make large differences to an end population prediction.

Geneticists have estimated the plausible size of the Māori female founding population as between 50 and 230 women. The high population estimate which would result from using these numbers is therefore nearly five times the size of the low estimate. Such a broad range is meaningless.

The third big unknown of the prediction method is the growth rate. Minimalists have employed low rates, based on prehistoric Eurasian populations, where humans had lived for tens of thousands of years. This perspective of low Māori prehistoric growth rates is problematic. Humans did not live in New Zealand prior to Māori. The population density faced by newly arriving people was zero.

Also, New Zealand’s flora and fauna had evolved without people. Once people arrived, they would have found more niches of exploitable nutrients than in regions where plants and animals had long co-evolved with people as apex predators. Such circumstances allowed for a potentially rapid Māori population expansion.

Indeed, historically recorded population growth rates for Pacific islands with small founding populations could be exceptionally high. For example, on tiny, resource-constrained Pitcairn Island, population growth averaged an astounding 3% annually over 66 years between 1790 and 1856.

Arguments for rapid prehistoric population growth run up against other problems. Skeletal evidence seems to show that prehistoric Māori female fertility rates were too low, and mortality – indicated by a low average adult age at death – too high, to generate rapid population growth.

This low-fertility finding has always been puzzling, given high Māori fertility rates in the latter 19th century. Equally, archaeological findings of a low average adult age at death have been difficult to reconcile with numbers of elderly Māori observed in accounts of early explorers.

However, recent literature on using skeletal remains to estimate either female fertility or adult age at death is sceptical that this evidence can determine either variable in a manner approaching acceptable reliability. So high growth paths cannot be ruled out.

Because of resulting uncertainties in the three key parameters and the 500-year-plus forecast horizon, the plausible population range around 100,000 Māori in 1769 is so broad as to make any prediction estimate meaningless. Virtually any contact-era population can be illustrated by someone with a modicum of numerical nous.
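The claim that virtually any contact-era population can be produced is easy to check with a simple compound-growth sketch. The parameter values below are illustrative only, drawn from the ranges discussed above (with total founders taken as roughly double the estimated number of women – itself an added assumption):

```python
def predict(founders, annual_growth, years=500):
    """Project a founding population forward over 500 years,
    assuming constant compound annual growth."""
    return founders * (1 + annual_growth) ** years

# 100 founders growing at 0.5% per year -> roughly 1,200 people in 1769
low = predict(100, 0.005)

# 460 founders growing at 1.5% per year -> roughly 790,000 people in 1769
high = predict(460, 0.015)
```

With parameter ranges this wide and a forecast horizon of more than 500 years, the method can output anything from a thousand people to the better part of a million.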

Density analogies

In the 2017 New Zealand Journal of History, New Zealand archaeologist Atholl Anderson argues that medieval population density on the large (about 103,000 square kilometres, slightly smaller than the North Island), isolated and sub-arctic island of Iceland is a much better analogy for likely contact-era Māori density than those of smaller tropical Pacific islands.

He uses Icelandic population density from the year 1800, over 900 years into the settlement sequence. If Icelandic population numbers closest to 500 years into the settlement sequence were used, they would provide a more direct temporal analogy for 500 years of Māori settlement in 1769.

Iceland was settled circa 870 AD. The best estimates of the pre-industrial Icelandic population closest to 500 years post-settlement are from 1311. They are based on farm numbers counted for tax purposes. This method gives 72,000 to 95,000 Icelanders. So, in its medieval period, sub-arctic Iceland achieved population densities of 0.70 to 0.92 people per square kilometre. Applying these densities to contact-era temperate New Zealand gives a Māori population of between 190,000 and 250,000 people when Cook arrived.

In terms of a New Zealand-related density analogy, there is good 1835 population data from the temperate Chatham Islands (about 970 square kilometres in area), giving a Moriori population density exceeding two people per square kilometre. It was measured after decades of likely population decline from contact with European sealers and whalers, as well as after at least one serious epidemic. Applying this density figure to the North Island alone, which the Chatham Islands climatically best resembles, gives 230,000 people when Cook arrived.
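Both density analogies reduce to straightforward scaling. A minimal sketch checking the figures above – the North Island area of roughly 113,000 square kilometres is an added assumption, not stated in the text:

```python
def apply_density(source_pop, source_area, target_area):
    """Scale a population density from one land area to another."""
    return source_pop / source_area * target_area

# Medieval Iceland (1311): 72,000-95,000 people on ~103,000 sq km,
# applied to all of temperate New Zealand (~268,000 sq km)
iceland_low = apply_density(72_000, 103_000, 268_000)   # ~190,000
iceland_high = apply_density(95_000, 103_000, 268_000)  # ~250,000

# Chatham Islands (1835): just over 2 people per sq km,
# applied to the North Island alone (~113,000 sq km, assumed area)
chatham = 2.0 * 113_000                                 # ~230,000
```

Both analogies land in the 190,000-250,000 range, roughly double the conventional figure of 100,000.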

Using analogies from Iceland and the Chatham Islands suggests that post-Cook European contact may have been more devastating for Māori than conventional wisdom acknowledges. There may have been 200,000 or more Māori in 1769, falling to about 40,000 in the 1890s. Additionally, a figure of 200,000 or more Māori implies that much post-contact population decline occurred prior to mass British immigration.

As in the Americas and elsewhere in the Pacific, perhaps European germs, not mass immigration, were the primary driver of indigenous population decline. But 250 years on from Cook, more work and different methods are needed to answer this question.

Simon Chapple, Director, Institute for Governance and Policy Studies, Victoria University of Wellington

This article is republished from The Conversation under a Creative Commons license. Read the original article.


As we celebrate the rediscovery of the Endeavour let’s acknowledge its complicated legacy


Natali Pearson, University of Sydney

Researchers, including Australian maritime archaeologists, believe they have found Captain Cook’s historic ship HMB Endeavour in Newport Harbour, Rhode Island. An official announcement will be made on Friday.

The discovery is the culmination of decades of work by the Rhode Island Marine Archaeology Project and the Australian National Maritime Museum to locate and positively identify the vessel, which had been missing from the historical record for over two centuries. Plans are now under way to raise funds to excavate and conduct scientific testing in 2019.

As the first European seafaring vessel to reach the east coast of Australia, the Endeavour – much like James Cook himself – has become part of Australia’s national mythology. Unlike Cook, who famously met his end on Hawaiian shores, the fate of the Endeavour had long been unknown. The discovery has therefore resolved a long-standing maritime mystery.

In a serendipitous twist, it coincides with two significant dates: the 250th anniversary of the Endeavour’s departure from England in 1768 on its now (in)famous voyage south, and the 240th anniversary of the ship’s scuttling in 1778 during the American War of Independence.

Identifying the Endeavour’s location has been a 25-year process. Archaeologists initially identified 13 potential candidates in the harbour. Over time, the number of possible sites was narrowed to five.

This month, a joint diving team has worked to measure and inspect these sites, drawing upon knowledge of Endeavour’s size to identify a likely candidate. Excavation and timber analysis are expected to provide final confirmation. Those expecting an entire ship to be recovered will be disappointed, as very little of it remains.

But this is a controversial vessel, and celebrations of its discovery will be tempered by reflection about its complicity in the British colonisation of Indigenous Australian land. While Endeavour played an instrumental role in advancing science and exploration, its arrival in what is now known as Botany Bay in 1770 also precipitated the occupation of territory that its Aboriginal owners never ceded.




Read more:
How Captain Cook became a contested national symbol


A ship by any other name …

Although Endeavour’s early days are well known, it has taken many years for researchers to piece together the rest of its story. One problem has been the many names the vessel was known by during its lifetime.

Built in 1764 in Whitby, England, as a collier (coal carrier), the vessel was originally named Earl of Pembroke. Its flat-bottomed hull and box-like shape, designed to transport bulk cargo, later proved helpful when navigating the treacherous coral reefs of the southern seas.

Endeavour, then known as Earl of Pembroke, leaving Whitby Harbour in 1768. Painting by Thomas Luny, c. 1790. (Some think Luny painted another ship after Endeavour became famous.)
Wikimedia

In 1768, Earl of Pembroke was sold into the service of the Royal Navy and the Royal Society. It underwent a major refit to accommodate a larger crew and sufficient provisions for a long voyage. In keeping with the ambitious spirit of the era, the vessel was renamed His Majesty’s Bark (HMB) Endeavour (bark being a nautical term to describe a ship with three masts or more).

Endeavour departed England in 1768 under the command of then-Lieutenant Cook. Ostensibly sailing to the South Pacific to observe the 1769 Transit of Venus, Cook was also under orders to search for the fabled southern continent. So it was that a coal carrier and a rare astronomical event changed the history of the Australian continent and its people.




Read more:
Transit of Venus: a tale of two expeditions


Mysterious ends

Following Endeavour’s circumnavigation of the globe (1768-1771), the vessel was used as a store ship before the Royal Navy sold it in 1775. Here, the ship’s fate becomes mysterious.

Many believed it had been renamed La Liberté and put to use as a French whaling ship before succumbing to rotting timbers in Newport Harbour in 1793. Others rejected this theory, suggesting instead that Endeavour had spent her final days on the river Thames.

A breakthrough came in 1997. Australian researchers suggested the Endeavour had in fact been renamed Lord Sandwich. The theory gained weight following an archival discovery by Kathy Abbass, director of the Rhode Island project, in 2016, which indicated that Lord Sandwich had been used as a troop transport and prison ship during the American War of Independence before being scuttled in Newport Harbour in 1778.

Lord Sandwich was one of a number of transport ships deliberately sunk by the British in an attempt to prevent the French fleet from approaching the shore.

Finding a shipwreck is not impossible, but finding the one you’re looking for is hard. Rhode Island volunteers have been searching for this vessel since 1993, slowly narrowing down the search area and eliminating potential contenders as they explore the often-murky waters of Newport Harbour.

They were joined in their efforts by the Australian National Maritime Museum in 1999 and, in more recent years, by the Silentworld Foundation, a not-for-profit organisation with a particular interest in Australasian maritime archaeology.

Endeavour’s voyage across the Pacific Ocean.
Wikimedia

Museums around the world are already turning their attention to the significant Cook anniversaries on the horizon and the complex legacy of these expeditions. These interpretive endeavours will only be heightened by the planned excavation of the ship’s remains in the near future.

Shipwrecks are a productive starting point for thinking about how we make meaning from the past because of the firm hold they have on the public imagination. They conjure images of lost treasure, pirates and, especially in the case of Endeavour, bold adventures to distant lands.

But as we celebrate the spirit of exploration that saw a humble coal carrier circumnavigate the globe – and the same spirit of exploration that has led to its discovery centuries later – we must also make space for the unsettling stories that will resurface as a result of this discovery.

Natali Pearson, Deputy Director, Sydney Southeast Asia Centre, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Seen from the air, the dry summer reveals an ancient harvest of archaeological finds



Unseen from ground level, this Iron Age farmstead with recognisable round house near the Yorkshire Wolds is revealed in cropmarks. The lighter green shows it was carefully placed on a gravel rise surrounded by wetter land, shown here where the crop grows a darker green.
Peter Halkon, Author provided

Peter Halkon, University of Hull

For an aerial archaeologist, 2018 has been a bumper year. The long, hot summer has revealed ancient landscapes not visible from ground level, but easily recognised in fields of growing crops from the air.

The principle behind the appearance of cropmarks is simple. If, for example, an Iron Age farmer dug a ditch around his field, over time this ditch will fill up with soil and other debris and will generally retain more moisture than the soil or bedrock it was cut into. Centuries later, a cereal crop sown over this earth will grow for a longer period and ripen more slowly, appearing greener as the surrounding crop ripens to a golden colour. Conversely, a crop planted in soil covering the remains of a stone building or roadway will ripen more quickly and parch, again appearing a different colour to the rest of the crop.

What has made the summer of 2018 so remarkable is that the winter and spring were so wet that plants grew relatively shallow roots, having no need to search deeply for water. So when the drought came this summer, those plants that grew over buried features such as ditches and pits benefited from the greater store of water retained in the infilled soil. Well-drained sandy soils and those over chalk are particularly conducive to revealing features through cropmarks.

These cropmarks show a known cursus monument at Warborough, Oxfordshire. The purpose of cursus monuments is debated, thought to be enclosed paths or processional ways.
Damian Grady/Historic England

Recognising archaeological sites by cropmarks is noted as far back as the antiquarians of the 17th century, although it was William Stukeley – who pioneered the study of Stonehenge and Avebury – who provided the clearest early explanation in his description of features in the Roman town of Great Chesterford in Essex in 1719. In the modern era, at first using balloons, then aeroplanes and, most recently, drones, aerial photography has become a standard archaeological reconnaissance technique.

History from the air

One area where this has been used widely is the Yorkshire Wolds, among the first areas covered by the National Mapping Programme undertaken by the former Royal Commission on the Historical Monuments of England – a body founded in 1908 and now part of Historic England.

Compiled from thousands of aerial photographs by Cathy Stoertz and published as Ancient Landscapes of the Yorkshire Wolds in 1997, this remains one of the most detailed studies of an archaeological landscape in the UK. From the River Humber at Hessle to Flamborough Head, Stoertz’s mapping revealed a network of prehistoric and Romano-British enclosures, burial mounds, ceremonial monuments and linear earthworks.

Cropmarks showing square barrows to either side of the road at Arras, East Yorkshire.
Peter Halkon, Author provided

My own research has examined many of these sites on the ground through geophysical survey and excavation, and through further aerial sorties, which has greatly expanded our knowledge of the region. Flying from Hull Aero Club’s airfield near Beverley, I have focused on the western escarpment of the Yorkshire Wolds and the eastern fringes at the Vale of York, a region I have studied for many years.

For example, the picture above shows the square barrow cemetery at Arras, in East Yorkshire. Here, burials were placed on the ground and a mound was built over them with soil dug out from a surrounding ditch. The barrow ditches show as green squares. Dating from the Middle Iron Age, probably around 300 BC, this site gave its name to the internationally recognised Arras Culture of East Yorkshire.




Read more:
Bones of Iron Age warriors may reveal link between Yorkshire’s ‘spear-people’ and the ancient Gauls


A portion of the massive later prehistoric earthworks of Huggate Dykes has survived since the banks and ditches were built in around 1000 BC, probably as territorial boundaries or as a means to control access to springs and streams.

Later Bronze Age earthworks known as Huggate Dykes, from ground level.
Peter Halkon, Author provided

Impressive from ground level, an aerial view reveals faint green stripes in an adjacent cornfield – all that is left of the buried ditches after centuries of ploughing.

Seen from above, the remnants of the Huggate Dyke linear earthworks (centre) can be seen as faint green stripes continuing into the adjacent field (top right).
Peter Halkon, Author provided

This year I have discovered hitherto unknown sites and, in other places, greater detail at already recorded sites. These include Bronze Age round barrows, apparent as rings in the crop, the characteristic square barrows of the Iron Age Arras Culture, and linear features running across the landscape from Iron Age and Romano-British farmsteads and other settlements.

Soil marks of three Bronze Age round barrows on the Yorkshire Wolds, appearing as circular marks in the soil. The darker circles show the infill of the ditch around the barrow that was originally dug to create the barrow mound.
Peter Halkon, Author provided

Collaborating with Tony Hunt of Yorkshire Aerial Archaeology and Mapping, for the first time I have also used drones. Although these are subject to altitude restrictions, a good quality camera on a drone guided along pre-programmed tracks by GPS can gather precise images. The hundreds of overlapping images can be combined to provide a huge two-dimensional mosaic image, or processed to create 3D imagery, an elevation model, or to colourise the images in order to make the hidden archaeological features more visible.

Here, left, a conventional orthophoto of a field showing the faint outline of an Iron Age or Romano-British square enclosure with the ditch of an associated droveway, and right, the same site processed using the DroneDeploy Plant Health filter, adding false colour to better highlight the archaeological features.
Tony Hunt/Yorkshire Aerial Archaeology and Mapping, Author provided

This technique is truly revolutionary: in the past, mapping was tricky and time-consuming, particularly from aerial photographs taken at oblique angles, which required hours of peering through a stereoscope and plotting sites by hand using geometry.


While the drought of 2018 has seriously affected crop yields, it has provided a rich harvest of a different kind, one that will take a considerable time to digest. An opportunity to do so will be as archaeologists meet to discuss finds from across Europe at the Aerial Archaeology Research Group annual conference, held this year in Venice, September 12-14.

Peter Halkon, Senior Lecturer in Archaeology, University of Hull

This article was originally published on The Conversation. Read the original article.


United Kingdom’s Hot Weather Assisting Archaeology



How Captain Cook became a contested national symbol


Tracy Ireland, University of Canberra

Captain Cook has loomed large in the federal government’s 2018 budget. The government allocated $48.7 million over four years to commemorate the 250th anniversary of Cook’s voyages to the South Pacific and Australia in 1770. The funding has been widely debated on social media as another fray in Australia’s culture wars, particularly in the context of $84 million in cuts to the ABC.

Closer scrutiny suggests that this latest celebration of Cook may serve as a headline for financial resources already committed to a range of cultural programs, at least some of which could be seen as business as usual. These include the development of digital heritage resources and exhibitions at the National Maritime Museum, National Library, AIATSIS and the National Museum of Australia, as well as support for training “Indigenous cultural heritage professionals in regional areas”.

However, the budget package also includes unspecified support for the “voyaging of the replica HMB Endeavour” and a $25 million contribution towards redevelopment of Kamay Botany Bay National Park, including a proposed new monument to the great man.

So while the entire $48.7 million won’t simply go towards a monument, it’s clear that celebrating the 250th anniversary of Cook’s landing at Botany Bay is a high priority for this federal government.

In 1770 Lieutenant (later Captain) James Cook, on a scientific mission for the British Navy, anchored in a harbour he first called Stingray Bay. He later changed it to Botany Bay, commemorating the trove of specimens collected by the ship’s botanists, Joseph Banks and Daniel Solander.

Cook made contact with Aboriginal people, mapped the eastern coast of the continent, claimed it for the British Crown and named it New South Wales, allowing for the future dispossession of Australia’s First Nations. He would later return to the Pacific on two more voyages before his death in Hawaii in 1779.

Scholars agree that Cook had a major influence on the world during his lifetime. His actions, writings and voyages continue to resonate through modern colonial and postcolonial history.

Cook continues to be a potent national symbol. Partly this is due to the rich historical written and physical records we have of Cook’s journeys, which continue to reward further study and analysis.

But the other side to the hero story is the dispossession of Australia’s Indigenous peoples from their land. As a symbol of the nation, Cook is, and has always been, contested, political and emotional.

Too many Cooks

There are other European contenders for the title of “discoverer of the continent”, such as Dirk Hartog in 1616 and William Dampier in 1699. However, both inconveniently landed on the west coast. Although Englishman Dampier wrote a book about his discoveries, he never became a major figure like Cook.

Cook’s legend began immediately after his death, when he became one of the great humble heroes of the European Enlightenment. Historian Chris Healy has suggested that Cook was suited to the title of founder of Australia because his journey along the entire east coast made him more acceptable in other Australian states. Importantly, unlike that other great contender for founding father, the First Fleet’s Governor Arthur Phillip, Cook was not associated with the “stain of convictism”.

Landing of Captain Cook at Botany Bay, 1770, by Emanuel Phillips Fox, 1902.
Wikimedia

Australians celebrated the bicentenary of Cook’s arrival in 1970, and the bicentenary of the arrival of the First Fleet in 1988. Throughout this period it was widely accepted that Cook was the single most important actor in the British possession of Australia, despite the fact that many other political figures played significant roles.

This perhaps partly explains why Cook has featured so prominently in Aboriginal narratives of dispossession, and why the celebrations in 1970 and 1988 triggered debate around Aboriginal land rights.

Other scholars have examined the Aboriginal perspective on Cook’s landing. In the 1970s archaeologist Vincent Megaw found British artefacts in a midden at Botany Bay. He cautiously suggested that these items might have been part of the gifts given by Cook to the Aboriginal people he encountered.

Historian Maria Nugent has assessed the narratives recounted by Percy Mumbulla and Hobbles Danaiyarri. Both were senior Aboriginal lawmen and knowledge holders who, in the 1970s and ’80s, shared their sagas of the coming of Cook to their lands with anthropologists.

Too pale, stale and male?

Controversy over the celebration of Cook as founding father is not a new thing. It dates back to the 19th century when his first statues were raised.

This latest Captain Cook fanfare comes hot on the heels of broader global debates about the contemporary values and meaning of civic statues of (“pale, stale, male”) heroes associated with colonialism and slavery.

In Australia, there has also been debate about how the events of the first world war have been commemorated so expansively by Australia. A further $500 million was recently allocated for the extension of the Australian War Memorial, at a time when other cultural institutions in Canberra are being forced to shed jobs and tighten their belts.

The view from Captain Cook’s landing in Botany Bay, Kamay National Park.
Wikimedia/Maksym Kozlenko, CC BY-SA

The funding cycle for our contemporary cultural institutions and activities in Australia has been closely linked to anniversaries and their commemoration since at least the 1970 bicentenary. The 2018 budget lists support for programs at a number of cultural institutions and for training Indigenous cultural heritage professionals. It would be interesting to know whether these funds have been diverted away from existing operational budgets and core activities in these institutions to support the Cook celebrations.

The master plan for Kamay Botany Bay National Park has also been in development for some time. While centred on the historical event of Cook’s landing, the plan itself is more about the rehabilitation and activation of this somewhat neglected landscape. Plans have been drawn up in consultation with the La Perouse Aboriginal Land Council.

Should we be devoting scarce financial resources to yet another celebration of Cook? Focal events such as these can divert funds into cultural activities and may allow researchers and creative practitioners to unearth new evidence and develop fresh interpretations. Some of these funds may also go to support initiatives driven by First Nations communities.

There is no escaping the fact that Captain Cook is a polarising national symbol, representing possession and dispossession. Another anniversary of Cook’s landing may give us much to reflect upon, but it also highlights the need for investment in new symbols that grapple with colonial legacies and shared futures.

Tracy Ireland, Associate Professor Cultural Heritage, University of Canberra

This article was originally published on The Conversation. Read the original article.


The day bananas made their British debut


Thomas Johnson’s illustration of his banana plant from The Herball Or Generall Historie of Plantes.
Wikimedia Commons

Rebecca Earle, University of Warwick

When Carmen Miranda sashayed her way into the hearts of Britain’s war-weary population in films such as The Gang’s All Here and That Night in Rio, her combination of tame eroticism and tropical fruit proved irresistible. Imagine having so much fruit you could wear it as a hat. To audiences suffering the strictures of rationing, Miranda’s tropical headgear shouted exoticism and abundance – with a touch of phallic sensuality thrown in.

In 1940s and 1950s Britain, bananas represented luxury, sunshine and sexiness. But entranced cinema-goers might have been surprised to learn that the bananas in Miranda’s tutti-frutti hat were in all probability descended from a strain developed in a hothouse at a stately home in Derbyshire, in England’s picturesque – but decidedly non-tropical – Midlands.

England got its first glimpse of the banana when herbalist, botanist and merchant Thomas Johnson displayed a bunch in his shop in Holborn, in the City of London, on April 10, 1633. He included the woodcut you see at the top of this article in his “very much enlarged” edition of John Gerard’s popular botanical encyclopedia, The herball or generall historie of plantes.

Page 1516 of the Johnson edition of The herball or generall historie of plantes.
Wellcome Images

Johnson’s single stem of bananas came from the recently colonised island of Bermuda. We don’t know what variety it was – but these days the chances are that any banana you will find in a British supermarket will be descended from the Cavendish banana. This strain was developed in the 19th century by the head gardener at Chatsworth House, Joseph Paxton. His invention is called the Cavendish, rather than the Paxton, after the family name of the owners of the Chatsworth estate, the Duke and Duchess of Devonshire.

Paxton spent several years developing his banana. In 1835 his plant finally bore fruit, which won him a prize from the Royal Horticultural Society.

The Cavendish slowly gained popularity as a cultigen, but its current dominance is the result of a calamity. The genetic uniformity of commercial banana plantations is a hostage to ill-fortune. During the 1950s a virulent fungal pathogen wiped out the previously ubiquitous Gros Michel variety. The Cavendish stepped into the space left by the attack of Panama Disease. There is no reason to assume the fate suffered by the Gros Michel will not befall the Cavendish. What then will adorn our bowls of cereal and add volume to our smoothies?




Read more:
Disease may wipe out world’s bananas – but here’s how we might just save them


Taste of the tropics

Europeans have long associated bananas with the exotic pleasures of distant, island paradises. When the exhausted Ilarione da Bergamo arrived in the Caribbean in 1761 after a long sea voyage, the sight of the local fruit convinced the Italian friar that the travails of his protracted journey had been worthwhile. “Thus I began enjoying the delights of America,” he noted in his diary. Travellers marvelled at the exuberance of new-world nature, which – unlike her more parsimonious European sister – offered ripe, sweet fruit all year round.

The opportunity to gorge on sugary fruits became part of the European image of the tropics. The historian David Arnold pointed out that, in English: “One of the earliest and most enduring uses of the adjective ‘tropical’ was to describe fruit.”

De negro e india, china cambuja, by Miguel Cabrera (1695–1768).
Museum of the Americas

And of course these juicy, succulent treasures quickly became associated, not only with the tropics, but also with the sexual allure travellers projected onto women in the torrid zone. Women and tropical fruits merged into one delightful commodity in the overheated imagination of the US journalist, Carleton Beals, as he travelled through Costa Rica in the 1930s. “And the women,” he wrote breathlessly in Banana Gold, “their firm ample flesh seems ready to burst through the satin skin—like ripe fruit!”. Carmen Miranda’s provocative wink and her banana hat played masterfully on this centuries-old association.

Banana republics

Bananas originated in South-East Asia and were brought to the New World by European settlers – who, by the 19th century, were growing them on vast plantations in the Caribbean. Labour conditions on banana plantations were often atrocious. When underpaid workers at a plantation on Colombia’s Caribbean coast struck for better working conditions in 1928, they were gunned down by Colombian troops probably called in at the behest of the United Fruit Company.

The novelist Gabriel García Márquez immortalised this tragedy in a memorable scene in his One Hundred Years of Solitude. “Look at the mess we’ve got ourselves into,” one of his characters remarks, “just because we invited a gringo to eat some bananas”.

Banana plantation in Nicaragua, 1894.
Popular Science Monthly

Far worse messes were to occur in Guatemala in 1954, when the United Fruit Company cooperated closely with the Guatemalan military and the US State Department to overthrow the democratically-elected government of Jacobo Arbenz, who had made the mistake of nationalising some of the unused lands owned by the fruit company. The coup ushered in decades of military rule, during which the government, locked in a struggle with the guerrilla movement that inevitably arose in response, engaged in what many scholars have described as genocide against the Maya population.

Today, bananas are so commonplace – thanks, of course, to industrial-scale production and working conditions that continue to attract critique – that they scarcely conjure up the delight they once inspired in the travel-fatigued Ilarione da Bergamo and weary postwar cinema-goers. Since April 10 2018 marks the 385th anniversary of the day in 1633 when bananas were displayed for the first time to Londoners, it’s worth pondering the complex history behind the everyday banana.

Rebecca Earle, Professor of History, University of Warwick

This article was originally published on The Conversation. Read the original article.


The War of the Roses



Barracking, sheilas and shouts: how the Irish influenced Australian English


The Warrnambool potato harvest of 1881.
State Library of Victoria

Howard Manns, Monash University and Kate Burridge, Monash University

Australian English decidedly finds its origins in British English. But when it comes to chasing down Irish influence, there are – to paraphrase Donald Rumsfeld – some knowun knowuns, some unknowun knowuns, and a bucket load of furphies.

Larrikins, sheilas and Aboriginal Irish speakers

The first Irish settlers, around half of whom were reputedly Irish language speakers, were viewed with suspicion and derision. This is reflected in the early Australian English words used to describe those who came from Patland (a blend of Paddy and Land).

The Irish were guided by paddy’s lantern (the moon); their homes adorned with Irish curtains (cobwebs); and their hotheadedness saw them have a paddy or paddy out. These Irish were said to follow Rafferty’s Rules – an eponym from the surname Rafferty – which meant “no rules at all”.

More than a few Irish were larrikins. In his book Austral English, E.E. Morris reports that in 1869 an Irish sergeant, Dalton, charged a young prisoner with “a-larrr-akin about the streets” (an Irish pronunciation of larking, or “getting up to mischief”). When asked to repeat by the magistrate, Dalton said: “a larrikin, your Worchup”.

This Irish origin of larrikin had legs for many years, and perhaps still does. Unfortunately, here we have our first furphy, with more compelling evidence linking larrikin to a British dialect word meaning “mischievous or frolicsome youth”.

But if larrikin language is anything to go by, these youths went way beyond mischievous frolicking – jump someone’s liver out, put the boot in, stonker, rip into, go the knuckle on and weigh into are just some items from the larrikin’s lexicon of fighting words.




Read more:
Future tense: how the language you speak influences your willingness to take climate action


With the Dalton furphy, though, we see evidence of something called “epenthesis”, the insertion of extra sounds. Just as Dalton adds a vowel after his trilled “r” in a-larrr-akin, many Aussies add a vowel to words like “known” and “film” (knowun and filum) – and here we see a potential influence of the Irish accent on Australian English.

In contrast to larrikin, the word sheila is incontrovertibly Irish. Popular belief derives it from the proper name, Sheila, used as the female counterpart to Paddy, a general reference to Irish males.

Author Dymphna Lonergan, in her book Sounds Irish, prefers to derive it from Irish Gaelic síle, meaning “homosexual”, noting Sheila wasn’t a particularly popular Irish name as it began to appear down under.

Significantly though, St Patrick had a wife (or mother) named Sheila, and the day after St Paddy’s Day was once celebrated as Sheelah’s Day. So, Sheila was something of a celebrity.

Barrack is another likely Irish-inspired expression. A range of competing origins have been posited for this one, including the Aboriginal Wathawarung word borak, meaning “no, not”, and links to the Victorian military barracks in Melbourne.

But the most likely origin is the Northern Irish English barrack, “to brag, be boastful of one’s fighting powers”. The word has since sprouted opposite uses – Australian barrackers shout noisy support for somebody, while British barrackers shout in criticism or protest.

Perhaps surprisingly to many, the Irish were the first Europeans some Australian Aboriginal tribes encountered.

This contact is evident in the presence of Irish words in some Aboriginal languages. For instance, in the Ngiyampaa language of New South Wales, the word for shoe is pampuu, likely linked to a kind of shoe associated with the Aran Islands in Ireland, pampúta.

Didgeridoos, chooks and shouts: An Irish language perspective

Lonergan argues that more attention should be directed to this sort of Irish Gaelic influence.

Lonergan points, for example, to archival evidence linking the origin of didgeridoo to an outsider’s perception of how the instrument sounds, questioning the degree to which the sound corresponds to the word.

As a counter-argument, she notes an Irish word dúdaire meaning “trumpeter or horn-blower”, as well as Irish and Scots-Gaelic dubh, “black” and dúth, “native”. She observes that Irish and Scots-Gaelic speakers first encountering the instrument might well have called it dúdaire dubh or dúdaire dúth (pronounced respectively “doodereh doo” or “doojerreh doo”).




Read more:
The origins of Pama-Nyungan, Australia’s largest family of Aboriginal languages


Similar arguments are made for a number of other words traditionally viewed as having British English origins.

The Australian National Dictionary sees chook (also spelled chuck) as linked to a Northern English/Scottish variation of “chick”. However, Lonergan notes this is phonetically the same word (spelled tioc) the Irish would have used when calling chickens to feed (tioc, tioc, tioc).

Another potential influence also comes from the transference of Irish meaning to English words. For example, the Australian National Dictionary is unclear as to the exact origin of shout, “to buy a round of drinks”, but Lonergan links it to Irish working in the goldfields and an Irish phrase glaoch ar dheoch, “to call or shout for a drink”.

Lonergan posits that Irish miners translating to English might have selected “shout” rather than “call” – “shouting” could easily have spread to English speakers as a useful way to get a drink in a noisy Goldfields bar.

Good dollops of Irish in the melting pot

Irish influence on Australian English is much like the influence of the Irish on Australians themselves – less than you’d expect on the surface, but everywhere once you start looking.

And those with a soft spot for Irish English might feel better knowing that some of their bêtes noires are in fact Irish (haitch, youse, but, filum and knowun).

As Irish settlers entered the Australian melting pot, so too did a hearty dose of their language.

Howard Manns, Lecturer in Linguistics, Monash University and Kate Burridge, Senior Fellow at the Freiburg Institute for Advanced Studies and Professor of Linguistics, Monash University

This article was originally published on The Conversation. Read the original article.


In medieval Britain, if you wanted to get ahead, you had to speak French



Medieval teaching scene.
gallica.bnf.fr / BnF

Huw Grange, University of Oxford

The study of modern languages in British secondary schools is in steep decline. The number of students taking French and German GCSE has more than halved in the last 16 years. But as the UK prepares to forge new relationships with the wider world, and with a question mark over the status of English as an official EU language, it may be that many more Britons will need to brush up on their language skills – not unlike their medieval ancestors.

In the Middle Ages, a variety of vernacular languages were spoken by inhabitants of the British Isles, from Cornish to English to Norn – an extinct North Germanic language. The literati of the time learned to speak and write Latin.

But another high prestige language was also used in medieval Britain. After the Norman Conquest, French became a major language of administration, education, literature and law in England (and, to some extent, elsewhere in Britain). To get ahead in life post-1066, it was pretty important to “parler français”.

Historiated initial depicting Pentecost, when linguistic miracles were supposedly most prevalent.
gallica.bnf.fr / BnF

French would have been the mother tongue for several generations of the Anglo Norman aristocracy. But many more Britons must have learned French as a second language. Medieval biographies of saints, such as the 12th-century recluse Wulfric of Haselbury, tell of miracle workers who transformed monoglot Englishmen into fluent francophones.

In reality, many probably acquired French at “song” school, where young boys were taught reading and singing before moving on to study Latin at “grammar” school.

Slipping standards

But, by the late 14th century, standards of French in Britain were slipping – at least in some quarters. Perhaps not such a problem at home, where English had already assumed some of the roles previously performed by French. But if British merchants wanted to export wool, or import bottles of Bordeaux, knowledge of French was still a must.

It’s around this time that the “Manieres de langage” – or “Manners of Speaking” – began to appear. These model conversations, the earliest used to teach French to English speakers, were used by business teachers who taught all the necessary skills for performing basic clerical work.

Colourful language

As well as teaching learners how to ask for directions and find lodgings in France, the “Manieres” feature rather more colourful language than you’d find in today’s textbooks.

Some of the dialogues are made up entirely of insults and chat-up lines. Learners could quickly progress from “Mademoiselle, do I know you?” to “You’re quite sure you don’t have another boyfriend?”. And if things didn’t quite go to plan, an expression such as: “Va te en a ta putaigne … quar vous estez bien cuillez ensemble” (That’s it, run along to your whore! You’re made for each other!) may have proved useful.

Astrologer and demon.
British Library, Royal 6 E VI/2, f. 396v

The “Manieres” also taught learners about life across the Channel. In one dialogue a Parisian chap mentions to an Englishman that he’s been to Orléans. The Englishman is amazed: “But that’s near the edge of the world!” he exclaims. “It’s actually in the middle of France,” replies the Parisian, “and there’s a great law school there”. Once again, the Englishman is taken aback. He’s heard it’s where the devil teaches his disciples black magic. The Parisian is exasperated until the Englishman offers to buy him a drink.

Lessons from the past

The French spoken in Britain was mocked from at least the 12th century, even by the British themselves. In the “Canterbury Tales”, for instance, Chaucer teases his Prioress for speaking the French of “Stratford-at-Bow” (rather than proper Parisian).




Read more:
English swearing’s European origins


Like many a language learner in Britain today, the Englishman in the “Manieres” lacks confidence in his linguistic abilities and worries about how he is ever going to speak like a native.

But the “Manieres” also suggest there was less separating the French of Britain from “proper Parisian” than we might think. When the Englishman lets slip he’s never actually been to France, it’s the Parisian’s turn to be amazed. How could anyone learn such good French in England?

Catte Street, Oxford, where a business school was located in the 15th century.
Photo © Marathon (cc-by-sa/2.0)

These language learning resources date from a time when the association between linguistic identity and nationality was looser than it often is today. French doesn’t just belong to the French, according to the “Manieres” – learners can take pride in it too.

In Oxford, business school French proved so popular its success seemed to rattle the dons. In 1432 a University statute banned French teaching during lecturing hours to stop students skiving Latin.

It’s hard to imagine needing to curb enthusiasm for learning a foreign language in Brexit Britain. But perhaps there are lessons in the “Manieres” that could help promote language learning in the 21st-century classroom.

Huw Grange, Junior Research Fellow in French, University of Oxford

This article was originally published on The Conversation. Read the original article.

