At Sydney’s enormous Rookwood Cemetery, a lichen-spotted headstone captures a family’s double burden of grief.
The grave contains the remains of 19-year-old Harriet Ann Ottaway, who died on 2 July 1919. Its monument also commemorates her brother Henry James Ottaway, who “died of wounds in Belgium, 23rd Sept 1917, aged 21 years”.
While Henry was killed at the infamous Battle of Passchendaele, Harriet’s headstone makes no mention of her own courageous combat with “Spanish flu”.
Harriet’s story typifies the enduring public silence around the pneumonic influenza pandemic of 1918–19. Worldwide, it killed an estimated 50-100 million people – at least three times the number of deaths caused by the First World War.
Why historians ignored the Spanish flu
After the disease came ashore in January 1919, about a third of all Australians were infected and the flu left nearly 15,000 dead in under a year. Those figures match the average annual death rate for the Australian Imperial Force throughout 1914–18.
Arguably, we could consider 1919 as another year of war, albeit against a new enemy. Indeed, the typical victims had similar profiles: fit, young adults aged 20-40. The major difference was that in 1919, women like Harriet formed a significant proportion of the casualties.
Deadly flu spread rapidly
There was no doubt about the medical and social impact of the “Spanish flu”. Although its origins remain contested, it certainly didn’t arise in Spain. What is known is that by early 1918, a highly infectious respiratory disease, caused by a then-unknown agent, was moving rapidly across Europe and the United States. By the middle of that year, as the war was reaching a tipping point, it had spread to Africa, India and Asia.
It also took on a much deadlier profile. While victims initially suffered the typical signs and symptoms of influenza – including aches, fever, coughing and an overwhelming weariness – a frighteningly high proportion went rapidly downhill.
Patients’ lungs filled with fluid – which is why it became known as “pneumonic influenza” – and they struggled to breathe. For nurses and doctors, a tell-tale sign of impending death was a blue, plum or mahogany colour in the victim’s cheeks.
This, sadly, was the fate of young Harriet Ottaway. Having nursed a dying aunt through early 1919, in June she tended her married sister Lillian, who had come down with pneumonic influenza.
Despite taking the recommended precautions, Harriet contracted the infection and died in hospital. Ironically, Lillian survived. But in the space of less than two years she had lost both a brother to the Great War and her younger sister to the Spanish flu.
An intimate impact worldwide
Indeed, as Harriet’s headstone reminds us, this was an intimate pandemic. The statistics can seem overwhelming until you realise what it means that about a third of the entire world’s population was infected.
It wasn’t just victims who were affected. Across Australia, regulations intended to reduce the spread and impact of the pandemic caused profound disruption. The nation’s quarantine system held back the flu for several months, meaning that a less deadly version came ashore in 1919.
But it caused delay and resentment for the 180,000 soldiers, nurses and partners who returned home by sea that year.
Responses within Australia varied from state to state but the crisis often led to the closure of schools, churches, theatres, pubs, race meetings and agricultural shows, plus the delay of victory celebrations.
The result was not only economic hardship, but significant interruptions in education, entertainment, travel, shopping and worship. The funeral business boomed, however, as the nation’s annual death rate went up by approximately 25%.
Yet for some reason, the silence of Harriet’s headstone is repeated across the country. Compared with the Anzac memorials that peppered our towns and suburbs in the decades after the Great War, few monuments mark the impact of pneumonic influenza.
Nevertheless, its stories of suffering and sacrifice have been perpetuated in other ways, especially within family and community memories. A century later, these stories deserve to be researched and commemorated.
Despite the disruption, fear and substantial personal risk posed by the flu, tens of thousands of ordinary Australians rose to the challenge. The wartime spirit of volunteering and community service saw church groups, civic leaders, council workers, teachers, nurses and organisations such as the Red Cross step up.
They staffed relief depots and emergency hospitals, delivered comforts from pyjamas to soup, and cared for victims who were critically ill or convalescent. A substantial proportion of these courageous carers were women, at a time when many were being commanded to hand back their wartime jobs to returning servicemen.
In resurrecting stories such as the sad tale of Harriet Ottaway, it’s time to restore our memories of the “Spanish flu” and commemorate how our community came together to battle this unprecedented public crisis.
Melbourne man Raffaele Di Paolo pleaded guilty last week to a number of charges related to practising as a medical specialist when he wasn’t qualified to do so. Di Paolo is in jail awaiting his sentence after being found guilty of fraud, indecent assault and sexual penetration.
This case follows that of another so-called “fake doctor” in New South Wales. Sarang Chitale worked in the state’s public health service as a junior doctor from 2003 until 2014. It was only in 2016, after his last employer – the research firm Novotech – reported him to the Australian Health Practitioner Regulation Agency (AHPRA), that his qualifications were investigated.
“Dr” Chitale turned out to be Shyam Acharya, who had stolen the real Dr Chitale’s identity and obtained Australian citizenship and employment at a six-figure salary. Acharya had no medical qualifications at all.
Cases of impersonation, identity theft and fraudulent practice happen across a range of disciplines. There have been instances of fake pilots, veterinarians and priests. It’s especially confronting when it happens in medicine, because of the immense trust we place in those looking after our health.
So what drives people to go to such extremes, and how do they get away with it?
A modern phenomenon
Impersonation of doctors is a modern phenomenon. It grew out of Western medicine’s drive towards professionalism in the 19th century, which ran alongside the explosion of scientific medical research.
Before this, doctors would be trained by an apprentice-type system, and there was little recourse for damages. A person hired a doctor if they could afford it, and if the treatment was poor, or killed the patient, it was a case of caveat emptor – buyer beware.
But as science made medicine more reliable, the title of “doctor” really began to mean something – especially as the fees began to rise. By the end of the 19th century in the British Empire, becoming a doctor was a complex process. It required long university training, an independent income and the right social connections. Legislation backed this up, with medical registration acts controlling who could and couldn’t use medical titles.
Given the present social status and salaries of medical professionals, it’s easy to see why people would aspire to be doctors. And when the road ahead looks too hard and expensive, it may be tempting to take short cuts.
Today, there are four common elements that point to weaknesses in our health-care systems, which allow fraudsters to slip through the cracks and practise medicine.
1. Misplaced trust
Everyone believes someone, somewhere, has checked and verified a person’s credentials. But sometimes this hasn’t been done, or it takes a long time.
Fake psychiatrist Mohamed Shakeel Siddiqui – a qualified doctor who stole a real psychiatrist’s identity and worked in New Zealand for six months in 2015 – left a complicated trail of identity theft that required the assistance of the FBI to unravel.
Last year, in Germany, a man was found to have forged foreign qualifications that he presented to the registering body in early 2016. He was issued with a temporary licence while these were checked. When the qualifications turned out to be fraudulent, he was fired from his job as a junior doctor in a psychiatric ward. But this wasn’t until June 2017.
2. Foreign credentials
Credentials from a foreign university, issued in a different language, are another common element among medical fraudsters. Verifying these can be time-consuming, so a health system desperate for staff may cut corners.
Ioannis Kastanis was appointed as head of medicine at Skyros Regional Hospital in Greece in 1999 with fake degrees from Sapienza University of Rome. The degrees were recognised and the certificates translated, but their authenticity was never checked.
Dusan Milosevic, who practised as a psychologist for ten years, registered in Victoria in 1998. He held bogus degrees from the University of Belgrade in Serbia – at the time a war-torn corner of Europe, which made verification difficult.
3. Regional and remote practice
It’s easier to get away with faking in regional or remote areas where there is less scrutiny. Desperation to retain staff may also silence complaints.
“Dr” Balaji Varatharaju fraudulently gained employment in remote Alice Springs, where he worked as a junior doctor for nine months.
Ioannis Kastanis had worked on a distant Greek island with a population of only around 3,000 people.
4. It’s not easy to dob
Finally, there are two unnerving questions. How do you tell a poorly trained but legally qualified practitioner from a faker? And who do you tell if you suspect something is off?
The people best placed to spot the fakes – other hospital and health-care staff – work in often stressful conditions where complaints about colleagues can lead to reprisals. If the practitioner is from another ethnicity or culture, this adds an extra layer of sensitivity. It was only after “Dr Chitale” was exposed that staff were willing to say his practice had been “shabby”, “unsavoury” and “poor”.
So, why do they do it?
The reasons for fakery are as diverse as the fakers. “Dr Nick Delaney”, at Lady Cilento Children’s Hospital in Brisbane, reportedly pretended to be a doctor to “make friends” and keep a fling going with a security guard at the same hospital.
On a more sinister level, there are possible sexually predatory reasons, like those of bogus gynaecologist Raffaele Di Paolo. Fake psychiatrist Mohamed Shakeel Siddiqui said he only did it to help people.
There are also the less easily understood fakers, like “Dr” Adam Litwin, who worked as a resident in surgery at UCLA Medical Center in California for six months in 1999. Questions only began to be asked when he turned up to work in his white coat with a picture of himself silk-screened on it: even by Californian standards, this was going too far.
So how do we stop this happening?
Part of the problem is our cultural dependence on qualifications as the passkey to higher income and social status, making them an easy target for fraudsters. Qualifications can reduce risk, but they can’t eliminate it. Qualified doctors can also cause havoc: think Jayant Patel and other bona fide practitioners who have been struck off for malpractice, mutilation and manslaughter.
Conversely, no one complained about “Dr Chitale” in 11 years. The only complaints Kastanis received in 14 years were from people who thought his Ferrari was vulgar. The German junior doctor had an excellent knowledge of mental health-care procedures and language – obtained from his time as a psychiatric patient.
Most of these loopholes can be closed with time and patience. What would help is if hospital and health-care staff felt sufficiently supported to report their suspicions to their employer, rather than to their colleagues. This would foster a more open culture of flagging concerns about fellow practitioners without fear of formal or informal punishment. It might also uncover more “Dr Chitales” before anyone is seriously harmed.
There’s a long history of opposition to childhood vaccination, dating back to its introduction in England in 1796 to protect against smallpox. And many of the themes played out more than 200 years ago still resonate today.
For instance, whether childhood vaccination should be compulsory, or whether there should be penalties for not vaccinating, was debated then as it is now.
Throughout the 19th century, anti-vaxxers widely opposed Britain’s compulsory vaccination laws, leading to their effective end in 1907, when it became much easier to be a conscientious objector. Today, the focus in Australia has turned to “no jab, no pay” or “no jab, no play” policies, which link childhood vaccination to welfare payments or childcare attendance.
Of course, the methods vaccine objectors use to discuss their position have changed. Today, people share their views on social media, blogs and websites; then, they wrote letters to newspapers for publication, the focus of my research.
Many studies have looked at the role of organised anti-vaccination societies in shaping the vaccination debate. However, “letters to the editor” let us look beyond the inner workings of these societies to show what ordinary people thought about vaccination.
Many of the UK’s larger metropolitan newspapers were wary of publishing letters opposing vaccination, especially those criticising the laws. However, regional newspapers would often publish them.
As part of my research, I looked at more than 1,100 letters to the editor, published in 30 newspapers from south-west England. Here are some of the recurring themes.
Smallpox vaccination a gruesome affair
In 19th century Britain, the only vaccine widely available to the public was against smallpox. Vaccination involved making a series of deep cuts to the arm of the child into which the doctor would insert matter from the wound of a previously vaccinated child.
These open wounds left many children vulnerable to infections, blood poisoning and gangrene. Parents and anti-vaccination campaigners alike described the gruesome scenes that often accompanied the procedure, like this example from the Royal Cornwall Gazette from December 1886:
Some of these poor infants have been borne on pillows for weeks, decaying alive before death ended their sufferings.
Conspiracy theories and vaccine cults
Side-effects were so widespread that many parents refused to vaccinate their children. And letters to the editor show they became convinced the medical establishment and the government were aware of the dangers of vaccination.
If this was the case, why was vaccination compulsory? The answer, for many, could be found in a conspiracy theory.
Their letters argued doctors had conned the government into enforcing compulsory vaccination so they could reap the financial benefits. After all, public vaccinators were paid a fee for each child they vaccinated. So people believed compulsory vaccination must have been introduced to maximise doctors’ profits, as this example from the Wiltshire Times in February 1894 shows:
What are the benefits of vaccination? Salaries and bonuses to public vaccinators; these are the benefits; while the individuals who have to endure the operation also have to endure the evils which result from it. Health shattered, lives crippled or destroyed – are these benefits?
Conspiracy theories went further. If doctors knew vaccination could result in infections, then they knew children died from the procedure. As a result, some conspiracy theorists began to argue there was something inherently evil about vaccination. Some saw vaccination as “the mark of the beast”, a ritual perpetuated by a “vaccine cult”. Writing in the Salisbury Times, in December 1903, one critic said:
This is but the prototype of that modern species of doctorcraft, which would have us believe that their highly remunerative invocations of the vaccine god alone avert the utter extermination of the human race by small-pox.
For many, the issue of compulsory vaccination was directly related to the rights of the individual. Just like modern anti-vaccination arguments, many people in the 19th century believed compulsory vaccination laws were an incursion into the rights enjoyed by free citizens.
By submitting to the compulsory vaccination laws, a parent was allowing the government to insert itself into the individual home, and take control of a child’s body, something traditionally protected by the parent. Here’s an example from the Royal Cornwall Gazette in April 1899:
[…] civil and religious liberty must of necessity include the right to protect healthy children from calf-lymph defilement […] trust […] cannot be handed over at the demand of a medical tradesunion, or tamely relinquished at the cool request of some reverend rural justice of the peace.
What can we learn by looking at the past?
If anti-vaccination arguments from the past significantly overlap with those of their counterparts today, then we can learn how to deal with anti-vaccination movements in the future.

Not only can we see that compulsory vaccination laws in Australia could, as some researchers argue, be problematic; we can also use the history of vaccine opposition to better understand why vaccination remains so controversial for some people.
Surgeries and treatments come and go. A new BMJ guideline, for example, makes “strong recommendations” against the use of arthroscopic surgery for certain knee conditions. But while this keyhole surgery may slowly be scrapped in some cases due to its ineffectiveness, a number of historic “cures” fell out of favour because they were more akin to a method of torture. Here are five of the most extraordinary and unpleasant.
1. Trepanation

Trepanation (drilling or scraping a hole in the skull) is the oldest form of surgery we know of. Humans have been performing it since neolithic times. We don’t know why people did it, but some experts believe it could have been to release demons from the skull. Surprisingly, some people lived for many years after this brutal procedure was performed on them, as revealed by ancient skulls that show evidence of healing.
Although surgeons no longer scrape holes in people’s skulls to release troublesome spirits, there are still reports of doctors performing the procedure to relieve pressure on the brain. For example, a GP at a district hospital in Australia used an electric drill he found in a maintenance cupboard to bore a hole in a 13-year-old boy’s skull. Without the surgery, the boy would have died from a blood clot on the brain.
2. Lobotomy

It’s hard to believe that a procedure more brutal than trepanation was widely performed in the 20th century. Lobotomy involved severing connections in the brain’s prefrontal lobe with an implement resembling an icepick (a leucotome).
Antonio Egas Moniz, a Portuguese neurologist, invented the procedure in 1935. A year later, Walter Freeman brought the procedure to the US. Freeman was an evangelist for this new form of “psychosurgery”. He drove around the country in his “loboto-mobile” performing the procedure on thousands of hapless patients.
Instead of a leucotome, Freeman used an actual icepick, which he would hammer through the corner of an eye socket using a mallet. He would then jiggle the icepick around in a most unscientific manner. Patients weren’t anaesthetised – rather they were in an induced seizure.
Thankfully, advances in psychiatric drugs saw the procedure fall from favour in the 1960s. Freeman performed his last two icepick lobotomies in 1967. One of the patients died from a brain haemorrhage three days later.
3. Lithotomy

Ancient Greek, Roman, Persian and Hindu texts refer to a procedure, known as lithotomy, for removing bladder stones. The patient would lie on their back, feet apart, while a blade was passed into the bladder through the perineum – the soft bit of flesh between the sex organ and anus. Further indignity was inflicted by surgeons inserting their fingers or surgical instruments into the rectum or urethra to assist in the removal of the stone. It was an intensely painful procedure with a mortality rate of about 50%.
The number of lithotomy operations performed began to fall in the 19th century, as the procedure was replaced by more humane methods of stone extraction. Healthier diets in the 20th century helped make bladder stones a rarity, too.
4. Rhinoplasty (old school)
Syphilis arrived in Italy in the 16th century, possibly carried by sailors returning from the newly exploited Americas (the so-called Columbian exchange).
The sexually transmitted disease had a number of cruel symptoms, one of which was known as “saddle-nose”, where the bridge of the nose collapses. This nasal deformity was an indicator of indiscretions, and many used surgery to try and hide it.
An Italian surgeon, Gaspare Tagliacozzi, developed a method for concealing this nasal deformity. He created a new nose using tissue from the patient’s arm. He would then cover this with a flap of skin from the upper arm, which was rather awkwardly still attached to the limb. Once the skin graft was firmly attached – after about three weeks – Tagliacozzi would separate the skin from the arm.
There were reported cases of patients’ noses turning purple in cold winter months and falling off.
Today, syphilis is easily treated with a course of antibiotics.
5. Bloodletting

Losing blood, in modern medicine, is generally considered to be a bad thing. But, for about 2,000 years, bloodletting was one of the most common procedures performed by surgeons.
The procedure was based on a flawed scientific theory that humans possessed four “humours” (fluids): blood, phlegm, black bile and yellow bile. An imbalance in these humours was thought to result in disease. Lancets, blades or fleams (some spring loaded for added oomph) were used to open superficial veins, and in some cases arteries, to release blood over several days in an attempt to restore balance to these vital fluids.
Bloodletting in the West continued well into the 19th century. In 1838, Henry Clutterbuck, a lecturer at the Royal College of Physicians, claimed that “blood-letting is a remedy which, when judiciously employed, it is hardly possible to estimate too highly”.
Finally, one medical procedure, dating from one of the earliest Egyptian medical texts, that isn’t used anymore – and I can’t for the life of me think why – is the administration of half an onion and the froth of beer. It cures death, apparently.
We expect to feel no pain during surgery or at least to have no memory of the procedure. But it wasn’t always so.
Until the discovery of general anaesthesia in the middle of the 19th century, surgery was performed only as a last and desperate resort. Conscious and without pain relief, it was beset with unimaginable terror, unspeakable agony and considerable risk.
Not surprisingly, few chose to write about their experience in case it reawakened suppressed memories of a necessary torture.
One of the most well-known and vivid records of this “terror that surpasses all description” was by Fanny Burney, a popular English novelist, who on the morning of September 30, 1811 eventually submitted to having a mastectomy:
When the dreadful steel was plunged into the breast … I needed no injunctions not to restrain my cries. I began a scream that lasted unintermittently during the whole time of the incision … so excruciating was the agony … I then felt the Knife [rack]ling against the breast bone – scraping it.
But it wasn’t only the patient who suffered. Surgeons too had to endure considerable anxiety and distress.
John Abernethy, a surgeon at London’s St Bartholomew’s Hospital at the turn of the 19th century, described walking to the operating room as like “going to a hanging” and was sometimes known to shed tears and vomit after a particularly gruesome operation.
Discovery of anaesthesia
It was against this background that general anaesthesia was discovered.
A young US dentist named William Morton, spurred on by the business opportunities afforded by technical advances in artificial teeth, doggedly searched for a surefire way to relieve pain and boost dental profits.
His efforts were soon rewarded. He discovered that when he or small animals inhaled sulfuric ether (now known as ethyl ether or simply ether) they passed out and became unresponsive.
A few months after this discovery, on October 16, 1846 and with much showmanship, Morton anaesthetised a young male patient in a public demonstration at Massachusetts General Hospital.
The hospital’s chief surgeon then removed a tumour on the left side of the jaw. This occurred without the patient apparently moving or complaining, much to the surgeon’s and audience’s great surprise.
So began the story of general anaesthesia, which for good reason is now widely regarded as one of the greatest discoveries of all time.
Anaesthesia used routinely
News of ether’s remarkable properties spread rapidly across the Atlantic to Britain, ultimately stimulating the discovery of chloroform, a volatile general anaesthetic.
According to its discoverer, James Simpson, it had none of ether’s “inconveniences and objections” – a pungent odour, irritation of throat and nasal passages and a perplexing initial phase of physical agitation instead of the more desirable suppression of all behaviour.
Chloroform subsequently became the most commonly used general anaesthetic in British surgical and dental practice, mainly thanks to John Snow, the founding father of scientific anaesthesia, but it remained non-essential to the practice of most doctors.
This changed after Snow gave Queen Victoria chloroform during the birth of her eighth child, Prince Leopold. The publicity that followed made anaesthesia more acceptable and demand increased, whether during childbirth or for other reasons.
By the end of the 19th century, anaesthesia was commonplace, arguably becoming the first example in which medical practice was backed by emerging scientific developments.
Anaesthesia is safe
Ether was highly flammable so could not be used with electrocautery (which involves an electrical current being passed through a probe to stem blood flow or cut tissue) or when monitoring patients electronically. And chloroform was associated with an unacceptably high rate of deaths, mainly due to cardiac arrest (when the heart stops beating).
The practice of general anaesthesia has now evolved to the point that it is among the safest of all major routine medical procedures. Roughly one in every 300,000 fit and healthy people having elective medical procedures dies due to anaesthesia.
Despite the increasing clinical effectiveness with which anaesthesia has been administered over the past 170 years, and its scientific and technical foundations, we still have only the vaguest idea about how anaesthetics produce a state of unconsciousness.
Anaesthesia remains a mystery
General anaesthesia needs patients to be immobile, pain free and unconscious. Of these, unconsciousness is the most difficult to define and measure.
For example, not responding to, or then not remembering, some event (such as the voice of the anaesthetist or the moment of surgical incision), while clinically useful, is not enough to decisively determine whether someone is or was unconscious.
We need some other way to define consciousness and to understand its disruption by the biological actions of general anaesthetics.
Early in the 20th century, we thought anaesthetics worked by dissolving into the fatty parts of the outside of brain cells (the cell membrane) and interfering with the way they worked.
But we now know anaesthetics directly affect the behaviour of a wide variety of proteins necessary to support the activity of neurones (nerve cells) and their coordinated behaviour.
For this reason the only way to develop an integrated understanding of the effects of these multiple, and individually insufficient, neuronal protein targets is by developing testable, mathematically formulated theories.
These theories need to not only describe how consciousness emerges from brain activity but to also explain how this brain activity is affected by the multiple targets of anaesthetic action.
Despite the tremendous advances in the science of anaesthesia, after almost 200 years we are still waiting for such a theory.
Until then we are still looking for the missing link between the physical substance of our brain and the subjective content of our minds.
Reality television shows based on surgical transformations, such as The Swan and Extreme Makeover, were not the first public spectacles to offer women the ability to compete for the chance to be beautiful.
In 1924, a competition ad in the New York Daily Mirror asked the affronting question “Who is the homeliest girl in New York?” It promised the unfortunate winner that a plastic surgeon would “make a beauty of her”. Entrants were reassured that they would be spared embarrassment, as the paper’s art department would paint “masks” on their photographs when they were published.
Cosmetic surgery instinctively seems like a modern phenomenon. Yet it has a much longer and more complicated history than most people likely imagine. Its origins lie in part in the correction of syphilitic deformities and racialised ideas about “healthy” and acceptable facial features as much as any purely aesthetic ideas about symmetry, for instance.
In her study of how beauty is related to social discrimination and bias, sociologist Bonnie Berry estimates that 50% of Americans are “unhappy with their looks”. Berry links this prevalence to mass media images. However, people have long been driven to painful, surgical measures to “correct” their facial features and body parts, even prior to the use of anaesthesia and discovery of antiseptic principles.
Some of the first recorded surgeries took place in 16th-century Britain and Europe. Tudor “barber-surgeons” treated facial injuries, which as medical historian Margaret Pelling explains, was crucial in a culture where damaged or ugly faces were seen to reflect a disfigured inner self.
With the pain and risks to life inherent in any kind of surgery at this time, cosmetic procedures were usually confined to severe and stigmatised disfigurements, such as the loss of a nose through trauma or epidemic syphilis.
The first pedicle flap grafts to fashion new noses were performed in 16th-century Europe. A section of skin would be cut from the forehead, folded down, and stitched, or would be harvested from the patient’s arm.
A later representation of this procedure in Iconografia d’anatomia, published in 1841 and reproduced in Richard Barnett’s Crucial Interventions, shows the patient with his raised arm still gruesomely attached to his face during the graft’s healing period.
As socially crippling as facial disfigurements could be and as desperate as some individuals were to remedy them, purely cosmetic surgery did not become commonplace until operations were not excruciatingly painful and life threatening.
In 1846, what is frequently described as the first “painless” operation was performed by American dentist William Morton, who gave ether to a patient. The ether was administered via inhalation through either a handkerchief or bellows. Both of these were imprecise methods of delivery that could cause an overdose and kill the patient.
The removal of the second major impediment to cosmetic surgery occurred in the 1860s. English doctor Joseph Lister’s model of antiseptic surgery was taken up in France, Germany, Austria and Italy, reducing the chance of infection and death.
By the 1880s, with the further refinement of anaesthesia, cosmetic surgery became a relatively safe and painless prospect for healthy people who felt unattractive.
The Derma-Featural Co advertised its “treatments” for “humped, depressed, or… ill-shaped noses”, protruding ears, and wrinkles (“the finger marks of Time”) in the English magazine World of Dress in 1901.
A report from a 1908 court case involving the company shows that they continued to use skin harvested from – and attached to – the arm for rhinoplasties.
The report also refers to the non-surgical “paraffin wax” rhinoplasty, in which hot, liquid wax was injected into the nose and then “moulded by the operator into the desired shape”. The wax could potentially migrate to other parts of the face and be disfiguring, or cause “paraffinomas” or wax cancers.
Advertisements for the likes of the Derma-Featural Co were rare in women’s magazines around the turn of the 20th century. But ads were frequently published for bogus devices promising to deliver dramatic face and body changes that might reasonably be expected only from surgical intervention.
Various models of chin and forehead straps, such as the patented “Ganesh” brand, were advertised as a means for removing double chins and wrinkles around the eyes.
Bust reducers and hip and stomach reducers, such as the JZ Hygienic Beauty Belt, also promised non-surgical ways to reshape the body.
The frequency of these ads in popular magazines suggests that use of these devices was socially acceptable. In comparison, coloured cosmetics such as rouge and kohl eyeliner were rarely advertised. The ads for “powder and paint” that do exist often emphasised the product’s “natural look” to avoid any negative association between cosmetics and artifice.
The racialised origins of cosmetic surgery
The most common cosmetic operations requested before the 20th century aimed to correct features such as ears, noses, and breasts classified as “ugly” because they weren’t typical for “white” people.
At this time, racial science was concerned with “improving” the white race.
In the United States, with its growing populations of Jewish and Irish immigrants and African Americans, “pug” noses, large noses and flat noses were signs of racial difference and therefore ugliness.
Sander L. Gilman suggests that the “primitive” associations of non-white noses arose “because the too-flat nose came to be associated with the inherited syphilitic nose”.
American otolaryngologist John Orlando Roe’s discovery of a method for performing rhinoplasties inside the nose, without leaving a tell-tale external scar, was a crucial development in the 1880s. As is the case today, patients wanted to be able to “pass” (in this case as “white”) and for their surgery to be undetectable.
In 2015, 627,165 American women, or an astonishing 1 in 250, received breast implants. In the early years of cosmetic surgery, breasts were never made larger.
Breasts acted historically as a “racial sign”. Small, rounded breasts were viewed as youthful and sexually controlled. Larger, pendulous breasts were regarded as “primitive” and therefore as a deformity.
In the age of the flapper, in the early 20th century, breast reductions were common. It was not until the 1950s that small breasts were transformed into a medical problem and seen to make women unhappy.
Shifting views about desirable breasts illustrate how beauty standards change across time and place. Beauty was once considered as God-given, natural or a sign of health or a person’s good character.
When beauty began to be understood as located outside of each person and as capable of being changed, more women, in particular, tried to improve their appearance through beauty products – just as they now increasingly turn to surgery.
As Elizabeth Haiken points out in Venus Envy, 1921 not only marked the first meeting of an American association of plastic surgery specialists, but also the first Miss America pageant in Atlantic City. All of the finalists were white. The winner, sixteen-year-old Margaret Gorman, stood five feet one inch tall – short compared to today’s towering models – and her breast measurement was smaller than that of her hips.
There is a close link between cosmetic surgical trends and the qualities we value as a culture, as well as shifting ideas about race, health, femininity, and ageing.
Some within the field celebrated last year as the 100th anniversary of modern cosmetic surgery, championing New Zealander Dr Harold Gillies for inventing the pedicle flap graft during World War I to reconstruct the faces of maimed soldiers. Yet, as is well documented, primitive versions of this technique had been in use for centuries.
Such an inspiring story obscures the fact that modern cosmetic surgery was really born in the late 19th century and that it owes as much to syphilis and racism as to rebuilding the noses and jaws of war heroes.
The surgical fraternity – and it is a brotherhood, as more than 90% of cosmetic surgeons are male – conveniently places itself in a history that begins with reconstructing the faces and work prospects of the war wounded.
In reality, cosmetic surgeons are instruments of shifting whims about what is attractive. They have helped people to conceal or transform features that might make them stand out as once diseased, ethnically different, “primitive”, too feminine, or too masculine.
The sheer risks that people have been willing to run in order to pass as “normal”, or even to turn the “misfortune” of ugliness, as the homeliest girl contest put it, into beauty, show how strongly people internalise ideas about what is beautiful.
Looking back at the ugly history of cosmetic surgery should give us the impetus to more fully consider how our own beauty norms are shaped by prejudices including racism and sexism.