The outlook was not promising in 1864 for President Abraham Lincoln’s reelection.
Hundreds of thousands of Americans had been killed, wounded or displaced in a civil war with no end in sight. Lincoln was unpopular. Radical Republicans in his own party doubted his commitment to Black civil rights and condemned his friendliness to ex-rebels.
Momentum was building to replace him on the ballot with Treasury Secretary Salmon P. Chase. A pamphlet went viral arguing that “Lincoln cannot be re-elected to the Presidency,” warning that “The people have lost all confidence in his ability to suppress the rebellion and restore the Union.” An embarrassed Chase offered Lincoln his resignation, which the president declined.
The fact remained that no president had won a second term since Andrew Jackson, 32 years and nine presidents earlier. And no country had held elections in the midst of civil war.
Arguments for postponing
Some urged that the June Republican convention be postponed until September to give the Union one more shot at military victory. Other Republicans went further, arguing that the country should “postpone … a Presidential election for four years more … (until) the rebellion will not only be subdued, but the country will be tranquillized and restored to its normal condition.”
Holding the election during civil war would render “the vote … fraudulent,” argued the New York Sunday Mercury, in a widely reprinted article. The nation would “flame up in revolution, and the streets of our cities would run with blood.”
But Lincoln’s party renominated him. He was a canny political strategist who calculated that nominating Democratic Unionist and military Governor of Tennessee Andrew Johnson for vice president would attract disaffected Democrats and speed national reunification.
Johnson proved to be a disastrous choice for Black civil rights, but in 1864 his candidacy shrewdly balanced the ticket.
Yet a military victory that could help Lincoln’s standing and prospects remained elusive. General Ulysses S. Grant led the Overland Campaign against Confederate forces under General Robert E. Lee across much of eastern Virginia that spring. After 55,000 Union casualties – about 45% of Grant’s army – Grant laid siege to Petersburg.
By the time Democrats met in August to nominate General George B. McClellan, there was still no end in sight to the war. Lincoln had removed McClellan from command of the Union Army of the Potomac in 1862, but the general was still a commissioned officer. Yet McClellan’s party was in disarray. He opposed a peace settlement with the Confederacy while the Democratic Party platform committed him to it.
Defeat ‘seems exceedingly probable’
Without scientific polling, Lincoln and his advisers predicted defeat.
At the end of August, Lincoln wrote to his Cabinet, “it seems exceedingly probable that this Administration will not be re-elected. Then it will be my duty to so co-operate with the President elect, as to save the Union between the election and the inauguration; as he will have secured his election on such ground that he can not possibly save it afterwards.”
Abraham Lincoln understood that the war for the Union was about the integrity of a constitutional republic, not the president or the party. It was about “a new birth of freedom” and not about him. And that meant his victory in the election was less important to him than the fate of the entire country.
Yet Lincoln also made contingency plans in the event he lost, asking Frederick Douglass to help free enslaved people in rebel-held areas.
Soldiers vote absentee
It was a bitter campaign. Lincoln’s opponents tarred him with racist and bestial characterizations. Republicans fought back, charging Democrats with being treasonous.
But no slogan discrediting the opposition was as effective in building support for Lincoln as the September Union military victories at Mobile Bay and Atlanta.
Even on the eve of the election, there were still calls to delay or cancel the vote.
Lincoln, who would go on to win, assured those critics, “We cannot have free government without elections; and if the rebellion could force us to forego, or postpone a national election it might fairly claim to have already conquered and ruined us.”
Military historians have written extensively about the strategic errors during this critical phase of the “Italian campaign,” which reduced the abbey to a “mass of ruins.”
Situated on the Germans’ defensive “Gustav Line,” which connected the Tyrrhenian and Adriatic Seas, the abbey stood in the way of the Allies’ march towards Rome. But was its destruction really necessary?
Many of the campaign’s closest participants didn’t think so. Writing after the war, American army General Mark W. Clark considered the attack an unnecessary measure.
Yet all was not lost. Pre-emptive measures fuelled by a growing trans-Atlantic concern for the protection of its ancient library, archive and treasures spared the abbey an even greater disaster: the complete loss of its cultural identity and heritage.
Both Allied and Axis forces, engaged in a larger war against each other, scrambled to protect Monte Cassino’s library and artifacts. A politicized struggle emerged in the process, with both sides wanting to be seen and remembered as guardians of Europe’s cultural and religious inheritance.
Rise to prominence
Monte Cassino was the fountainhead of the western monastic tradition.
Established by Saints Benedict and Scholastica around the year 529, the abbey grew throughout the Middle Ages into one of the most important religious, political, cultural and intellectual centres in western Europe.
Under Abbot Desiderius (1058-87), who physically expanded the abbey’s scriptorium and its scribal activity, Monte Cassino assumed a prominent place in the annals of western history, culture and learning.
The abbey’s so-called “Golden Age” didn’t last forever. Yet the achievements of this era furnished a rich historical legacy.
More than just bricks and mortar
Saving the abbey from wartime destruction became a priority for both Allied and Axis forces.
Appealing to the Italian people by radio, leaflets “and any other means available,” American army General George Marshall sought to remove all movable works of art from harm’s way. The destruction of immovable works was also to be avoided, “insofar as possible without handicapping military operations.”
Italy’s cultural inheritance was at stake.
Practical limits to protection
There were practical limits to the protection available. The lives of fighting men, military strategists repeatedly argued, should take precedence over ancient buildings.
But as Eisenhower admitted, “the choice is not always so clear-cut as that.” He recognized there were times when “military necessity” could justify the complete annihilation of “some honoured site.” But it was the imperative of high commanders, he contended, to “spare without any detriment to operational needs” whatever monuments could be saved.
Added to this list of artifacts were priceless artistic works by Titian, Raphael, Bruegel and da Vinci, among others, as well as various ancient vases, tapestries, sculptures, reliquaries (containers for holy relics) and crucifixes.
The whole salvage operation was an improbable feat in diplomacy, secular and ecclesiastical collaboration and logistics in the midst of war. But there are lingering questions about the Germans’ intervention — how both they and Allied forces sought to represent it in historical records.
Whatever the answer, the Italian Director General of the Fine Arts, writing on Dec. 31, 1943, thanked German military and political authorities for their collaborative efforts in safeguarding the “national artistic patrimony.”
Fifteen seconds before 5.30am on July 16 1945, above an area of New Mexico desert so unforgivingly dry that earlier travellers christened it the Jornada del Muerto (Journey of the Dead Man), a new sun flashed into existence and rose rapidly into the sky. It was a little before dawn.
This strange, early daybreak was the Trinity Test: humanity’s first encounter with the atomic bomb. Within a month, two bombs were dropped on Japan: the first, “Little Boy”, a uranium weapon, at Hiroshima; the second, “Fat Man”, a plutonium weapon of the implosion design tested at Trinity, on Nagasaki. Casualty estimates vary widely, but perhaps as many as 150,000-250,000 people died as a direct result of these two events. The following half century was one of intense nuclear testing, the residue of which might be the signature for the proposed new epoch of the Anthropocene.
The extraordinary story of the Manhattan Project, which led to this point, has been told many times. It begins with the realisation that atomic weapons, releasing vast amounts of energy via a nuclear chain reaction, were possible. It includes a 1939 letter, signed by Albert Einstein, alerting President Roosevelt to the dangers of a German atomic bomb programme, and tells how, following the United States’ entry into the second world war after the Japanese attack on Pearl Harbor, the programme accelerated rapidly under the control of General Leslie Groves.
The Manhattan Project absorbed the British and Canadian “Tube Alloys” atomic programme, and drew on a dazzling array of scientific talent. More than a purely scientific endeavour, it was an engineering and industrial enterprise on a massive scale, employing about 130,000 people at its peak, and perhaps half a million cumulatively.
Site Y was a town built from scratch at Los Alamos, New Mexico, to produce the atomic bomb. Here, under the scientific directorship of J Robert Oppenheimer – a complex, charismatic figure (so famous after the war that he was instantly recognisable by his porkpie hat) – scientists, including many who’d fled Nazi persecution in Europe and were acutely aware of what a Nazi bomb might mean, built the “gadget” tested at Trinity.
By then, though, circumstances had changed. In late 1944, as Allied forces advanced across Europe, it became apparent that the German bomb programme had stalled years before. After Franklin Roosevelt’s death in April 1945 and Germany’s defeat in May, the Trinity Test was prioritised so Harry Truman, the new president, would have news of it when he met Joseph Stalin and Winston Churchill at the Potsdam conference.
Trinity is a striking moment. Scientists, military personnel and observers gathered in observation bunkers 10,000 yards from ground zero, at a base camp ten miles away, and at Compañia Hill, 20 miles away. Overnight, thunder, lightning and rain swept across the area, imperilling the test.
Don Hornig, the last man to “babysit” the bomb in its metal shack at the top of a 100ft tower, recalls passing the time by reading an anthology of humorous writing, Desert Island Decameron, by the light of a 60-watt bulb. He hoped the wet tower would act as a lightning rod if there were any lightning strikes. The alternative was sobering, but he appears to have been philosophical: “It would set the bomb off. And in that case, I’d never know about it! So I read my book.”
At a 2am conference, Groves threatened hard-pressed project meteorologist, Jack Hubbard, insisting he sign his forecast predicting conditions would clear by dawn and promising to “hang” him if they didn’t. Groves then roused New Mexico’s governor by phone, warning him he might have to declare martial law if things went wrong. By 4am the skies were clearing.
As 5.30 approached, people readied themselves with welder’s glass to view the test. At Compañia Hill, the physicist, Edward Teller, passed around sun cream. At S-10000, the main control bunker, an exhausted Oppenheimer leaned against a post to steady himself as the final seconds ticked away, and was heard to mutter: “Lord, these affairs are hard on the heart.”
The story of the Manhattan Project often ends with the controversial use of the bomb on Japan, or goes on to tell about the leaking of atomic secrets by Klaus Fuchs and the first Soviet atomic test in 1949. It might add that Oppenheimer, frequently portrayed as a tragic figure, had his security clearance revoked amid the anti-communist hysteria of the early 1950s.
A new world
Now, 75 years on, it’s worth isolating Trinity from this complex history to ask what that early morning moment in the remote desert meant. It was here, after all, that humans first encountered phenomena that were to haunt the cold war imagination, and still shape how many imagine potential nuclear futures: the atomic flash, the mushroom cloud and radioactive fallout.
Although this was a new human experience (Norris Bradbury, who succeeded Oppenheimer as director of the Los Alamos National Laboratory, noted that “the atom bomb did not fit into any preconceptions possessed by anybody”), it was processed through cultural traditions with long histories. It’s become an origin story in nuclear mythologies.
This fascination with Trinity shows how it’s not only an important historical moment, but a critical cultural one too. As the old sun crept above the horizon a few minutes after the test, many present were in little doubt it was rising on a new world.
The brightest light
In both eyewitness accounts and in fiction, Trinity is described as a moment of rupture and rapture: rupture because it marks the transition from a pre-nuclear to a nuclear age; rapture because the encounter with dazzling light and a power overwhelming the senses has the quality of religious experience.
Of course, there can be distortion in such accounts. The popular tendency to see the atomic bomb as the definitive nuclear technology marginalises fields like nuclear medicine and ignores the intellectual richness of the nuclear sciences.
And there are other candidates for the beginning of the nuclear age: Hiroshima, for sure, but also perhaps the creation of the first self-sustaining chain reaction by Enrico Fermi’s team in Chicago in 1942, Lise Meitner and Otto Frisch’s description of fission in 1939, James Chadwick’s discovery of the neutron in 1932, and Ernest Rutherford’s “splitting” (depending how one defines this) of the atom in 1917. The very notion of a singular beginning to the nuclear age is a fiction: each moment exists only in the context of others.
Yet, Trinity was experienced as a new dawn. This is particularly apparent in the recurring metaphor of the explosion as a sun. For William Laurence of the New York Times, observing the test from 20 miles away at Compañia Hill, it was:
Sunrise such as the world has never seen, a great green super-sun climbing in a fraction of a second to a height of more than 8,000 feet, rising ever higher until it touched the clouds, lighting up earth and sky all around with a dazzling intensity.
Ernest Lawrence, inventor of the cyclotron, a type of particle accelerator, noted the transition “from darkness to brilliant sunshine, in an instant”.
Perhaps the description by Isidor Rabi, discoverer of nuclear magnetic resonance (used in MRI scans), is the most compelling:
The brightest light I have ever seen or that I think anyone has ever seen. It blasted; it pounced; it bored its way right through you. It was a vision that was seen with more than the eye.
The experience is corporeal here: the light has heft and is felt by the body. Its revelatory characteristics are picked up in literature of the Trinity Test. In Lydia Millet’s novel, Oh Pure and Radiant Heart, the flash is a “sear of lightness”. In Joseph Kanon’s thriller, Los Alamos, the protagonist “closed his eyes for a second, but it was there anyway, this amazing light, as if it didn’t need sight to exist”. In John Canaday’s poem, Victor Weisskopf, “a sun erupted”.
Laurence, whose reporting on the bomb won a Pulitzer, saw Trinity as crystallising a new relation with the universe. There, he wrote, “an elemental force [was] freed from its bonds after being chained for billions of years” as, for the first time, humans used an energy source that “does not have its origin in the sun”. “All seemed to feel”, wrote Brigadier General Thomas Farrell, General Groves’s deputy, “that they had been present at the birth of a new age – the Age of Atomic Energy”.
Fire from the gods
Stories of human acquisition of knowledge and power have deep roots in western culture. In Greek myth, Prometheus steals fire from the gods and is punished by being chained to a rock, his liver torn out daily by an eagle, only to grow back that he might be tormented again. One of the most substantial biographies of Oppenheimer names him, in its title, American Prometheus.
In 1946, reflecting on the moment of the Trinity Test, Oppenheimer himself saw the analogy: “We thought of the legend of Prometheus, of that deep sense of guilt in man’s new powers, that reflects his recognition of evil, and his long knowledge of it.”
The most famous of Oppenheimer’s words to describe Trinity, the lines from the Hindu scripture, the Bhagavad Gita, “Now I am become Death, the Destroyer of Worlds” – first appearing in print in 1948 but frequently repeated subsequently – reinforce this sense of an encounter with divine forces. They are, for instance, the final words in Tom Morton-Smith’s play, Oppenheimer. They are invoked, too, though not actually spoken or sung, when the chorus sings lines from the Gita in John Adams’ opera, Doctor Atomic.
So much part of the mythology are these words, that it’s sometimes erroneously assumed Oppenheimer actually said them at Trinity. His brother Frank’s recollection was that he simply said: “It worked”. It’s important, too, to be wary of where the mythmaking might take us. As the nuclear historian Alex Wellerstein points out, the words from the Gita are unlikely to be the hubristic statement of Oppenheimer’s triumph they might seem. They are often contrasted with the rather blunter assessment of Kenneth Bainbridge, in charge of the test, who commented to Oppenheimer, “Now we are all sons of bitches”.
The phrase’s attraction is, I think, its ambiguity. It’s portentous, but open to interpretation, gesturing toward something important in humanity’s encounter with greater powers (perhaps a Faustian bargain struck between the purity of physics and the real-world horror of military technology) without quite stating it. A similar suggestiveness surely accounts, too, for the proliferation of the famous (but possibly erroneous) story that Trinity was named by Oppenheimer for a metaphysical poem by John Donne:
Batter my heart, three-person’d God, for you
As yet but knock, breathe, shine, and seek to mend;
That I may rise and stand, o’erthrow me, and bend
Your force to break, blow, burn, and make me new.
It opens up interesting creative possibilities. In her novel Trinity, Louisa Hall imagines Donne’s poem to be one admired by Jean Tatlock, with whom Oppenheimer had an intense relationship, but who died in 1944. In Doctor Atomic, the poem’s words comprise the lyrics of the moving aria closing the first act.
Unsurprisingly, Christian traditions of the acquisition of knowledge, and of the relation with God, are also invoked at Trinity. Oppenheimer famously stated in a lecture in 1947 that “the physicists have known sin”, a statement controversial among his colleagues.
There is, then, a furious mythmaking around both Trinity and Oppenheimer. It transforms Oppenheimer from an actual person into a compelling tragic figure. It transforms the atomic bomb into a technology that symbolises broader anxieties about the relations between ourselves, our technologies and the Earth.
Beauty and terror
Stories about the atomic explosion also conjure up the aesthetic tradition of the sublime, perhaps the dominant means through which encounters with nature have been processed in western societies since the Romantic period. In the art of the sublime, extremity of experience – the wildness and grandeur of nature one might encounter in a storm at sea, for instance – is emphasised.
The sublime evokes both beauty and terror. For Farrell, Groves’s deputy, the explosion was “magnificent, beautiful” and “terrifying”. In Ellen Klages’ young adult novel, The Green Glass Sea, a witness describes Trinity, saying “It was beautiful. It was terrifying”. These are experiences of awe in the sense defined by the Oxford English Dictionary: “a feeling of fear or dread, mixed with profound reverence, typically as inspired by God or the divine”.
Farrell said of the test that it appeared as “that beauty that great poets dream about but describe most poorly and inadequately”. He is, in fact, remarkably eloquent, as this description of the desert landscape, lit by Trinity, shows:
The whole country was lighted by a searing light with the intensity many times that of the midday sun. It was golden, violet, grey and blue. It lighted every peak, crevasse and ridge of the nearby mountain range with clarity and beauty that cannot be described but must be seen to be imagined.
Pearl Buck’s novel about the Manhattan Project, Command the Morning (1959), seems to draw on this description. Stephen Coast, a (fictional) project scientist, sees:
The sky burst into blinding light. Miles away the mountains were black and then glittered into brilliant relief in the searing light. Colour splashed over the landscape, yellow, purple, crimson, grey. Every fold in the mountain sprang into bold lines, every valley was revealed, every peak stood stark.
The proliferation of adjectives chases after the experience, as if language can’t keep up with the boiling profusion of colours. Characteristically, here, the sublime exceeds language’s capacity to capture it.
Trinitite and transmutation
Of course, what’s important about eyewitness and literary descriptions is not merely that they fit Trinity into established aesthetic traditions, but that the fit is uncomfortable. There are religious connotations to the dazzling light and overwhelming power of the explosion, but the forces encountered aren’t divine. Feelings aroused by the sublime are displaced uncannily when the source is technology, not nature.
In an essay on the atomic sublime, the scholar, Peter Hales, shows how the threat of the mushroom cloud was eventually somewhat tamed by being mediated through the aesthetics of the sublime. Trinity, though, provides a compelling origin story in nuclear mythologies precisely because in 1945 it was too new to be contained by that tradition. Even the familiar term, “mushroom cloud”, wasn’t yet readily available to name what rose into the sky (Frisch thought it both “a bit like a strawberry … slowly rising into the sky from the ground, with which it remained connected by a lengthening stem of whirling dust”, and “a red-hot elephant standing balanced on its trunk”).
Trinity is unsettling. The experience evoked intimations of the world’s end that were later frequently associated with nuclear weapons. George Kistiakowsky, who led the group building explosive lenses for the gadget, said Trinity was “the nearest thing to doomsday that one could possibly imagine”.
As the mushroom cloud boiled upwards, one military official, perhaps spooked by Enrico Fermi’s mischievous taking of bets on whether the explosion would ignite the atmosphere and, if so, whether it would destroy the whole world or just New Mexico (a possibility actually discussed, but ruled out well in advance of the test), apparently lost faith in the “long-hairs”, as the scientists were sometimes referred to by the soldiers at Los Alamos. “My God,” he’s said to have exclaimed, “the long-hairs have lost control!”.
Frequently, then, Trinity is a story about entering an unsettling new era. The Green Glass Sea captures this beautifully. The desert sand was melted by the test into a glassy substance, dubbed trinitite or Alamogordo glass. The novel’s young protagonist traverses this beautiful, alien world, that came into being 75 years ago:
The ground sloped gently downward into a huge green sea. Dewey took a few more steps and saw that it wasn’t water. It was glass. Translucent jade-green glass, everywhere, colouring the bare, empty desert as far ahead as she could see.
As the UK marks the 80th anniversary of the escape from Dunkirk, we shall hear much of the author JB Priestley’s first “postscript” for BBC Radio on Wednesday June 5. That broadcast coined the phrase “Little Ships” and acknowledged Priestley’s own part in shaping understanding of Dunkirk. He asked listeners: “Doesn’t it seem to you to have an inevitable air about it – as if we had turned a page in the history of Britain and seen a chapter headed ‘Dunkirk’?”
But there was nothing inevitable about it.
Before pledging to “fight them on the beaches”, Winston Churchill himself reminded the House of Commons in the same speech that “wars are not won by evacuations”. He acknowledged that the BEF had courted disaster before depicting its escape as “a miracle of deliverance”. That the British public regards it as a triumph owes much to the work of British newspaper journalists and the Royal Navy press officers who briefed them.
How the ‘miracle’ came about
Dunkirk was not reported in eyewitness accounts from the beaches. The few war correspondents who struggled back with the retreating armies had no means by which to communicate. Reports, such as Evelyn Montague’s The Miracle of the BEF’s Return for the Manchester Guardian of Saturday June 1 1940, were penned by journalists invited to witness the Royal Navy’s delivery of evacuated soldiers to the ports of south-east England. There, they were briefed with patriotic fervour and naval pride as well as facts.
The first sentence of Montague’s piece gives a flavour of the mood that was inspired:
In the grey chill dawn today in a south-eastern port, war correspondents watched with incredulous joy the happening of a miracle.
The reporter – a grandson of the famous Guardian editor and owner C.P. Scott – did not fail to give the Royal Navy credit. Having described a waterfront hotel in which “every armchair held its sleeping soldier or sailor, huddled beneath overcoat or ground sheet”, Montague turned to the scene in the port:
As the rising sun was turning the grey clouds to burnished copper the first destroyer of the day slid swiftly into the harbour, its silhouette bristling with the heads of the men who stood packed shoulder to shoulder on its decks.
Back in 1940, the Times did not award reporters bylines. Its report of the BEF’s return on June 1 was by “Our Special Correspondent”. He too witnessed the scenes in a south-eastern port (security censorship forbade more precise identification). The men, he wrote, were “weary but undaunted”. Protected by “the ceaseless patrol maintained by British warships and aeroplanes in the English Channel”, men who had displayed “steadiness under a cruel test” were “pouring onto the quays”.
The Daily Mirror’s Bernard Gray, writing in its stablemate, the Sunday Pictorial, gave his verdict in a column on June 2 headlined simply “The Whole Magnificent Story”. “There have been many glorious episodes in the history of Britain”, he opined, “but, if that great English historian Macaulay were able to select from 2,000 years the most glorious week in the annals of the British Empire, this last seven days would surely be the week he would have chosen.”
Gray did not hesitate to offer comparisons:
Never mind the defeat of the Armada. Forget even the Battle of Waterloo, the epic of Trafalgar. For this week has seen the British Empire at its mightiest – in defeat.
Standing “in the streets of an English Channel Port”, G. Ward Price of the Daily Mail was similarly enthralled in his front-page piece, Rearguard Battles On, on June 1: “It is a picture of staggering heroism, fighting spirit and determination that never weakened in the face of overwhelming odds in men and material.”
A defeat, however ‘glorious’
It took Hilaire Belloc, the Anglo-French author of Cautionary Tales for Children, to recognise in his column for the Sunday Times (The Evacuation and After, June 2) that the withdrawal from Belgium and the collapse of Britain’s key ally, France, constituted a “catastrophe”.
In his defining examination of the elements that comprise Britain’s “received story” of 1940, The Myth of the Blitz, Scottish historian and poet Angus Calder noted that elements of the way the story was reported were misleading. However, Calder agreed that “Dunkirk was indeed a great escape”.
I celebrate the work British newspapers did to stiffen resolve and sustain morale at this time of grave national peril. In a democracy fighting totalitarianism, newspapers must balance their obligation to hold power to account against their duty to the national cause. The newspapers surveyed here certainly colluded in the creation of myths about Dunkirk, but their readers might not have welcomed any effort to report it otherwise.
After all, myths are not lies and this one was studded with harsh facts. In Bernard Gray’s words for the Sunday Pictorial, Dunkirk was glorious despite the truth that: “The British Army has not won a battle. The British Army has retreated. The British Army has had to leave the Battlefield.”
For me, David Low captured the prevailing mood in his famous “Very Well, Alone” cartoon for the Evening Standard just a few weeks later on June 18. It depicts a British soldier alone before a raging sea and gesturing with a raised fist towards the Nazi-occupied continent from which German troops were expected to arrive at any moment.
Nearly all the 1.3 million people sent to Auschwitz, the Nazi death camp in occupied Poland, were murdered – either sent to the gas chambers or worked to death. Life expectancy in many of the Nazi camps was between six weeks and three months.
Today, however, the topic is being explored in depth, allowing us to better understand not only how Jews died during the Holocaust, but also how they lived.
During the late 1980s, I conducted a study of Jewish men and women who had been part of Auschwitz’s “Canada Commando,” the forced labor detail responsible for sorting through the possessions inmates had brought with them to the camp and preparing those items for reshipment back to Germany for civilian use.
Since the barracks were the only place in the camp where one could find almost unlimited food and clothing, this forced labor detail was named after Canada – a country seen as a symbol of wealth.
Examining the behavior of the men and women of the Canada Commando, I noted an interesting difference. Among the items of clothing sorted there were fur coats. While both male and female prisoners in the Canada Commando tried to sabotage this work, acts punishable by death, their methods differed.
Male prisoners would usually rip the lining and seams of the coat to shreds, keeping only the outer shell intact. At first use, the coat would come apart, leaving the German who wore it coatless in the winter.
The few surviving women in the commando whom I interviewed did not use this tactic. Rather, they told me, they decided together to insert handwritten notes into the coat’s pockets that read something along the lines of: “German women, know that you are wearing a coat that belonged to a woman who has been gassed to death in Auschwitz.”
The women, in other words, chose psychological sabotage. The men, physical.
Coping with hunger
One of the most central experiences of all camp prisoners during the Holocaust was hunger. While both men and women suffered from hunger during incarceration, male and female prisoners used disparate coping methods.
While men would regale each other with tales of the fantastic meals they would enjoy once liberated, women would often discuss how they had cooked the various dishes they loved before the war, from baking fluffy cakes to preparing traditional Jewish blintzes. Cara de Silva’s 1996 book, “In Memory’s Kitchen,” movingly documents how this phenomenon played out among women prisoners in the Terezin camp.
The differences between men’s and women’s coping methods may have derived from the gendered behavior in their lives before the war, in which men ate and women cooked – at least in the middle and lower classes.
In the case of women, this may also have been a female socialization process meant to solve two dilemmas simultaneously: the psychological need to engage – at least verbally – with food, and the educational need to prepare the young girls in the camp for culinary and household tasks after the war.
Under normal circumstances, mothers would have taught their daughters by example – not story.
Motherhood under Nazi rule
Various historical studies make mention of motherly sacrifices during the Holocaust, such as women who chose to accompany their children to death so that they would not be alone during their last moments on Earth.
During the “selections” at Auschwitz – when prisoners were sent either to live or die – prisoners arriving were usually divided by sex, with the elderly, mothers and small children being separated from men and older boys. The mothers with small children, along with the elderly, were automatically sent to death.
The writer and Auschwitz survivor Tadeusz Borowski writes about a number of young mothers who hid from their children during the selection, in an attempt to buy themselves a few additional days, or possibly hours, of life.
If a German soldier found a small child alone at a “selection,” Borowski writes, he would take the child up and down the rows of prisoners while screaming, “This is how a mother abandons her child?” until he tracked down the hapless woman and condemned them both to the gas chambers.
At first, the female Auschwitz survivors I interviewed said they’d never heard of any such thing. Eventually, however, after I returned to the question several times via different topics, a few women admitted to hearing that a handful of mothers who arrived in Auschwitz with small children did indeed try to hide to save their own lives.
Historians are not judges. I do not mention these actions, taken in mortal fear, to condemn these women, but rather to contribute, 75 years later, to our understanding of Jewish life and death under Nazi terror. Doing so requires relinquishing preconceived notions about both men and women, mapping out a broader canvas of the grim reality at Auschwitz.