Eighty-five years ago, on April 5, 1933, President Franklin D. Roosevelt signed an executive order allocating US$10 million for “Emergency Conservation Work.” This step launched one of the New Deal’s signature relief programs: the Civilian Conservation Corps, or CCC. Its mission was to put unemployed Americans to work improving the nation’s natural resources, especially forests and public parks.
Today, when Americans talk about “big government,” the connotation is almost always negative. But as I show in my history of the Corps, this agency infused money into the economy at a time when it was urgently needed, and its work had lasting value.
Corps workers planted trees, built dams and preserved historic battlefields. They left trail networks and lodges in state and national parks that are still widely used today. The CCC taught useful skills to thousands of unemployed young men, and inspired later generations to get outside and help conserve America’s public lands.
The spiritual value of outdoor work
Roosevelt had sketched out much of his concept for the CCC well before his inauguration on March 4, 1933. Proposing the corps on March 21, he asserted that it would be “of definite, practical value” to the nation and the men it enrolled:
“The overwhelming majority of unemployed Americans, who are now walking the streets and receiving private or public relief, would infinitely prefer to work. We can take a vast army of these unemployed out into healthful surroundings. We can eliminate to some extent at least the threat that enforced idleness brings to spiritual and moral stability.”
Congress enacted the bill on March 31, and Roosevelt signed it that day. Although there was no precedent for such a vast mobilization, enrollment started a week later in New York, Baltimore, Washington, D.C., Pittsburgh and other major cities, then fanned out across the country. By midsummer, some 250,000 men aged 18 to 25 had signed up. Their six-month term might be spent at one camp or several; it might be located across the continent or, rarely, just across town.
Another day, another dollar
CCC recruits came from families on relief. Agents from local welfare offices screened prospects, then passed them along to the Army for a physical examination and a final decision. The Army also managed the huge task of transporting successful applicants to hundreds of work camps. The corps established operations in all 48 states and the territories of Puerto Rico, Alaska, Hawaii and the Virgin Islands, as well as a separate American Indian division.
Most enrollees were young unmarried men, but the CCC also created special companies of war veterans. This policy was Roosevelt’s response to the 1932 Bonus March, in which thousands of World War I veterans camped out in Washington, D.C., demanding early payment on promised military service bonuses, only to be evicted at gunpoint by order of then-president Herbert Hoover. (Some scholars believe this debacle helped clinch Roosevelt’s election later that year.)
CCC recruits could only bring a single trunk; tools were provided on-site. Many Corps members packed musical instruments, and some brought their dogs, which became company mascots. At the start many recruits slept in tents and bathed in nearby rivers. Those without experience in the great outdoors learned key lessons fast, such as how to avoid using poison ivy for toilet paper. Some succumbed to homesickness and dropped out, but most adjusted, forming baseball teams, music combos and boxing leagues.
Although the CCC was a civilian organization, the camps were run by the Army and bore some of its hallmarks. Dining facilities were called mess halls, beds had to be made tightly enough to bounce a quarter off them, and workers woke to the sound of reveille and went to sleep with taps. Commanding officers had final say over most issues.
At work sites, the Agriculture and Interior departments – custodians of U.S. public lands – were in charge. CCC members planted 3 billion trees, earning the nickname “Roosevelt’s tree army.” This work revitalized U.S. national forests and created shelter belts across the Great Plains to reduce the risk of dust storms. The corps also surveyed and treated forests to control insect pests and created forest fire prevention systems. Over its decade of operation, 42 enrollees and five supervisors died fighting forest fires.
Corps members created and landscaped 711 state parks, and built lodges and hiking trails in dozens of national parks and monument areas. Many of these facilities are still in use today. Attractions including the Grand Canyon, Grand Teton and Yellowstone National Parks, and Civil War battlefields at Gettysburg and Shiloh bear signatures of CCC work.
For their labors, corps members received $30 a month – but as a condition of enrollment, $22 to $25 of each man’s pay went home to his family every pay period. Still, at Depression prices, the remaining $5 was enough to visit nearby dance halls and meet girls once or twice a week. These forays sometimes ended in fights with jealous local men, but also led to many lifelong marriages.
In total, close to 3 million workers and their families received support from the CCC between 1933 and 1942. The corps also provided jobs for well over 250,000 salaried employees, including reserve military officers who ran the camps and so-called “local experienced men” – unemployed foresters who lived near the camps and were hired mainly to help supervise enrollees on the job.
Camps also hired unemployed teachers to offer informal evening classes. Some 57,000 enrollees learned to read and write during their CCC stints. Camps offered many other classes, from standard subjects like history and arithmetic to vocational skills such as radio, carpentry and auto repair.
Like other New Deal programs, the CCC had flaws. Party patronage heavily influenced hiring of salaried personnel. Although the law creating the CCC banned racial discrimination, black enrollment was capped. Many African-American enrollees were housed in “colored camps” and could only go into town for recreation and romance if black communities existed to serve them.
The CCC also discriminated socially, enrolling young men with families but excluding rootless transients who wandered from town to town in search of work and food. These men could have reaped great benefits from the CCC, but its leaders imagined an unbridgeable cultural gap between young men who came from families and others who came from the byroads. And the corps only enrolled men, although Eleanor Roosevelt convinced her husband to let her and Labor Secretary Frances Perkins organize a smaller network of “She-She-She” camps for jobless women.
Congress terminated funding for the CCC in 1942, after the United States entered World War II, although Roosevelt argued that it still played an essential role. Many men who had gained physical strength and learned to handle Army discipline in the CCC later entered the armed forces.
The tree army’s legacy
Beyond its physical impact, the corps helped to broaden public support for conservation. In the 1940s and 1950s, youth groups such as the Oregon-based Green Guards volunteered in local forests clearing flammable underbrush, cutting fire breaks and serving as fire lookouts. Others, such as the Student Conservation Association, advocated for wilderness protection and conservation education. Hundreds of former CCC enrollees helped lead these efforts. Today many teenagers work in national parks, forests and wildlife refuges every summer.
Although it is hard to picture a CCC-style initiative winning political support today, some of its ideas still resonate. Notably, the Obama administration’s economic stimulus plan and some proposals for upgrading U.S. infrastructure present federal spending on projects that benefit society as a legitimate way to stimulate economic growth. The CCC combined that strategy with the idea that America’s natural resources should be protected so that everyone could enjoy them.
This year is the 170th anniversary of one of the most significant events in world history: the discovery of gold at Sutter’s Mill in Coloma, California. On January 24, 1848, while inspecting a mill race for his employer John Sutter, James Marshall glimpsed something glimmering in the cold winter water. “Boys,” he announced, brandishing a nugget to his fellow workers, “I believe I have found a gold mine!”
Marshall had pulled the starting trigger on a global rush that set the world in motion. The impact was sudden – and dramatic. In 1848 California’s non-Indian population was around 14,000; it soared to almost 100,000 by the end of 1849, and to 300,000 by the end of 1853. Some of these people now stare back at us enigmatically through daguerreotypes and tintypes. From Mexico and the Hawaiian Islands; from South and Central America; from Australia and New Zealand; from Southeastern China; from Western and Eastern Europe, arrivals made their way to the golden state.
Looking back later, Mark Twain famously described those who rushed for gold as
a driving, vigorous restless population … an assemblage of two hundred thousand young men – not simpering, dainty, kid-gloved weaklings, but stalwart, muscular, dauntless young braves…
“The only population of the kind that the world has ever seen gathered together”, Twain reflected, it was “not likely that the world will ever see its like again”.
Arriving at Ballarat in 1895, Twain saw first-hand the incredible economic, political, and social legacies of the Australian gold rushes, which had begun in 1851 and triggered a second global scramble in pursuit of the precious yellow mineral.
“The smaller discoveries made in the colony of New South Wales three months before,” he observed, “had already started emigrants towards Australia; they had been coming as a stream.” But with the discovery of Victoria’s fabulous gold reserves, which were literally Californian in scale, “they came as a flood”.
Between Sutter’s Mill in January 1848, and the Klondike (in remote northwestern Canada) in the late 1890s, the 19th century was regularly subject to such flooding. Across Australasia, Russia, North America, and Southern Africa, 19th century gold discoveries triggered great tidal waves of human, material, and financial movement. New goldfields were inundated by fresh arrivals from around the globe: miners and merchants, bankers and builders, engineers and entrepreneurs, farmers and fossickers, priests and prostitutes, saints and sinners.
As the force of the initial wave began to recede, many drifted back to more settled lives in the lands from which they hailed. Others found themselves marooned, and so put down roots in the golden states. Others still, having managed to ride the momentum of the gold wave further inland, toiled on new mineral fields, new farm and pastoral lands, and built settlements, towns and cities. Others again, little attracted to the idea of settling, caught the backwash out across the ocean – and simply kept rushing.
From 1851, for instance, as the golden tide swept towards NSW and Victoria, some 10,000 fortune seekers left North America and bobbed around in the wash to be deposited in Britain’s Antipodean colonies alongside fellow diggers from all over the world.
Gold and global history
The discovery of the precious metal at Sutter’s Mill in January 1848 was a turning point in global history. The rush for gold redirected the technologies of communication and transportation and accelerated and expanded the reach of the American and British Empires.
Telegraph wires, steamships, and railroads followed in their wake; minor ports became major international metropolises for goods and migrants (such as Melbourne and San Francisco) and interior towns and camps became instant cities (think Johannesburg, Denver and Boise). This development was accompanied by accelerated mobility – of goods, people, credit – and anxieties over the erosion of middle class mores around respectability and domesticity.
But gold’s new global connections also brought new forms of destruction and exclusion. The human, economic, and cultural waves that swept through the gold regions could be profoundly destructive to Indigenous and other settled communities, and to the natural environment upon which their material, cultural, and social lives depended. Many of the world’s environments are gold rush landscapes, violently transformed by excavation, piles of tailings, and the reconfiguration of rivers.
As early as 1849, Punch magazine depicted the spectacle of the earth being hollowed out by gold mining. In the “jaundice regions of California”, the great London journal satirised: “The crust of the earth is already nearly gone … those who wish to pick up the crumbs must proceed at once to California.” As a result, the world appeared to be tipping off its axis.
In the US and beyond, scholars, museum curators, and many family historians have shown us that despite the overwhelmingly male populations of the gold regions, we cannot understand their history as simply “pale and male”. Chinese miners alone constituted more than 25% of the world’s goldseekers, and they now jostle with white miners alongside women, Indigenous and other minority communities in our understanding of the rushes – just as they did on the diggings themselves.
Rushes in the present
The gold rushes are not mere historic footnotes – they continue to influence the world in which we live today. Short-term profits have yielded long-term loss. Gold rush pollution has been just as enduring as the gold rushes’ cultural legacy. Historic pollution has had long-range impacts that environmental agencies and businesses alike continue to grapple with.
At the abandoned Berkeley Pit mine in Butte, Montana, the water is so saturated with heavy metals that copper can be extracted directly from it. Illegal mining in the Amazon is adding to the pressures on delicate ecosystems and fragile communities struggling to adapt to climate change.
The phenomenon of rushing is hardly alien to the modern world either – shale gas fracking is an industry of rushes. In the US, the industry has transformed Williston, North Dakota, a city of high rents, ad hoc urban development, and an overwhelmingly young male population – quintessential features of the gold rush city.
In September last year, the Wall Street Journal reported that a new gold rush was under way in Texas: for sand, the vital ingredient in the compound of chemicals and water that is blasted underground to open energy-bearing rock. A rush of community action against fracking’s contamination of groundwater has followed.
The world of the gold rushes, then, is not a distant era of interest only to historians. For better or worse, the rushes are a foundation of many of the patterns of economic, industrial, and environmental change central to our modern-day world of movement.
Benjamin Mountford and Stephen Tuffnell’s forthcoming edited collection A Global History of Gold Rushes will be published by University of California Press in October 2018. A sample of their work can also be found in the forthcoming volume Pay Dirt! New Discoveries on the Victorian Goldfields (Ballarat Heritage Services, 2018).
Last November, I ran my first marathon, the “Athens Authentic”. I did it mainly because I wanted to follow in the footsteps of the world’s first marathon runner – the ancient Athenian messenger Pheidippides.
The story, as I knew it, went as follows. After their victory over a Persian invasion force at the border village of Marathon, the Athenians sent a messenger called Pheidippides to deliver the news to the city authorities. After running the 42 kilometres back to Athens, Pheidippides gasped “we’ve won!” (nenikēkamen) and promptly died of exhaustion.
It’s a great story, but was it true? The more I looked into it in the weeks leading up to the race, the less certain I was. Was I about to run 42km for a lie?
Different sources and different stories
Our best source for the events of 490 BC, the fifth-century historian Herodotus, doesn’t mention a messenger being sent from Marathon after the battle. He does say, though, that a runner called Pheidippides (or Philippides, in some manuscripts) was sent to Sparta to ask for help before the battle.
This trip is commemorated in the Spartathlon, a 246km event that I haven’t run – and never will.
Our next-oldest source is the fourth-century-BC intellectual Heraklides Pontikos. He apparently did mention a Marathon runner, but gave his name as Thersippos – at least according to the first-century-AD moralist Plutarch.
Plutarch himself is the earliest author to tell the story of a messenger from Marathon dying from exhaustion after proclaiming victory. But his messenger is called Eukles – and his dying word is nikōmen (we win).
The first time we hear this story with a messenger called Pheidippides (or Philippides) is in Lucian, and by that time we’re in the second century AD, around 600 years after the Battle of Marathon. The runner says nikōmen in that version too.
What to make of the different sources
Herodotus was closest in time to the events. And since he does tell the story of Pheidippides’ run to Sparta and back, he would surely have added in the story of the runner’s death if he had known about it.
But if Pheidippides didn’t run the first marathon, did someone else?
Our next candidate is Eukles, the name Plutarch tells us is given to the Marathon runner by the majority of historians. But here there’s an important detail: Eukles, according to Plutarch, ran from the battle “warm, with his weapons”.
If that’s right, Eukles would have run the first marathon after fighting for three hours or so in a desperate battle for his city’s survival. Not only that, but he would have done so bearing the traditional arms and armour of a Greek hoplite (heavy infantryman): spear, shield, helmet and (if he could afford it) breastplate. The whole panoply would have weighed 10 to 20 kilograms, up to about one-third of the body weight of the average classical Greek.
Needless to say, this is something else I haven’t done.
Where does this leave Thersippos, the name given by Heraklides Pontikos?
It’s possible, as the Greek historian Christos Dionysopoulos has suggested, that there was a second runner, sent out the morning after the battle when the Athenians realised that the Persians they had pushed back onto their ships could simply sail down the coast and attack Athens through its traditional harbour. That second runner may have been Thersippos.
But the strategic situation they were in was probably clear to the Athenians even as the battle ended. Someone would have to get to Athens before the Persians did, to reassure the populace that the Athenian army was still standing – and hence that there was no reason to surrender the city to the Persians.
They also needed to signal to any would-be defectors to the Persian side that it was the Athenians who were still calling the shots in Athens.
Eukles’ announcement of an Athenian victory – perhaps with his final breath – would have gone part of the way to achieving these goals.
But to really reassure people, and send a strong signal to potential “Medizers” (Persian sympathizers), the army would need to make an appearance in person. So, the Athenian hoplites, fresh from the most important battle of their lives, marched the 40km or so back to Athens, just in time to scare off the Persian fleet, which finally headed back to Persia.
Like Eukles, the Athenian hoplites would have had to bring along their weapons, both because these were valuable possessions and because they needed them to intimidate the Persians. Unlike Eukles, the Athenians probably didn’t run.
The British historian N.G.L. Hammond reckoned they could have walked the distance in six or seven hours – which is not that much longer than it took me to run it.
How long was it, really?
Speaking of the distance, what was it exactly?
I ran 42,195 metres, the standard length for a marathon, and I felt every metre afterwards. But if that was the distance that Eukles ran, why does the modern race make you run a 2km diversion around the burial monument of the Athenians?
The answer is that the modern marathon distance is only loosely based on the distance Eukles ran. The modern distance comes from the 1908 London Olympics, where competitors ran from Windsor Castle to White City Stadium, and then a bit further along the track to finish in front of the royal box.
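The oddly precise figure of 42,195 metres can be checked by converting the 1908 London course length, commonly given as 26 miles 385 yards, into metric units:

```python
# Convert the 1908 London Olympic marathon course (26 miles 385 yards)
# into metres: 1 mile = 1609.344 m, 1 yard = 0.9144 m.
miles, yards = 26, 385
metres = miles * 1609.344 + yards * 0.9144
print(round(metres))  # 42195
```

Rounded to the nearest metre, the imperial course length comes out at exactly the modern standard distance.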
The 1908 race was thus longer than the first Olympic marathon run in 1896. That course was 40km, the distance between the village of Marathon and the Panathenaic Stadium, where I finished my race.
The annual Athens “authentic” marathon didn’t begin until 1972. By then, 42,195 metres had long since become the standard distance. That’s why I had the wonderful opportunity of running 2km extra around the tomb of the Athenians.
So, where did this all leave me as I trudged up the road from Marathon to Athens? (“Up”, by the way, is very much the right word.)
I wasn’t following in the footsteps of Pheidippides – thankfully, since his run to Sparta and back was much longer than the one I was doing. I might have been following in the footsteps of a man named Thersippos, but I was most likely retracing the steps of one called Eukles, albeit without carrying one-third of my body weight in armour.
But something else occurred to me as I eventually slowed to a walk for part of the course. I may have been doing so because I was tired, but I was also making my journey more similar to that of the Athenian hoplites in 490 BC. They walked briskly, after defeating an absolutist invasion force that was seeking to crush their nascent democracy.
Some 2,500 years later, I run-walked slowly over the same ground, unarmed and without a worry on my mind except the next research deadline. And that was good enough for me.
Even at John F. Kennedy’s centennial on May 29, 2017, the 35th president remains an enigma. We still struggle to come to a clear consensus about a leader frozen in time – a man who, in our mind’s eye, is forever young and vigorous, cool and witty.
While historians have portrayed him as everything from a nascent social justice warrior to a proto-Reaganite, his political record actually offers little insight into his legacy. A standard “Cold War liberal,” he endorsed the basic tenets of the New Deal at home and projected a stern, anti-Communist foreign policy. In fact, from an ideological standpoint, he differed little from countless other elected officials in the moderate wing of the Democratic Party or the liberal wing of the Republican Party.
Much greater understanding comes from adopting an altogether different strategy: approaching Kennedy as a cultural figure. From the beginning of his career, JFK’s appeal was always more about image than ideology, the emotions he channeled than the policies he advanced.
Generating an enthusiasm more akin to that of a popular entertainer than a candidate for national office, he was arguably America’s first “modern” president. Many subsequent presidents would follow the template he created, from Republicans Ronald Reagan and Donald Trump to Democrats Bill Clinton and Barack Obama.
A cultural icon
JFK pioneered the modern notion of the president as celebrity. The scion of a wealthy family, he became a national figure as a young congressman for his good looks, high-society diversions and status as an “eligible bachelor.”
He hobnobbed with Hollywood actors such as Frank Sinatra and Tony Curtis, hung out with models and befriended singers. He became a fixture in the big national magazines – Life, Look, Time, The Saturday Evening Post – which were more interested in his personal life than his political positions.
Later, Ronald Reagan, the movie actor turned politician, and Donald Trump, the tabloid fixture and star of “The Apprentice,” would translate their celebrity impulses into electoral success. Meanwhile, the saxophone-playing Bill Clinton and the smooth, “no drama” Obama – ever at ease on the talk show circuit – teased out variations of the celebrity role on the Democratic stage.
After Kennedy, it was the candidate with the most celebrity appeal who often triumphed in the presidential sweepstakes.
A master of the media
Kennedy also forged a new path with his skillful utilization of media technology. With his movie-star good looks, understated wit and graceful demeanor, he was a perfect fit for the new medium of television.
He was applauded for his televised speeches at the 1956 Democratic convention, and he later prevailed in the famous television debates of the 1960 presidential election. His televised presidential press conferences became media works of art as he deftly answered complex questions, handled reporters with aplomb and laced his responses with wit, quoting literary figures like the Frenchwoman Madame de Staël.
Two decades later, Reagan proved equally adept with television, using his acting skills to convey an earnest patriotism, while the lip-biting Clinton projected the natural empathy and communication skills of a born politician. Obama’s eloquence before the cameras became legendary, while he also became an early adopter of social media to reach and organize his followers.
Trump, of course, emerged from a background in reality television and adroitly employed Twitter to circumvent a hostile media establishment, generate attention and reach his followers.
The vigorous male
Finally, JFK reshaped public leadership by exuding a powerful, masculine ideal. As I explore in my book, “JFK and the Masculine Mystique: Sex and Power on the New Frontier,” he emerged in a postwar era colored by mounting concern over the degeneration of the American male. Some blamed the shifting labor market for turning men from independent, manual laborers into corpulent, desk-bound drones within sprawling bureaucracies. Others pointed to suburban abundance for transforming men into diaper-changing denizens of the easy chair and backyard barbecue. And many thought that the advancement of women in the workplace would emasculate their male coworkers.
Enter Jack Kennedy, who promised a bracing revival of American manhood as youthful and vigorous, cool and sophisticated.
In his famous “New Frontier” speech, he announced that “young men are coming to power – men who are not bound by the traditions of the past – young men who can cast off the old slogans and delusions and suspicions.”
In a Sports Illustrated article titled “The Soft American,” he advocated a national physical fitness crusade. He endorsed a tough-minded realism to shape the counterinsurgency strategies that were deployed to combat Communism, and he embraced the buccaneering style of the CIA and the Green Berets. He championed the Mercury Seven astronauts as sturdy, courageous males who ventured out to conquer the new frontier of space.
JFK’s successors adopted many of these same masculine themes. Reagan positioned himself as a manly, tough-minded alternative to a weak, vacillating Jimmy Carter. Clinton presented himself as a pragmatic, assertive, virile young man whose hardscrabble road to success contrasted with the privileged, preppy George H.W. Bush. Obama impressed voters as a vigorous, athletic young man who scrimmaged with college basketball teams – a contrast to the cranky, geriatric John McCain and a stiff, pampered Mitt Romney.
More recently, of course, Trump’s outlandish masculinity appealed to many traditionalists unsettled by a wave of gender confusion, women in combat, weeping millennial “snowflakes” and declining numbers of physically challenging manufacturing jobs in the country’s post-industrial economy. No matter how crudely, the theatrically male businessman promised a remedy.
So as we look back at John F. Kennedy a century after his birth, it seems ever clearer that he ascended the national stage as our first modern president. Removed from an American political tradition of grassroots electioneering, sober-minded experience and bourgeois morality, this youthful, charismatic leader reflected a new political atmosphere that favored celebrity appeal, media savvy and masculine vigor. He was the first American president whose place in the cultural imagination dwarfed his political positions and policies.
Just as style made the man with Kennedy, it also remade the American presidency. It continues to do so today.
The Battle of Gettysburg was a turning point in the American Civil War, and Gen. George Pickett’s infantry charge on July 3, 1863, was the battle’s climax. Had the Confederate Army won, it could have continued its invasion of Union territory. Instead, the charge was repelled with heavy losses. This forced the Confederates to retreat south and end their summer campaign.
Pickett’s Charge consequently became known as the Confederate “high water mark.” Countless books and movies tell its story. Tourists visit the battlefield, re-enactors refight the battle and Civil War roundtable groups discuss it. It still reverberates in ongoing American controversies over statues of Confederate leaders, Confederate flags and civil rights.
Why did the charge fail? Could it have worked if the commanders had made different decisions? Did the Confederate infantry pull back too soon? Should Gen. Robert E. Lee have put more soldiers into the charge? What if his staff had supplied more ammunition for the preceding artillery barrage? Was Gen. George Meade overly cautious in deploying his Union Army?
Politicians and generals began debating those questions as soon as the battle ended. Historians and history buffs continue to do so today.
Data from conflict used to build model
That debate was the starting point for research I conducted with military historian Steven Sondergren at Norwich University. (A grant from Fulbright Canada funded my stay at Norwich.) We used computer software to build a mathematical model of the charge. The model estimated the casualties and survivors on each side, given their starting strengths.
We used data from the actual conflict to calibrate the model’s equations. This ensured they initially recreated the historical results. We then adjusted the equations to represent changes in the charge, to see how those affected the outcome. This allowed us to experiment mathematically with several different alternatives.
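The article doesn’t specify the model’s exact form, but attrition models of this kind are often built on Lanchester’s equations, where each side’s casualty rate is proportional to the opposing side’s strength. The sketch below is purely illustrative: the troop counts, effectiveness coefficients and time scale are assumptions for demonstration, not the calibrated values from the study.

```python
# Illustrative Lanchester-style attrition model (not the study's calibrated model).
# Each side loses soldiers at a rate proportional to the other side's strength:
#   dA/dt = -def_eff * D,   dD/dt = -atk_eff * A

def simulate(attackers, defenders, atk_eff, def_eff, dt=0.01, max_time=10.0):
    """Euler-integrate the attrition equations until one side is wiped out
    or time runs out. Returns the final (attackers, defenders) strengths."""
    t = 0.0
    while t < max_time and attackers > 0 and defenders > 0:
        a_loss = def_eff * defenders * dt   # attacker casualties this step
        d_loss = atk_eff * attackers * dt   # defender casualties this step
        attackers = max(0.0, attackers - a_loss)
        defenders = max(0.0, defenders - d_loss)
        t += dt
    return attackers, defenders

# Hypothetical scenario: ~10,500 attackers against ~5,800 entrenched defenders,
# with each defender assumed twice as effective (cover, prepared positions).
final_a, final_d = simulate(10_500, 5_800, atk_eff=0.05, def_eff=0.10)
print(f"Survivors: {final_a:.0f} attackers, {final_d:.0f} defenders")
```

Calibration, in this framework, means tuning the effectiveness coefficients until the simulated casualties match the historical record; the “what-if” experiments then follow by rerunning the model with different starting strengths.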
The first factor we examined was the Confederate retreat. About half the charging infantry had become casualties before the rest pulled back. Should they have kept fighting instead? If they had, our model calculated that they all would have become casualties too. By contrast, the defending Union soldiers would have suffered only slightly higher losses. The charge simply didn’t include enough Confederate soldiers to win. They were wise to retreat when they did.
We next evaluated how many soldiers the Confederate charge would have needed to succeed. Lee put nine infantry brigades, more than 10,000 men, in the charge. He kept five more brigades back in reserve. If he had put most of those reserves into the charge, our model estimated it would have captured the Union position. But then Lee would have had insufficient fresh troops left to take advantage of that success.
Ammunition ran out
We also looked at the Confederate artillery barrage. Contrary to plans, their cannons ran short of ammunition due to a mix-up with their supply wagons. If their generals had better coordinated those supplies, the cannons could have fired twice as much. Our model calculated that this improved barrage would have been like adding one more infantry brigade to the charge. That is, the supply mix-up hurt the Confederate attack, but was not decisive by itself.
Finally, we considered the Union Army. After the battle, critics complained that Meade had focused too much on preparing his defenses. This made it harder to launch a counter-attack later. However, our model estimated that if he had put even one fewer infantry brigade in his defensive line, the Confederate charge probably would have succeeded. This suggests Meade was correct to emphasize his defense.
Pickett’s Charge was not the only controversial part of Gettysburg. Two days earlier, Confederate Gen. Richard Ewell decided against attacking Union soldiers on Culp’s Hill. He instead waited for his infantry and artillery reinforcements. By the time they arrived, however, it was too late to attack the hill.
Was Ewell’s Gettysburg decision actually wise?
Ewell was on the receiving end of a lot of criticism for missing that opportunity. Capturing the hill would have given the Confederates a much stronger position on the battlefield. However, a failed attack could have crippled Ewell’s units. Either result could have altered the rest of the battle.
A study at the U.S. Military Academy used a more complex computer simulation to estimate the outcome if Ewell had attacked. The simulation indicated that an assault using only his existing infantry would have failed with heavy casualties. By contrast, an assault that also included his later-arriving artillery would have succeeded. Thus, Ewell made a wise decision for his situation.
Both of these Gettysburg studies used mathematics and computers to address historical questions. This blend of science and humanities revealed insights that neither specialty could have uncovered on its own.
That interdisciplinary approach is characteristic of “digital humanities” research more broadly. In some of that research, scholars use software to analyze conventional movies and books. Other researchers study digital media, like computer games and blogs, where the software instead supports the creative process.
The link below is to an article that takes a look at the origin of the modern French flag.
The Modern Computer Mouse
On this day in 1981, the modern computer ‘mouse’ came into the public arena, shipping with the Xerox Star workstation developed at Xerox PARC (there had been earlier experimental mice).