Consequences of the War of 1812

Here is the final post in our series on the War of 1812, dealing with the situation of Britain, the United States, Canada, and Native Americans of the western frontier in the aftermath of the war.

After the Treaty of Ghent took effect in February 1815, the U.S. and Britain were officially at peace. But so had they been in 1812, when the war started; was anything different?

On the surface, the answer was clearly “no.” Neither the U.S. nor Great Britain gave up any territory during the war or as a result of the peace. That meant Britain was still sitting on the western frontier (at that time, Michigan, Wisconsin, Illinois, etc.). The British were free to continue to harass U.S. settlement of its territories, and to ally with Native Americans to do so.

But they did not. The British no longer needed to keep the U.S. off-balance and in check. With the war between Britain and France over, there was no longer any danger of the Americans joining France against Britain, and Britain stopped doing all the things that had led the Americans to declare war: impressing U.S. sailors, capturing U.S. ships, harassing U.S. settlement. Britain concentrated its defensive efforts on maintaining Canada, and left the U.S. alone. Indeed, Britain was now anxious to resume its profitable trade with the U.S., and had no desire to weaken the young nation.

The U.S., for its part, was glad to return to the territorial status quo, no longer certain of its ability or desire to conquer Canada. With British pressure off the western frontier, the U.S. could focus on re-establishing its strength and reputation after the disastrous and embarrassing losses of the war. Washington DC was rebuilt and a modern navy was constructed—no more relying on gunboats to defend the U.S. coast or forts.

The areas of the U.S. that suffered after the war were New England and the Deep South. New England had opposed the war vigorously throughout and had been seen to ally itself with Britain; after the war, which most Americans saw as a massive victory (mostly because of the Battle of New Orleans), there was hostility toward the seemingly traitorous region. New England states had held a convention at Hartford, Connecticut, from December 1814 to January 1815, at which they asked the federal government to give them back full control over their militia and their finances (they did not want their militias drafted into federal service or their money funding the war). Word spread that the Hartford Convention was actually a secession conference, that New England wanted to leave the Union, and popular anger at the region was inflamed. It would take a few decades for New England to regain its standing in the eyes of the nation. In the meantime, New York took over as the most important city in the northeast, and Boston and New England took a backseat to that thriving metropolis.

In the Deep South, slaveholders had seen their fantasy that enslaved black Americans loved slavery exploded before their eyes by the numbers of enslaved people who ran away to join the British war effort. Promised their freedom if they did so, black Americans put themselves at great risk to aid the British. (They would be cruelly disappointed by their ally, for Britain launched only a few very feeble efforts to resettle black Americans in howling wildernesses in Canada and overseas.) Slaveholders tried to convince themselves and the nation that this was an anomaly, but Denmark Vesey’s planned uprising in 1822 and Nat Turner’s rebellion in 1831 showed it was not, and the South clamped down on enslaved Americans even harder.

Native Americans were losers in the war on a par with enslaved black Americans. The British withdrew their financial and military aid from Native Americans on the western frontier, who were left to face increasing white settlement with no leader to unify them and no money or ammunition to fight. Native Americans either moved west or lived in segregated enclaves among white settlers. Their plight would worsen when the hero of New Orleans, Andrew Jackson, became president; an inveterate “Indian” hater, Jackson set out to destroy all Native American groups within the U.S., most famously when he defied a Supreme Court ruling protecting the Cherokees, setting in motion the removal that sent them on their death march from Georgia to Oklahoma in 1838-9.

So at the end of the war we see the U.S. in a position to grow stronger and richer now that the constant threat of French or British harassment was removed. Britain is the undisputed superpower of the world, and has no need to hassle the U.S. Slavery is threatened but viciously preserved in the southern U.S., New England loses its pre-eminence to New York, and Native Americans are marginalized in the western U.S.

The War of 1812 did not have to happen. If the U.S. could have held off from entering into a trade agreement with France that was bound to provoke Great Britain to war, if the U.S. could have made itself as invisible as possible, suffering insults at sea and at home from 1794 to 1814, the Napoleonic Wars would have ended on cue, the pressure would suddenly have been off, and the nation could have gone straight to being Britain’s good trading partner and skipped the mostly disastrous war.

But 20 years is a long time to be insulted and invisible, and really, if the U.S. had allowed Britain to push it around entirely for 20 years, would the U.S. have seemed so desirable a partner by 1815? Perhaps not. The war itself strengthened the U.S. in important ways. The war taught the states that they needed to shake off their chronic unwillingness to give the federal government any money, and to put out the cash needed to build an Army and Navy to defend the nation. It taught the U.S. that it was not yet a major player in world affairs. It taught the U.S. that diplomacy was as important as an army and navy. Last, the War of 1812, despite the complaints and isolation of New England during the war, taught the U.S. that it was one unit, not just a group of unaffiliated states; it lived or died as a cooperative unit. The “Era of Good Feelings” that followed the war was the result of feeling that the states had been welded more closely together into a nation. The continuing fight over slavery would take over 40 years to rip that nation apart again.

The War of 1812 is not well-remembered today. It is a blip between the Revolution and the Mexican War, or even the Civil War. But the U.S. had a very great deal to lose in the War of 1812, and came very close to losing it all. This near-miss is worth a closer look.

The Burning of Washington and the Battle of New Orleans

Welcome to part 3 of our series on the War of 1812. Here we focus on two epic moments in that conflict. The first gave us our national anthem, the second gave us a controversial president.

We covered the attack on Washington briefly in the last post, Overview of the War of 1812. The British navy had been terrorizing the Atlantic coast, particularly the Chesapeake Bay area, from the start of the war. The U.S. had few warships with which to challenge the British, who sometimes sent detachments to coastal towns offering them the choice of paying a fine or being bombarded. The British moved up the Chesapeake Bay in the summer of 1814, heading not really toward Washington but toward Baltimore.

Baltimore was a thriving port and an important U.S. city. The British plan was to destroy Washington for the symbolic value of it, then overcome nearby Baltimore to drive home the final nail in the coffin of U.S. resistance. On August 24 a battle was fought at Bladensburg, Maryland, just miles from Washington, between desperate Americans and the determined British. It was a defeat for the Americans. President James Madison had left the White House to watch the battle from a short distance away, and when it became apparent that the British were victorious, and heading directly for the capital, a messenger was sent to the White House to let the First Lady, Dolley Madison, know that she had to leave immediately.

First Lady Madison did not do so. With nerves of steel, she collected important documents from the president’s office, and with the British vanguard in sight, she finally took the portrait of George Washington from the president’s walls and fled, her carriage just escaping the attack on her home.

The British intent was to destroy the city as completely as possible. One British soldier, George Gleig, happily described the scene in an 1826 history: “[We] proceeded, without a moment’s delay, to burn and destroy everything in the most distant degree connected with government. In this general devastation were included the Senate House, the President’s palace [the White House], an extensive dockyard and arsenal… the blazing of houses, ships, and stores, the report of exploding magazines, and the crash of falling roofs informed them, as they proceeded, of what was going forward. You can conceive nothing finer than the sight which met them as they drew near to the town. The sky was brilliantly illuminated by the different conflagrations, and a dark red light was thrown upon the road, sufficient to permit each man to view distinctly his comrade’s face.”

Washington was quickly vanquished, and British sights were set on Baltimore. The attack was two-pronged: a land attack at North Point and a naval bombardment of Fort McHenry in the harbor. American forces under Major General Samuel Smith checked the British at North Point, an unexpected and certainly unusual American success. All now waited to see how the attack on the important fort would go. Major George Armistead was in charge of U.S. defenses there. Bombardment of the fort by British ships began on September 13th. Nearly 2,000 cannonballs were launched at the fort over 24 hours, but damage was light.

The British commander decided to land troops west of Fort McHenry, hoping to lure the U.S. army away from North Point, but Armistead discovered them and opened fire, scattering the landing party of British soldiers. Early on the morning of September 14, the giant American flag that local seamstress Mary Pickersgill and her daughter had made was raised over the fort, replacing the smaller storm flag that had flown through the night’s bombardment. Seeing that the fort still stood in American hands, British land forces withdrew and returned to the ships. Vice Admiral Alexander Cochrane then withdrew the fleet to prepare for the attack on New Orleans.

Francis Scott Key was an American lawyer who had gone on a mercy mission to the British to gain the release of an American doctor who had been captured but had previously tended British soldiers. Key was on a truce ship in Baltimore Harbor during the bombardment. When morning dawned on the 14th, and Key saw his country’s flag still flying over Fort McHenry, he wrote the words of “The Star-Spangled Banner” on the back of a letter in a paroxysm of joy. It became the U.S. national anthem in 1931.

Now to the Battle of New Orleans. The goal of the British was not just to capture the port city, but to do so and then lay claim to all of the territory included in the Louisiana Purchase.

Cochrane’s fleet arrived from the failed attack on Baltimore on December 12, 1814. It anchored in the Gulf of Mexico, and the British planned to seize control of Lakes Pontchartrain and Borgne, the watery approaches to New Orleans. Once again, the U.S. had only gunboats to defend its territory. Lake Borgne was captured by the British on December 14.

On December 23, the British reached the Mississippi River, only six miles south of New Orleans; rather than attack immediately, their commander waited for reinforcements and was surprised by U.S. soldiers under Andrew Jackson. After a brief but devastating attack, the Americans pulled back to a canal four miles from New Orleans and fortified it. The British made small attacks on the earthworks on December 28, but the first heavy attack came on January 1, 1815. The earthworks were partially destroyed, and the British ran out of ammunition. British Major-General Pakenham decided to wait for reinforcements before launching another attack.

It came on January 8. The British advanced early that morning in a heavy fog, but that fog lifted as they came upon the earthworks and the Americans began to fire. Lt. Col. Thomas Mullins, leading a British regiment, had forgotten to bring the ladders his men would need to scale the defenses, and as the British stalled in front of the earthworks they were mowed down by American fire. As different groups of British soldiers crossed the battlefield, one managed to briefly overrun a section of the earthworks but could not hold it without reinforcements. The Americans, however, received reinforcements from the 7th Infantry, and before the battle was over most of the British officers leading the charge were dead.

The victory was significant, but the battle for the city was not over. On January 9, the British began a 10-day bombardment of Fort St. Philip. The fort held, and the British withdrew to Biloxi, Mississippi. They were preparing to attack the port city of Mobile when word came that the war had in fact ended before the Battle of New Orleans had even begun. Jackson was a hero, and Americans rejoiced.

Next time: after the war

Overview of the War of 1812

Welcome to part 2 of our series on the War of 1812. Here we look at the fighting of the war.

For very different reasons, neither the U.S. nor Great Britain really hit the ground running once war was declared. Britain was in the midst of its French wars, and its navy was blockading Europe while its army was fighting the Peninsular War in Spain. There were barely 6,000 British soldiers in all of Canada when the War of 1812 began. Because no soldiers could be spared from the battles in Europe, Britain took a defensive position with its army in Canada and sent a few warships across the Atlantic to do battle on the coast.

The U.S. was unprepared for war because it was poor, the U.S. Army had fewer than 12,000 soldiers in it, and when the federal government tried to expand the army, Americans resisted. They were happy to serve at their pleasure in their local state militias, but would not volunteer for the Army. Another major problem was the refusal of New England to join the war effort. The embargo on trade with France and England, first imposed by Jefferson in 1807 and continued in modified form under Madison, wrecked the trade economy New England was based on, ruining banks, merchants, and the livelihoods of countless ships’ crews. Thus New England did not support the war when it came, seeing it as another plan by Washington to ruin the New England economy. Without the fountainhead of revolutionary spirit—and shipbuilding—on board, whipping up support for the war would be difficult.

The United States’ real aim was to capture British Canada. The U.S. had always hoped to incorporate Canada into the union; the Articles of Confederation, drafted in 1777 and ratified in 1781, had explicitly left the door open to Canadians, providing that any time they wanted to join the union they would be immediately admitted. The Canadians had not taken the U.S. up on this deal, and now the U.S. hoped to incorporate them by force.

In August 1812, the American army under General William Hull invaded Canada but was defeated, and the British chased him back to what is now Michigan, promptly capturing both Fort Dearborn and Fort Detroit with the aid of Canadian militia and Native American forces led by Tecumseh. Losing Detroit was a blow. It was the most powerful American fort in the western territories. If the U.S. continued to lose its forts in the west, it would be easy for Britain to claim those lands permanently (which was the British plan). Another invasion attempt ended in defeat for the Americans in October at the Battle of Queenston Heights.

Despite the attractions of conquering Canada, the U.S. was forced to turn its attentions west and east rather than north. In the west, the Americans were struggling to keep control of the frontier from the British and Native Americans. In the east, they were trying to end the British blockade of their coast and prevent the capture of their capital city, Washington DC. The U.S. raced to build warships to take on the British Navy, but until those ships were ready the Americans had to rely on small gunboats, which was disastrous. The famous attack on Washington, which we’ll cover in more detail in the next post, was only the most important and devastating of many British attacks on the east coast. The British Navy even sent messages to seaside towns in the Chesapeake region offering them the choice of paying huge bribes to the British or being burned down. The U.S. federal government was powerless to protect its own territory east or west, and would have to rely on a small, inexperienced army and an at-first mostly civilian navy to win the day.

That private navy provided the one bit of success the Americans had in the first year of the war. Privateers were sent out to attack British shipping, from Maine to the West Indies. Owners of private merchant ships had a long history of smuggling, stretching back to the 1760s, especially in New England. While New Englanders didn’t support the war, they couldn’t ignore a chance to make back some of the money they were losing in the embargo. Privateering, then, did most of the damage to British interests in the first year of the war.

In the west, future President William Henry Harrison led an attempt to retake Detroit, but part of his army lost the Battle of Frenchtown (or River Raisin) in January 1813; 60 American prisoners were killed there by Native American allies of the British. In May 1813, the British moved to capture Fort Meigs in Ohio, and while U.S. relief forces were defeated by Native American fighters there, the fort staved off capture, and the ensuing siege dragged on until many Native American soldiers eventually left the area. Hoping to keep his invaluable Native American fighting force, British General Procter mounted a second attack on Fort Meigs in July, but it was rebuffed. Procter and Tecumseh then attacked Fort Stephenson, also in Ohio, but suffered a serious defeat, and the war in Ohio ended.

Naval battles in the first year of the war were not fought on the Atlantic coast but on the Great Lakes, those watery territories between the U.S. and Canada. The famous American captain Oliver Hazard Perry won the Battle of Lake Erie on September 10, 1813, flying his battle flag which read “DON’T GIVE UP THE SHIP” and reporting back to Harrison after his victory, “We have met the enemy and they are ours.” Perry’s motto “Don’t give up the ship!” (the dying words of his fellow officer Captain James Lawrence) would still be used by U.S. sailors during WWII.

The victory gave the U.S. control of Lake Erie, which prompted the British to flee from Fort Detroit. Emboldened, the U.S. made another invasion attempt on Canada under Harrison at the Battle of the Thames in October 1813. At last the Americans had a victory, winning the battle and stripping the British of their most valuable ally, Tecumseh. When Tecumseh was killed in the battle, the alliance of Native Americans that he inspired and controlled dissolved. The British tried to keep the allies together, but they were unable to provide them with weapons from the east now that Lake Erie was in American hands. A further attempt to conquer Canada, however, ended in defeat at Crysler’s Farm in November 1813, and the U.S. gave up the attempt permanently, happy to focus on keeping control of the western forts.

1814 brought important changes. The Napoleonic Wars finally ended, leaving Britain free to send more men and ships to fight in America but also giving it less reason to do so. Now that there was no need to worry about the U.S. allying with France against Britain, there was no need to blockade the American coast, impress U.S. sailors, or seize U.S. ships. The U.S., for its part, no longer believed it was possible to conquer Canada, and desperately needed to remove the British threat from its coast and western frontier.

The Treaty of Ghent was signed in what is now Belgium on December 24, 1814, ending the war. News of the peace took two months to reach America, during which time fighting went on and the British lost a very important battle for the vital western port city of New Orleans on January 8, 1815 (which we will also look at in more depth in the next post).

So we see that the actual fighting of the war took place mostly in the west, as Britain tried to take possession of the U.S. territory it had refused to leave after the Revolutionary War; that the great naval battles really took place on the Great Lakes; and that the British did the most damage to American morale and self-confidence on the Chesapeake coast.

Next time: the burning of Washington and the Battle of New Orleans

What caused the War of 1812?

Welcome to the first in a series on the War of 1812—the United States’ most forgotten war (even more forgotten than the Korean War). Here we look at its causes.

The years following the end of the American Revolutionary War were turbulent. France underwent its own revolution beginning in 1789, and that nation quickly descended into terror. Great Britain organized seven different international coalitions between 1793 and 1815 to overthrow the French revolutionary government, which was led from 1799 by Napoleon Bonaparte.

During this period, many Americans thought it only right that the United States go to war on France’s behalf, returning the favor France had done them by coming to the Americans’ aid during the American Revolution. The full extent of the Terror in France was not known to most Americans, and even those like Thomas Jefferson who did know about the despotic rulers in Paris admired their spirit, believing it to be truly revolutionary. The Terror, they reasoned, was a temporary over-exuberance of revolutionary spirit and would soon settle down. Under the leadership of George Washington, however, the U.S. would not enter a foreign war. Washington knew the nation had no money to fight a war, and was still fighting to bring its own citizens under the control of the federal government (see the Whiskey Rebellion of 1794).

But the U.S. could not keep out of the war. Both Britain and France saw the U.S. as a powerful tool to use for their own benefit. Britain had been seizing U.S. ships in 1793-4 as part of its ongoing efforts to sabotage U.S. growth and expansion (it was also helping Native Americans fight U.S. settlement in Ohio), and the seizures had led the U.S. to embargo trade with Britain.

Afraid that the embargo was a sign that the U.S. would ally formally with France, Britain offered the Jay Treaty, signed in 1794 and ratified by the U.S. in 1795. In it, Britain promised to remove its soldiers from six forts in the Great Lakes region (which was U.S. territory), and to pay over $10 million to U.S. shipowners whose ships had been seized. The U.S. accepted the treaty (over Jefferson’s strenuous objections), and gave Britain most favored nation trading status in return.

In its turn, France saw ratification of the Jay Treaty as a sign that the U.S. would formally ally itself with Britain. Outraged, France retaliated against the U.S., seizing 300 U.S. ships bound for British trading ports. Worse, when the U.S. sent envoys to Paris to negotiate the ships’ return, three French agents representing their government demanded humiliating bribes from the Americans that would have to be paid just for the privilege of speaking to the French: 50,000 pounds sterling (the U.S. still used the pound as one of its currencies, especially in trade with Great Britain), a $10 million loan, $250,000 for the personal use of the French foreign minister, and a letter of apology for the Jay Treaty from President John Adams.

When news of this insult reached the U.S., Americans demanded that President Adams declare war on France. The “X, Y, Z Affair,” as it was known, was too infuriating to bear. But Adams, like Washington before him, skillfully refused to be drawn into war, and managed to settle the dispute through diplomacy. Adams knew the U.S. was still in no shape to get involved in a war between the two superpowers of the day.

The price of British peace was high. British navy ships routinely stopped U.S. trade and merchant marine ships and impressed their crews (this meant forcing the sailors to work basically as slaves aboard British ships). Impressed men never saw their homes again. They were forced to labor for the British navy—sometimes even in the boarding parties sent to impress other Americans. Britain also continued to work with Native Americans in Canada and the northwestern territories of the U.S. to undermine the federal government and stop U.S. settlement. According to both the treaty ending the Revolution and the Jay Treaty, the British were supposed to withdraw their soldiers from U.S. territory, but did not. The British also tried to stop the U.S. from trading with France.

By 1809, James Madison was president of the United States. One of the first international actions of his administration was to continue the ban on trade with Britain and France. This was finessed in May 1810 into a statement that the U.S. would trade with whichever nation accepted U.S. neutrality. In France, Napoleon seemed to accept this deal, but he did so only to get the U.S. to embargo trade with Britain. With Britain still in mind as the natural enemy of the U.S., many Americans became “War Hawks” at this point, urging war with Britain. In Congress, War Hawks like John C. Calhoun and Henry Clay pushed Madison to declare war.

Madison knew the odds of winning a war against Britain were no better than they had been in Washington’s or Adams’ day. But continued British impressment and ship seizures, combined with France’s seeming support of U.S. neutrality, led him to bow to public and political pressure. On June 1, 1812, Madison asked Congress to declare war on Britain. It was just 29 years after the Treaty of Paris had ended the Revolutionary War.

Next time: How and where the war was fought

The federal government invents Social Security

Our final post in the series on whether the federal government is capable of guarding the public health and well-being focuses on Social Security.

The reputation of the federal Social Security program is tarnished today because it is being strained by huge numbers of retirees and near-retirees, and there are justifiable fears that it will go bankrupt. But this cannot make us forget how important, how groundbreaking the program was. What, after all, is the fuss all about? Why care if Social Security goes bankrupt? The answer is that the Social Security program created and managed by the federal government was the first, and remains the only, safety net for elderly and other at-risk members of our U.S. citizenry.

The Social Security Act of 1935 was a response to the Great Depression. In the 1930s, the only form of financial support for the elderly was a government pension. You received a pension if you had served in the U.S. armed forces or worked for the U.S. government. This, of course, meant that only men could receive pensions. Widows and children of pensioned men could receive their male relative’s pension once he died, but only if they applied for it. And men who were not veterans or former federal employees had nothing unless their employers offered pensions, which was rare.

These pensions were nothing to write home about. They were extremely small. Elderly people, widows and children with pensions lived very meagerly, and those without pensions had to have relatives willing to support them and even take them in. If you had no pension and no family to fall back on, you were forced to beg for public charity. End of story.

After the stock market crash in October 1929, many elderly, widows, and children lost their pensions and/or the support of their families. Their families had lost their income and were now penniless as well. It is estimated that by 1934 over half of all elderly Americans were unable to support themselves financially. That’s over half of Americans over 65 living on charity—charity that was drying up fast. Thirty states set up state pensions to try to relieve elderly poverty, but the states themselves were poor and the relief was slight, and only about 3% of elderly Americans were receiving any state money by 1935, when the Social Security Act was passed.

There was resistance to the idea of Social Security. Americans had convinced themselves that they weren’t a people who accepted charity, or even a helping hand, especially from the government. People were reluctant to admit that they had no family to depend on for help. One of the ingenious components of the Act was that it paid the elderly with money raised by a tax on wages, a tax that would begin to be collected in 1937 so that payments could begin in 1942. In other words, workers paid into the fund, so that when they retired, they would simply be taking back money they had set aside, rather than taking charity from others. This overcame the reluctance to lose face by taking a handout.

In a way, it wasn’t even the payments the elderly received that were so groundbreaking. It was the idea that the federal government, the government of any nation, would make it one of its responsibilities to provide for people in their old age. Government policies for the poor up to that date had consisted of various “poor laws,” which usually mandated prison for those deemed able to work but jobless, and work farms or workhouses, where the poor performed what amounted to slave labor, for those unable to work. If workers were to be taken care of once they grew too old to work, which was not a popular idea at all, then the companies they had worked for should provide a pension, but no one thought those companies should be forced to do so. Basically, no country thought the elderly poor needed or deserved special care, and in the U.S. there was an especially powerful idea that Americans could take care of themselves, which foiled any attempt to help the vulnerable.

The Social Security Act included all workers, male and female. It was expanded in 1939 to include widows and children of working men. These people—the elderly, widows, and their children—quickly came to depend on Social Security, and the whole nation supported the idea that they should be reimbursed in their old age for the work they did in their youth. There was no shame attached to accepting Social Security by the 1950s, and the program came to be an accepted part of the American system.

Social Security was well-managed by the government that created it, although it is in serious danger now simply because of our massive population growth. It is perhaps the most important of the government programs put in place in the U.S. for the protection and care of its citizens. It is proof, along with federal highway safety programs and the FDA, of the ability and desire of the federal government to protect the public health and well-being. The fears expressed in 2009 about the federal government becoming involved in health care are just another example of Americans wishing to believe that we are different from all other nations and peoples, that we alone can always take care of ourselves without any help, and that we alone need to keep our federal government constantly at bay, as if it were a dangerous threat to our liberty.

But it is our federal government, our system of representative democracy, that truly makes us unique by creating our liberty. We should give it every opportunity to protect our equality of opportunity (that is, access to good and affordable health care) and justice for all (who seek health care). Our government is as good and as just as we demand it to be, and it is only by continually engaging with it, not fending it off, that we remain American.

Federal regulation of car safety–a success!

Last time in this series on successful federal management of public health and safety, we looked at Ralph Nader’s exposé of automakers’ decision to put style ahead of safety. Now we see the federal government step in.

The 1966 Highway Safety Act mandated that the states create their own highway safety programs to reduce accidents and develop (or improve) emergency care for car accidents (this was when the paramedic, or EMS, program really came on the scene). The same wave of legislation created the Department of Transportation (DOT), including the National Highway Traffic Safety Administration (NHTSA), to oversee these efforts. From now on, drivers would not be blamed for all car accidents.

We have the NHTSA to thank for crash-test dummies, fuel economy standards, safety belts, air bags, auto recalls, and consumer reports (not Consumer Reports the magazine, but the concept of giving car buyers objective analyses of how safe cars are). These are safety features we take for granted today, but I remember the 1970s, when older cars I rode in didn’t have seat belts. Even when cars did have them, drivers misled by automakers believed that the belts wouldn’t help in an accident, and that the best way to stay safe while driving was to not make the mistakes that led to accidents—remnants of the “it’s the driver’s fault” mentality pushed by automakers prior to 1966.

Automakers have continued to fight the federal government on safety, delaying HID and halogen headlights, air bags, and features that promote seat belt use, such as the pinging alarms that sound when you don’t have yours on.

In all, federal regulation of car and road safety has contributed significantly to American health and well-being. Next time, we’ll begin our conclusion to this series with perhaps the biggest federal health-and-well-being program of them all: Social Security.

Next: How big is Social Security?

Ralph Nader, car safety, and the federal response

Ralph Nader’s landmark book Unsafe at Any Speed: The Designed-In Dangers of the American Automobile is the focus of part 4 of our series on the federal government’s management of public health and well-being.

The book came out in 1965, and each of its chapters covered one problem with car safety (an overview can be found at Unsafe at Any Speed). For instance, the most famous chapter, on the Chevrolet Corvair, is called “The One-Car Accident.” From 1960-3 the Corvair was built with a faulty rear-engine and suspension design that led to accidents. Nader also pointed out how shiny chrome dashboards reflected the sun into drivers’ eyes, how non-standard shift controls led to fatal mistakes, and how carmakers prioritized expensive styling changes while claiming that safer design would bankrupt them. Nader’s strongest point was that automakers knew how dangerous their cars could be, but did nothing about it because of the cost and the fear of arousing public anger.

GM tried to paint Nader as a lunatic. According to testimony in the 1970 case Nader brought against GM, “…[GM] cast aspersions upon [his] political, social, racial and religious views; his integrity; his sexual proclivities and inclinations; and his personal habits; (2) kept him under surveillance in public places for an unreasonable length of time; (3) caused him to be accosted by girls for the purpose of entrapping him into illicit relationships (4) made threatening, harassing and obnoxious telephone calls to him; (5) tapped his telephone and eavesdropped, by means of mechanical and electronic equipment, on his private conversations with others; and (6) conducted a ‘continuing’ and harassing investigation of him.”

Despite this attack, Nader persevered in speaking to the public, and that public’s outcry led to the development and passage of the 1966 Highway Safety Act.

Next time: the federal government gets behind the wheel of car safety

Federal management of our health and well-being: car safety

Here’s part 3 of our small series, inspired by the health-care debate, on whether the federal government can properly look after our health and well-being. We turn here from food and drug safety to cars.

The safety of the cars manufactured by U.S. automakers was completely unmonitored by anyone before the 1960s. For decades Americans drove cars that not only were often unsafe, but were under absolutely no pressure to be safe. There was no consumer protection service for drivers. If your car was dangerous, that was your problem. Causes of accidents were not investigated with an eye to forcing car manufacturers to improve their products. In 1958 the UN established an international “forum” for vehicle regulation, but the U.S. refused to join it. As is so often the case, manufacturers assumed—and protested loudly—that any oversight would be fatal to them, that bankruptcy was the only possible outcome of regulation, and that U.S. consumers did not want safety regulations.

By 1965, this total lack of regulation had created a situation where, according to a report released the next year by the National Academy of Sciences, car accidents were the leading cause of death “in the first half of life’s span” (from the “History of US National Highway Traffic Safety Administration NHTSA” website at http://www.usrecallnews.com/2008/06/history-of-the-u-s-national-highway-traffic-safety-administration-nhtsa.html).

The Big Three responded as they always had—by saying that all accidents were the result of driver error or bad roads. Since the 1920s, U.S. car manufacturers had pushed what they called the “Three E’s”—Engineering, Enforcement, and Education. As Ralph Nader put it (much more about him later), “Enforcement” and “Education” were directed at drivers, and “Engineering” was directed at all those bad roads causing accidents.

With the federal government still reluctant to step in and regulate car manufacturers’ safety standards—just as Congress, lobbied relentlessly by criminal food manufacturers, had refused to step in to regulate food and drug safety—it took a bombshell book to shake up the status quo.

Next: Ralph Nader and Unsafe at Any Speed

The FDA and government regulation of food safety

Part 2 in the series, basically Truth v. Myth, on whether the federal government can be trusted to compassionately and capably protect the public health and well-being, in which we continue our look at the founding of the Food and Drug Administration.

We’ve seen the dangerous and criminal state of food production in the U.S. by the turn of the 20th century. Starting in the late 1800s, some U.S. food manufacturers were petitioning the government to regulate their industry. These were manufacturers that actually spent the money to produce good-quality food, and they were afraid of being driven out of business by companies that saved a fortune by pasting ashes together, canning the result, and calling it potatoes. (It’s amazing how many things were canned early on; potatoes are one example. Even in the 1930s—I saw a movie from the late ’30s where a woman says she’s running to the store to buy a can of potato salad.)

Farmers also protested that they took the blame for adulterated butter, eggs, and milk even though they sent good quality material to the manufacturers. “Shady processors …deodorized rotten eggs, revived rancid butter, [and] substituted glucose for honey” (Young, “The Long Struggle for the 1906 Law“).

During the Populist era of reform, one bill for federal food manufacture standards managed to clear the Senate but was blocked in the House. Congressional representatives, well-paid by shady manufacturers’ lobbies, blocked every clean food and drug law that came to them. One bit of progress was that in 1890-1 meat was set aside for special inspection after a scare in Europe over tainted pork from America led to a ban on U.S. meat on the continent. Later in that decade, tainted beef sickened U.S. soldiers fighting the Spanish-American War in Cuba, causing a furor at home. The culmination of the meat uproar was the famous publication, in 1906, of Upton Sinclair’s novel The Jungle, which described in a brutally unsparing section how meat was processed at the great packing houses in Chicago, detailing the rats, feces, human fingers, and floor sweepings that were incorporated into the finished product. American consumers boycotted meat, with sales falling by half.

This was enough to get Congress to finally act on President Roosevelt’s December 1905 demand for a pure food bill. It wasn’t as easy as it should have been–there was still plenty of resistance in both houses. But thanks to pressure from the president and Dr. Harvey Wiley, a longtime pure food advocate who would be placed in charge of its enforcement, the Pure Food and Drugs Act was passed in 1906.

There was protest from criminal food manufacturers. Whiskey producers complained the loudest, as they were the largest producers of quack medicines. Quack medicines actually accounted for more advertising dollars than any other food or product in the nation in 1906. Their manufacturers claimed the federal government had no right to “police” what consumers chose to buy. This, of course, ran on the incorrect presumption that consumers knew what was in the products they consumed and decided to take the risk (Janssen, “The Story of the Laws behind the Labels”).

Many manufacturers of impure foods claimed that being forced to list ingredients on their labels would put them out of business. Their secret recipes would be exposed! The cost of printing long labels would bankrupt them! Such “technical” labels would turn off customers! Of course, none of this came to pass. Americans were grateful for protection from fraudulent food and medicine, and the Act would go through a few more iterations. The Bureau of Chemistry created to enforce the Act would become the Food and Drug Administration in 1930, and cosmetics would be added to its charges in 1938 when the Food, Drug, and Cosmetic Act was signed by FDR.

The FDA was weakened in the late 20th and early 21st centuries. Food supplements, like vitamins and diet potions, are not subject to FDA scrutiny, and are the new quack medicines, just as dangerous and fraudulent as 19th-century snake oil. The organization has had its funding cut by deregulation-minded politicians who wanted to re-establish a completely free marketplace for food and drugs.

This negative turn of events merely proves that bad things happen to our food and drugs supply when the federal government relaxes or impairs its oversight of that market. A strong FDA is a vital necessity to American manufacturers and consumers, and a shining example of the power of good federal management of consumer health and well-being to drastically improve both.

Next time, the federal government and car safety.

The federal government–can it run health care? Check with the FDA

The uproar over the proposed health care legislation that is ongoing in the summer of 2009 is puzzling to the historian. Americans who oppose the legislation seem to feel, when you boil their arguments down, that the main problem is that they don’t want the government running any health care program. The government has neither the experience nor the ability, nor even the humanity to oversee any health care program. (This, of course, when the federal government already runs a health program, namely Medicare.)

This lack of faith in government programs is odd. It is historically unfounded in three major consumer areas: food safety, car safety, and Social Security. These are three areas that the federal government completely overhauled in the 20th century, improved, and maintains well to this day. We’ll look at all three, starting with food safety and the founding of the FDA (Food and Drug Administration).

In 1906, the federal government passed the Pure Food and Drugs Act. By that time, Washington had been petitioned for decades to create and enforce food safety laws. It’s hard to imagine today what food was like at the turn of the 20th century. We think of that time as a time of pure, wholesome, real food—the kind of food we’re trying to get back to now, in a 21st century filled with pre-packaged, trans-fat adulterated food substitutes.

But the early 20th century was actually little different from—and in many ways, much worse than—today. Here is a description of a meal served by a respectable woman to house guests in the early 1900s, which Dr. Edward A. Ayers included in his article “What the Food Law Saves Us From: Adulterations, Substitutions, Chemical Dyes, and Other Evils”:

“We had a savory breakfast of home-made sausage and buckwheat cakes. The coffee, bought ‘ground,’ had a fair degree of coffee, mixed in with chicory, peas, and cereal. There was enough hog meat in the ‘pure, homemade’ sausage to give a certain pork flavor and about one grain of benzoic acid [a chemical preservative] in each [serving]. I also detected saltpetre, which had been used to freshen up the meat. [Either] corn starch or stale biscuit had been used as filler…

“The buckwheat cakes looked nice and brown [from the] caramel [used to color them]…. and added one more grain of benzoic acid to my portion. The maple syrup [was] 90 percent commercial glucose… one-third a grain of benzoic acid and some cochineal [red dye derived from insects] came with the brilliant red ketchup. [At lunch] I figure about seven grains of benzoic acid and rising. …The ‘home-made’ quince jelly, one of the ‘Mother Jones Pure Jellies’…worked out as apple juice, glucose, gelatin, saccharin, and coal tar.

“I had to take a long walk after lunch; having overheard the order for dinner, I figured on about 10 to 15 grains more of benzoic acid reaching my stomach by bedtime.” 

I looked up benzoic acid, which is still used today as a preservative, and found that the World Health Organization’s limit on the stuff is 5 mg per kilogram of a person’s body weight per day. A grain is about 65 milligrams, so the poor houseguests described above, taking in 10 to 15 grains a day, were getting far more than that, as a quick calculation shows.

Why was food so awful in America at that time? Progress. As Arthur Wallace Dunn described it in his 1911 article “Dr. Wiley and Pure Food…”,

“During the preceding quarter of a century [from the 1880s to 1911], the whole system of food supply [in the U.S.] had changed. Foods were manufactured and put up in packages and cans in large quantities at central points where immense plants had been erected. To preserve the food all manner of ingredients were used, and to increase the profits, every known device and adulteration and misrepresentation was adopted. Many states passed strict pure food [and drug] laws, but they were powerless to [control interstate shipping–any state could ship its impure foods and drugs to another state]. Besides, many state laws were not uniform and were easily evaded.”

The Industrial Revolution brought mass production of foods to the U.S. As the population rapidly shifted from rural to urban, more people were in towns and cities where food was not locally grown, but shipped in to them to buy in grocery stores. Meat was shipped across the country with minimal or no refrigeration. Rotten foods were canned to disguise their state. Chemicals were added in enormous amounts to all types of food. Coal tar—yes, from real, black coal—was slightly altered and used to brighten the colors of ketchup, peas, coffee, margarine, and more. Here is a short list of such “adulterations,” again from Dr. Ayers’ article:

Jams and Jellies: apple juice mixed with glucose and gelatine

Milk: diluted with water, bulked back up with starch or chalk (yes, chalkboard chalk)

Butter: made of carrot juice, turmeric, and coal tar

Worse yet, margarine, or “butterine”: made from oleo oil [this comes from beef fat], lard, coloring, and cottonseed or peanut oil

Filled cheese: skim milk injected with fat from oil

This was the state that modernization had brought American food production to. Eager to make as large a profit as possible, many food manufacturers basically used scraps, glucose, and oil to make a wide range of foods. Drugs were just as bad or even worse, with unhealthy or even fatal miracle cures constantly on the market. Coca-Cola contained not only real cocaine, but unimaginable amounts of caffeine.

Food manufacturers were not required to label their products. Most canned goods had the name of the product, the name of the manufacturer, and a lovely drawing on them. That was it. No list of ingredients. No expiration date.

How did Americans survive? Through the intervention of the federal government.

Part 2–the Pure Food and Drugs Act of 1906