Silverites, Goldbugs and the Cross of Gold

Part 2 of our series on William Jennings Bryan’s famous 1896 Cross of Gold Speech provides background on the issue at the heart of that momentous address to the Democratic National Convention.

When gold was discovered in California in 1848, there was a Gold Rush that opened the west and changed the nation. When silver was discovered in the west in the 1860s, however, there was no Silver Rush. For decades the federal government had valued silver at 16:1 against gold—that is, it took 16 ounces of silver to equal 1 ounce of gold in value. It was much more lucrative to find gold than silver.

But the U.S. was not on the gold standard. Anyone could turn in gold or silver in any form—jewelry, bars, coins, etc.—to a U.S. Mint and receive dollars for the metal. Because both metals could be exchanged for dollars, the system was called bi-metallism: our currency was backed by both silver and gold.

This system was threatened, however, by the Gold Rush. Gold flooded the market, making silver relatively scarce. While the Mint still offered the 16:1 ratio, silver could be sold privately for more—12:1, 10:1, etc. People stopped taking their silver to the Mint and began hoarding it or selling it to private or foreign buyers.

Such was the situation when silver was discovered in Nevada in the 1860s. While there was no Silver Rush, silver did begin to flood the market, and those private buyers and great 12:1 deals for silver dried up. Now you had to take 20:1 or 25:1 deals. But the U.S. Mint was still offering 16:1, and people who found themselves with too much silver on their hands flocked back to the Mint to turn it in for dollars. As a result, more silver dollars were minted.

All of this silver being turned in for dollars was good news for westerners, rural farmers, and the poor because it put more dollars into circulation. You can’t spend your silver jewelry, but you’ll spend the dollars you get for it. More money in circulation means there’s less of a need for people to borrow money, and that drives down interest rates. Farmers who needed to buy the new farm equipment that the Industrial Revolution was making necessary could buy it without going into debt. Poor people could buy more goods. These were the Silverites, who welcomed the liquidity of bi-metallism during a silver boom.

But not everyone was happy. The heart of business in the U.S. was in the east, on Wall Street and in the big industrial cities, and eastern banks had made fortunes loaning money to westerners, especially farmers, and charging high interest rates. With the boom in silver, that was diminished, and big business cried foul to the government through its lobbyists. These were the Goldbugs, who wanted to make dollars scarce by stopping the conversion of silver to dollars.

The situation came to a head in 1873. All that basically worthless silver pouring into the federal government for a decade had caused an economic crisis. The dollar was being backed more and more by silver, and less and less by gold. And since silver had lost so much value, the dollar might lose its value abroad. If a European won’t buy your silver, they’re not going to accept your silver-backed dollars. So Congress passed the Coinage Act of 1873, which stated that the dollar would no longer be backed by silver, eliminated the silver dollar, and severely limited how much silver Mints were allowed to accept from the public. Bi-metallism was over.

Silverites called it the “Crime of ’73,” and claimed that justice was thwarted by rich businessmen. Goldbugs celebrated this embrace of the gold standard and claimed it was “sound money” policy.

Now you see what Bryan is driving at. He was from Nebraska, a western farming state whose people were hurting from the clampdown on silver. In his speech he is saying that he will not let the U.S. crucify the common man on a cross of gold—he will not let the government stay on the gold standard at the expense of the poor, the farmer, the western rancher or small businessman. If elected president, he will bring back bi-metallism, the dollar will be backed by gold and silver, and there will be more dollars in circulation, reducing debt.

Next time: close-reading the speech

The “Cross of Gold” speech: what is it about?

Welcome to a series on William Jennings Bryan’s famous 1896 Cross of Gold speech. This speech, delivered at the Democratic National Convention, helped win Bryan, a former U.S. Representative from Nebraska, the Democratic party’s presidential nomination. Powerfully delivered, the speech was so popular that for decades after the convention Bryan was asked to deliver it again, and did.

But let’s start by being frank: this speech suffers, for the 21st-century reader, from two major drawbacks: first, and foremost, it never makes clear what on Earth the problem is that it’s addressing; and second, it is written in the bombastic 19th-century style that thrives on rhetorical flourishes and long, drawn-out analogies. Thus it’s hard for modern-day readers to make much headway through Cross of Gold. One might read the entire speech and not understand what issue Bryan is addressing. The reason for this is that by the time he gave this speech, the issue of coining silver v. remaining on the gold standard had been a violently contested political, social, and economic issue for decades. Bryan’s audience didn’t need a lesson on what the issue was. Everyone in that convention hall knew what their party’s stand was on silver, and all Bryan had to do was to reinforce the righteousness of that stance by talking about how it would help the farmer and other “common men”. It would be like giving a speech today where you just kept saying “Tea Party ideas”—your audience would know what that shorthand means. You wouldn’t have to explain it. You could just talk about how a) harmful or b) good those ideas were, depending on your political stance.

But today, we know little about the savage war over the coinage of silver, and this has created a terrible vacuum where we continue to study Bryan’s famous speech with almost no background on what it was addressing and no conception of what it means. It has become a ritual with no meaning. Let’s rectify that here.

We’ll move into the background of the speech next time with a history of the battle between Silverites and Goldbugs, as they were called, and the principles they were fighting over. It is actually fascinating, and focuses on themes that are still very much front-and-center in 21st-century U.S. politics, including “class warfare”, business v. individual rights, how much control the federal government should have, financial booms and busts, and more.

Next time: Silverites v. Goldbugs

Taxation = Slavery

As always, when history is being made in the present, or the present is clearly marked in a historical cycle, we delve into it here on the HP.

In this case, it is the debate in Congress over whether to raise the debt ceiling or default. The main sticking point has been the refusal of a sizable minority of Republicans, mostly belonging to the Tea Party faction, to allow the federal government to collect tax revenue. This group demands tax breaks for the wealthy, including corporations, and the maintenance of tax loopholes that allow millions of dollars of tax revenue to go uncollected.

This is not the place to go into the details of their platform, or the response by moderate Republicans and Democrats. Here, the issue is the extreme intransigence of the Republican minority on the issue of taxation. It has become, to them, a crime for the government to raise taxes or even to collect taxes. To them, there is no compromise on taxation: you are either for it (and therefore un-American) or against it. Again, we’ll leave aside for this post the historical fallacy of anti-tax advocates calling themselves “Tea Party”; read about that here. For now, we’ll focus on the black-and-white issue they have turned taxation into. It’s hard to think of a time when Congress was so completely divided, so unwilling and unable to compromise on an issue; when you look back at our history, only one comparable time comes up—the slavery debates of the late 1850s.

You could not compromise on slavery during those Congresses. You were for it or against it, and this divide worked its way into many other, seemingly unrelated issues, and the uncompromisable issue of slavery could not be resolved. Congress could no longer function to govern the country, and civil war followed the 1860 election.

Today, Congress’ refusal to accept compromise on taxation is quite similar to the earlier Congress’ refusal to accept compromise on slavery. But there are two key differences: first, the American people were becoming just as divided over slavery as their representatives; second, slavery really is an issue you can’t seriously compromise on.

Americans in the 1850s didn’t want to fight a war over slavery, but they were rapidly becoming more polarized over it. Even those who didn’t particularly want abolition for morality’s sake blamed slavery for all of America’s ills, and would have gotten rid of it for economic or political reasons. Their representatives’ furor over slavery was not out of line, then, with Americans’ feelings about slavery. It does not seem accurate to make that claim today. Many Tea Party Congress members have said their constituents contacted them to say it’s okay to raise taxes to avoid default, but those members refused to do so out of principle. The extreme polarization in Congress today does not really have its roots in how Americans are feeling.

And taxation is not slavery. It’s not a black-and-white, moral issue that no one can take a moderate stance on. The government raises taxes in order to provide services. It’s a very simple and fundamental tenet of government. We elect representatives to decide what services to provide and how much to tax, not to stop the collection of tax revenue altogether.

The taxation issue is part of a larger move to reduce the federal government to a negative function: the federal government will not provide social services (no Medicare, Social Security, Head Start, etc.), will not regulate business (protect the environment, police Wall Street, etc.), will not really legislate (instead, Constitutional Amendments will be put in place to handle social issues), and will not extend civil rights to immigrants, gay people, etc. All it would do under this plan, apparently, is fund wars.

No one really wants to live in that world. It is undemocratic, and it is not self-sustaining. This experiment in negative government is a dangerous one. The first experiment ended in civil war; it remains to be seen where we are headed in the next 20 years.

The 2010 Census data is in!!

It’s a very exciting historical moment when census data is published. It is a real example of the historic present; you see where your everyday lived reality fits into much bigger, much longer historical frames—where you are in an era. We’re going to take a look at the census data from a few angles. The first step is to dive into the raw data, which you can do in a fascinating way at Mapping the U.S. Census. Roll over a county to see general data, or enter an address, zip code, or city at top right to get amazingly detailed maps–for example, if you put in your zip code, just that area comes up (your very own “census tract”). Take a look at where you live, or have lived, and see the changes.

Then take a look at Prof. John Logan’s census analysis. Logan is a sociologist at Brown University who has studied census data for decades. He has interesting analysis of segregation and the impact of race—as in, what difference does it make if Asians begin moving to white neighborhoods, as opposed to Latinos, as opposed to black people?

Next time: How we are sorted

The federal government invents Social Security

Our final post in the series on whether the federal government is capable of guarding the public health and well-being focuses on Social Security.

The reputation of the federal Social Security program is tarnished today because it is being strained by huge numbers of retirees and near-retirees, and there are justifiable fears that it will go bankrupt. But this cannot make us forget how important, how groundbreaking the program was. What, after all, is the fuss all about? Why care if Social Security goes bankrupt? The answer is that the Social Security program created and managed by the federal government was the first, and remains the only, safety net for elderly and other at-risk members of our U.S. citizenry.

The Social Security Act of 1935 was a response to the Great Depression. In the 1930s, the only form of financial support for the elderly was a government pension. You received a pension if you had served in the U.S. armed forces or worked for the U.S. government. This, of course, meant that only men could receive pensions. Widows and children of pensioned men could receive their male relative’s pension once he died, but only if they applied for it. And men who were not veterans or former federal employees had nothing unless their employers offered pensions, which was not usual.

These pensions were nothing to write home about. They were extremely small. Elderly people, widows and children with pensions lived very meagerly, and those without pensions had to have relatives willing to support them and even take them in. If you had no pension and no family to fall back on, you were forced to beg for public charity. End of story.

After the stock market crash in October 1929, many elderly, widows, and children lost their pensions and/or the support of their families. Their families had lost their income and were now penniless as well. It is estimated that by 1934 over half of all elderly Americans were unable to support themselves financially. That’s over half of Americans over 65 living on charity—charity that was drying up fast. Thirty states set up state pensions to try to relieve elderly poverty, but the states themselves were poor and the relief was slight, and only about 3% of elderly Americans were receiving any state money by 1935, when the Social Security Act was passed.

There was resistance to the idea of Social Security. Americans had convinced themselves that they weren’t a people who accepted charity, or even a helping hand, especially from the government. People were reluctant to admit that they had no family to depend on for help. One of the ingenious components of the Act was that it paid the elderly with money raised by a tax on wages, a tax that would begin to be collected in 1937 so payments could begin in 1942. In other words, workers paid into the fund, so that when they retired, they would simply be taking back money they had set aside, rather than taking charity from others. This overcame the reluctance to lose face by taking a handout.

In a way, it wasn’t even the payments the elderly received that were so groundbreaking. It was the idea that the federal government, the government of any nation, would make it one of its responsibilities to provide for people in their old age. Government policies for the poor up to that date had consisted of various “poor laws,” which usually mandated prison, work farms, or workhouses—where the poor performed what amounted to slave labor—for those deemed able to work but jobless, and even for those unable to work. If workers were to be taken care of once they grew too old to work, which was not a popular idea at all, then the companies they had worked for should provide a pension, but no one thought those companies should be forced to do so. Basically, no country thought the elderly poor needed or deserved special care, and in the U.S. there was an especially powerful idea that Americans could take care of themselves that foiled any attempt to help the vulnerable.

The Social Security Act included all workers, male and female. It was expanded in 1939 to include widows and children of working men. These people—the elderly, widows, and their children—quickly came to depend on Social Security, and the whole nation supported the idea that they should be reimbursed in their old age for the work they did in their youth. There was no shame attached to accepting Social Security by the 1950s, and the program came to be an accepted part of the American system.

Social Security was well-managed by the government that created it, although it is in serious danger now simply because of our massive population growth. It is perhaps the most important of the government programs put in place in the U.S. for the protection and care of its citizens. It is proof, along with federal highway safety programs and the FDA, of the ability and desire of the federal government to protect the public health and well-being. The fears expressed in 2009 about the federal government becoming involved in health care are just another example of Americans wishing to believe that we are different from all other nations and peoples, that we alone can always take care of ourselves without any help, and that we alone need to keep our federal government constantly at bay, as if it were a dangerous threat to our liberty.

But it is our federal government, our system of representative democracy, that truly makes us unique by creating our liberty. We should give it every opportunity to protect our equality of opportunity (that is, access to good and affordable health care) and justice for all (who seek health care). Our government is as good and as just as we demand it to be, and it is only by continually engaging with it, not fending it off, that we remain American.

Federal regulation of car safety–a success!

Last time in this series on successful federal management of public health and safety, we looked at Ralph Nader’s exposé of automakers’ decision to put style ahead of safety. Now we see the federal government step in.

The 1966 Highway Safety Act mandated that the states create their own highway safety programs to reduce accidents and develop (or improve) emergency care for car accident victims (this was when the paramedic program, or EMS, really came on the scene), and it created the Department of Transportation (DOT), including the National Highway Traffic Safety Administration (NHTSA), to oversee these efforts. From now on, drivers would not be blamed for all car accidents.

We have the NHTSA to thank for crash-test dummies, fuel economy standards, safety belts, air bags, auto recalls, and consumer reports (not Consumer Reports itself, but the concept of giving car buyers objective analyses of how safe cars are). These are safety features we take for granted today, but I remember the 1970s, when older cars I rode in didn’t have seat belts, and even when cars did have them, drivers misled by automakers believed that the belts wouldn’t help in an accident, and that the best way to stay safe while driving was to not make mistakes that led to an accident—remnants of the “it’s the driver’s fault” mentality pushed by automakers prior to 1966.

Automakers have continued to fight the federal government on safety, delaying HID and halogen headlights, air bags, and safety features to promote seat belt use, such as those pinging alarms you get when you don’t have yours on.

In all, federal regulation of car and road safety has contributed significantly to American health and well-being. Next time, we’ll begin our conclusion to this series with perhaps the biggest federal health-and-well-being program of them all: Social Security.

Next: How big is Social Security?

Ralph Nader, car safety, and the federal response

Ralph Nader’s landmark book Unsafe at Any Speed: The Designed-In Dangers of the American Automobile is the focus of part 4 of our series on the federal government’s management of public health and well-being.

The book came out in 1965, and each of its chapters covered one problem with car safety (an overview can be found at Unsafe at Any Speed). For instance, the most famous chapter, on the Chevrolet Corvair, is called “The One-Car Accident.” From 1960 to 1963 the Corvair was built with a faulty rear engine and suspension design that led to accidents. Nader also pointed out how shiny chrome dashboards reflected the sun into drivers’ eyes, how non-standard shift controls led to fatal mistakes, and how carmakers prioritized expensive styling changes while claiming that safer design would bankrupt them. Nader’s strongest point was that automakers knew how dangerous their cars could be, but did nothing about it because of the cost and the fear of arousing public anger.

GM tried to paint Nader as a lunatic. According to testimony in the 1970 case Nader brought against GM, “…[GM] cast aspersions upon [his] political, social, racial and religious views; his integrity; his sexual proclivities and inclinations; and his personal habits; (2) kept him under surveillance in public places for an unreasonable length of time; (3) caused him to be accosted by girls for the purpose of entrapping him into illicit relationships (4) made threatening, harassing and obnoxious telephone calls to him; (5) tapped his telephone and eavesdropped, by means of mechanical and electronic equipment, on his private conversations with others; and (6) conducted a ‘continuing’ and harassing investigation of him.”

Despite this attack, Nader persevered in speaking to the public, and that public’s outcry led to the development and passage of the 1966 Highway Safety Act.

Next time: the federal government gets behind the wheel of car safety

Federal management of our health and well-being: car safety

Here’s part 3 of our small series, inspired by the health-care debate, on whether the federal government can properly look after our health and well-being. We turn here from food and drug safety to cars.

The safety of the cars manufactured by U.S. automakers was completely unmonitored by anyone before the 1960s. For decades Americans drove cars that not only were often unsafe, but were under absolutely no pressure to be safe. There was no consumer protection service for drivers. If your car was dangerous, that was your problem. Causes of accidents were not investigated with an eye to forcing car manufacturers to improve their products. In 1958 the UN established an international “forum” for vehicle regulation, but the U.S. refused to join it. As is so often the case, manufacturers assumed—and protested loudly—that any oversight would be fatal to them, that bankruptcy was the only possible outcome of regulation, and that U.S. consumers did not want safety regulations.

By 1965, all this de-regulation had created a situation where, according to a report released the next year by the National Academy of Sciences, car accidents were the leading cause of death “in the first half of life’s span” (from the “History of US National Highway Traffic Safety Administration NHTSA” website at http://www.usrecallnews.com/2008/06/history-of-the-u-s-national-highway-traffic-safety-administration-nhtsa.html).

The Big Three responded as they always had—by saying that all accidents were the result of driver error or bad roads. Since the 1920s, U.S. car manufacturers had pushed what they called the “Three E’s”—Engineering, Enforcement, and Education. As Ralph Nader put it (much more about him later), “Enforcement” and “Education” were directed at drivers, and “Engineering” was directed at all those bad roads causing accidents.

With the federal government still reluctant to step in and regulate car manufacturers’ safety standards—just as Congress, lobbied relentlessly by criminal food manufacturers, had refused to step in to regulate food and drug safety—it took a bombshell book to shake up the status quo.

Next: Ralph Nader and Unsafe at Any Speed

The FDA and government regulation of food safety

Part 2 in the series, basically Truth v. Myth, on whether the federal government can be trusted to compassionately and capably protect the public health and well-being, in which we continue our look at the founding of the Food and Drug Administration.

We’ve seen the dangerous and criminal state of food production in the U.S. by the turn of the 20th century. Starting in the late 1800s, some U.S. food manufacturers were petitioning the government to regulate their industry. These were manufacturers that actually spent the money to produce good-quality food, and they were afraid of being driven out of business by those companies that saved a fortune by pasting ashes together, canning the result, and calling it potatoes. (It’s amazing how many things were canned early on. Potatoes are one example. Even in the 1930s—I saw a movie from the late ’30s where a woman says she’s running to the store to buy a can of potato salad.)

Farmers also protested that they took the blame for adulterated butter, eggs, and milk even though they sent good quality material to the manufacturers. “Shady processors …deodorized rotten eggs, revived rancid butter, [and] substituted glucose for honey” (Young, “The Long Struggle for the 1906 Law“).

During the Populist era of reform, one bill for federal food manufacture standards managed to clear the Senate but was blocked in the House. Congressional representatives, well-paid by shady manufacturers’ lobbies, blocked every clean food and drug law that came to them. One bit of progress was that in 1890–91 meat was set aside for special inspection after a scare in Europe over tainted pork from America led to a ban of U.S. meat on the continent. Later in that decade, tainted beef sickened U.S. soldiers in Cuba fighting the Spanish-American War, causing a furor at home. The culmination of the meat uproar was the famous publication, in 1906, of Upton Sinclair’s novel The Jungle, which described in a brutally unsparing section how meat was processed at the great packing houses in Chicago, detailing the rats, feces, human fingers, and floor sweepings that were incorporated into the finished product. American consumers boycotted meat, with sales falling by half.

This was enough to get Congress to finally act on President Roosevelt’s December 1905 demand for a pure food bill. It wasn’t as easy as it should have been–there was still plenty of resistance in both houses. But thanks to pressure from the president and Dr. Harvey Wiley, a longtime pure food advocate who would be placed in charge of its enforcement, the Pure Food and Drugs Act was passed in 1906.

There was protest from criminal food manufacturers. Whiskey producers complained the loudest, as they were the largest producers of quack medicines. Quack medicines actually accounted for more advertising dollars than any other food or product in the nation in 1906. Their manufacturers claimed the federal government had no right to “police” what consumers chose to buy. This, of course, ran on the incorrect presumption that consumers knew what was in the products they consumed and decided to take the risk (Janssen, “The Story of the Laws behind the Labels”).

Many manufacturers of impure foods claimed that being forced to list ingredients on their labels would put them out of business. Their secret recipes would be exposed! The cost of printing long labels would bankrupt them! Such “technical” labels would turn off customers! Of course, none of this came to pass. Americans were grateful for protection from fraudulent food and medicine, and the Act would go through a few more iterations. The Bureau of Chemistry created to enforce the Act would become the Food and Drug Administration in 1930, and cosmetics would be added to its charges in 1938 when the Food, Drug, and Cosmetic Act was signed by FDR.

The FDA was weakened in the late 20th and early 21st centuries. Food supplements, like vitamins and diet potions, are not subject to FDA scrutiny, and are the new quack medicines, just as dangerous and fraudulent as 19th-century snake oil. The organization has had its funding cut by deregulation-minded politicians who wanted to re-establish a completely free marketplace for food and drugs.

This negative turn of events merely proves that bad things happen to our food and drugs supply when the federal government relaxes or impairs its oversight of that market. A strong FDA is a vital necessity to American manufacturers and consumers, and a shining example of the power of good federal management of consumer health and well-being to drastically improve both.

Next time, the federal government and car safety.

The federal government–can it run health care? Check with the FDA

The uproar over the proposed health care legislation that is ongoing in the summer of 2009 is puzzling to the historian. Americans who oppose the legislation seem to feel, when you boil their arguments down, that the main problem is that they don’t want the government running any health care program. The government has neither the experience nor the ability, nor even the humanity to oversee any health care program. (This, of course, when the federal government already runs a health program, namely Medicare.)

This lack of faith in government programs is odd. It’s historically unfounded in three major consumer areas: food safety, car safety, and Social Security. These are three 20th-century areas that the federal government completely overhauled, improved, and maintains well to this day. We’ll look at all three, starting with food safety and the founding of the FDA (Food and Drug Administration).

In 1906, the federal government passed the Pure Food and Drugs Act. By that time, Washington had been petitioned for decades to create and enforce food safety laws. It’s hard to imagine today what food was like at the turn of the 20th century. We think of that time as a time of pure, wholesome, real food—the kind of food we’re trying to get back to now, in a 21st century filled with pre-packaged, trans-fat adulterated food substitutes.

But the early 20th century was actually little different from—and in many ways, much worse than—today. Here is a description of a meal served by a respectable woman to house guests in the early 1900s, which Dr. Edward A. Ayers included in his article “What the Food Law Saves Us From: Adulterations, Substitutions, Chemical Dyes, and Other Evils”:

“We had a savory breakfast of home-made sausage and buckwheat cakes. The coffee, bought ‘ground,’ had a fair degree of coffee, mixed in with chicory, peas, and cereal. There was enough hog meat in the ‘pure, homemade’ sausage to give a certain pork flavor and about one grain of benzoic acid [a chemical preservative] in each [serving]. I also detected saltpetre, which had been used to freshen up the meat. [Either] corn starch or stale biscuit had been used as filler…

“The buckwheat cakes looked nice and brown [from the] caramel [used to color them]…. and added one more grain of benzoic acid to my portion. The maple syrup [was] 90 percent commercial glucose… one-third a grain of benzoic acid and some cochineal [red dye derived from insects] came with the brilliant red ketchup. [At lunch] I figure about seven grains of benzoic acid and rising. …The ‘home-made’ quince jelly, one of the ‘Mother Jones Pure Jellies’…worked out as apple juice, glucose, gelatin, saccharin, and coal tar.

“I had to take a long walk after lunch; having overheard the order for dinner, I figured on about 10 to 15 grains more of benzoic acid reaching my stomach by bedtime.” 

I looked up benzoic acid, which is still used today as a preservative, and found that the World Health Organization’s limit on the stuff is 5 mg per kilogram of a person’s body weight per day. I don’t know how much a “grain” of benzoic acid was, but I think the poor houseguests described above were getting way more than that.
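For rough scale, if we assume the standard apothecaries’ grain of about 64.8 mg (an assumption on my part; the article never defines its unit) and a 70 kg adult, the arithmetic works out like this:

```latex
% Assumed: 1 grain ~ 64.8 mg; adult body weight 70 kg
\begin{aligned}
\text{WHO daily limit} &\approx 5~\text{mg/kg} \times 70~\text{kg} = 350~\text{mg} \\
\text{10--15 grains} &\approx 10 \times 64.8~\text{mg} \;\text{to}\; 15 \times 64.8~\text{mg} \approx 648\text{--}972~\text{mg}
\end{aligned}
```

Under those assumptions, the guests would have taken in roughly two to three times the modern daily limit by bedtime.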

Why was food so awful in America at that time? Progress. As Arthur Wallace Dunn described it in his 1911 article “Dr. Wiley and Pure Food…”,

“During the preceding quarter of a century [from the 1880s to 1911], the whole system of food supply [in the U.S.] had changed. Foods were manufactured and put up in packages and cans in large quantities at central points where immense plants had been erected. To preserve the food all manner of ingredients were used, and to increase the profits, every known device and adulteration and misrepresentation was adopted. Many states passed strict pure food [and drug] laws, but they were powerless to [control interstate shipping–any state could ship its impure foods and drugs to another state]. Besides, many state laws were not uniform and were easily evaded.”

The Industrial Revolution brought mass production of foods to the U.S. As the population rapidly shifted from rural to urban, more people were in towns and cities where food was not locally grown, but shipped in to them to buy in grocery stores. Meat was shipped across the country with minimal or no refrigeration. Rotten foods were canned to disguise their state. Chemicals were added in enormous amounts to all types of food. Coal tar—yes, from real, black coal—was slightly altered and used to brighten the colors of ketchup, peas, coffee, margarine, and more. Here is a short list of such “adulterations,” again from Dr. Ayers’ article:

Jams and Jellies: apple juice mixed with glucose and gelatine

Milk: diluted with water, bulked back up with starch or chalk (yes, chalkboard chalk)

Butter: made of carrot juice, turmeric, and coal tar

Worse yet, margarine, or “butterine”: made from oleo oil [this comes from beef fat], lard, coloring, and cottonseed or peanut oil

Filled cheese: skim milk injected with fat from oil

This was the state that modernization had brought American food production to. Eager to make as large a profit as possible, many food manufacturers basically used scraps, glucose, and oil to make a wide range of foods. Drugs were just as bad or even worse, with unhealthy or even fatal miracle cures constantly on the market. Coca-Cola contained not only real cocaine, but unimaginable amounts of caffeine.

Food manufacturers were not required to label their products. Most canned goods had the name of the product, the name of the manufacturer, and a lovely drawing on them. That was it. No list of ingredients. No expiration date.

How did Americans survive? Through the intervention of the federal government.

Part 2–the Pure Food and Drugs Act of 1906