Chapter 5
Free Market Fools
I know a planet where there is a certain red-faced gentleman. He has never smelled a flower. He has never looked at a star. He has never loved anyone. He has never done anything in his life but add up figures. And all day he says over and over, just like you: “I am busy with matters of consequence!” And that makes him swell up with pride. But he is not a man—he is a mushroom.
Antoine de Saint-Exupéry, The Little Prince
Game Theory
What is freedom? The history of the last eight centuries has been, in a very real way, the history of the struggle to define the word “freedom.”
The concept first emerged in its modern historic context in 1215, when the feudal lords who owned much of the land and controlled most of the economy of Great Britain confronted King John on a plain named Runnymede and essentially told him that they would allow him to continue to act as king only if he signed a document they’d drawn up called the Magna Carta. It guaranteed that before the king could imprison one of them he would have to show probable cause, attested to by witnesses and sworn testimony, that the person had committed a crime—a right known as habeas corpus.
For four hundred years the right of habeas corpus extended only to the British nobility, but a series of revolts in the 1600s extended it to “commoner” knights working for the king and to a few others. Over the next hundred years, these rights were more broadly applied in Great Britain and other European nations.
The Enlightenment of the seventeenth and eighteenth centuries brought about a new concept of “freedom” that included the right of average people to own private property. John Locke articulated this in his Second Treatise of Government as the right to “life, liberty, and estate”—estate meaning “private property.” By the time of the American Revolution, the ownership of private property was so taken for granted as a basic part of “freedom” that Jefferson could leave it out of the Declaration of Independence entirely, writing instead that freedom consisted of the rights to “life, liberty, and the pursuit of happiness.”
Throughout these first six centuries after the Magna Carta, the slowly evolving definition of “freedom” always included the notion that the government—whether a king, the nobles, a parliament, or a representative democratic republic—provided the soil in which “freedom” could grow by protecting the rights of the people from certain predators among them, particularly economic predators. The state enacted and enforced laws that controlled the way banks could operate (usury—the practice of charging excessive interest—was restrained in various ways throughout this period and right up to 1978 in the United States, when such limits were effectively struck down by conservatives on the U.S. Supreme Court), enforced laws against fraud and economic coercion, and protected domestic industries (encouraging some and discouraging others) through taxation and tariff policies.
Thus, the definition of “freedom” evolved somewhat between the thirteenth and twentieth centuries, but it was always largely grounded in the notion that much of freedom had to do with individuals being free from harassment or imprisonment by government, and from exploitation by other, more powerful individuals (or groups of individuals).
As America and much of the modern world industrialized in the late nineteenth century, though, a new definition of “freedom” began to take hold. Historically, wealth was held by a small number of people, and one of the dimensions of “freedom” was the protection, or at least the ideal of protection, of the average person from exploitation by those of great wealth (read Dickens for examples).
But in the twentieth century several new—and, ultimately, bizarre—concepts of freedom were developed.
The first to come along was communism, as defined by Karl Marx and Friedrich Engels, and implemented in the former Soviet Union by Vladimir Lenin, Joseph Stalin, and their successors up through Mikhail Gorbachev, who oversaw the disintegration of that communist state. Under communism, “freedom” no longer meant freedom from oppression by the state or freedom to own private property—the latter was even banned—but instead the freedom never to worry about the necessities of life. In the communist state, while choices were dramatically limited, one’s needs were (at least in theory) met from cradle to grave. Everybody had food, shelter, education, medical care, transportation, a job, and a safe retirement. The core theory of communism was that “the people” were the rulers of themselves, and there were no economic elites.
Shortly after communism was introduced as a form of governance following the 1917 Bolshevik Revolution, its antithesis—fascism—rose first in Italy in the 1920s and then in Germany and Spain in the 1930s. In a fascist state the right of private property was absolute, and the state was highly activist in providing for the needs of the people. But the levers of governance were very much controlled by the economic elites, as the 1983 American Heritage Dictionary’s definition of fascism spelled out: “A system of government that exercises a dictatorship of the extreme right, typically through the merging of state and business leadership, together with belligerent nationalism.” Fascism promised freedom through a strong control of the life circumstances of the average person—much like communism (cradle-to-grave security, etc.)—but its core governing concept was that the business elite of a nation was far more qualified to run the country than were “the people.”
Mussolini, for example, was quite straightforward about all this. In his 1932 essay “The Doctrine of Fascism,” he wrote, “If classical liberalism spells individualism, Fascism spells government.” But not a government of, by, and for We the People—instead, it would be a government of, by, and for the most powerful corporate interests in the nation.
In 1938, Mussolini brought his vision of fascism into full reality when he dissolved Parliament and replaced it with the Camera dei Fasci e delle Corporazioni—the Chamber of the Fascist Corporations. Corporations were still privately owned, but now, instead of having to sneak their money to politicians and covertly write legislation, they were openly in charge of the government.
Vice President Henry Wallace bluntly laid out his concern about the same happening here in America in a 1944 New York Times article:
If we define an American fascist as one who in case of conflict puts money and power ahead of human beings, then there are undoubtedly several million fascists in the United States. There are probably several hundred thousand if we narrow the definition to include only those who in their search for money and power are ruthless and deceitful. … They are patriotic in time of war because it is to their interest to be so, but in time of peace they follow power and the dollar wherever they may lead.
By the 1950s, both communism and fascism had largely fallen into disrepute, particularly among Western thinkers. Those ideologies’ brands of “freedom” had failed to deliver anything that really resembled freedom; indeed, they produced varieties of freedom’s antithesis.
At the same time, the cold war was ramping up, and Americans in particular were beginning to worry that a nuclear war with the Soviet Union could lead to the end not just of freedom, not just of America, but of civilization. Meanwhile, science, technology, and, particularly, computers were all in vogue. From DuPont’s slogan “Better Living Through Chemistry” to The Graduate’s whispered “Plastics!” to the military’s reliance on computers to track Soviet missiles and bombers, the widespread belief was that technology would save us, science would make a better life, and technology was the ultimate bulwark of freedom.
In this context, the U.S. government hired a military think tank, the Rand Corporation, to advise it on how to deal with the Soviet nuclear threat. One of Rand’s most brilliant mathematicians, John Nash, put forward a startling new theory.
Humans, Nash suggested, were totally predictable. Each of us was motivated by the singular impulse of selfishness, and if you could simply put into a computer all the varied ways a person could be selfish, all the varied forms of fulfillment of that selfishness, and all the possible strategies to achieve those fulfillments, then you could, in all cases, accurately predict another person’s behavior.
Not only that, groups of individuals as small as a family and as large as a nation would behave in the same predictable way. Nash developed an elaborate mathematical formula to articulate this, and predicted the outcome of a series of “games” that would prove his new “game theory.”
One, for example, was called the Prisoner’s Dilemma. In this game, one person has some ill-gotten gain of great value, such as a diamond worth millions successfully stolen from a museum. The person needs to convert the diamond into a usable medium of exchange—cash—and so turns to an underworld figure in an organized crime syndicate to make the exchange.
The problem, though, is that neither person is trustworthy—one is a thief and the other a professional criminal. So, how to make the exchange without the mobster simply killing the thief and taking the diamond?
The game plays out with the thief and the mobster each identifying a field, unknown to the other, in which to bury his booty. At a predetermined time after the burials are successfully completed, the thief and the mobster talk on the phone and give each other the location of the fields, so each can dig up the other’s offering.
According to Nash, the outcome should always be that each will betray the other. If you’re the thief and you decide to trust the mobster and tell him the true location of your field, you might end up with nothing as the mobster could still lie to you about his field’s location and go get your diamond. On the other hand, if you never bury the diamond and instead lie to him about where it is buried, you have two possible outcomes: you could end up with both the diamond and the money (if he trusts you and tells you the correct field where the money is buried) or at the very least end up not losing the diamond (if he betrays you and sends you to a phony field).
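For readers who like to see the logic laid out, here is a minimal sketch of the game in its standard textbook form—a two-by-two payoff matrix. The specific payoff numbers below are invented for illustration, not taken from Nash’s work, but they reproduce the bind described above: whatever the other player does, betraying pays at least as well as cooperating, so mutual betrayal is the game’s only equilibrium.

```python
# Illustrative Prisoner's Dilemma payoffs (arbitrary numbers, chosen only so
# that betraying always pays at least as well as cooperating, whatever the
# other player does).  Key: (my_move, their_move) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # both keep their word; both profit
    ("cooperate", "betray"):    0,  # I'm honest, he cheats me; I lose everything
    ("betray",    "cooperate"): 5,  # I cheat him and keep both diamond and money
    ("betray",    "betray"):    1,  # we both lie; neither gains much
}

def best_response(their_move: str) -> str:
    """Return the move that maximizes my payoff, given the other player's move."""
    return max(("cooperate", "betray"),
               key=lambda my_move: PAYOFF[(my_move, their_move)])

if __name__ == "__main__":
    for their_move in ("cooperate", "betray"):
        print(f"If the other player will {their_move}, my best response is "
              f"{best_response(their_move)}")
    # Both lines print "betray": betrayal dominates, so (betray, betray) is the
    # game's only Nash equilibrium -- even though mutual cooperation would
    # leave both players better off.
```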
Game theory became the basis of the U.S. military’s conduct of the cold war: computers were filled with millions of bits of data about Soviet assets and behaviors, and in every case the prediction was that the Soviets would behave in a way consistent with betrayal. As a result, even when they said they didn’t have a missile, we assumed they did, and built one of our own to match it. They then saw this, and built their own in response. We saw that, assumed they had built two and that one was hidden, and we built four more. And so on, until we had more than twelve thousand missiles and they had more than seven thousand (that we knew of).
Game theory swept the behavioral and scientific world, and particularly appealed to economists. Friedrich von Hayek was an Austrian economist who had left his homeland before the Nazi annexation of Austria in 1938 and moved to London, where he taught and wrote at the London School of Economics. His book The Road to Serfdom asserted, in a vein surprisingly similar to Nash’s, that self-interest controlled virtually all human behavior, and the only true and viable indicator of what was best for individuals at any moment was what they had or were attempting to get. This “market” of acquisitive behaviors, acted out by millions of people in a complex society the size of Britain or the United States, produced hundreds of millions of individual “decisions” every moment, a number that was impossible even to successfully measure in real time, much less feed into any computer in the world to predict best responses.
Rather than relying on the Rand Corporation’s computers to organize society, von Hayek suggested that there was already an ultimate computer—one that existed as a virtual force of nature, the product and result of all these individual buying and selling behaviors—which he called the “free market.”
Around the same time, a young woman named Alisa Zinov’yevna Rosenbaum—who had left Russia in 1926, after her father lost his small business to the Bolshevik Revolution—began writing fiction under the pen name Ayn Rand. In 1936 she published a virulently anticommunist novel, We the Living, which received mixed reviews in the United States, her adopted home.
But in 1957, not long after Nash’s development of game theory and amid the rise of the cold war, her magnum opus, Atlas Shrugged, was published and became a huge hit. Her hero, John Galt, delivers a lengthy speech about objectivism, her political theory, which was startlingly similar to the enlightened self-interest of von Hayek and the always selfish individual of John Nash and the Rand Corporation.
Freedom was being redefined.
Instead of being a positive force—the result of society working together in one way or another to provide for the basic needs of the individual, family, and culture as a foundation and stepping-off point (remember Maslow’s hierarchy from the introduction?)—freedom was increasingly seen as the individual’s ability and right to totally selfish self-fulfillment, regardless of the consequences to others (within certain limitations) and regardless of whether the individual ever participated in or uplifted society as a whole.
Freedom was a negative force in this new world view of von Hayek, his student Milton Friedman (father of the “Chicago School” philosophy of libertarian economics), and Ayn Rand’s objectivism. It was as much the freedom “from” as it was the freedom “to”: freedom from social obligation; freedom from taxation; freedom from government assistance or protection (“interference”); freedom to purely consider one’s own wants and desires, because if every individual followed only his own selfish desires, the mass of individuals doing so in a “free market” would create a utopia.
This was a radical departure from eight centuries of the conceptualization of freedom. Instead of providing the soil in which freedom would grow, these new visionaries (some would say reactionaries, some revolutionaries) saw government as the primary force that stopped freedom.
They claimed that their vision of a truly free world, where government constrained virtually nothing except physical violence and all markets were “free”—markets being the behavior of individuals or collectives of individuals (corporations)—had never been tried before on the planet. Their opponents, the classical liberals, said that indeed their system had been tried, over and over again throughout history, and in fact was itself the history of every civilization in the world during its most chaotic and feudal time. Lacking social contracts and interdependence, “Wild West” societies were characterized by both physical and economic violence, with those who were the most willing to exploit and plunder rising to the economic (and, eventually, political) top. They were called robber barons.
“But they weren’t ‘robber barons,’” said the executive director of the Ayn Rand Institute, Dr. Yaron Brook, on my radio program in 2008. “They were the heroes who built America!”
And so went the argument. In the late 1970s and early 1980s, think tanks funded by wealthy individuals and large transnational corporations (particularly military and oil companies) combined forces with politicians and authors to “win the battle of ideas” in the United States and Great Britain. “Greed is good” was their mantra, combined with the belief that if only the “markets” were “freed” and government got completely out of the way, then all good things would happen. This movement brought to power Margaret Thatcher in the United Kingdom and Ronald Reagan in the United States.
And thus began a series of Great Experiments. In Chile a democratically elected government was overthrown and its leader, Salvador Allende Gossens, was murdered with help from the U.S. CIA and several American corporations. Augusto José Ramón Pinochet Ugarte ruled the country with an iron fist until 1990. Advised by economists from Friedman’s Chicago School, he implemented “free market reforms,” including the privatization of Chile’s social security system. The result was a huge success for a small class of bankers and businessmen, and the economy grew, particularly in the large corporate sector. But poverty dramatically grew as well, the middle class began to evaporate, and those living on social security were devastated. Throughout the 1990s, people saw the value of their private social security accounts actually decline, in part because of the fees that bankers were skimming off the top, in part because the country’s economic growth had been unspectacular.[xxii] A national insurance program with shared risks and minimal administrative costs had been replaced by a national mandatory savings account system with results that depended on the market and did little for those truly in need.
Following the failure in Chile, the Chicago Boys focused their attention on Britain and the United States. Reagan and Thatcher undertook aggressive campaigns to destroy organized labor, with Reagan busting the powerful professional air traffic controllers’ union, PATCO, while Thatcher busted the coal miners’ union, that nation’s most powerful. Both turned the labor-protective apparatus of government—departments designed to protect organized labor—into bureaucracies that would assist corporations in busting unions. Additionally, both “freed” markets by starting the process of dropping tariffs and reducing regulations on businesses.
The result in both cases was, in retrospect, disastrous. In the new “flat” world, industry fled both countries for wherever labor was cheapest (at the moment, China); labor unions were destroyed; the middle class was squeezed; well-paying jobs were replaced with “Do you want fries with that?”; and social mobility dropped to levels not seen since the robber baron era a hundred years earlier.
The fall of the Soviet Union provided more opportunities for this “shock therapy,” as it was called by the Chicago Boys. But when it was tried in country after country, the result was the same as in the feudal days of a millennium earlier—the emergence of powerful and wealthy ruling elites, the destruction of the middle class, and an explosion of grinding and terrifying poverty.
Undaunted, the Chicago Boys needed a new country to experiment on. George W. Bush, whose entire cabinet (with the possible exception of Colin Powell) was made up of people who shared the von Hayek/Friedman/Rand viewpoint, decided that Iraq would be perfect. The country was invaded, and its vast social security, educational, and governmental sector was completely eliminated—to the point where people who had belonged to the political party of the former regime, the Baathists, were no longer allowed employment. Its national industries were sold off to the highest transnational bidder; all barriers to trade (tariffs, domestic content requirements, etc.) were eliminated; multinational corporations were told they could do business in Iraq and remove 100 percent of their profits; and all food, social security, and educational subsidies were ended. This was all done by the ideologue L. Paul Bremer, the administrator Bush appointed to run occupied Iraq, and all the followers of this “negative liberty” worldview sat back to watch, fully expecting Iraq to self-organize into a highly functioning “free market democracy.”
When it didn’t seem to be working, former Searle CEO and multimillionaire Donald Rumsfeld, then defense secretary, said simply, “Freedom’s messy!” That disaster continues unabated to this day and appears to be repeating itself in Afghanistan.
In recent economic history we find a similar story of deregulatory “shock therapy” right here in the United States. In 1989 L. William Seidman, the chairman of the Federal Deposit Insurance Corporation (FDIC), was appointed by George H. W. Bush to run the Resolution Trust Corporation (RTC) to take over, bail out, and reboot the nation’s savings and loans. In that capacity he helped invent “tranched credit-rated securitization,” in which banks aggregate subprime loans into a single “securitized” package, then slice—or “tranche,” from the French word for “slice”—them into pieces and sell off those tranches to investors all over the world at a good profit.
Tranched subprime loans and the derivatives based on them were the dynamite that exploded in September and October of 2008, bringing the world’s banking system to its knees.
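To make the mechanics concrete, here is a toy sketch of how such a “securitized” pool is sliced and how losses flow through it. The loan amounts, tranche sizes, and default rate are all invented for illustration—real deals were far more elaborate—but the loss “waterfall” worked on this basic principle.

```python
# A toy loss "waterfall": pooled loans are sliced into tranches, and defaults
# are absorbed by the junior slices before they touch the senior ones.
# Every figure below is invented for illustration.

def allocate_losses(total_losses: float, tranches: list[tuple[str, float]]) -> None:
    """tranches: (name, size) pairs in $ thousands, ordered junior to senior."""
    remaining = total_losses
    for name, size in tranches:
        hit = min(size, remaining)  # this slice absorbs losses up to its full size
        remaining -= hit
        print(f"{name:>9}: size ${size:>6,.0f}k, loss ${hit:>6,.0f}k, "
              f"remaining value ${size - hit:>6,.0f}k")

if __name__ == "__main__":
    # A hypothetical $100 million pool of mortgages, sliced junior-to-senior.
    tranches = [("equity", 5_000), ("mezzanine", 15_000), ("senior", 80_000)]
    # Suppose 12 percent of the pool defaults with no recovery: $12 million in losses.
    allocate_losses(total_losses=12_000, tranches=tranches)
    # The equity slice is wiped out, the mezzanine slice loses $7 million, and the
    # senior slice is untouched -- which is how pieces of even shaky loan pools
    # could be sold around the world as "safe" paper.
```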
In an October 2008 speech at Grand Valley State University[xxiii] (which Seidman helped found), he explained his role in creating these securities—although, he pointed out, when he was selling them from the S-and-Ls through the RTC, he always made sure they were clean, because the RTC, and ultimately the S-and-Ls, kept a share of them.
“I remember when we invented this [in 1989],” Seidman said, “that Alan Greenspan said, ‘well this is a great new innovation because we are spreading the risk all over the world.’” But at least they were relatively safe instruments from 1989 until 2004.
To make matters worse, Seidman said, in 2004 the investment banking industry went to the Securities and Exchange Commission (SEC), which regulated it, and asked the SEC to abandon its requirement that banks couldn’t borrow more than twelve dollars (to buy these tranches and sell them at a profit) for every dollar they had in capital. The banks argued that the twelve-to-one limit was antiquated and suggested that they could self-regulate with oversight from the SEC. The SEC, run by “free market” appointees of George W. Bush, agreed. As Seidman put it, “They [the banks] went from 12 times leverage to 30 to 35 times leverage.” At the same time, the derivatives market was developing. A derivative is typically a bet on the future value of a stock, a mortgage, or some other underlying asset from which its value is “derived”; mortgage insurance—a bet that a mortgage won’t fail, known as a “credit default swap”—is one example.
“The problem was that there was no regulatory agency for these,” Seidman said, “and it was proposed that regulation or at least disclosure be required. The industry fought it and the Federal Reserve, under Alan Greenspan, vehemently opposed [transparency or regulation]. I can remember Alan saying, ‘Look, these are sophisticated contracts between knowledgeable buyers and knowledgeable sellers, and no regulator can do as well as they’ll do, so what do you need a regulator for? The market will regulate these.’
“And he [Greenspan] won the day. Alan was the key person responsible for the fact that we didn’t even know how many of those contracts there were, and it was in the trillions.”
At the end of his speech, Seidman noted the worldwide crash brought about by the SEC’s deregulation of the banks and the Fed’s unwillingness to regulate the derivative market at all, saying that this was “just part of believing that the market will regulate itself if you just let those good people go out there and bargain on their own.”
Noting that Alan Greenspan was a “good friend” of his, Seidman said, “He spent ten years with Ayn Rand and he believed that people were economic machines. They were never fraudulent, they never got mad, and they were perfect economic machines. Unfortunately it turns out that’s not a very accurate description of how people behave, particularly when they can make a whole bundle of money in a hurry and get out before they get caught.”
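The leverage figures Seidman cited are worth a moment of arithmetic, because they explain why the system became so fragile. Here is a minimal sketch using the definition of leverage given above—dollars borrowed for every dollar of a bank’s own capital; the balance-sheet framing is deliberately simplified.

```python
# Using the definition in the text -- L dollars borrowed for every $1 of the
# bank's own capital -- assets are (1 + L) and equity is 1, so equity is wiped
# out once asset values fall by 1 / (1 + L).

def wipeout_loss(borrowed_per_dollar_of_capital: float) -> float:
    """Fractional drop in asset values that erases all of the bank's equity."""
    return 1.0 / (1.0 + borrowed_per_dollar_of_capital)

if __name__ == "__main__":
    for leverage in (12, 30, 35):
        print(f"{leverage:>2}-to-1 leverage: a {wipeout_loss(leverage):.1%} drop "
              f"in asset values wipes out the bank's capital")
    # 12-to-1 -> about 7.7%;  30-to-1 -> about 3.2%;  35-to-1 -> about 2.8%.
    # At 30-35 times leverage, even a modest decline in mortgage-backed assets
    # is enough to leave a bank insolvent.
```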
As if to punctuate the entire disastrous thirty-year experiment, Adam Curtis points out in his three-hour BBC special The Trap that when the predictions of game theory were tested on the Rand Corporation’s own secretaries, they always chose the “trusting” course—the complete opposite of the hypothesis of Nash and the utopian thinkers who engineered Reaganomics, Thatchernomics, libertarianism, and objectivism.
It turns out that when given the Prisoner’s Dilemma or other game theory models, only two groups of people behaved as Nash predicted: the first were psychopaths, like Nash himself, who after developing game theory in the 1950s was institutionalized for almost a decade for severe paranoid schizophrenia and has since renounced his own invention. The second were economists.
Everybody else, it seems, is willing to trust in the innate goodness of others.
Why Do CEOs Make All That Money?
Americans have long understood how socially, politically, and economically destabilizing huge disparities in wealth can be. For this reason, the U.S. military and the U.S. civil service have built into them systems that ensure that the highest-paid federal official (including the president) will never earn more than twenty times the salary of the lowest-paid janitor or army private. Most colleges have similar programs in place, with ratios ranging from ten-to-one to twenty-to-one between the president of the university and the guy who mows the lawn. From the 1940s through the 1980s, this was also a general rule of thumb in most of corporate America; when CEOs took more than “their fair share,” they were restrained by their boards so that the money could instead be used by the company for growth and to open new areas of opportunity. The robber baron J. P. Morgan himself suggested that nobody in a company should earn more than twenty times the lowest-paid employee (although he exempted stock ownership from that equation).
But during the “greed is good” era of the 1980s, something changed. CEO salaries began to explode at the same time that the behavior of multinational corporations began to change. There was a mergers-and-acquisitions mania in the air, and as big companies merged to become bigger, they shed “redundant” parts. The result was a series of waves of layoffs, as entire communities were decimated, divorce and suicide rates exploded, and America was introduced to the specter of the armed “disgruntled employee.” Accompanying the consolidation of wealth and power of these corporations was the very clear redefinition of employment from “providing a living wage to people in the community” to “a variable expense on the profit and loss sheet.” Companies that manufactured everything from clothing to television sets discovered that there was a world full of people willing to work for fifty cents an hour or less: throughout America, factories closed, and a building boom commenced among the “Asian Tigers” of Taiwan, South Korea, and Thailand. The process has become so complete that of the millions of American flags bought and waved after the World Trade Center disaster, most were manufactured in China. Very, very, very few things are still manufactured in America.
And it wasn’t unthinking, unfeeling “corporations” who took advantage of the changes in the ways the Sherman Antitrust Act and other laws were enforced by the Reagan, Bush, Clinton, and Bush administrations. It took a special type of human person.
In his book Toys, War, and Faith: Democracy in Jeopardy, Maj. William C. Gladish suggests that this special breed of person is actually a rare commodity, and thus highly valued. He notes that corporate executives make so much money because of simple supply and demand. There are, of course, many people out there with the best education from the best schools, raised in upper-class families, who know how to play the games of status, corporate intrigue, and power. The labor pool would seem to be quite large. But, Gladish points out, “There’s another and more demanding requirement to meet. They must be willing to operate in a runaway economic and financial system that demands the exploitation of humanity and the environment for short-term gain. This is a disturbing contradiction to their children’s interests and their own intelligence, education, cultural appreciation, and religious beliefs.
“It’s this second requirement,” Gladish notes, “that drastically reduces the number of quality candidates [for corporations] to pick from. Most people in this group are not willing to forsake God, family, and humanity to further corporate interest in a predatory financial system. For the small percentage of people left, the system continues to increase salaries and benefit packages to entice the most qualified and ruthless to detach themselves from humanity and become corporate executives and their hired guns.”
Sociopathic Paychecks
One of the questions often asked when the subject of CEO pay comes up is, “What could a person such as William McGuire or Rex Tillerson (the CEOs of UnitedHealth and ExxonMobil, respectively) possibly do to justify a $1.7 billion paycheck or a $400 million retirement bonus?”
It’s an interesting question. If there is a “free market” of labor for CEOs, then you’d think there would be a lot of competition for the jobs. And a lot of people competing for the positions would drive down the pay. All UnitedHealth’s stockholders would have to do to avoid paying more than $1 billion to McGuire is find somebody to do the same CEO job for half a billion. And all they’d have to do to save even more is find somebody to do the job for a mere $100 million. Or maybe even somebody who’d work the necessary sixty-hour weeks for only $1 million.
So why is executive pay so high?
I’ve examined this with both my psychotherapist hat on and my amateur economist hat on, and only one rational answer presents itself: CEOs in America make as much money as they do because there really is a shortage of people with their skill set. The shortage is so serious that some companies have to pay as much as $1 million a week—or even $1 million a day—to get somebody to do the job successfully.
But what part of being a CEO could be so difficult—so impossible for mere mortals—that it would mean that there are only a few hundred individuals in the United States capable of performing it?
In my humble opinion, it’s the sociopath part.
CEOs of community-based businesses are typically decent people who are responsive to their communities. But the CEOs of the world’s largest corporations daily make decisions that destroy the lives of many other human beings. Only about 1 to 3 percent of us are sociopaths—people who don’t have normal human feelings and can easily go to sleep at night after having done horrific things. And of that small percentage of sociopaths, there’s probably only a fraction with a college education. And of that tiny fraction, there’s an even tinier fraction that understands how business works, particularly within any specific industry.
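To see how quickly that chain of fractions shrinks the pool, here is a back-of-the-envelope sketch. Every number in it is an assumption invented for illustration—none of these fractions is a measured statistic—but it shows the shape of the argument.

```python
# Back-of-the-envelope version of the argument above.  Every figure here is an
# invented assumption, not a measured statistic.
US_ADULTS         = 250_000_000  # rough U.S. adult population (assumed)
SHARE_SOCIOPATHIC = 0.02         # "about 1 to 3 percent" -- take the middle
SHARE_COLLEGE     = 0.30         # assumed fraction of that group with a degree
SHARE_BUSINESS    = 0.01         # assumed fraction who also know a given industry
SHARE_WILLING     = 0.02         # assumed fraction actually willing to do the job

pool = US_ADULTS * SHARE_SOCIOPATHIC * SHARE_COLLEGE * SHARE_BUSINESS * SHARE_WILLING
print(f"Implied candidate pool: roughly {pool:,.0f} people")
# About 300 people with these made-up fractions -- a "few hundred," as the
# argument claims.  The exact numbers are invented; the point is how fast
# multiplying small fractions shrinks the pool.
```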
Thus there is such a shortage of people who can run modern monopolistic, destructive corporations that stockholders have to pay millions to get them to work. And being sociopaths, they gladly take the money without any thought to its social consequences.
Today’s modern transnational corporate CEOs—who live in a private-jet-and-limousine world entirely apart from the rest of us—are remnants from the times of kings, queens, and lords. They reflect the dysfunctional cultural (and Calvinist/Darwinian) belief that wealth is proof of goodness, and that goodness then justifies taking more of the wealth.
In the nineteenth century in the United States, entire books were written speculating about the “crime gene” associated with Irish, and later Italian, immigrants, because they lived in such poor slums in the East Coast’s biggest cities. It had to be something in their genes, right? It couldn’t be just a matter of simple segregation and discrimination!
The flip side of this is the CEO culture and, in the larger world, the idea that the ultimate CEO—the president of the world’s superpower—should shove democracy or anything else down the throats of people around the world at the point of a gun.
Democracy in the workplace is known as a union. The most democratic workplaces are the least exploitative, because labor has a power to balance capital and management. And looking around the world, we can clearly see that those cultures that most embrace the largest number of their people in an egalitarian and democratic way (in and out of the workplace) are the ones that have the highest quality of life. Those that are the most despotic, from the workplace to the government, are those with the poorest quality of life.