Is the Greatest Threat to Humanity Something Called an Algorithm?
Algorithms used in social media are not tuned for what is best for society. They don’t ask themselves, “Is this true?” or “Will this information help or hurt humanity?”
The man who coined the term “virtual reality” and helped create Web 2.0, Jaron Lanier, recently told a reporter for The Guardian there’s an aspect to the internet that could endanger the literal survival of humanity as a species. It’s an amazing story, and I believe he’s 100% right.
Humans are fragile creatures. We don’t have fangs or claws to protect ourselves from other animals that might want to eat us. We don’t have fur or a pelt to protect us from the elements.
What we do have, however, that has allowed us to conquer the planet and survive for eons is our interconnection with each other, something we generally refer to as society, community, and culture.
Humans are social animals. Our ability to share information with each other in ways that are meaningful and credible has been the key to our survival.
For hundreds of thousands of years, it was scouts, neighbors, and family members reporting predators or prey, animal or human, just around the other side of the mountain or on the perimeter of the nighttime fire, that kept our ancestors safe.
Over the millennia, we developed elaborate social constructs or “rules of society” to enhance our confidence in the information we’re getting from our fellow humans, because that information may be essential to our survival.
When important information is twisted, distorted, or lied about it can put us at risk. And that’s what’s happening right now across multiple social media platforms, causing people to question global warming and other science (Covid vaccines, for example) while engaging in behavior destructive to a democratic, peaceful, functioning world.
These rules or Commandments about truthful communication are at the core of every religion, every culture, and every society from the most technologically sophisticated to those “primitives” still living in jungles, forests, and wild mountain areas.
They’re built into our deepest and most ancient oral traditions, stretching back to dim antiquity, known by every person in every culture around the world.
We in western culture can all recite the story of The Little Boy Who Cried Wolf, Eve’s lie to her god about consuming the forbidden fruit, and the consequences of courtiers’ lies about The Emperor’s New Clothes. Every other culture on Earth has their versions of the same stories.
We know, remember, and pass along these stories because truthful information is essential to the survival of family, tribe, community, nation, and ultimately humanity itself.
They’re even built into the language of our religions. While I was writing the first draft of this article, my dear friend Rabbi Hillel Zeitlin told me on a Zoom call how the Torah calls the inanimate mineral world Domem or “silent,” while the realm of humans is known as Midaber or “speaking.”
“When the Torah describes the infusion of the soul into man,” he said, “one of the most ancient commentators, Onkelos, described it as ‘the speaking spirit’ or Ruach Memalela.”
And that “speaking spirit” — our ability to communicate with others — carries with it an obligation to tell the truth: the Bible is filled with stories of disasters that came about because of untrue information (as are the Koran, the Bhagavad Gita, and the holy books of every other religion).
This explains why:
—We universally disparage lies and liars: it’s often the first lesson parents teach their young children.
—When information is particularly critical to our survival or quality of life, we build into law severe penalties for lying (called “perjury”).
—We bestow our highest honors, things like Nobel and Pulitzer prizes, on people who have been particularly effective at finding important, truthful information and sharing it.
—We built protection for a free, open, and accountable press into our Constitution 231 years ago so future generations of Americans could rely on competent and full-spectrum information when making decisions about leadership, governance, and policy.
Now all of that — based in our ability to trust in the accuracy of information we use to select leaders and determine policy — is under threat from something that’s invisible to us and most people don’t even realize exists.
Possibly the greatest threat to humanity at this moment is something called an algorithm.
An algorithm is a software program/system that inserts itself between humans as we attempt to communicate with each other. It decides which communications are important and which are not, which communications will be shared and which will not.
As a result, in a nation where 48% of citizens get much or most of their news from social media, the algorithm driving social media sites ultimately decides which direction society will move, by choosing which shared information to encourage and which to suppress.
When you log onto social media and read your “feed,” you’re not seeing (in most cases) what was most recently posted by the people you “follow.” While some of that’s there, the algorithm also feeds you other posts it thinks you’ll like based on your past behavior, so as to increase your “engagement,” aka the amount of time you spend on the site and thus the number of advertisements you will view.
As a result, your attention is continually tweaked, led, and fine-tuned to reflect the goal of the algorithm’s programmers. Click on a post about voting and the algorithm then leads you to election denial, from there to climate denial, from there to QAnon.
Next stop, radicalization or paralysis. But at least you stayed along for the ride and viewed a lot of ads in the process: that’s the goal of the algorithm.
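The mechanics described above can be made concrete with a toy sketch. This is a hypothetical illustration, not any platform’s actual code: every name, weight, and topic label here is an assumption, but the shape of the logic — rank posts by predicted engagement, with no term for truth or social harm — is the point.

```python
# Hypothetical sketch of an engagement-driven feed ranker.
# All names, topics, and weights are illustrative assumptions,
# not any real platform's code.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    topic: str
    age_hours: float

def predicted_engagement(post: Post, user_topic_clicks: dict) -> float:
    """Score a post by how likely this user is to dwell on it.

    Note what is absent: no term asks whether the post is true,
    or whether it helps or harms the reader."""
    topic_affinity = user_topic_clicks.get(post.topic, 0)  # past clicks on this topic
    recency = 1.0 / (1.0 + post.age_hours)                 # newer posts score a bit higher
    return topic_affinity * 2.0 + recency

def rank_feed(posts, user_topic_clicks):
    # Highest predicted engagement first -- not most recent, not most truthful.
    return sorted(
        posts,
        key=lambda p: predicted_engagement(p, user_topic_clicks),
        reverse=True,
    )

feed = rank_feed(
    [Post("a", "election_denial", 5.0), Post("b", "local_news", 0.5)],
    user_topic_clicks={"election_denial": 7},  # the user clicked such posts before
)
```

In this sketch, the stale post that matches the user’s past clicks outranks the fresher local-news post, which is exactly the “click on voting, get led to election denial” dynamic described above: the ranker optimizes dwell time, and dwell time alone.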
Algorithms used in social media are not tuned for what is best for society. They don’t follow the rules that hundreds of thousands of years of human evolution have built into our cultures, religions, and political systems.
They don’t ask themselves, “Is this true?” or “Will this information help or hurt this individual or humanity?”
Instead, the algorithms’ sole purpose is to make more money for the billionaires who own the social media platform.
If telling you that, as Donald Trump recently said, climate change “may affect us in 300 years” makes for more engagement (and more profit for the social media site) than does telling the truth about fossil fuels, it will get pushed into more and more minds.
No matter that such lies literally threaten human society short-term and possibly the survival of the human race long-term.
As Jaron Lanier told The Guardian:
“People survive by passing information between themselves. We’re putting that fundamental quality of humanness through a process with an inherent incentive for corruption and degradation. The fundamental drama of this period is whether we can figure out how to survive properly with those elements or not.”
Speaking of climate change and information/disinformation being spread by algorithms on social media, he added:
“I still think extinction is on the table as an outcome. Not necessarily, but it’s a fundamental drama.”
Climate change is a unique threat to humanity, one like we’ve never seen before. It’s going to take massive work and investment to avoid disaster, and that’s going to require a broad consensus across society about the gravity of the situation. The same could be said about threats to American democracy like the rise of far-right hate and election denial.
Yet social media is filled with content denying climate change and denigrating basic norms and institutions of democracy. This is a threat to America and to humanity itself.
The premise of several books, most famously Shoshana Zuboff’s The Age of Surveillance Capitalism, is that the collection of massive amounts of data about each of us — then massaged and used by “automated” algorithms to increase our engagement — is actually a high-tech form of old-fashioned but extremely effective thought control.
She argues that these companies are “intervening in our experience to shape our behavior in ways that favor surveillance capitalists’ commercial outcomes. New automated protocols are designed to influence and modify human behavior at scale as the means of production is subordinated to a new and more complex means of behavior modification.” (Emphasis hers.)
She notes that “only a few decades ago US society denounced mass behavior-modification techniques as unacceptable threats to individual autonomy and the democratic order.” Today, however, “the same practices meet little resistance or even discussion as they are routinely and pervasively deployed” to meet the financial goals of those engaging in surveillance capitalism.
This is such a powerful system for modifying our perspectives and behaviors, she argues, that it intervenes in or interferes with our “elemental right to the future tense, which accounts for the individual’s ability to imagine, intend, promise, and construct a future.” (Emphasis hers.)
So, what do we do about this?
When our Constitution was written, the Framers wanted “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.”
Thus, Article 1, Section 8 of the Constitution gives Congress the power to pass laws protecting both physical and intellectual property, things like inventions as well as creative writing and art. We call these regulations patent, copyright, and trademark laws.
Social media companies have claimed that their algorithms are intellectual properties, inventions, and trade secrets, all things that fall under the rubric of these laws to advance and protect intellectual property and commerce.
And, indeed, the whole point of algorithms is to enhance commerce: to make more money for the social media sites that deploy them.
But are they promoting “the Progress of Science and the useful Arts”? Is amplifying hate and misinformation “useful”?
If not, the power to keep algorithms secret that Congress has given, Congress can also take away.
In my book The Hidden History of Big Brother: How the Death of Privacy and the Rise of Surveillance Threaten Us and Our Democracy, I argue that algorithms should be open-source and thus publicly available for examination.
The reason so many algorithms are so toxic is that they are fine-tuned to maximize engagement for the benefit of advertisers, who then pay the social media company.
But if a pay-for-play membership fee were put in place to fund the social media site, as Elon Musk has flirted with, it could significantly diminish the pressure to have a toxic algorithm running things.
Nigel Peacock and I saw this at work for the nearly two decades that we ran over 20 forums on CompuServe back in the 1980s and ’90s. Everybody there paid a membership fee to CompuServe, so we had no incentive to try to manipulate their experience beyond normal moderation. There was no algorithm driving the show.
It would also reduce the screen time and “screen addiction” so many people experience with social media, freeing up personal time and resources, all while maintaining revenues for the social media site and reducing the incentives toward misinformation and radicalization.
But lacking a change in business model, the unique power social media holds to change behavior for good or ill — from Twitter spreading the Arab Spring, to Facebook provoking a mass slaughter in Myanmar, to both helping Russia elect Donald Trump in 2016 — cries out for regulation, transparency, or, preferably, both.
Ten months ago, U.S. Senator Ron Wyden, D-Ore., with Senator Cory Booker, D-N.J., and Representative Yvette Clarke, D-N.Y., introduced the Algorithmic Accountability Act of 2022, which would do just that.
“Too often, Big Tech’s algorithms put profits before people, from negatively impacting young people’s mental health, to discriminating against people based on race, ethnicity, or gender, and everything in between,” said Senator Tammy Baldwin, a co-sponsor of the legislation.
“It is long past time,” she added, “for the American public and policymakers to get a look under the hood and see how these algorithms are being used and what next steps need to be taken to protect consumers.”
And, let’s not forget, to protect our democracy, our nation, and our planet.
The people who own our social media, often focused more on revenue than the consequences of their algorithms, don’t seem particularly concerned about these issues.
But we must be.