What Americans and the Media are Missing About the TikTok Crisis
It’s time for “truth in labeling” laws, like those the processed food industry already complies with, to apply to social media. The life — and democracy — that gets saved could be your own…
While I agree with the bipartisan House and White House consensus that having TikTok owned by a company that, by law, must share its information with the Chinese Communist Party is a national security threat, there’s a larger part of the issue — which includes a danger to Americans also presented by both Facebook and Xitter — that nobody is seriously discussing.
As I detailed at length in my 2022 book The Hidden History of Big Brother: How the Death of Privacy and the Rise of Surveillance Threaten Us and Our Democracy, that’s the “secret algorithm” that determines what content and which “influencers” and even regular users get promoted or ignored, and to whom that content is pushed.
That algorithm has been shown, in case after case across the various social media sites, to be highly toxic: pushing people into conspiracy theories, Nazism, and white supremacy, and destroying the self-esteem of young people and children. It has repeatedly contributed to deaths, through radicalization (the El Paso and Buffalo shooters), actual genocide, and suicide.
We’ve faced a similar problem before, one that was also killing Americans every year (as social media is today), and we successfully did something about it.
Imagine somebody invented a set of “secret” food additives that caused you to crave more and more of whatever they were added to, driving people who ate them to constantly gain weight until they were in the midst of an obesity crisis, living on the verge of death from stroke, heart attack, dementia, and diabetes.
Wouldn’t it be reasonable to at least inform people that those additives were in the food they’re eating? Particularly if the additives were causing thousands of deaths every year and so destroying the self-esteem of their now-obese consumers that both their social isolation and their suicide rates increased?
Turns out we’ve already been through this, and — just like today’s social media industry — the processed and fast food industries launched a multimillion-dollar, nearly 30-year lobbying campaign to prevent anybody from knowing what or how much of those specific additives were in their products.
In other words, these companies’ executives knew that their products were destructive — would, in fact, kill or destroy the lives of millions of Americans over the coming decades — but did their best to hide that from both the American people and any regulatory agency that might have oversight.
Finally, though, LBJ and a Democratic Congress forced through transparency labeling rules that were later expanded and tightened by the FDA under both the Nixon and Reagan administrations.
Salt, sugar, and fat are the “deadly triad” of additives that make food addictive, the very ingredients the processed and fast food industries fought so hard — and spent hundreds of millions in lobbying — to conceal from consumers.
Since the late 1960s, when LBJ signed the first labeling laws into existence, America — and now every other developed country in the world — has both informed people of the dangers of those additives and required processed food manufacturers to list those three specific ingredients on their labels.
So, why the hell aren’t we doing the same thing with the “secret sauce” algorithms of the social media industry? After all, like the processed food companies in the 1960s, they’re harming Americans — and harming democracy — while hiding from us and regulatory agencies the details of the mechanism (the “ingredients”) with which they’re doing it.
Yes, it does matter who owns the companies, the current subject of Congressional debate and lots of hand-wringing.
Between Elon Musk’s statements reflecting bizarre misogyny, antisemitism, and racism, and Mark Zuckerberg’s alleged obsession with profits above lives or democracy (and his secret meetings with Trump), both platforms have allegedly bent their users toward toxic conspiracy theories and hatred of racial, religious, and gender groups.
TikTok actively suppresses “negative” information about China, particularly anything touching on democracy, Hong Kong independence, the imprisonment of the Uyghurs, or the Tiananmen Square revolt. Even YouTube regularly pushes people looking for mere Republican content down into QAnon and Nazi rabbit holes.
But the weapon they use isn’t their power as owners to control moderation or ban users, although whistleblowers have alleged such efforts. The real power ByteDance, Meta, and X Holdings wield is contained in the computer code — the algorithm — that drives their services.
And all three companies fiercely defend their right to keep those algorithms secret, just like the junk food industry did between the 1940s (when the “deadly triad” was first publicly identified) and the 1960s when Congress and the FDA finally took action to force transparency.
While the field of research into the way social media may cause political radicalization is fairly new, serious scientific examinations of how watching porn can alter behavior go back decades. Turns out, they’re pretty much the same in several ways.
While much of the research, particularly that suggesting that porn viewing leads to antisocial behavior out in the world, is controversial and still the subject of scientific debate, one finding is relatively uncontroversial: that over time, most heavy users of porn will seek out more and more extreme content to get the same satisfaction.
As Norman Doidge, MD wrote in his book The Brain That Changes Itself:
“When pornographers boast that they are pushing the envelope by introducing new, harder themes, what they don’t say is that they must, because their customers are building up a tolerance to the content.”
Humans are novelty-seeking machines. Give us a little buzz — be it with a shot of heroin or cocaine, a sugary drink, or an exhilarating new experience — and curiosity can quickly become a craving. Over time, it takes more and more to give us the same buzz, a process that we generally liken to addiction but applies to all sorts of things, from processed foods to violence in video games and movies to opioids.
In every case, the neurochemical process that draws us initially to these things — novelty-seeking behavior mediated by bursts of “happy chemicals” like dopamine in the brain — is the same: a system originally wired into our hunter-gatherer brains to increase our chances of survival.
The excitement of the hunt — as much as our hunger — drew us out into the dangerous world of jungle, forest, or savanna to find food. And highly concentrated nutrients — like a tree full of honey — gave us an even bigger buzz, guaranteeing we’d search for more.
It’s also how social media works, and the social media companies know it.
A study titled Down the (White) Rabbit Hole: The Extreme Right and Online Recommender Systems found:
“A process is observable whereby users accessing an ER [Extreme Rightwing] YouTube video are likely to be recommended further ER content, leading to immersion in an ideological bubble in just a few short clicks.”
Real-world confirmation was easy for Zeynep Tufekci, a reporter who chronicled her experience in the New York Times. Noting that she didn’t normally watch right-wing extremist content on YouTube, she said that she needed to confirm “a few Trump quotes” during the 2016 election, so she watched several of his speeches on that site.
“Soon I noticed something peculiar,” Tufekci wrote. “YouTube started to recommend and ‘autoplay’ videos for me that featured white supremacist rants, Holocaust denials, and other disturbing content.”
Curious, she created a few new YouTube accounts under fake names and began looking at videos on subjects from Hillary Clinton to Bernie Sanders to seemingly nonpolitical topics like vegetarianism.
Right across the board, she found that “videos on vegetarianism led to videos on veganism” and “videos about jogging led to videos about running ultramarathons.” YouTube just kept cranking up the ante, morphing her Hillary and Bernie watching into conspiracy screeds about 9/11 and other worries of the extreme left.
“It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm,” she wrote. “It promotes, recommends, and disseminates videos in a manner that appears to constantly up the stakes. Given its billion or so users, YouTube may be one of the most powerful radicalizing instruments of the 21st century.”
Algorithms built by the other social media platforms appear to do the same thing, and they’re all proprietary, need-to-know trade secrets, unavailable for oversight even by government agencies. The best-known example is Facebook’s algorithm leading users to content so radical that some not only voted for Donald Trump but went on to invade the US Capitol, seriously injuring more than 140 police officers, many of whom ended up in the hospital; four later died.
In every case, the algorithm’s goal is to “increase engagement” so that the social media company can sell more ads at a higher price. It’s all about the money, and the money is in the billions.
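To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch of an engagement-maximizing ranker. Every function name, tag, and number below is invented for illustration, since the real platform algorithms are secret; the point is only that when the objective rewards predicted engagement and nothing else, nothing in the code ever penalizes extremity.

```python
# Hypothetical, simplified sketch of an engagement-driven recommender.
# All names and values are invented; real platform algorithms are secret.

def predicted_engagement(video, user_history):
    """Toy scoring: reward similarity to what the user already watched,
    plus a bonus for 'intensity', since more provocative content tends
    to hold attention longer (the tolerance effect described above)."""
    similarity = len(set(video["tags"]) & set(user_history["tags"]))
    return similarity + video["intensity"]  # intensity is rewarded, never penalized

def recommend(candidates, user_history, k=3):
    """Rank candidates purely by predicted engagement. Nothing in this
    objective accounts for accuracy, harm, or extremity."""
    return sorted(
        candidates,
        key=lambda v: predicted_engagement(v, user_history),
        reverse=True,
    )[:k]

# A user who watched ordinary political speeches...
history = {"tags": ["politics", "election"]}
candidates = [
    {"title": "Candidate speech",     "tags": ["politics"],             "intensity": 1},
    {"title": "Heated panel debate",  "tags": ["politics", "election"], "intensity": 3},
    {"title": "Conspiracy deep-dive", "tags": ["politics", "election"], "intensity": 9},
]

# ...is shown the most "engaging" (here, the most extreme) video first.
print([v["title"] for v in recommend(candidates, history)])
```

Run the sketch and the most extreme candidate ranks first, even though the user never asked for it: that is the structural consequence of a single-metric objective, and it is exactly the detail that transparency rules would force platforms to disclose.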
Facebook’s algorithm, according to CNBC, even placed paid ads for assault weapons next to content filled with inflammatory lies and misinformation about the November 2020 election, a fact that wasn’t lost on 23 members of Congress who grilled Facebook CEO Mark Zuckerberg about it in early 2021.
As a society, we generally try to regulate things that provoke this kind of destructive, brain-seizing response.
Pharmaceuticals and alcohol are tightly regulated, as is gambling, because addiction to them has dire societal consequences. We required warnings on cigarettes and processed food products so people can understand the threats and consequences. And we regulate sex and violence in mainstream media because of their “contagion effects.”
As mentioned, when Congress discovered that processed food manufacturers were using research on addiction to determine how much salt, sugar, and fat to put into their products to produce repeated and increasing consumption — leading to a nationwide obesity, health, and death crisis — they mandated transparency. Food labels now disclose the content of processed food products, including the amount of each of these “addictive” substances.
In stark contrast, as Facebook whistleblower Frances Haugen told an MIT conference on the impact of social media on society, Meta/Facebook continues to conceal its algorithm from public, academic, or government scrutiny. The result, she says, is that:
“[N]o one gets to see behind the curtain, and they don’t know what questions to ask. So, what is an acceptable and reasonable level of rigor for keeping kids off these platforms, and what data would [the platforms] need to publish to understand whether they are meeting the duty of care?”
Because of Meta’s secrecy, nobody knows the answer. The same, of course, is also true of all the other social media platforms and, arguably, even the search engine companies.
As Public Citizen notes:
“In the race to amass monopoly power in their respective markets, these corporations have developed predatory business practices that harvest user data for profit and facilitated discrimination by race, religion, national origin, age, and gender. Facebook and Google have wielded unprecedented influence over our democratic process. …
“Increased investments in Washington have allowed these monopolists to harm consumers, workers, and other businesses alike, with relatively little accountability to date. A report Public Citizen released in 2019 (covering up to the 2018 election cycle) detailed how Big Tech corporations have blanketed Capitol Hill with lobbyists and lavished members of Congress with campaign contributions.”
Since American history’s most corrupt Supreme Court justice, Clarence Thomas — after taking millions in gifts, homes, and vacations from a politically active conservative with business before the Court — became the tie-breaking vote in Citizens United, allowing massive corporations and the morbidly rich to legally bribe judges and members of Congress, Public Citizen points out:
“Big Tech has eclipsed yesterday’s big lobbying spenders, Big Oil and Big Tobacco. In 2020, Amazon and Facebook spent nearly twice as much as Exxon and Philip Morris on lobbying. … Big Tech’s lobbyists are not just numerous, they are also among the most influential in Washington. Among the 10 lobbyists who were the biggest contributors to the 2020 election cycle, half lobby on behalf of at least one of the four Big Tech companies.”
Given how badly six billionaire-owned Republicans on the Supreme Court have corrupted our political system, it’ll be a big lift to reduce the damage social media companies daily do to our mental health, our children’s lives, and America’s political systems in their pursuit of billions in monthly profits.
Nonetheless, a good start toward regulating Big Brother-style social media companies would be to do the same as we do with the processed food companies: require them to publish their algorithms, both in source code and with a plain English explanation, so both consumers and Congress, at the very least, can learn how we’re being manipulated and radicalized for their profit and political gain.
While the debate over the ownership of TikTok by a company beholden to the Chinese Communist Party is both important and legitimate, a far more important debate that’s almost completely ignored — in large part because of the millions in lobbying money spent by the social media industry — is the one over the algorithms that give these platforms the power to hook and radicalize us.
As the mom-led battle against Big Tobacco’s efforts to market to our children successfully showed in the 1990s, an educated and outraged populace can sometimes overcome the millions spent by giant corporations and their CEOs to bribe politicians.
Let your member of Congress know it’s time for “truth in labeling” laws, like those the processed food industry complies with, to apply to social media algorithms. The life — and democracy — that gets saved could be your own.
I spent all day yesterday knocking on the doors of left-leaning and "independent" low propensity voters (and having phenomenal conversations with a number of them, through a process called "deep canvassing"). But there is simply no way to scale up such an impactful program to combat the monstrosity of the problem illustrated here unless we do as you say. (At the very least; I'm afraid "truth in labeling" might not be far enough at this point - people are too addicted.) Also, would anyone trust the labeling anymore? Or would it just be seen as a corrupt arm of a corrupt and controlling government by those who are predisposed to thinking that way?
I don't see why 100% of the population wouldn't support a demand for the release of algorithms. Have there been any large-scale, coordinated efforts to directly pressure tech companies to reveal their processes? I would think that both the far right and the far left could get behind that demand, each for their own reasons.
This is the problem underneath our problems — the base-layer problem — and it has been evident for such a long time. It literally makes me insane that intelligent people refuse to acknowledge it.
Compelling the social media grifters to reveal their algorithms is much like having them change their business models to stop stealing our personal data and content without compensating us for it.
The tech oligarchs already control enough Congressional whores to do whatever they want them to do. Therefore, getting them to reveal their algorithms or making them pay for the information that they take from us is not likely to happen anytime soon as long as money is speech and corporations can exploit the rights of our citizens.
Unless, of course, we replace enough of the public sector decision-makers that serve the kleptocrats with competent officials who will promote our general welfare and secure the blessings of liberty for ourselves and our posterity (i.e., fulfill their Constitutional purpose).