October 10, 2021

Whistleblower: How Facebook’s Algorithms Promote Hate and Toxic Content

Prabir Purkayastha

FACEBOOK is in the limelight for both the right and the wrong reasons. The wrong reason is that what was supposed to be a small configuration change took Facebook, Instagram and WhatsApp down for a few hours last week. It affected billions of users, showing us how important Facebook and the other tech giants have become to our lives and even to other businesses. The much more significant issue is, of course, the whistleblower, Frances Haugen, a former employee, making tens of thousands of pages of Facebook's internal documents public. They show that its leadership repeatedly prioritised profits over social good. Facebook's algorithms polarised society and promoted hate and fake news because this drove up "engagement" on its platforms. It is tearing communities apart, and even endangering young teens over not having "perfect" bodies, all of which matters not a jot to Facebook.

The Wall Street Journal has published detailed exposés based on Facebook's internal documents and on Frances Haugen, the whistleblower, who has also appeared on CBS's 60 Minutes and before Congressional hearings. "The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook," Haugen told 60 Minutes. "And Facebook, over and over again, chose to optimise for its own interests, like making more money."

The 37-year-old data scientist has filed eight whistleblower complaints against Facebook with the Securities and Exchange Commission (SEC) with the help of a non-profit organisation, Whistleblower Aid. These complaints are backed by hard evidence: the tens of thousands of pages of internal Facebook documents she secretly copied before leaving the company.

Why is this big news when these issues have been raised time and again, most prominently after the Cambridge Analytica scandal? Did we not always know how Facebook, WhatsApp and other platforms have become powerful instruments for promoting hatred and divisive politics? Have UN investigators not held Facebook responsible for the genocidal violence against the Rohingyas? Did we not see similar patterns during the communal riots in Muzaffarnagar?

The big news is that we now have evidence that Facebook was fully aware of what its platform was doing. We have it from the horse's mouth: internal Facebook documents that Haugen has made public.

By privileging posts that promote "engagement" – meaning people reading, liking or replying to posts on Facebook, WhatsApp and Instagram – Facebook ensured that people stayed on its platforms for much longer. These users could then be "sold" to advertisers more effectively, by showing them more and more ads. Facebook's business model is not promoting news, friendly chit-chat among users, or entertaining people. It is selling us to those who can sell us merchandise. And, like Google, it has a far better understanding of who we are and what we may buy than any conventional advertiser. This is what gives Facebook 95 per cent of its revenue and makes it one of the five trillion-dollar companies in terms of market capitalisation.

Testifying before Congress, Haugen said Facebook uses artificial intelligence to find dangerous content. The problem is that "Facebook's own research says they cannot adequately identify dangerous content. And as a result, those dangerous algorithms that they admit are picking up the extreme sentiments, the divisions."

That this was happening is widely known and has been discussed in these columns as well. Facebook's answer was that it was setting up an independent oversight board and employing a large number of fact-checkers, and that these and other measures would help filter out hate posts and fake news. What it hid was that all of this was simply cosmetic. What you see in your feed – what, in Facebook's terms, you "engage" with – is driven by its algorithms. And these algorithms are geared to promote the most toxic and divisive posts, as this is what drives up engagement. Increasing engagement is the key objective of Facebook's algorithms, and it defeats any measure to detoxify its content.

Haugen's Congressional testimony also directs us to what the real problems with Facebook are, and what governments must do to protect their citizens. The focus should not be on the individual items of content that people post, but on Facebook's algorithms. It also brings back into play the instruments that countries already have in their kitty to discipline Facebook: the "safe harbour" laws that protect intermediaries like Facebook, which do not generate content themselves but provide their platforms for what is called user-generated content. In the US, it is Section 230 of the Communications Decency Act; in India, it is Section 79 of the Information Technology Act.

In the US, a Section 230 overhaul would hold the social media giant responsible for its algorithms. In Haugen's words, "If we had appropriate oversight, or if we reformed (Section) 230 to make Facebook responsible for the consequences of their intentional ranking decisions, I think they would get rid of engagement-based ranking... Because it is causing teenagers to be exposed to more anorexia content, it is pulling families apart, and in places like Ethiopia, it's literally fanning ethnic violence." The key problem is not the hateful content that users generate on Facebook; it is Facebook's algorithms that continuously push this poisonous content into our feeds to maximise advertising revenue.

Of course, the widespread prevalence of toxic content on Facebook's platforms is helped by its wilful neglect in not having content checkers for languages other than English and a few European languages. Even though Hindi has the fourth highest number of speakers in the world and Bengali the fifth, Facebook, according to Haugen, does not have enough checkers in these two languages.

In these columns, we have explained why divisive content and fake news are more viral than other content. Haugen, with thousands of pages of Facebook's internal research, confirms what we and other serious researchers have been saying all along. The algorithms that Facebook and other digital tech companies use today do not directly code rules to drive up engagement. They instead use machine learning, or what is loosely called artificial intelligence, to create these rules. It is the objective – increasing engagement – that generates the rules which fill our feeds with the toxic content that is tearing societies apart and damaging democracy. We now have hard evidence, in thousands of pages of Facebook's internal research reports, that this is indeed what was happening. Worse, the Facebook leadership, including Mark Zuckerberg, was fully aware of the problem.
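To make concrete what engagement-based ranking means, here is a minimal illustrative sketch, not Facebook's actual system: the post fields, feature names and weights below are all hypothetical. The point it demonstrates is that when the only objective is predicted engagement, nothing in the ranking asks whether a post is hateful or false, so the most provocative posts rise to the top of the feed.

```python
# Purely illustrative sketch of engagement-based ranking; all names,
# features and weights are hypothetical, not Facebook's actual code.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_likes: float      # model-estimated likelihood of likes
    predicted_comments: float   # comments/replies count heavily as "engagement"
    predicted_reshares: float

def engagement_score(post: Post) -> float:
    # The objective asks only "will people react?" -- nothing here checks
    # whether the content is hateful, false or harmful.
    return (1.0 * post.predicted_likes
            + 5.0 * post.predicted_comments
            + 10.0 * post.predicted_reshares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Posts most likely to provoke reactions rise to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("calm local news update", 0.20, 0.02, 0.01),
    Post("outrage-bait rumour about a rival community", 0.35, 0.60, 0.40),
])
print([p.text for p in feed])  # the divisive post ranks first
```

In a real system the scores come from machine-learned models trained on past reactions rather than fixed weights, but the logic is the same: the objective shapes what we see, with no judgement about what the content does to society.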

Not all the harm on Facebook's platforms was caused by algorithms. From Haugen's documents, we find that Facebook maintained a "white list" of users whose content would not be taken down even if it violated Facebook's guidelines. Millions of such "special" users could violate Facebook's rules with impunity. We had earlier written, based on the Wall Street Journal's reporting, about how Facebook India protected BJP figures even though their posts had repeatedly been flagged within Facebook itself.

This is not all that Haugen's treasure trove of internal documents reveals. Reminiscent of the cigarette companies' research on how to hook children to smoking young, Facebook had researched what it calls "pre-teens", children in the age group of nine to 12. The research was on how to hook these pre-teens on to Facebook's platforms so that it would have an unending supply of new consumers. This is despite its internal research showing that Facebook's platforms promote anorexia and other eating disorders, depression, and suicidal tendencies among the young.

All this should damage Facebook. But it is a trillion-dollar company and one of the biggest in the world. Its fat cash balance, coupled with the power it wields in politics and its ability to "hack" elections, provides the protection that big capital receives under capitalism. The cardinal sin that capital may not tolerate, however, is lying to other capitalists. The internal documents that Haugen has submitted to the SEC could finally lead to a pushback against the social media giants and to their regulation. If not strong regulation, then at least some weak constraints on the algorithms that promote hate.

To end, I am going to quote from a ten-year-old interview. Jeff Hammerbacher, then a 28-year-old Silicon Valley tech whiz, told the leading US tech magazine Wired, "The best minds of my generation are thinking about how to make people click ads." This is what is driving the march of the social media giants to their trillions.