Facebook was well aware that hate speech that could exacerbate ethnic violence was spreading on its site in India, but did not deploy the resources needed to curb the phenomenon, US media reported, citing internal documents.
The so-called Facebook Papers, leaked by whistleblower Frances Haugen, have already revealed the impact of Facebook — as well as of WhatsApp and Instagram, both of which it owns — on the deep polarization of politics in the United States and on the mental health of some teenagers.
But there have long been concerns over the social network’s role in spreading hate speech that fuels violence in the developing world, such as the massacre targeting the Rohingya minority in Myanmar.
This weekend the Wall Street Journal, the New York Times and the Washington Post, among others, focused on Facebook’s presence in India, the US-based company’s biggest market by users for both its core platform and its messaging service WhatsApp.
A report by the company’s own researchers from July 2020 showed that the share of inflammatory content skyrocketed starting in December 2019.
“Rumors and calls to violence spread particularly on Facebook’s WhatsApp messaging service in late February 2020,” when clashes between the Hindu majority and Muslim minority left dozens dead, the Wall Street Journal reported.
As early as February 2019, Facebook had also created a fictitious account, that of a 21-year-old woman in northern India, to better understand the user experience, the Washington Post reported, citing an internal memo.
The account followed the posts, videos and accounts recommended by Facebook, but a company researcher found those recommendations delivered a torrent of fake and inflammatory content.
“I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life,” media quoted the staffer as saying in a 46-page report among the documents released by Haugen.
“Soon, without any direction from the user, the Facebook account was flooded with pro-Modi propaganda and anti-Muslim hate speech,” the Washington Post reported. Prime Minister Narendra Modi, a Hindu nationalist, was campaigning for re-election at the time.
The test also coincided with India launching an air strike on Pakistan over a militant suicide bombing in the disputed Kashmir region.
The unnamed researcher called that experience an “integrity nightmare.”
The content made jingoistic claims about India’s air strikes and included graphic pictures.
These included an image of a man holding a severed head, alongside language slamming Pakistanis and Muslims as “dogs” and “pigs,” reports said.
– Bad actors, authoritarian regimes –
“Facebook has meticulously studied its approach abroad — and was well aware that weaker moderation in non-English-speaking countries leaves the platform vulnerable to abuse by bad actors and authoritarian regimes,” the Post continued, citing the internal documents.
The documents showed that the vast majority of the company’s budget for fighting misinformation was earmarked for the United States, even though American users represent less than 10 percent of Facebook’s worldwide total.
“We’ve invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” a Facebook spokesperson said in a statement.
“As a result, we’ve reduced the amount of hate speech that people see by half this year. Today, it’s down to 0.05 percent.” The figure is a global one, calculated across content in all countries rather than in India specifically.
The company said it was “expanding” its operations into new languages. It has “hate speech classifiers” working in Hindi, Bengali, Tamil and Urdu.
More than 40 civil rights groups warned last year that Facebook had failed to address dangerous content in India.
One Facebook India executive resigned in 2020 after being accused of refusing to apply hate speech policies to the Hindu nationalist ruling party and of sharing an anti-Muslim post.
“Hate speech against marginalized groups, including Muslims, is on the rise globally. So we are improving enforcement,” the spokesperson said.