- Adverts containing AI-manipulated images were submitted to Facebook by civil and corporate accountability groups
- Adverts contained known slurs towards Muslims in India, such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned”
- One advert called for the execution of an opposition leader it falsely claimed wanted to “erase Hindus from India”
The Facebook and Instagram owner Meta approved a series of AI-manipulated political adverts during India’s election that spread disinformation and incited religious violence, according to a report shared exclusively with the Guardian.
Facebook approved adverts containing known slurs towards Muslims in India, such as “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned”, as well as Hindu supremacist language and disinformation about political leaders.
Another approved advert called for the execution of an opposition leader it falsely claimed wanted to “erase Hindus from India”, next to a picture of a Pakistani flag.
The adverts were created and submitted to Meta’s ad library – the database of all adverts on Facebook and Instagram – by India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation, to test Meta’s mechanisms for detecting and blocking political content that could prove inflammatory or harmful during India’s six-week election.
According to the report, all of the adverts “were created based upon real hate speech and disinformation prevalent in India, underscoring the capacity of social media platforms to amplify existing harmful narratives”.
The adverts were submitted midway through voting, which began in April and would continue in phases until 1 June. The election will decide if the prime minister, Narendra Modi, and his Hindu nationalist Bharatiya Janata party (BJP) government will return to power for a third term.
During his decade in power, Modi’s government has pushed a Hindu-first agenda which human rights groups, activists and opponents say has led to the increased persecution and oppression of India’s Muslim minority.
In this election, the BJP has been accused of using anti-Muslim rhetoric and stoking fears of attacks on Hindus, who make up 80% of the population, to garner votes.
During a rally in Rajasthan, Modi referred to Muslims as “infiltrators” who “have more children”, though he later denied this was directed at Muslims and said he had “many Muslim friends”.
The social media site X was recently ordered to remove a BJP campaign video accused of demonising Muslims.
The report’s researchers submitted 22 adverts in English, Hindi, Bengali, Gujarati and Kannada to Meta, of which 14 were approved. A further three were approved after small tweaks that did not alter the overall provocative messaging. Once the adverts were approved, the researchers immediately removed them before they could be published.
Meta’s systems failed to detect that all of the approved adverts featured AI-manipulated images, despite a public pledge by the company that it was “dedicated” to preventing AI-generated or manipulated content being spread on its platforms during the Indian election.
Five of the adverts were rejected for breaking Meta’s community standards policy on hate speech and violence, including one that featured misinformation about Modi. But the 14 that were approved, which largely targeted Muslims, also “broke Meta’s own policies on hate speech, bullying and harassment, misinformation, and violence and incitement”, according to the report.
Maen Hammad, a campaigner at Ekō, accused Meta of profiting from the proliferation of hate speech. “Supremacists, racists and autocrats know they can use hyper-targeted ads to spread vile hate speech, share images of mosques burning and push violent conspiracy theories – and Meta will gladly take their money, no questions asked,” he said.
Meta also failed to recognise that the 14 approved adverts were political or election-related, even though many took aim at political parties and candidates opposing the BJP. Under Meta’s policies, political adverts have to go through a specific authorisation process before approval, but only three of the submissions were rejected on this basis.
This meant these adverts could freely violate India’s election rules, which stipulate all political advertising and political promotion is banned in the 48 hours before polling begins and during voting. These adverts were all uploaded to coincide with two phases of election voting.
In response, a Meta spokesperson said people who wanted to run ads about elections or politics “must go through the authorisation process required on our platforms and are responsible for complying with all applicable laws”.
The company added: “When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent factcheckers – once a content is labeled as ‘altered’ we reduce the content’s distribution. We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.”
A previous report by ICWI and Ekō found that “shadow advertisers” aligned to political parties, particularly the BJP, have been paying vast sums to disseminate unauthorised political adverts on platforms during India’s election. Many of these real adverts were found to endorse Islamophobic tropes and Hindu supremacist narratives. Meta denied most of these adverts violated its policies.
Meta has previously been accused of failing to stop the spread of Islamophobic hate speech, calls to violence and anti-Muslim conspiracy theories on its platforms in India. In some cases, posts have led to real-life riots and lynchings.
Nick Clegg, Meta’s president of global affairs, recently described India’s election as “a huge, huge test for us” and said the company had done “months and months and months of preparation in India”.
Meta said it had expanded its network of local and third-party factcheckers across all platforms, and was working across 20 Indian languages.
Hammad said the report’s findings had exposed the inadequacies of these mechanisms. “This election has shown once more that Meta doesn’t have a plan to address the landslide of hate speech and disinformation on its platform during these critical elections,” he said.
“It can’t even detect a handful of violent AI-generated images. How can we trust them with dozens of other elections worldwide?”