Facebook has admitted that it played a role in inciting violence during the genocidal campaign against the Rohingya Muslim minority in Myanmar (Burma).
Since 2017, nearly 900,000 people have been displaced, hundreds of villages have been burned to the ground, families have been separated and killed, and hundreds, possibly thousands, of women and girls have been raped, including in public mass gang rapes. Facebook said: “We agree that we can and should do more,” and pledged to invest resources in preventing the spread of hate speech in Myanmar.
Since then, the improvements that they have made include employing more content reviewers who speak the country’s languages, improving their ability to use artificial intelligence to flag examples of hate speech, including in Burmese, and establishing a dedicated team to work on the country. Have these changes made a significant difference? Is the platform still susceptible to facilitating incitement to violence, hatred and genocide?
Our investigation provides a disturbing answer to these questions: Facebook’s ability to detect Burmese language hate speech remains abysmally poor.
We collated eight real examples of hate speech directed against the Rohingya, as reported by the United Nations Independent International Fact-Finding Mission on Myanmar in their report to the Human Rights Council.
We submitted each of these hate speech examples to Facebook in the form of an advert in Burmese. Facebook says that before adverts are permitted to appear online, they are reviewed to make sure they meet its advertising policies, and that during this process it checks the advert’s “images, video, text and targeting information, as well as an ad’s associated landing page”. The process relies primarily on automated tools, though Facebook reveals little about how it works in practice. Of course, we didn’t actually publish any of the ads. We set a publication date in the future and deleted the ads once we received notification from Facebook as to whether or not they were approved for publication.
All eight of the adverts were accepted by Facebook for publication.
Facebook’s community standards define hate speech as:
a direct attack against people — rather than concepts or institutions— on the basis of what we call protected characteristics: race, ethnicity, […] religious affiliation […]. We define attacks as violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation.
The hate speech examples we used are highly offensive and we are therefore deliberately not repeating the exact phrases used here. However, all of the ads fall within Facebook’s definition of hate speech. The sentences used included:
In addition to falling within Facebook’s definition of hate speech, most of the ads would have breached international law had they been published. The International Convention on the Elimination of All Forms of Racial Discrimination makes clear that States must prohibit:
“all dissemination of ideas based on racial superiority or hatred, incitement to racial discrimination, as well as all acts of violence or incitement to such acts against any race or group of persons of another colour or ethnic origin, and also the provision of any assistance to racist activities, including the financing thereof”
We’re not suggesting that paid-for content is the primary means by which hate speech is spread in Myanmar. Instead, we used the submission of adverts as a way of testing Facebook’s ability to detect hate speech without ourselves posting hate speech. It is reasonable to assume that Facebook applies its hate speech detection systems to ads as well as to organic content, given that the company explicitly says that ads violating its community standards (which include hate speech) are prohibited. Indeed, Facebook itself has said that during elections in Myanmar it removed political pages that violated its ad policy on hate speech.
We shared our findings with Facebook to give the company an opportunity to respond, but we did not receive a reply.
Facebook and other social media platforms should treat the spread of hate and violence with the utmost urgency. As an immediate step, they must properly resource, and publish details of, the integrity and security systems that exist for their platforms in each country – making sure that people in all countries and languages are sufficiently protected from hate speech and violence online.
In places such as Myanmar – where there is clear evidence that Facebook was used to incite real-world harms that cost ten thousand people their lives and hundreds of thousands their homes and livelihoods, and where the Rohingya face an ongoing heightened risk of violence and continued discrimination – the very minimum the platform should do is ensure it is not used for future incitement, and provide remedy to victims. There are a number of ongoing cases attempting to require Facebook to do this, including:
But it’s not enough to rely on private litigation or to expect the companies to regulate themselves. Governments must step in and hold these companies to account, keep people safe and prevent human rights abuses.
The European Union is taking an important step in this direction with its Digital Services Act (DSA). Once passed, the DSA will not only establish content moderation rules but will also require transparency and accountability mechanisms, such as requirements for platforms to assess and mitigate the risk that they spread hate speech, to have their claims audited, and to provide data to independent researchers.
The EU must ensure the strongest version of the Act is quickly passed into law. Governments elsewhere in the world – notably the United States – should follow the EU’s lead and regulate Big Tech companies, enforcing meaningful oversight, including requiring the platforms to assess and mitigate the risk that their services allow hate speech to flourish.
 From two Burmese language speakers in early 2015 to 60 Myanmar language speakers in mid-2018, 99 by the end of 2018, and further expansion between 2019 and 2021.
 In 2018, Facebook admitted that it was ‘too slow’ in addressing hate speech in Myanmar in response to an investigation by Reuters. They said that they were investing ‘heavily’ in artificial intelligence to proactively flag hate speech and in 2020, they said that they had made progress in ‘improving our ability to detect and remove hate speech’ and that they had ‘invested significantly’ in this technology.
 The ads were in the form of an image and were not labelled as being political in nature.
 Researchers interested in knowing the exact wording of the sentences we used are welcome to request this information from us by writing to email@example.com
 The overwhelming majority of the world’s states, including those that Facebook operates from, the US and Ireland, have ratified the Convention. Myanmar has not.