YANGON — Facebook is preparing to roll out a new strategy to take down fake content that could incite violence in Myanmar, a company spokesperson said on Wednesday.
Under the new policy, the company will work with local civil society groups to identify misinformation; when a post is found to contain false claims that could provoke unrest, it will be removed from the platform.
The company didn’t reveal when the operation would start.
The announcement comes after Facebook was accused of fanning social and religious unrest in Myanmar, especially communal violence between Buddhists and Muslims. During the most recent episode of unrest, both sides used Facebook not only to instigate conflict but also to spread fear among the public. According to 2018 statistics on internet use in Myanmar, the country has 16 million social media users, 95 percent of whom use Facebook.
Earlier this year, Myanmar civil society organizations gave US lawmakers detailed information about Facebook’s role in spreading hate in the country, while United Nations investigators also accused the social media giant of contributing to the “acrimony and dissension and conflict” in Rakhine, from which more than 700,000 Rohingya Muslims have fled to neighboring Bangladesh.
In January, Facebook blacklisted a group of ultranationalist Buddhist monks, including U Wirathu, for spreading hate speech against the Rohingya. But the company has continued to come under scrutiny for its slow response to the spread of misinformation and to the use of fake accounts and personal information on its platform.
Tessa Lyons, a Facebook product manager, told reporters that the strategy to be applied in Myanmar had already been implemented in Sri Lanka, another country where human rights groups have criticized Facebook for its slow action to remove hate speech and incitements to violence.
On the same day, in an interview with technology news site Recode, Facebook CEO Mark Zuckerberg acknowledged his company had a responsibility to do more in Myanmar regarding the sectarian violence in Rakhine State.
“I think that there’s a terrible situation where there’s underlying sectarian violence and intention. It is clearly the responsibility of all of the players who were involved there,” Zuckerberg said.
“We’ve significantly ramped up the investment in people who speak Burmese. It’s often hard, from where we sit, to identify who are the figures who are promoting hate and what is going to… which is the content that is going to incite violence. So it’s important that we build relationships with civil society and folks there who can help us identify that.”
Some observers in Myanmar greeted the announcement guardedly, noting that Facebook’s previous efforts to tackle fear-mongering had been ineffective.
U Kyi Toe, a spokesperson for the National League for Democracy, said Facebook had removed one of his posts that was simply a shared note about technology.
“I heard they are using AI to scan for hate speech. But I don’t see them intensively removing hate speech. We don’t know their policy on Myanmar or how they choose staff to identify hate speech,” said U Kyi Toe, who works for the ruling party’s information department.
“They need to pay more attention to hate speech rather than other posts,” he said.
The announcement of the new strategy did not make clear whether Facebook would address the problem of innocuous posts being removed.
At a joint hearing of US Senate panels in April, Zuckerberg was asked about his company’s alleged role in spreading hate during the Rakhine crisis. He explained that Facebook had hired Burmese-language content reviewers to look for hate speech and was working with civil society groups to identify “specific hate figures” who should be banned.