YANGON—Myanmar digital rights advocates and civil society representatives on Thursday urged Facebook’s Mark Zuckerberg to hire a “sufficient” number of Myanmar-language content reviewers with a good understanding of the situation in the country, and to implement effective systems to curb hate speech and messages inciting violence.
The US-based social media giant has been accused of contributing to the spread of fake news and hate speech in the country via accounts opened by users who seek to inflame communal conflict.
Last week, a group of six Myanmar civil society organizations published an open letter to Zuckerberg, Facebook’s chief executive officer, in which they criticized the “inadequate response” of the company in reviewing reports of hate speech on the social media platform.
The six groups are Phandeeyar, Myanmar ICT for Development (MIDO), Equality Myanmar, Burma Monitor, Center of Social Integrity, and Myanmar Human Rights Educator Network.
The open letter followed Zuckerberg’s interview with US-based digital media outlet Vox, in which he said the company’s systems had stopped a harmful message from being sent between users in Myanmar via Facebook’s Messenger application. He said his company is paying a lot of attention to the detection of such messages. In response, the CSOs expressed surprise that he had raised that particular case as an example of the effectiveness of his systems, saying it exemplified “the very opposite of effective moderation.”
The case mentioned in the letter involved two different messages sent separately to Buddhist and Muslim communities in Myanmar in September last year, with the aim of inciting violence. The groups raised the case with Facebook but the messages circulated on social media for days.
At a press conference at the downtown Yangon office of Phandeeyar, the six CSOs and human rights activists issued a six-point demand to Facebook, calling on the company to emphasize detection, ban individuals who spread hate speech on the platform, invest more in technology to monitor such speech, and be more transparent about its enforcement mechanisms, among other things.
Aung Myo Min, executive director of the Myanmar-based human rights advocacy group Equality Myanmar, said the intention of the open letter was not to control the use of Facebook in Myanmar or individual freedom of expression, but to prevent abuse of the platform to sow hatred among communities.
“Facebook has become an essential part of our daily social life,” Aung Myo Min said.
“When there are increasing abuses of the platform, it’s more dangerous and harmful for a country like Myanmar,” he said, adding that fake news and hate speech can easily do harm and cause violence in a country that is so religiously and ethnically diverse.
“It puts a real burden on the government too,” he said.
Overreliance on Third Parties
Htaike Htaike, director at MIDO, said the open letter created an opportunity to highlight the challenging situation in Myanmar when it comes to using Facebook.
“The case we stressed in the open letter was not the only incident involving Facebook that incited religious conflict. There were several similar cases in the past,” she said.
“If Facebook only relies on reports and information from groups like us, it will be impossible for the company to implement effective and sustainable mechanisms to tackle hate speech in the long run,” she said.
In a personal apology letter to the Myanmar CSO groups on April 6, Zuckerberg said his company was building artificial intelligence (AI) tools to help Facebook identify abusive, hateful or false content even before it is flagged by community members.
The groups responded with another letter the following day stressing that the proposed improvements would not be sufficient to ensure Myanmar users receive the “same standards of care” as those in the U.S. or Europe.
“When things go wrong in Myanmar, the consequences can be really serious — potentially disastrous. You have yourself publicly acknowledged the risk of the platform being abused [to do] real harm,” the letter read.
Human Resources or AI?
According to initial reports by The Guardian and The New York Times published in mid-March, political marketing firm Cambridge Analytica gained access to the personal data of 50 million Facebook users harvested through a third-party app. The number of Facebook accounts affected by the massive data scandal was later revised to as many as 87 million.
During Zuckerberg’s appearance at the U.S. Senate hearings on Tuesday and Wednesday over the scandal, he was asked by Vermont Senator Patrick J. Leahy about his company’s alleged role in spreading hate speech against Rohingya Muslims in Myanmar. The senator also raised the issue of death threats against Myanmar journalist Aung Naing Soe that were spread on Facebook in November 2016.
“What’s happening in Myanmar is a terrible tragedy, and we need to do more,” Zuckerberg answered. When asked by Senator Leahy if he would dedicate resources to make sure hate speech is taken down within 24 hours, he replied he was “working on this.”
He said Facebook was hiring “dozens” more Myanmar-language content reviewers to look for hate speech, as well as working with civil society organizations to identify “specific hate figures” who should be banned from the social media site, and working with product teams to make specific changes for the country’s users.
However, Chan Myae Khine, a local tech enthusiast and owner of a digital marketing enterprise, said Facebook — as one of the biggest tech companies on Earth — needs proper AI tools rather than human resources, and to sit down with linguists, Burmese-language professors and local tech experts to create algorithms that could detect and block hate speech.
“[Hiring more Myanmar language-speaking people] is certainly not the solution as humans can always be biased and it is not practical to monitor [over] 18 million users with a hundred people,” she said.
For Thant Sin, Phandeeyar’s Tech for Peace manager, implementing AI-based automated detection would face many difficulties in Myanmar, as the country is still struggling to adopt a standard font and Facebook content in Myanmar can appear in many different ethnic languages.
“Hate speech and fake news are not only being spread in the Bamar language but also in other ethnic languages,” he said, adding that newly developed AI solutions deployed with sufficient human resources would be beneficial and effective.
Low Digital Literacy and Fake News
Another challenge facing Myanmar is the general public’s low level of digital literacy, he said.
“Many people still treat Facebook as if it were the Internet,” he said.
According to a survey of 3,000 people conducted last year, 38% of respondents get most, if not all, of their news from Facebook. Many trust fabricated and fictitious news on the platform without verifying the source. In such an environment, minority groups are easily targeted online.
Nearly 700,000 Rohingya have fled Myanmar’s Rakhine state and crossed into Bangladesh since insurgent attacks sparked a security crackdown last August.
United Nations officials investigating the crisis said last month that Facebook had been a source of anti-Rohingya propaganda. Marzuki Darusman, chairman of the U.N. Independent International Fact-Finding Mission on Myanmar, said social media sites had played a “determining role” in Myanmar in spreading misinformation and hate speech that led to the violence.
Even though the platform has had negative impacts and caused harmful consequences in the country, it has also played an important role in Myanmar’s democratic transition by supporting the electoral process and encouraging public engagement in politics, as well as empowering people to express their opinions and be heard as the country opened up to the world after years of isolation, Thant Sin said.
When asked by The Irrawaddy if Facebook is generally causing more harm than good in Myanmar, Chan Myae Khine disagreed, saying that hatred rooted in society, rather than the platform itself, is the fundamental driver of the country’s conflicts.
“Any kind of conflict in Myanmar, including religious and racial ones, are because of the hatred rooted in the society. If there was no Facebook, they would still be using any platform they could access to spread hatred,” she said.
“Having said that, Facebook has the power to spread things quickly and without verification.”