The scaling back of Twitter’s efforts to define dehumanising speech illustrates the company’s challenges as it sorts through what to allow on its platform. While the new guidelines help it draw starker lines around what it will and will not tolerate, it took Twitter nearly a year to put together the rules; and even then they are just a fraction of the policy that it originally said it intended to create.
Twitter said it had ratcheted down the policy’s scope partly because it kept running into obstacles. When the company sought users’ feedback last year on what it thought such speech might include, people pushed back on the proposed definitions. Over months of discussions late last year and early this year, Twitter employees also worried that such a policy might be too sweeping, potentially resulting in the removal of benign messages and in haphazard enforcement.
“We get one shot to write a policy that has to work for 350 million people who speak 43-plus languages while respecting cultural norms and local laws,” Peterson said. “It’s incredibly difficult, and we can’t do it by ourselves. We realised we need to be really small and specific.”
Twitter unveiled its new policy ahead of a social media summit at the White House on Thursday that is likely to thrust it and other Silicon Valley companies under the spotlight for what they will and won’t allow. For the event, Trump has invited conservative activists who have thrived on social media, such as Charlie Kirk, president of Turning Point USA, which advocates limited government and other issues. Many of the attendees have accused social media companies of anti-conservative bias.
Twitter declined to comment on the meeting.
In the past, Twitter has focused its removal policies on posts that may directly harm an individual, such as threats of violence or messages that contain personal information or non-consensual nudity. Under the new rules, the company is adding a sentence that says users “may not dehumanise groups based on their religion, as these remarks can lead to offline harm.” Twitter said that included any tweets that might compare people in religious groups to animals, insects, bacteria and other categories.
The company quickly put the change into effect on Tuesday, US time. Twitter said it had removed a tweet in which Louis Farrakhan, the outspoken black nationalist minister, compared Jewish people to termites because it violated the dehumanisation policy.
Twitter’s work around a dehumanisation policy began in August after the company faced a firestorm for not immediately barring Alex Jones, the right-wing conspiracy theorist, when Apple, Facebook and others did. Twitter eventually did bar Jones, and its chief executive, Jack Dorsey, said at the time that the incident had forced the company to consider “that safety should come first.”
“That’s a conversation we need to have,” he added.
The discussions began with the meeting at Twitter’s headquarters, which included the sample tweet featuring Trump’s unflattering description of nations such as Haiti. At the end of that meeting, executives agreed to draft a policy about dehumanising speech and open it to the public for comments.
In September, Twitter published a draft of the policy outlining what dehumanising speech would be forbidden. It included posts likening people to animals or suggesting that certain groups serve a single, mechanistic purpose.
The response from users was swift and critical. Twitter received more than 8000 pieces of feedback from people in more than 30 countries. Many said the draft made no sense, pointing out cases in which the policy would lead to takedowns of posts that lacked any negative intent.
In one example, fans of Lady Gaga, who call themselves “Little Monsters” as a term of endearment, worried that they would no longer be able to use the phrase. Others said the draft policy didn’t go far enough in addressing hate speech and sexist comments.
In October and November, Twitter employees began revising the policy based on the public input.
“We knew the policy was too broad,” Peterson said. The solution, he and others decided, was to narrow it down to groups that are protected under US civil rights law, such as women, minorities and LGBTQ people. Religious groups seemed particularly easy to identify in tweets, and there were clear cases of dehumanisation on social media that led to harm in the real world, Twitter employees said. Those include the ethnic cleansing of Rohingya Muslims in Myanmar, which was preceded by hate campaigns on social networks like Facebook.
Early this year, Twitter further limited the scope of the policy by carving out an exception. The company prepared a feature to preserve tweets from world leaders, like Trump, even if they engaged in dehumanising speech. Twitter reasoned that such posts were in the public interest. So if any world leaders tweeted something insulting and unacceptable, their posts would be kept online but hidden behind a warning label.
Twitter then trained its moderators to spot dehumanising content, using a list of 42 religious groups as a guide and the tweet of Trump’s uncomplimentary phrase about certain countries as an example of what to allow. It assigned 10 engineering teams to design the warning label and to make sure that any offending tweets would not appear in search or other Twitter products. It announced the exception for world leaders last month.
This week, Twitter also said it would require the deletion of old tweets that dehumanise religious groups but would not suspend accounts that had a history of such tweets because the rule did not exist when they were posted. New offending tweets, however, will count toward a suspension.
“We constantly keep changing our rules, and we try to improve across the product,” said David Gasca, Twitter’s head of product health. “We’re never fully done.”
New York Times