Published: 2020-03-06 12:16:06 BdST
The company told Reuters that the policy change was not a reaction to the outbreak of the virus, which causes the respiratory disease COVID-19, but was part of its continual effort to update its rules against hateful conduct.
"We couldn't have predicted that this would happen in terms of the coronavirus," Jerrel Peterson, Twitter’s head of safety policy, said in a phone interview.
Twitter has long been under pressure to clean up hateful content on its platform, and social media sites are under scrutiny over their attempts to handle misinformation and abuse related to the coronavirus outbreak.
A Reuters search for derogatory terms linked to the virus on Twitter found posts that called Chinese people "subhuman" or likened them to animals. The outbreak, which began in China, has spread to nearly 80 countries and territories and killed more than 3,000 people.
Peterson said the three new categories had been added not because there were more reports of hateful language in these areas but because of the potential for offline harm.
Twitter's hateful conduct policy already bans attacking or threatening others on the basis of categories such as race, sexual orientation, age, disability or serious disease. This update will mean that those attacks do not need to be targeted at an individual or specific group.
Now, even "if it's a tweet that doesn't have an @mention that likens a group based on their age, disability or disease to viruses or microbes or maggots, something that's less than human, that can be in violation of our policy now," Peterson said.
Announcing the new policy in a blog post, Twitter said any offending tweets must be removed. Tweets sent before Thursday would also need to be deleted, but would not directly result in account suspensions, it said.
In July 2019, Twitter expanded its rules to ban language deemed to "dehumanize" people on the basis of religion. Between January and June last year, Twitter's transparency report said there had been a 48% increase in accounts reported for potential violations of its hateful conduct policies. Twitter said it had taken action on 584,429 unique accounts for hateful conduct violations.
Twitter also announced on Wednesday that it was testing a new type of content that disappears after 24 hours, similar to the stories feature on Facebook Inc's Instagram and first popularized by Snap Inc's Snapchat.
Twitter spokeswoman Lauren Alexander said that this ephemeral content would also be subject to the company's hateful conduct rules.
Activist investor Elliott Management Corp, which has amassed a stake in the company, is pushing for changes at Twitter, including the removal of chief executive Jack Dorsey.