According to Motherboard, a person attending a Twitter staff meeting on March 22 asked why, since the company had largely eradicated Islamic State propaganda from the platform, it couldn't do the same with white supremacist content.
An executive responded by explaining that Twitter follows the law, and a technical employee who works on machine learning and artificial intelligence issues explained the problem.
With every sort of content filter, there is a tradeoff, he explained. When a platform aggressively enforces against ISIS content, for instance, it can also flag innocent accounts, such as Arabic-language broadcasters. Society, in general, accepts the benefit of banning ISIS as worth the cost of inconveniencing some others, he said.
In separate discussions, he said that Twitter hadn't taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians.
The employee argued that, on a technical level, content from Republican politicians could get swept up by algorithms aggressively removing white supremacist material. Society, he argued, would not accept banning politicians as a tradeoff for flagging all the white supremacist propaganda.