Twitter Is Shutting Down Alleged Terrorist Accounts, But Still Not Tackling Threats Of Abuse
Since February alone, Twitter has reportedly shut down more than 235,000 user accounts, nearly double the 125,000 accounts it shut down during the seven months prior. What exactly inspired the sudden increase? Apparently, the company has stepped up its security measures to detect and shut down potential terrorist accounts. Despite Twitter's attention to possible terrorism, though, the site still isn't handling harassment or threats of abuse toward everyday users.
To seek out more alleged terrorists, Twitter has expanded the teams dedicated to monitoring accounts, including employees fluent in multiple languages — particularly because its terror focus is mostly limited to radicalization by the Islamic State (never mind the high rates of domestic terrorism).
How exactly does Twitter define potentially terrorist activity on its platform? As you can imagine, it's a complicated process rife with room for assumptions. "There is no one 'magic algorithm' for identifying terrorist content on the internet," a spokesman said in an interview with the Washington Post. The company did reveal that, in its quest to track terrorists online, it now uses expanded spam-fighting tools to more efficiently compile public reports of users who violate Twitter's policies.
But again, why should enforcement be limited to potential terrorist activity, and how exactly does Twitter identify terrorist speech on a website littered with jokes and meta-commentary? It relies on the observations and reports of Twitter users who witness what they consider suspicious activity.
The company made clear its stance in a statement on its blog, saying:
“The world has witnessed a further wave of deadly, abhorrent terror attacks across the globe. We strongly condemn these acts and remain committed to eliminating the promotion of violence or terrorism on our platform.”
While at face value the concept of Twitter combating terrorism might sound good, the company has largely skirted the racist, homophobic, sexist, and violently transphobic threats and abuse that occur daily on its platform. That suggests Twitter may be less concerned with regulating violent users than with projecting a do-gooder public image.
If Twitter has the ability to ban 235,000 accounts in six months for terrorist activity, why can't it suspend the accounts of unquestionably racist, violent trolls? If the principle of free speech isn't invoked to protect people perceived as potential terrorist threats, why should it be invoked to protect users who detail the rape and murder they would inflict on other users?
Much of this comes down to what gets prioritized as violence and terror. The systemic abuse of POC, women, and LGBTQ people is entrenched in the fabric of our country, as is made clear by the fact that the KKK isn't flagged as a terrorist group. The readiness with which Twitter profiles potential terrorists while letting violent misogynists and racists run rampant speaks volumes about whose safety is a priority.