Facebook has received plenty of bad press over the past few years for its inaction in the face of extremist content, with many commentators citing the platform as a core driver of the rancorous political discourse and division in the USA. While social media sites were long able to hide behind Section 230, the law that shields websites from liability for their users' posts, a 2018 amendment narrowed that immunity for certain categories of content, and the resulting legal and political pressure has pushed companies to crack down more visibly on harmful material.

"After enabling genocide, this feels like a bandaid on a festering open wound." — Kit O'Connell 😷🌈 (@KitOConnell), July 1, 2021

Secondly, Facebook may also ask its users to report people they think fall under the extremist umbrella.

"h/t @disclosetv" — Matt Navarra (@MattNavarra), July 1, 2021

When unveiling these new features, a Facebook spokesperson had this to say:

Automated detection of extremist content is difficult, given the many forms extremist rhetoric can take, but relying on user reports has its own weaknesses: users with extremist friends are likely to share similar views in the first place. Setting aside the potential for misuse, many critics argue that these systems do not go far enough to address the core issues at hand. Facebook's poor efforts to date are frequently cited as a key reason so much misinformation and hateful discourse spreads across the internet, and these updates seem unlikely to turn the tide. On paper, this is a step in the right direction, but Facebook will need to show that content flagged by users and by automated systems is assessed accurately, and that adequate follow-up action is actually taken, before the move can serve the function it needs to.
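To make the trade-off between the two approaches concrete, the sketch below (Python, with entirely hypothetical names and thresholds; Facebook has not published how its systems actually work) shows one simple way an automated classifier score and a user-report count could be combined to decide whether a post is escalated for human review. It also illustrates why each signal is weak on its own: a model score only catches rhetoric resembling what it was trained on, and a report threshold may never be reached inside a like-minded friend network.

```python
# Purely illustrative sketch -- not Facebook's actual system.
# Assumes a hypothetical classifier score in [0, 1] and a count of user reports.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    classifier_score: float  # hypothetical model output: 0.0 (benign) to 1.0 (extremist)
    report_count: int        # number of user reports received


def should_escalate(post: Post,
                    score_threshold: float = 0.9,
                    report_threshold: int = 3) -> bool:
    """Escalate to human review if either signal crosses its (arbitrary) threshold.

    A high score threshold limits false positives but misses rhetoric the model
    never saw in training; a report threshold can be gamed, or simply never met
    when a post circulates only among users who agree with it.
    """
    return (post.classifier_score >= score_threshold
            or post.report_count >= report_threshold)


if __name__ == "__main__":
    example = Post(text="...", classifier_score=0.42, report_count=1)
    print(should_escalate(example))  # False: both signals stay below their thresholds
```

Any real deployment would also need continuous tuning and auditing of such thresholds, which is precisely the follow-up work discussed above that Facebook has yet to demonstrate.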