Online extreme speech has emerged as a significant challenge for democratic societies worldwide. Governments, companies, and academic researchers have responded by increasingly turning to Artificial Intelligence (AI) as a potential tool to detect, decelerate, and remove online extreme speech. In this policy brief, we outline the challenges facing AI-assisted content moderation efforts and show how the collaborative coding framework proposed by the ERC Proof-of-Concept project “AI4Dignity” can address some of the pertinent issues concerning AI deployment for content moderation.
The policy brief provides a short review of state regulations and corporate practices around AI and content moderation, highlights existing challenges, discusses what lessons can be learned from ongoing efforts, and underlines which new areas and questions should be charted as a priority. In the current context, where excitement around AI’s capacities has run up against anxieties about the development and deployment of the technology, this policy brief proposes ways to develop context-sensitive frameworks for AI-assisted content moderation that are centered around community collaboration.
The policy brief has been authored by the AI4Dignity project and is available here.