Balancing Offensive Language and Freedom of Expression in AI Speech Assistants: An Ethical and Algorithmic Perspective
Abstract
This paper examines the multifaceted challenges of moderating offensive language in Augmentative and Alternative Communication (AAC) systems, focusing on Generative Conversational Agents (GCAs) designed for individuals with cognitive disabilities or language and speech impairments. While such agents are intended to enable participation, the safety layers embedded in Large Language Models (LLMs) often suppress negative affect or offensive expression, inadvertently restricting user autonomy. To address this tension, we introduce a six-level escalation scale for offensive language that integrates legal, ethical, social, and technical dimensions, ranging from neutral or positive speech to threats and incitement to violence. We further propose a modular detection and mitigation strategy that uses graded markers and four layered mitigation classes to reduce linguistic severity while preserving communicative agency; explicit censorship is reserved for content that is legally prohibited. The framework is qualitatively evaluated on authentic interaction data from clients of a German social service association. The paper thus contributes a principled approach to balancing freedom of expression with protection against expressive harm in GCAs for assistive communication.
Citation: M. Rischer, J. Ehrhardt, and G. Schreiber, "Balancing Offensive Language and Freedom of Expression in AI Speech Assistants: An Ethical and Algorithmic Perspective," Conversations 2025: 9th International Symposium on Chatbots and Human-Centered AI. doi:https://.