
Leveraging LLMs for Context-Aware Implicit Textual and Multimodal Hate Speech Detection

📄 Abstract

This research introduces a novel approach to textual and multimodal Hate Speech Detection (HSD), using Large Language Models (LLMs) as dynamic knowledge bases to generate background context and incorporate it into the input of HSD classifiers. Two context generation strategies are examined: one focused on named entities and the other on full-text prompting. Four methods of incorporating context into the classifier input are compared: text concatenation, embedding concatenation, a hierarchical transformer-based fusion, and LLM-driven text enhancement. Experiments are conducted on the textual Latent Hatred dataset of implicit hate speech and applied in a multimodal setting on the MAMI dataset of misogynous memes. Results suggest that both the contextual information and the method by which it is incorporated are key, with gains of up to 3 and 6 F1 points on textual and multimodal setups respectively, from a zero-context baseline to the highest-performing system, based on embedding concatenation.
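The highest-performing incorporation method, embedding concatenation, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `encode` function is a deterministic toy stand-in for a pretrained transformer sentence encoder, the classifier head is a bare linear layer, and all names and dimensions are assumptions for illustration. The essential idea shown is that the post and the LLM-generated background context are encoded separately and their embeddings are concatenated before classification.

```python
import zlib
import numpy as np

DIM = 8  # toy embedding dimension; real encoders use e.g. 768


def encode(text: str) -> np.ndarray:
    """Toy stand-in for a transformer sentence encoder: returns a
    deterministic pseudo-embedding derived from the text bytes."""
    rng = np.random.default_rng(zlib.crc32(text.encode("utf-8")))
    return rng.standard_normal(DIM)


def classify(post: str, context: str, w: np.ndarray, b: float) -> int:
    """Embedding concatenation: encode the post and the LLM-generated
    context separately, concatenate the two vectors, then apply a
    (toy) linear classifier head. Returns 1 = hateful, 0 = not."""
    fused = np.concatenate([encode(post), encode(context)])  # shape (2*DIM,)
    logit = float(fused @ w + b)
    return int(logit > 0)


# Illustrative usage with random classifier weights (untrained).
rng = np.random.default_rng(0)
w = rng.standard_normal(2 * DIM)
label = classify("an implicit post", "LLM-generated entity background", w, 0.0)
print(label)
```

In a real setup the concatenated vector would feed a trained classification head, and the context string would come from one of the two generation strategies (named-entity prompting or full-text prompting).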
Authors (2)
Joshua Wolfe Brook
Ilia Markov
Submitted
October 17, 2025
arXiv Category
cs.CL

Key Contributions

This research introduces a novel approach to Hate Speech Detection (HSD): leveraging LLMs as dynamic knowledge bases to generate background context that is incorporated into the input of HSD classifiers. It compares two context generation strategies and four incorporation methods, reporting gains of up to 3 F1 points (textual) and 6 F1 points (multimodal) over a zero-context baseline.

Business Value

Improved accuracy in content moderation systems for social media platforms and online forums, leading to safer online environments and reduced spread of harmful content.