|
Title:
|
MULTIMODAL HATE SPEECH DETECTION: LEVERAGING TEXTUAL AND VISUAL INFORMATION (IN PORTUGUESE) |
|
Author(s):
|
Thayná Alves and Leila Weitzel |
|
ISBN:
|
978-989-8704-62 |
|
Editors:
|
Paula Miranda and Pedro Isaías |
|
Year:
|
2024 |
|
Edition:
|
Single |
|
Keywords:
|
Hate Speech Detection, Multimodal, Social Media, Late Fusion |
|
Type:
|
Full |
|
First Page:
|
55 |
|
Last Page:
|
62 |
|
Language:
|
English |
|
Paper Abstract:
|
The rapid growth of social media has amplified the dissemination of hate speech, posing a substantial threat to online
communities. Traditional text-based approaches have limitations: they often fall short in capturing the rich
contextual information conveyed through visual cues. Multimodal analysis offers potential improvements for tackling
this issue. This paper proposes a multimodal model that integrates textual and visual cues for hate speech detection,
focusing specifically on Brazilian Portuguese and combining CNNs for facial expression analysis with BERT for text
processing. Multimodal classification allows for a more comprehensive understanding of the context in which hate
speech occurs. Visual cues, such as symbols or gestures in images, can be strong indicators of hate speech that textual
analysis alone might miss. This research suggests that multimodal approaches can reduce ambiguity by providing
additional context through images or videos, making it easier to discern the true intent behind a post. While multimodal
classification holds great promise, several challenges remain. One major issue is the computational complexity of
processing and integrating multiple modalities, which can require substantial computational resources. |
|
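The abstract's keyword "Late Fusion" refers to combining the outputs of independently trained unimodal classifiers. As a minimal illustrative sketch (not the authors' actual implementation), the fusion step can be reduced to a weighted average of per-class probabilities from a text model and an image model; the class labels, weights, and probability values below are hypothetical:

```python
# Hypothetical late-fusion sketch: each modality (e.g., BERT on text,
# a CNN on images) yields class probabilities independently; the fused
# score is a weighted average. All numbers here are illustrative.

def late_fusion(text_probs, image_probs, text_weight=0.5):
    """Combine per-class probabilities from two unimodal classifiers."""
    w = text_weight
    return [w * t + (1 - w) * v for t, v in zip(text_probs, image_probs)]

def predict(fused_probs, labels=("not-hate", "hate")):
    """Return the label with the highest fused probability."""
    best = max(range(len(fused_probs)), key=lambda i: fused_probs[i])
    return labels[best]

# Example: the text model leans toward "hate", the image model is uncertain.
fused = late_fusion([0.3, 0.7], [0.55, 0.45], text_weight=0.6)
print(predict(fused))  # prints "hate"
```

The weight parameter lets the pipeline favor whichever modality is more reliable for a given dataset; a validation split would typically be used to tune it.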