AnnoBERT

Yin, W., Agarwal, V., Jiang, A., Zubiaga, A. and Sastry, N. 2023. AnnoBERT: Effectively Representing Multiple Annotators’ Label Choices to Improve Hate Speech Detection. Proceedings of the International AAAI Conference on Web and Social Media. 17, 1 (Jun. 2023), 902-913.

Abstract

Supervised approaches generally rely on majority-based labels. However, it is hard to achieve high agreement among annotators in subjective tasks such as hate speech detection. Existing neural network models principally regard labels as categorical variables, while ignoring the semantic information in diverse label texts. In this paper, we propose AnnoBERT, a first-of-its-kind architecture that integrates annotator characteristics and label text with a transformer-based model to detect hate speech: it builds unique representations based on each annotator's characteristics via Collaborative Topic Regression (CTR) and integrates label text to enrich textual representations. During training, the model associates annotators with their label choices given a piece of text; during evaluation, when label information is not available, the model predicts the aggregated label given by the participating annotators by utilising the learnt association. The proposed approach displayed an advantage in detecting hate speech, especially in the minority class and in edge cases with annotator disagreement. The improvement in overall performance is largest when the dataset is more label-imbalanced, suggesting its practical value in identifying real-world hate speech, as the volume of hate speech in the wild is extremely small on social media compared with normal (non-hate) speech. Through ablation studies, we show the relative contributions of annotator embeddings and label text to model performance, and we test a range of alternative annotator embeddings and label text combinations.
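
The snippet below is a minimal sketch of the idea described in the abstract, not the authors' released code: the module names, the CTR embedding dimension, the simple additive fusion of annotator and text representations, and the bilinear scoring against label-text embeddings are all illustrative assumptions.

```python
# Illustrative sketch of the AnnoBERT idea (NOT the official implementation).
# Assumptions: annotator vectors come from a pretrained CTR model, label texts
# (e.g. "not hateful" / "hateful") are encoded with the same BERT encoder, and
# the classifier scores a (text, annotator) pair against each label text.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class AnnoBERTSketch(nn.Module):
    def __init__(self, model_name="bert-base-uncased", ctr_dim=50):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Project CTR annotator vectors into the encoder's hidden space.
        self.annotator_proj = nn.Linear(ctr_dim, hidden)
        # Score the fused (text + annotator) representation against label-text embeddings.
        self.scorer = nn.Bilinear(hidden, hidden, 1)

    def forward(self, text_inputs, annotator_vecs, label_text_inputs):
        text_repr = self.encoder(**text_inputs).last_hidden_state[:, 0]         # [B, H]
        label_repr = self.encoder(**label_text_inputs).last_hidden_state[:, 0]  # [L, H]
        fused = text_repr + self.annotator_proj(annotator_vecs)                 # [B, H]
        # One score per (example, label text) pair.
        B, L = fused.size(0), label_repr.size(0)
        fused_exp = fused.unsqueeze(1).expand(B, L, -1).reshape(B * L, -1)
        label_exp = label_repr.unsqueeze(0).expand(B, L, -1).reshape(B * L, -1)
        return self.scorer(fused_exp, label_exp).view(B, L)                     # logits [B, L]

# Example inputs (hypothetical):
# tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# text_inputs = tok(["example post"], return_tensors="pt", padding=True)
# label_text_inputs = tok(["not hateful", "hateful"], return_tensors="pt", padding=True)
```

In this sketch, training would create one instance per (text, annotator) pair with that annotator's own label choice as the target, while evaluation would average the scores over the participating annotators to predict the aggregated label, mirroring the train/evaluation asymmetry described in the abstract.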

The paper is available here.

The CTR embeddings were trained using this implementation.
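
The exact output format depends on the CTR implementation used; the sketch below only assumes that the learned per-annotator (user) latent vectors are saved as a plain-text matrix with one whitespace-separated row per annotator. The file name is illustrative.

```python
# Illustrative only: load per-annotator CTR vectors into a tensor indexed by
# annotator ID. File name and format are assumptions about the CTR output.
import numpy as np
import torch

def load_ctr_annotator_vectors(path="ctr_annotator_vectors.dat"):
    vecs = np.loadtxt(path)                     # shape: [num_annotators, ctr_dim]
    return torch.tensor(vecs, dtype=torch.float32)

# annotator_matrix = load_ctr_annotator_vectors()
# annotator_vecs = annotator_matrix[annotator_ids]   # per-example annotator lookup
```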
