Hate speech is in the eye of the beholder: Exploring bias on hate perception

Funded by Facebook Research - Content Policy Research on Social Media Platforms Research Award

An important goal for hate speech detection techniques is to ensure that they are not unduly biased towards or against particular norms of offence. Training data is usually obtained by manually annotating a set of texts, so the reliability of human annotations is essential. Meanwhile, the ability to let big data “speak for itself” has been questioned, as limited representativeness, spatiotemporal extent and uneven demographic coverage can make it subjective. We hypothesize that annotator demographics substantially affect the perception of hate speech. In this context, the research question guiding this project is: how do latent norms and biases rooted in demographics give rise to biased datasets, and how does this affect the performance of hate speech detection systems?
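As a minimal sketch of the kind of analysis this hypothesis suggests, the snippet below compares how often different annotator demographic groups label the same texts as hateful. The group names, labels, and data are all invented for illustration; a real study would use collected annotations and proper agreement statistics.

```python
# Hypothetical sketch: do annotator demographic groups perceive hate differently?
# All records below are invented for illustration only.
from collections import defaultdict

# Each record: (text_id, annotator_group, label), where label 1 = "hateful".
annotations = [
    ("t1", "group_a", 1), ("t1", "group_b", 0),
    ("t2", "group_a", 1), ("t2", "group_b", 1),
    ("t3", "group_a", 0), ("t3", "group_b", 1),
    ("t4", "group_a", 1), ("t4", "group_b", 0),
]

def hate_rate_by_group(records):
    """Fraction of annotations each demographic group marks as hateful."""
    counts = defaultdict(lambda: [0, 0])  # group -> [hateful, total]
    for _, group, label in records:
        counts[group][0] += label
        counts[group][1] += 1
    return {g: hateful / total for g, (hateful, total) in counts.items()}

print(hate_rate_by_group(annotations))
# group_a marks 3/4 annotations hateful, group_b only 2/4 -
# a gap like this would hint at demographic bias in the labels.
```

A systematic gap in these per-group rates, or low cross-group agreement on the same texts, would indicate that the resulting dataset encodes one group's norms of offence rather than a shared standard.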

PIs:

  • Antonela Tommasel
  • Daniela Godoy
  • Aiqi Jiang
  • Arkaitz Zubiaga
Daniela Godoy

My research interests include recommender systems, social networks, and text mining.