In the sixth workshop of the series, we will discuss machine learning approaches to making knowledge about hate speech in contemporary online contexts broadly usable.
In previous research in the humanities and social sciences, the gap between qualitative and quantitative approaches has never been adequately bridged. AI now offers a way to make the results of detailed analyses (as presented in the first workshop) usable for training hate speech detection and classification models, in particular with language models such as BERT and its refinements (e.g. RoBERTa). For the first time, detailed analyses of online discourses can achieve a representativeness that smaller-scale investigations cannot attain.
What do the latest research results tell us about the current possibilities and limitations of the iterative exchange between human coders and AI models? What are the state-of-the-art models, and what are their capacities, their weaknesses, and their lessons for future approaches?