OII researchers will address NLP model inequalities at ACL 2025
Published on 28 Jul 2025
Researchers including DPhil students from the Oxford Internet Institute will present new research and share recent findings at the 63rd Annual Meeting of the Association for Computational Linguistics (ACL) in Vienna.
ACL is one of the most prestigious conferences in the field of natural language processing and AI. It showcases the latest breakthroughs in NLP, including real-world applications of language technologies. Research themes range from medical and legal NLP to trustworthiness and efficiency in AI system design.
OII researchers will contribute to NLP debates through the presentation of three peer-reviewed papers tackling some of the biggest challenges facing NLP development, including: the spread of misinformation and the reliability of online fact-checking systems; how to use AI tools more effectively to catch hate speech on social media; and the impact of AI agents on governance and oversight systems in the public sector.
The researchers propose alternative frameworks to help address some of the potential inequalities and biases in these developing technologies, whilst still ensuring users have better access to information online.
Jabez Magomere
Jabez Magomere, a DPhil student at the OII, is presenting his co-authored research into the reliability of current fact-checking tools used to counter the spread of online misinformation. Jabez will present his research at the poster session on Monday 28 July, 18:00–19:30, in Hall 4/5.
Explains Jabez: “Our work shows that current algorithms used to match claims on social media to fact-checks struggle when faced with subtle, naturally occurring edits, such as rewriting a claim in a different dialect or changing entities (e.g. covid vs. coronavirus). We developed methods to improve the robustness of these algorithms, enabling more reliable fact-checking of evolving misinformation while reducing false positives.”
Authors: Jabez Magomere, Emanuele La Malfa, Manuel Tonneau, Ashkan Kazemi, Scott A. Hale.
Manuel Tonneau
Manuel Tonneau is a DPhil student at the OII, presenting his research on hate speech detection models and their effectiveness for real-world online content moderation. Manuel’s presentation will take place during the Resources and Evaluation 1 session on Monday 28 July, 14:00–15:30, in Hall A.
Comments Manuel: “Our work shows that publicly available hate speech detection models would fail in real-world content moderation, missing harmful content while flagging benign posts. We also find that human-AI collaboration performs better, but at a potentially high cost. Our results highlight the necessity to evaluate AI systems in the real-world settings where they are meant to operate.”
Authors: Manuel Tonneau, Diyi Liu, Niyati Malhotra, Scott A. Hale, Samuel P. Fraiberger, Victor Orozco-Olvera, Paul Röttger.
Jonathan Rystrøm
Jonathan Rystrøm is a DPhil student at the OII, presenting his research on how the introduction of AI agents in the public sector challenges existing governance structures. His research highlights five new governance dimensions essential for governing agents in the public sector. Jonathan’s presentation will take place at the First Workshop for REALM (“Research on Agent Language Models”) on 31 July in rooms 1.61–62 at the Vienna conference centre.
Adds Jonathan: “We find that agent oversight poses intensified versions of three existing governance challenges: continuous oversight, deeper integration of governance and operational capabilities, and interdepartmental coordination. We propose approaches that both adapt institutional structures and design agent oversight compatible with public sector constraints.”
Authors: Chris Schmitz, Jonathan Rystrøm, Jan Batzner
Concludes contributing author Associate Professor Scott A. Hale:
“The Internet and new technologies continually lower the barriers for access to information, but it is essential that we consider potential inequalities and biases in these technologies. I’m tremendously proud that all of these publications help identify ways to improve equitable access to quality information, which is the core topic of the eaqilab (Equitable Access to Quality Information Lab) at the OII.”
The eaqilab is dedicated to researching the growing inequalities in our online information ecosystem. Its researchers explore how people navigate the digital landscape, what influences the visibility of information, and how misinformation, bias, and hate speech impact decision-making across different communities.