The Critical AI Centre (CrAIC)
The Critical AI Centre (CrAIC) is an interdisciplinary research hub dedicated to exploring the complex and evolving roles and risks associated with conceptions of artificial intelligence in society. Drawing on expertise from disciplines within the arts and humanities, social sciences, and computer science, we take a nuanced approach to understanding the shifting epistemologies and methodologies involved in studying AI.
Our mission is to trace, define, and shape the emerging field of Critical AI Studies, examining its intersections with software studies, critical data studies, digital culture studies, and critical algorithm studies.
Through our research we adopt a long-view perspective, experimenting with ideas in, through, and against automated intelligence. This work is complemented by a commitment to both scholarly and public engagement, fostering dialogue and critical reflection on the broader societal implications of AI.
We are looking to make connections with scholars internationally. If you are a researcher interested in this area and would like to sign up for updates or join our network, please contact: craic@exeter.ac.uk.
Our members
Connect with us
Please contact craic@exeter.ac.uk to sign up to our mailing list.
Follow us on 🦋 Bluesky: @craicexeter.bsky.social
Events
We recently held a successful two-day launch workshop with an international array of speakers joining us at the University of Exeter and online. Read the report here.
We are building on this event, with discussions underway for future seminars and research visits.
Current Projects
Read more about some of the projects the CrAIC team are currently working on:

The project Favel IA was co-designed with the Papo Reto Institute (IPR), a favela media activist and human rights organisation based in Rio de Janeiro. It analyses a core question developed jointly with IPR: How does the favela use AI, as opposed to how AI uses the favela in extractive ways? Drawing on critical and decolonial data and AI studies (Valente and Grohmann, 2024; Goodlad, 2023; Ricaurte, 2019), Favel IA interrogates the politics of empathy as a form of solidarity. It is also attentive to the gendered, raced, classed, and transnational geopolitical dimensions that shape who is permitted to perform empathy and who is positioned as its beneficiary. These dynamics contrast sharply with the everyday, invisible forms of political empathy exercised by favela communities, which are rarely recognised or valued.
Favel IA employs participatory action research (PAR) as its main methodological approach, inspired by Latin American scholars such as Orlando Fals Borda (1987) and Paulo Freire (2002). The project involves three main groups: postgraduate students, favela media activists, and community leaders. The research team engaged participants in five consecutive days of workshops, asking them to document their daily encounters with AI, to carry out empathy exercises, and to reflect on critical questions about AI and data from a favela standpoint. Collectively, we also analysed existing and emerging AI regulation in Brazil and elsewhere, and co-devised a conceptual note for policymakers articulating the types of AI favela residents desire and the alternative futures they envision.
The name Favel IA refers both to the favelas themselves as forms of intelligence and to inteligência artificial, artificial intelligence in Portuguese. IÁ is also a common ending of samba refrains (la iá la iá), evoking the idea that favela residents must “dance to the music” of AI systems designed by powerful private companies. While digital media platforms constrain agency, participants also identified instances of mundane resistance (Madianou, 2025): using generative AI to craft texts that signal socioeconomic mobility, employing AI tools to decode complex legislation and explain it to favela residents, and using chatbots for emotional support. The latter, especially the use of AI as a therapist in contexts of grief caused by racist police violence, prompted critical debates about the myth of “empathetic AI”. These reflections culminated in a collective call to favelise AI, that is, to invert the centre-periphery relationship so that the lived experiences and voices of favela residents become central rather than marginal in AI debates and governance.
The project has been funded by the INCT DSI Consortium (Brazil).
Principal Investigator: Andrea Medrado
Co-Investigator: Thainã de Medeiros

Wikipedia sits at the heart of the Internet’s knowledge infrastructure, openly and collaboratively documenting the ever-evolving record of human understanding. As a key training source for large language models, it now shapes the information that AI produces; at the same time, AI writing tools built on those models are increasingly used by editors back on the platform itself. This project investigates how this reciprocal relationship may reproduce or reinforce biases within collective knowledge systems, focusing on the issue of notability: what and who is included in, or excluded from, these resources. Through ethnographic and computational analysis, including a public “edit-AI-thon”, we will examine how algorithmic and human practices intertwine in defining who and what counts.
This project is funded by the UKRI AHRC BRAID DOT programme.
Project lead: Patrick Gildersleve
Project co-lead (International): Francesca Tripodi
Project co-lead: Brett Zehner
The release of GPT-5 disappointed many, with Sam Altman himself acknowledging its flaws. The limitations of neural networks are increasingly apparent. More than fifty years after Marvin Minsky and Seymour Papert warned of their fragility, AI models generate impressive outputs but lack a proper understanding of the tasks they perform. Their logic is fragile: trained on narrow datasets that often ignore context and nuance, they produce responses that seem insightful but remain superficial, mirroring the biases in their training data.
This newfound need for high-quality human data could call for forms of creativity beyond statistical aesthetics. The qualitative turn in AI research represents a value proposition for the humanities: this stagnation underscores the importance of human insight, reasoning, and creativity. As AI increasingly relies on synthetic data, tech firms are hiring humanities scholars to guide product development.
An applied humanities approach is now needed, one that emphasises social and ethical dimensions at the local level. This research advocates qualitative critique at every stage, from data collection through algorithm design to output interpretation, in order to remain attentive to context and difference. Insights from the humanities expose biases and inequalities.
Brett is currently working with the burgeoning Public Intelligence research group, which explores the risks of knowledge dispossession in the AI boom. The group aims to develop new models of community intelligence that draw on richer forms of collective knowledge, examining the role of universities in local knowledge ecosystems and how diversity can drive innovation.
Principal Investigator: Brett Zehner