Bias in AI programs

Visual programs

General Bias

  • Jan 20, 2026. Kerche et al. (2026). The silicon gaze: A typology of biases and inequality in LLMs through the lens of place. Platforms and Society. “This paper introduces the concept of the silicon gaze to explain how large language models (LLMs) reproduce and amplify long-standing spatial inequalities. Drawing on a 20.3-million-query audit of ChatGPT, we map systematic biases in the model’s representations of countries, states, cities, and neighbourhoods. From these empirics, we argue that bias is not a correctable anomaly but an intrinsic feature of generative AI, rooted in historically uneven data ecologies and design choices. Building on a power-aware, relational approach, we develop a five-part typology of bias (availability, pattern, averaging, trope, and proxy) that accounts for the complex ways in which LLMs privilege certain places while rendering others invisible.”
  • March 19, 2025. AI, Language and the Right to Be Understood. Jarek Janio. “I recently came across a social media post by a colleague who declared she would immediately delete any email she believes was generated by artificial intelligence (AI). As an English as a Second Language (ESL) speaker, this is not merely a theoretical debate but my daily reality.”
  • March 1, 2024. Dialect prejudice predicts AI decisions about people’s character, employability, and criminality. Valentin Hofmann, Pratyusha Ria Kalluri, Dan Jurafsky, Sharese King. “Here, we demonstrate that language models embody covert racism in the form of dialect prejudice: we extend research showing that Americans hold raciolinguistic stereotypes about speakers of African American English and find that language models have the same prejudice, exhibiting covert stereotypes that are more negative than any human stereotypes about African Americans ever experimentally recorded, although closest to the ones from before the civil rights movement.”
    • March 5, 2024. Covert racism in LLMs. Gary Marcus looks at the paper on Dialect Prejudice and reflects on the information in the paper.