AI and Environmental Concerns

We have heard concerns about AI tools and the environmental costs of using them. But the conversation is neither straightforward nor static.

See the articles below, as well as notes from Jon Ippolito and Rebecca Yeager after the articles.

Articles

2020

  • Jul 22, 2020. Language Models are Few-Shot Learners. Tom B. Brown et al. “Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art finetuning approaches.”


2025

  • Jan 13, 2025. Using ChatGPT is not bad for the environment. Andy Masley. “This post is about why it’s not bad for the environment if you or any number of people use ChatGPT, Claude, or other large language models (LLMs).”
  • Jan 15, 2025. Making AI Less “Thirsty”: Uncovering and Addressing the Secret Water Footprint of AI Models. Pengfei Li, Jianyi Yang, Mohammad A. Islam, and Shaolei Ren. “In this paper, we provide a principled methodology to estimate the water footprint of AI, and also discuss the unique spatial-temporal diversities of AI’s runtime water efficiency. Finally, we highlight the necessity of holistically addressing water footprint along with carbon footprint to enable truly sustainable AI.”
  • Jan 16, 2025. Jon Ippolito. AI’s impact on energy and water usage. “This list assesses only energy and water usage, and not actual environmental impact.”
  • Jan 17, 2025. “Goodbye TikTok, Ni Hao RedNote? + A.I.’s Environmental Impact + Meta’s Masculine Energy.” Hard Fork podcast.
  • Jan 24, 2025. The Local Impact of Data Centers with Julie Bolthouse. “This is the first event in CURA, Sustainability Institute, and the Translational Analytics Data Institute’s 2025 spring webinar series, “Is AI Sustainable?” According to the Piedmont Environmental Council, Virginia is undergoing a massive transformation centered around the activities of one industry: data centers. The industry is growing at a rapid rate, requiring huge amounts of energy, land, and water to operate. As a result, communities across the state are experiencing significant impacts. Julie Bolthouse will discuss the feedback she’s hearing from residents in Virginia, the organization’s demands of the state legislature, and other factors to consider in the midst of this quickly moving landscape.”
  • Jan 27, 2025. From Efficiency Gains to Rebound Effects: The Problem of Jevons’ Paradox in AI’s Polarized Environmental Debate. Alexandra Sasha Luccioni, Emma Strubell, and Kate Crawford. “This paper examines how the problem of Jevons’ Paradox applies to AI, whereby efficiency gains may paradoxically spur increased consumption. We argue that understanding these second-order impacts requires an interdisciplinary approach, combining lifecycle assessments with socioeconomic analyses.”
  • 2025. The Stanford Emerging Technology Review 2025: A Report on Ten Key Technologies and Their Policy Implications. Co-chairs: Condoleezza Rice, John B. Taylor, Jennifer Widom, Amy Zegart. Director and Editor in Chief: Herbert S. Lin. Managing Editor: Martin Giles. Stanford University. “This is our latest report surveying the state of ten key emerging technologies and their implications. It harnesses the expertise of leading faculty in science and engineering fields, economics, international relations, and history to identify key technological developments, assess potential implications, and highlight what policymakers should know.”
    • Chapter 1 is on AI
  • Feb 10, 2025. Harnessing AI for environmental justice. Friends of the Earth policy. “Principles and practices to guide climate justice and digital rights campaigners in the responsible use of AI.”
  • March 11, 2025. Maine Legislature. SP 402. Electricity usage by data centers.
  • Apr 2025. Energy and AI. IEA. “This report from the International Energy Agency (IEA) aims to fill this gap based on new global and regional modelling and datasets, as well as extensive consultation with governments and regulators, the tech sector, the energy industry and international experts.”
  • Apr 9, 2025. Andy Masley. Recommended resources for energy literacy.
  • Apr 9, 2025. Lori Valigra. $300M data center at former Millinocket paper mill is canceled. “But the company was unable to get the artificial intelligence customer it wanted because the data center would not be able to produce enough power.”
  • Apr 22, 2025. Artificial Intelligence: Generative AI’s Environmental and Human Effects. U.S. Government Accountability Office. “Our Technology Assessment discusses … challenges and offers options for policymakers to consider.”
  • Apr 25, 2025. AI & Climate: Seeing the Forest, Not Just the Kilowatt-Hours. Tim Dasey. “Considering the full spectrum of AI’s potential impacts, the level of attention devoted specifically to its energy consumption feels disproportionate compared to other urgent societal and ethical AI challenges, especially when weighed against AI’s potential to contribute to climate solutions.”
  • Apr 28, 2025. “A cheat sheet for conversations about ChatGPT and the environment”. Andy Masley. “This post will be a cheat sheet for that post, framed around conversations you might have about AI chatbots. I’ve broken it up so you can skip around and only read sections relevant or interesting to you. This post will only focus on the arguments, not how I got the numbers.”
  • May 20, 2025. We Went to the Town Elon Musk Is Poisoning. More Perfect Union. “Elon Musk’s massive xAI data center is poisoning Memphis. It’s burning enough gas to power a small city, with no permits and no pollution controls. Residents tell us they can’t breathe and they’re getting sicker.”
  • May 20, 2025. We did the math on AI’s energy footprint. Here’s the story you haven’t heard. James O’Donnell and Casey Crownhart. “The emissions from individual AI text, image, and video queries seem small—until you add up what the industry isn’t tracking and consider where it’s heading next.”
  • May 21, 2025. Reactions to MIT Technology Review’s report on AI and the environment. Andy Masley. “I had some strong positive and negative reactions to different parts of the piece. Because this seems likely to be the big new post on ChatGPT and the environment for a while, I wanted to show how the data and framing compares to my last few posts.”
  • June 18, 2025. Artificial intelligence: Supply chain constraints and energy implications. Alex de Vries-Gao. Supply chain analysis from the environmental journal Joule.
  • June 18, 2025. Energy costs of communicating with AI. Maximillian Dauner and Gudrun Socher. “This study presents a comprehensive evaluation of the environmental cost of large language models (LLMs) by analyzing their performance, token usage, and CO2 equivalent emissions across 14 LLMs….Our results reveal strong correlations between LLM size, reasoning behavior, token generation, and emissions.”
  • June 18, 2025. Misinformation by Omission: The Need for More Environmental Transparency in AI. Sasha Luccioni, Boris Gamazaychikov, Theo Alves da Costa, and Emma Strubell. “We discuss the importance of data transparency in clarifying misconceptions and mitigating these harms, and conclude with a set of recommendations for how AI developers and policymakers can leverage this information to mitigate negative impacts in the future.”
  • June 20, 2025. AI Content Landfills. Start at 8:30 to skip discussion of a specific AI slop site. “Investigating the surreptitious world of AI content nobody has seen.”
  • June 21, 2025. Data waste: According to study, a significant portion of stored server data is never accessed. Daniel Sims. “Making cloud servers and data centers more efficient is a crucial step in the push to increase sustainability and reduce carbon emissions. However, one company has started drawing attention to what it calls “data wastage” – the retention of large amounts of data no one accesses – and the factors making it difficult to cut back.”
  • July 22, 2025. Our contribution to a global environmental standard for AI. Mistral AI. “we have conducted a first-of-its-kind comprehensive study to quantify the environmental impacts of our LLMs. This report aims to provide a clear analysis of the environmental footprint of AI, contributing to set a new standard for our industry.”


Jon Ippolito

Jon Ippolito created an app in June 2025 to help people better assess the environmental impacts of using AI tools. He says: “The purpose of this app is not to pretend we know definitive measures of AI energy and water use, but to challenge our students to learn how dramatically various factors—like where you live or the type of prompt—can influence the footprint of both AI and non-AI tasks.”

The app is called “What Uses More?” Ippolito notes: “The numbers come from academic papers and industry reports, compiled in a public Google Sheet. It only includes current usage, not projections or deeper environmental effects, and doesn’t cover broader risks like misinformation or labor issues that I cover with the IMPACT RISK framework.”



Rebecca Yeager

Notes that Rebecca Yeager put together (January 2025) around the conversation of AI and environmental impacts. I am thankful for this conversation because, again, the environmental conversation around AI is constantly shifting and there is no simple answer.

  • AI’s electricity usage varies by model (Luccioni et al., 2024):
    • Task-specific models use less energy than multi-task/zero-shot models (difference in orders of magnitude on a logarithmic scale)
    • Within multi-task/zero-shot models, sequence-to-sequence models use less energy than decoder models (the ChatGPT family uses decoder models)
    • Models with fewer parameters use less than models with more parameters, but this factor matters less than the first two (and the task type, below)
  • AI’s electricity usage also varies by task (Luccioni et al., 2024):
    • Classification tasks use less than generation tasks
    • Tasks involving text use less than tasks involving images
    • Within image generation tasks, the number of pixels and size of the image matters
    • Within text generation tasks, the size of the input and the size of the output both matter (but output matters more than input)
  • AI’s water usage comes from three sources (Li et al., 2024):
    • Scope-1/direct/on-site: water used for cooling data centers (measured in on-site WUE: water usage effectiveness, which is reported in liters per kilowatt hour as L/kWh)
    • Scope-2/indirect/offsite: water used in electricity generation (measured in offsite WUE in L/kWh)
      • Scope-1 and -2 together are called operational water usage
    • Scope-3/embodied: water used for hardware creation (including Nvidia chips)
      • Scope-1, -2, and -3 are typically reported in terms of water consumption rather than water withdrawal, unless otherwise stated. Water withdrawal = water that is recycled + water that is consumed, so recycled water is generally not counted in these metrics
  • There is almost no data on scope-3 water usage, so no estimates available. However, scope-1 and -2 WUE are typically reported for each data center, and vary in the following ways (Li et al., 2024):
    • Scope-1 usage varies . . .
      • By location (hotter centers need more cooling and colder centers need more humidification)
      • By cooling method
        • cooling towers use more water (1-9 L/kWh – note the wide range here)
        • outside-air cooling systems use less (0.2 L/kWh) but are only possible in certain locations at certain times of year
      • By season: According to Karimi et al. (2022), the monthly US average on-site WUE for data centers is 9 L/kWh in the summer and 4 L/kWh in the winter (note this leans towards the higher side of the range, but it’s a slightly older source)
    • Scope-2 usage varies . . .
      • By energy source (fossil fuels/solar/wind). The US average is 3.1 L/kWh (Li et al., 2024).
  • Ultimately, water usage is impacted by energy usage, and energy usage differs across models (and seems to be increasing). We have energy data on the following models:
    • GPT-2-era models: average of 8 open-access models downloaded from Hugging Face in Luccioni et al. (2024): approximately 0.047 kWh per 1,000 inferences of ten tokens each, i.e., 0.000047 kWh for a single inference of one short sentence (fewer than ten words)
    • GPT-3: official estimate from OpenAI in Brown et al. (2020): 0.4 kWh per 100 pages of generated content (so 0.004 kWh for one page)
    • GPT-4: calculations from Shaolei Ren, one of the authors of the Li et al. (2024) study: approximately 0.14 kWh for one 100-word email (reported by Verma and Tan (2024) in the Washington Post – this statistic has not yet been peer-reviewed)
  • If the energy usage data above is correct, then we can also infer the water usage for scope-1 and scope-2:
    • GPT-3 (Li et al., 2024): on average, 29.6 inferences per 500 mL of water (range of 10–50 inferences per 500 mL, depending on the location of the data center – length of output text not specified)
    • GPT-4 (Verma and Tan, 2024, citing Shaolei Ren; again, not peer-reviewed): one 100-word email per 519 mL of water (location of data center not specified)
  • Take the Verma and Tan (2024) statistics for GPT-4 with a grain of salt, because they have not yet been peer-reviewed. However, Yeager has not yet seen any peer-reviewed statistics that contradict them, and they do fit the general trend of increasing resource usage with each model iteration.
  • Her takeaways:
    • 1. Consider AI use in context of other activities (transportation, diet, cloud-based internet use) and encourage minimizing consumption without putting all the blame on any one activity (and recalling that governmental and corporate action impact the environment more than individual actions)
    • 2. Note the general trend that AI resource consumption is increasing with each model and is adding to our overall energy and water usage, making it an area to watch – especially in an era where we really need to be reducing our carbon footprint, not increasing it
    • 3. Whenever possible, task-specific models should be preferred over multi-task/zero-shot models (biggest take-home point from Luccioni et al. (2024), with a difference of orders of magnitude – so use a search engine instead of a chatbot for search functions, etc. – also look for task-specific models in your field)
    • 4. Whenever possible, work offline. Minimize cloud usage unless needed for some function (sharing, etc.)
    • 5. Advocate for more transparency from AI companies, data centers, and governments.
  • References:
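The back-of-the-envelope arithmetic in the notes above can be checked with a short script. All constants come from the figures cited in the notes (Luccioni et al. 2024, Brown et al. 2020, Li et al. 2024); the way the script combines on-site and off-site WUE is a deliberate simplification for illustration (it treats total water as energy × (on-site WUE + off-site WUE), ignoring data-center power overhead), so treat the results as rough consistency checks, not measurements.

```python
# Rough consistency check of the energy and water figures in the notes.
# Constants are taken from the sources cited above; the combining formula
# is an assumption (simplified, no PUE overhead factor).

GPT2_ERA_KWH_PER_1000_INFERENCES = 0.047  # Luccioni et al. (2024)
GPT3_KWH_PER_100_PAGES = 0.4              # Brown et al. (2020)

# Per-unit energy use.
per_inference = GPT2_ERA_KWH_PER_1000_INFERENCES / 1000  # 0.000047 kWh
per_page = GPT3_KWH_PER_100_PAGES / 100                  # 0.004 kWh

# Operational water = scope-1 (on-site cooling) + scope-2 (electricity).
ONSITE_WUE_L_PER_KWH = 1.0   # low end of the 1-9 L/kWh cooling-tower range
OFFSITE_WUE_L_PER_KWH = 3.1  # US average (Li et al., 2024)

water_per_page_ml = per_page * (ONSITE_WUE_L_PER_KWH + OFFSITE_WUE_L_PER_KWH) * 1000
inferences_per_500ml = 500 / water_per_page_ml

print(f"{per_inference:.6f} kWh per GPT-2-era inference")
print(f"{water_per_page_ml:.1f} mL of operational water per GPT-3 page")
print(f"{inferences_per_500ml:.1f} page-length inferences per 500 mL bottle")
```

With these assumptions the script gives about 16 mL of water per page and roughly 30 page-length inferences per 500 mL bottle, which is in the same ballpark as the 29.6-inferences-per-500-mL figure from Li et al. (2024) quoted above.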