Information about AI

  • Aug 16, 2023. Why Aren’t We Asking Questions of AI? Sean Ross Meehan. As students and professors grow more skilled at commanding chatbots to produce the outputs they want, Sean Ross Meehan wonders what this will mean for question-based inquiry.
  • Sept 29, 2023. ‘All in on AI’ and the University. Looking beyond generative AI.
  • The AI Minimalist from Dan Cryer. “Associate Professor of English at Johnson County Community College outside Kansas City. I made this site during my Fall ’24 sabbatical, which I spent researching AI and writing instruction.”

Ethics and AI

Gen AI programs

Other

  • Feb 25, 2025. Talbot, J. (2025). Editing AI-Generated Text for Accuracy and Completeness. Prompt: A Journal of Academic Writing Assignments, 9(1). https://doi.org/10.31719/pjaw.v9i1.204. “This assignment, developed for a fall 2023 section of an upper-division undergraduate editing course, asks students to perform a comprehensive edit of a ChatGPT-generated text. The highest stated priorities for the assigned edit were factual accuracy, rhetorical appropriateness, and completeness in relation to user need. Overall, the project successfully developed and assessed the desired learning outcomes, and served as an introduction to generative AI for students whose experience with it was limited.”
  • April 18, 2023. What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature. Chung Kwan Lo. This rapid review of the literature aims to enrich our understanding of ChatGPT’s capabilities across subject domains, how it can be used in education, and potential issues raised by researchers during the first three months of its release (i.e., December 2022 to February 2023). A search of the relevant databases and Google Scholar yielded 50 articles for content analysis (i.e., open coding, axial coding, and selective coding).
  • Oct 5, 2023. Computing Pioneers Profoundly Disagree on AI Risk. Susan D’Agostino. Mingling with young researchers last month in Germany, luminaries in computer science debated AI’s potential impact on the future of humanity.
  • Oct 31, 2023. Students Outrunning Faculty in AI Use. Lauren Coffey. A new study finds over half of students use generative AI, while more than 75 percent of faculty members do not regularly use the technology.
  • Nov 2, 2023. Deep Intelligence: Fostering Human Deep Learning, Amplifying Our Intelligence, and Supporting a Human Renaissance. Stefan Bauschard & Sabba Quidwai. This report looks at the growing gap between the attention paid to the development of intelligence in machines and humans…. We emphasize the need to prioritize academic programs that promote human deep learning as well as methods that integrate human deep learning approaches and artificial intelligence (AI) tools.
  • 2023. Engaging with Artificial Intelligence in Research and Writing. Initiative led by Xinyue Li. Cambridge.
  • Feb 12, 2024. 2024 EDUCAUSE AI Landscape Study. This survey was distributed from November 27 to December 8, 2023, and focuses on the impacts AI has had on higher education since the mainstreaming of generative AI tools.
  • Jan 23, 2024. A pragmatic introduction to model distillation for AI developers. Mikiko Bazeley. “Model distillation has been instrumental in driving both open-source innovation of LLMs as well as the adoption of large models (both language and vision) for use cases where task specificity and runtime optimization have been required.” (A minimal sketch of the distillation idea appears after this list.)
  • May 5, 2024. Trojans in Large Language Models of Code: A Critical Review through a Trigger-Based Taxonomy. Aftab Hussain, Rafiqul Islam Rabin, Toufique Ahmed, Bowen Xu, Premkumar Devanbu, Mohammad Amin Alipour. “This work presents an overview of the current state-of-the-art trojan attacks on large language models of code, with a focus on triggers – the main design point of trojans – with the aid of a novel unifying trigger taxonomy framework. We also aim to provide a uniform definition of the fundamental concepts in the area of trojans in Code LLMs. Finally, we draw implications of findings on how code models learn on trigger design.”
  • May 24, 2024. Near to Mid-term Risks and Opportunities of Open-Source Generative AI. Francisco Eiras, Aleksandar Petrov, Bertie Vidgen, Christian Schroeder de Witt, Fabio Pizzati, Katherine Elkins, Supratik Mukhopadhyay, Adel Bibi, Botos Csaba, Fabro Steibel, Fazl Barez, Genevieve Smith, Gianluca Guadagni, Jon Chun, Jordi Cabot, Joseph Marvin Imperial, Juan A. Nolazco-Flores, Lori Landay, Matthew Jackson, Paul Röttger, Philip H.S. Torr, Trevor Darrell, Yong Suk Lee, Jakob Foerster. “We argue for the responsible open sourcing of generative AI models in the near and medium term. To set the stage, we first introduce an AI openness taxonomy system and apply it to 40 current large language models. We then outline differential benefits and risks of open versus closed source AI and present potential risk mitigation, ranging from best practices to calls for technical and scientific contributions. We hope that this report will add a much needed missing voice to the current public discourse on near to mid-term AI safety and other societal impact.”
  • Jan 15, 2025. About a quarter of U.S. teens have used ChatGPT for schoolwork – double the share in 2023. Olivia Sidoti, Eugenie Park and Jeffrey Gottfried. Pew Research Center. “The share of teens who say they use ChatGPT for their schoolwork has risen to 26%, according to a Pew Research Center survey of U.S. teens ages 13 to 17. That’s up from 13% in 2023. Still, most teens (73%) have not used the chatbot in this way.”
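To make the model-distillation entry above concrete, the sketch below shows the standard teacher–student training loss (a softened KL term plus hard-label cross-entropy). It is a minimal illustration of the general technique, not the specific recipe from Bazeley’s article; the toy models, temperature, and alpha weighting are illustrative assumptions, and the example assumes PyTorch is installed.

    # Minimal knowledge-distillation sketch (assumes PyTorch; the models and
    # hyperparameters are illustrative placeholders, not from the article).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          temperature=2.0, alpha=0.5):
        # Soften both output distributions with the same temperature.
        soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
        soft_student = F.log_softmax(student_logits / temperature, dim=-1)
        # KL term: the student imitates the teacher; scaled by T^2 as is conventional.
        kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature ** 2
        # Ordinary cross-entropy on the hard labels.
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

    # Toy usage: a larger, frozen "teacher" guides a smaller "student".
    teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
    student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    x = torch.randn(16, 32)               # stand-in batch of features
    labels = torch.randint(0, 10, (16,))  # stand-in hard labels
    with torch.no_grad():
        teacher_logits = teacher(x)
    loss = distillation_loss(student(x), teacher_logits, labels)
    loss.backward()
    optimizer.step()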

Professional Development