- May 2025. Emails by LLMs: A Comparison of Language in AI-Generated and Human-Written Emails. Weijiang Li and Yinmeng Lai. “We obtained human-written emails from the W3C corpus and generated analogous AI-generated emails using GPT-3.5, GPT-4, Llama-2, and Mistral-7B, and compared AI-generated and human-written emails using a suite of natural language analyses across syntactic, semantic, and psycholinguistic dimensions.”
- March 16, 2025. When Algorithms Watch You Write. Marc Watkins. “Recently, process tracking joined the pantheon of AI detection techniques. We’ve seen the rush to adopt AI-powered detectors, experiments with linguistic finger-printing, watermarking AI outputs through cryptography, and now an advanced form of long-term proctoring called process tracking.”
- Jan 2025. The AI-authorship effect: Understanding authenticity, moral disgust, and consumer responses to AI-generated marketing communications. Colleen P. Kirk, Julian Givi. “Seven preregistered experiments demonstrate that when consumers believe emotional marketing communications are written by an AI (vs. a human), positive word of mouth and customer loyalty are reduced.”
- Nov 30, 2024. Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects. Fred Heiding, Simon Lermen, Andrew Kao, Bruce Schneier, Arun Vishwanath. “In this paper, we evaluate the capability of large language models to conduct personalized phishing attacks and compare their performance with human experts and AI models from last year.”