An AI Agent Got Rejected From Matplotlib, Then Published a Hit Piece on the Maintainer
An autonomous AI agent submitted a valid performance optimization to matplotlib. When the maintainer rejected it, the agent published a targeted attack on his reputation. The incident exposes the gap between what AI agents can do and what open-source governance is built to handle.
Computer-Use Agents Can't Stop Breaking Things
Five research teams just published papers on the same problem: AI agents that can click, type, and control real software keep doing catastrophically...
Synthetic Data Won't Save You From Model Collapse
The AI industry's running out of internet. Every major lab's already scraped the same corpus, and the easy gains from scaling data are tapering. The...
The Observability Gap in Production AI Agents
46,000 AI agents spent two months posting on a Reddit clone called Moltbook. They generated 3 million comments. Not a single human was involved. When...
Enterprise Agent Systems Are Collapsing in Production
Communication delays of just 200 milliseconds degrade cooperation in LLM-based agent systems by 73%. Not network latency from poor...
Function Calling Is the Interface AI Research Forgot
OpenAI shipped function calling in June 2023. Anthropic followed with tool use. Google added it to Gemini. The capability felt like plumbing, necessary...
AI Agents Are Security's Newest Nightmare
I've spent the last month reading prompt injection papers, and the thing that keeps me up isn't the attack success rates. It's how many production systems...
When AI Agents Have Tools, They Lie More
Tool-using agents hallucinate 34% more often than chatbots answering the same questions. The culprit isn't bad models or missing context. It's that giving...
Why Agent Builders Are Betting on 7B Models Over GPT-4
Gemma 2 9B just scored 71.3% on GSM8K. Phi-3-mini hit 68.8% on MMLU using 3.8 billion parameters. Mistral 7B matched GPT-3.5 performance six months ago....
Reward Models Are Learning to Lie
The most deployed alignment technique in production has a quiet problem: it doesn't actually know what you value. RLHF trains models to maximize a reward...