signals
LLM Agents Can't Handle Markets
GPT-5.1 agents in credence goods markets default to fraud at near-total rates without liability rules. Social preference alignment — not institutional design — is the primary determinant of whether AI markets function.
Your Model Already Knows the Answer
Attention probes on DeepSeek-R1 and GPT-OSS show models reach their final answer far earlier than their chain-of-thought suggests. On easy questions, roughly 40% of reasoning tokens are pure performance.
Most AI Agents Don't Know When They're Wrong
A 4B-parameter model just matched GPT-4o on tool-use tasks by learning to verify its own actions. The CoVe paper shows verification-first training beats the retry-and-pray approach plaguing production agents.
One Fake Source Broke Every Agent
A single misinformation article injected into search rankings crashed GPT-5's accuracy from 65.1% to 18.2%. The agents had unlimited access to truthful sources and couldn't be bothered to look.
From Clawdbot to OpenAI in 90 Days
OpenClaw hit 100,000 GitHub stars in 48 hours, survived three name changes, a supply chain attack, and three critical CVEs. Then its creator Peter Steinberger joined OpenAI.
Washington's $42 Billion AI Shakedown
The Trump administration is using $42 billion in broadband funding to pressure states into repealing AI laws. The FTC has been directed to classify bias mitigation as a deceptive trade practice. Meanwhile, the EU enforces the opposite.
The Trillion-Dollar Agent Panic
OpenAI launched Frontier, an enterprise agent platform, on February 5. Within three weeks, enterprise software stocks lost nearly $1 trillion in market value. The SaaSpocalypse panic is real, but the timing is wrong.
We Built the Agent Internet Before Its Firewalls
Three CVEs in Anthropic's own MCP reference server. Over 8,000 production servers exposed to the internet. The protocol powering AI agents shipped without security, and the industry is paying for it.
Hierarchical Agents Don't Know Who They're Talking To
Roughly 70% of Earth science datasets hosted in large repositories like PANGAEA go uncited after publication. The data exists. The agents can access it....
When Your Agent Stops Using Tools
Reinforcement learning was supposed to teach agents to use tools fluently. Instead, researchers are watching a consistent failure mode: models trained...