Safety & Governance
The hard problems: red teaming, bias, interpretability, alignment, and the governance frameworks that might actually matter. No hand-waving.
Key Guides
Alignment Works in English. In Japanese, It Backfires.
A new study shows the same alignment intervention that produces strong safety effects in English reverses direction in Japanese, increasing harmful outputs. Tested across 1,584 simulations, 16 languages, and three model families.
One Fake Source Broke Every Agent
A single misinformation article injected into search rankings crashed GPT-5's accuracy from 65.1% to 18.2%. The agents had unlimited access to truthful sources and couldn't be bothered to look.
Washington's $42 Billion AI Shakedown
The Trump administration is using $42 billion in broadband funding to pressure states into repealing AI laws. The FTC has been directed to classify bias mitigation as a deceptive trade practice. Meanwhile, the EU enforces the opposite.
We Built the Agent Internet Before Its Firewalls
Three CVEs in Anthropic's own MCP reference server. Over 8,000 production servers exposed to the internet. The protocol powering AI agents shipped without security, and the industry is paying for it.
The EU AI Act Hits Full Force in August 2026. Here's What Changes.
On August 2, 2026, the EU AI Act becomes fully enforceable for high-risk AI systems. Yet 40% of enterprise AI systems can't even determine whether they qualify as high-risk.
AI Agent Security in 2026: Prompt Injection, Memory Poisoning, and the OWASP Top 10
AI agents don't just have a security problem. They have a fundamentally different security problem than the systems they're replacing. Five attack surfaces and the defense patterns that actually work.
The Swarm That Fakes Consensus
Twenty-two researchers across four continents show how agent swarms fabricate consensus, infiltrate communities, and poison the training data of future AI models.
The Accountability Gap When AI Agents Act
When an AI agent causes harm, who pays? Current law can't answer that clearly.
AI Guardrails for Agents: How to Build Safe, Validated LLM Systems
A Chevrolet chatbot sold a Tahoe for $1. Now AI agents can execute code, call APIs, and trigger real-world actions. Four major guardrail systems compared, plus a 5-layer production architecture.
The International AI Safety Report 2026: What 12 Companies Actually Agreed On
The most comprehensive global AI safety assessment ever assembled was released last week: the International AI Safety Report 2026, led by a Turing Award winner.