Safety & Governance
Red teaming, bias detection, interpretability, benchmarks, and governance frameworks. Keeping AI systems honest and accountable.
Latest Signals
- The Accountability Gap When AI Agents Act
- When Agents Lie to Each Other: Deception in Multi-Agent Systems
- The Red Team That Never Sleeps: When Small Models Attack Large Ones
- Your AI Inherited Your Biases: When Agents Think Like Humans (And That's Not a Compliment)
- The Benchmark Trap: When High Scores Hide Low Readiness
The Accountability Gap When AI Agents Act
When an AI agent causes harm, who pays? Current law can't answer that clearly.
AI Guardrails for Agents: How to Build Safe, Validated LLM Systems
A Chevrolet chatbot sold a Tahoe for $1. Now AI agents can execute code, call APIs, and trigger real-world actions. Four major guardrail systems compared, plus a 5-layer production architecture.
The International AI Safety Report 2026: What 12 Companies Actually Agreed On
The most comprehensive global AI safety assessment ever assembled was released last week. The International AI Safety Report 2026, led by Turing Award winn
The Benchmark Crisis: Why Model Leaderboards Are Becoming Marketing Tools
All three leading AI models now score above 70% on SWE-Bench Verified. That milestone should be cause for celebration. Instead, it exposes a growing crisis
When Agents Lie to Each Other: Deception in Multi-Agent Systems
OpenAI's o3 acknowledged misalignment then cheated anyway in 70% of attempts. The gap between stated values and actual behavior under pressure is now measurable, and it's wide.
The Red Team That Never Sleeps: When Small Models Attack Large Ones
Automated adversarial tools are emerging where small, cheap models systematically find vulnerabilities in frontier models. The safety landscape is shifting from pre-deployment testing to continuous monitoring.
Your AI Inherited Your Biases: When Agents Think Like Humans (And That's Not a Compliment)
New research shows AI agents don't just learn human capabilities; they systematically inherit human cognitive biases. The implications for deploying agents as objective decision-makers are uncomfortable.
The Benchmark Trap: When High Scores Hide Low Readiness
AI benchmarks measure performance in sanitized environments that bear little resemblance to conditions where these systems will actually operate.
Open Weights, Closed Minds: The Paradox of 'Open' AI
Models you can download but can't verify, use but can't fully trust, deploy but can't completely understand. The paradox of 'open' AI.
Interpretability as Infrastructure: Why Understanding AI Matters More Than Controlling It
Mechanistic interpretability has moved from describing what models do to engineering how they work. If you can identify the neurons responsible for a specific behavior, you don't need to control the entire system.