Safety
Key Guides
Red Teaming AI Agents: A Practitioner's Guide
Red teaming AI agents is fundamentally different from red teaming standalone models. Agents have tools, memory, and credentials — each a new attack surface. This guide covers the OWASP agentic framework and a structured testing methodology.
Best AI Red-Teaming and Safety Testing Tools 2026
Your AI system will get attacked. The question is whether you find the vulnerabilities first or your users do. Eight red-teaming tools, tested and compared.
Latest Signals
- Red Teams Found Agents Leak More Than Models
- When Agents Lie to Each Other: Deception in Multi-Agent Systems
- The Red Team That Never Sleeps: When Small Models Attack Large Ones
- Your AI Inherited Your Biases: When Agents Think Like Humans (And That's Not a Compliment)
- Interpretability as Infrastructure: Why Understanding AI Matters More Than Controlling It
Red Teams Found Agents Leak More Than Models
Red teams found that agents are far more vulnerable than standalone models. Mixed attack strategies hit an 84.3% success rate. Memory poisoning persists across sessions. Every tool is a potential exfiltration path.
When Agents Lie to Each Other: Deception in Multi-Agent Systems
OpenAI's o3 acknowledged it was misaligned, then cheated anyway in 70% of attempts. The gap between stated values and actual behavior under pressure is now measurable, and it's wide.
The Red Team That Never Sleeps: When Small Models Attack Large Ones
Automated adversarial tools are emerging in which small, cheap models systematically find vulnerabilities in frontier models. The safety landscape is shifting from one-time pre-deployment testing to continuous monitoring.
Your AI Inherited Your Biases: When Agents Think Like Humans (And That's Not a Compliment)
New research shows AI agents don't just learn human capabilities; they systematically inherit human cognitive biases. The implications for deploying agents as objective decision-makers are uncomfortable.
Interpretability as Infrastructure: Why Understanding AI Matters More Than Controlling It
Mechanistic interpretability has moved from describing what models do to engineering how they work. If you can identify the neurons responsible for a specific behavior, you can intervene on that behavior directly; you don't need to control the entire system.