Lakera Launches Open-Source Security Benchmark for LLM backends in AI Agents

Check Point Software Technologies Ltd. and Lakera, a leading AI-native security platform for Agentic AI applications, with researchers from The UK AI Security Institute (AISI), today announced the release of the backbone breaker benchmark (b3), an open-source security evaluation designed specifically for the security of the LLM within AI agents. The b3 is built around a new idea called threat snapshots. Instead of simulating an entire AI agent from start to finish, threat snapshots zoom in on the critical points where vulnerabilities in large language models are most likely to appear.
By testing models at these exact moments, developers and model providers can see how well their systems stand up to more realistic adversarial challenges without the complexity and overhead of modelling a full agent workflow. “We built the b3 benchmark because today’s AI agents are only as secure as the LLMs that power them,” said Mateo Rojas-Carulla, Co-Founder and Chief Scientist at Lakera, a Check Point company. “Threat Snapshots allow us to systematically surface vulnerabilities that have until now remained hidden in complex agent workflows. By making this benchmark open to the world, we hope to equip developers and model providers with a realistic way to measure, and improve, their security posture.”
The benchmark combines 10 representative agent “threat snapshots” with a high-quality dataset of 19,433 crowdsourced adversarial attacks collected via the gamified red teaming game, Gandalf: Agent Breaker. It evaluates susceptibility to attacks such as system prompt exfiltration, phishing link insertion, malicious code injection, denial-of-service, and unauthorized tool calls.
Initial results from testing 31 popular LLMs reveal several key insights:
- Enhanced reasoning capabilities significantly improve security.
- Model size does not correlate with security performance.
- Closed-source models generally outperform open-weight models — though top open models are narrowing the gap.
Gandalf: Agent Breaker is a hacking simulator game that challenges you to break and exploit AI agents in realistic scenarios and the ten GenAI applications inside the game simulate how a real-world AI agent behaves. Each application features multiple difficulty levels, layered defenses, and novel attack surfaces designed to challenge a range of skill sets, from prompt engineering to red teaming. Some of the apps are chat-based, while others rely on code-level thinking, file processing, memory, or external tool usage.
The initial version of Gandalf was born out of an internal hackathon at Lakera, where blue and red teams tried to build the strongest defenses and attacks for an LLM holding a secret password. Since its release in 2023 it has become the world’s largest red teaming community, generating more than 80 million data points. Initially created as a fun game, Gandalf exposes the real-world vulnerabilities in GenAI applications to raise awareness about the importance of AI-first security.



