
Briefing
Autonomous AI agents, including frontier models such as GPT-5 and Claude, have demonstrated the ability to discover and exploit novel zero-day vulnerabilities in smart contracts within a simulated environment. This capability establishes an accelerated threat landscape in which the window between contract deployment and exploitation shrinks drastically. Across the security benchmark, the models collectively produced working exploits for 207 real-world vulnerable contracts, for a total simulated loss of $550.1 million.

Context
Digital asset security has historically relied on post-deployment bug bounties and human-led audits, which inherently leave a time lag between attack and response. This environment of code fragility, in which logic flaws and arithmetic errors are common, presents a permissive attack surface to any highly efficient, automated threat actor. The prevailing risk factor was a known dependency on human-scale analysis to secure increasingly complex, machine-deployed code.

Analysis
The attack vector leverages the AI agent's capacity to perform sophisticated control-flow reasoning and boundary analysis at scale across a smart contract codebase. The agent analyzes contract bytecode, identifies subtle logic flaws (such as unvalidated inputs or state-manipulation opportunities), and autonomously generates a functional exploit script. Because this process requires no known vulnerability signature, it bypasses traditional security gates; the root cause of its success is the AI's capacity for zero-day discovery combined with the deterministic nature of the EVM. The most advanced models uncovered two novel zero-day flaws in recently deployed contracts.
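To make the class of flaw concrete, the following is a minimal, entirely hypothetical sketch in Python (standing in for EVM bytecode): a toy vault whose withdraw path omits the balance check, and the one-call drain an automated agent would synthesize once it spots the missing bound. The `ToyVault` contract and all names are invented for illustration, not taken from the benchmark.

```python
# Hypothetical illustration of an unvalidated-input logic flaw,
# modelled in Python as a stand-in for EVM contract state.

class ToyVault:
    """A toy vault whose withdraw() omits the caller's balance check."""

    def __init__(self, deposits):
        self.balances = dict(deposits)       # user -> deposited amount
        self.total = sum(deposits.values())  # funds held by the contract

    def withdraw(self, user, amount):
        # FLAW: `amount` is never validated against the caller's balance,
        # so any caller can withdraw the entire pool.
        self.balances[user] = self.balances.get(user, 0) - amount
        self.total -= amount
        return amount

def exploit(vault):
    """What an agent would generate after boundary analysis finds the
    missing check: a single withdrawal of the whole pool."""
    return vault.withdraw("attacker", vault.total)

vault = ToyVault({"alice": 70, "bob": 30})
stolen = exploit(vault)
print(stolen)  # 100: the full pool, despite the attacker depositing nothing
```

The point of the sketch is that the flaw is purely logical: no signature database would flag it, but bounds analysis over the withdraw path exposes it mechanically.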

Parameters
- Total Simulated Loss: $550.1 million. The quantified value of simulated stolen funds across all 405 contracts in SCONE-bench.
- Novel Zero-Day Profit: $3,694. Simulated profit generated by AI agents from exploiting two newly discovered zero-day flaws in live-equivalent contracts.
- Vulnerable Contracts Exploited: 207. The number of real-world protocols (out of 405 tested) for which the AI agents successfully generated a working exploit.
- Primary Attack Vector: Autonomous AI Agents. The threat-actor type capable of identifying and generating exploits without human intervention.
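The headline parameters above imply some derived ratios worth having at hand. This snippet uses only the figures quoted in this briefing; the per-exploit average is an illustrative mean, not a benchmark-reported statistic.

```python
# Figures quoted in this briefing's Parameters section.
contracts_tested = 405
contracts_exploited = 207
total_simulated_loss_usd = 550.1e6

# Derived, illustrative ratios.
success_rate = contracts_exploited / contracts_tested
avg_loss_per_exploit = total_simulated_loss_usd / contracts_exploited

print(f"{success_rate:.1%}")            # 51.1%
print(f"${avg_loss_per_exploit:,.0f}")  # $2,657,488
```

A roughly one-in-two exploit rate against real-world protocols is the figure that drives the Outlook below.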

Outlook
Protocols must immediately shift security strategy from reactive auditing to proactive, AI-assisted defense-in-depth, using formal verification tools and pre-deployment red-teaming by equivalent AI models. The primary contagion risk falls on newly deployed, unaudited smart contracts, since the time-to-exploit for a zero-day is now effectively measured in hours, not weeks. This incident mandates a new industry standard: continuous, autonomous security monitoring as a mandatory layer of defense for all high-value DeFi applications.
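One shape the continuous-monitoring layer could take is an invariant scanner that checks each state snapshot against protocol-level safety properties. The sketch below is hypothetical: the invariants, thresholds, and state fields are invented for illustration, and a production system would source state from a node RPC and alert a responder rather than print.

```python
# Hypothetical sketch of a continuous invariant-monitoring layer.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Invariant:
    name: str
    check: Callable[[dict], bool]  # returns True if the invariant holds

# Illustrative protocol-level safety properties.
INVARIANTS = [
    # Solvency: tracked liabilities never exceed funds actually held.
    Invariant("solvency", lambda s: s["liabilities"] <= s["reserves"]),
    # Drain rate: no single block moves out more than 5% of reserves.
    Invariant("drain_rate",
              lambda s: s["outflow_this_block"] <= 0.05 * s["reserves"]),
]

def scan(state: dict) -> list[str]:
    """Return the names of all invariants violated by one state snapshot."""
    return [inv.name for inv in INVARIANTS if not inv.check(state)]

healthy = {"liabilities": 90, "reserves": 100, "outflow_this_block": 3}
drained = {"liabilities": 90, "reserves": 100, "outflow_this_block": 40}
print(scan(healthy))  # []
print(scan(drained))  # ['drain_rate']
```

The design choice mirrors the briefing's argument: if exploitation is now machine-speed, the violated-invariant signal must be computed per block, not per audit cycle.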

Verdict
The demonstrated capability of autonomous AI exploitation fundamentally alters the smart contract threat model, demanding an immediate, machine-speed transition to AI-powered defensive security architecture.
