
Briefing
A joint study by Anthropic and MATS demonstrated that commercial AI agents can autonomously identify and exploit vulnerabilities in smart contracts, fundamentally shifting the security threat model. The proof of concept confirms that future attacks can run at machine speed and scale, bypassing traditional human-centric audit cycles and rapidly escalating systemic risk across DeFi. The agents collectively developed exploits worth a simulated $4.6 million against contracts deployed after their knowledge cutoff and, critically, uncovered two novel zero-day vulnerabilities in previously unaudited code.

Context
Prior to this research, smart contract security relied heavily on human-led formal verification and post-deployment bug bounties, on the assumption that exploitation required significant time and specialized expertise from a human threat actor. The prevailing attack surface was defined by known classes of logic flaws, such as reentrancy and oracle manipulation, typically discovered and exploited by highly skilled human adversaries. The study exploits the pre-existing complexity of composable DeFi architectures and the inherent risk of logic flaws in new, unaudited code.
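To make the reentrancy class concrete, the sketch below models it in plain Python rather than an on-chain language such as Solidity; the class and account names are hypothetical, but the bug pattern is the real one: an external call is made before the internal balance is updated, so a malicious callback can re-enter and withdraw again.

```python
class VulnerableVault:
    """Pays out BEFORE zeroing the balance -- the classic reentrancy flaw."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def withdraw(self, caller):
        amount = self.balances.get(caller.address, 0)
        if amount > 0:
            caller.receive(self, amount)       # external call first...
            self.balances[caller.address] = 0  # ...state update second


class ReentrantAttacker:
    """Callback re-enters withdraw() before the balance is zeroed."""

    def __init__(self, address):
        self.address = address
        self.stolen = 0
        self.depth = 0

    def receive(self, vault, amount):
        self.stolen += amount
        if self.depth < 1:   # re-enter once while the balance is still live
            self.depth += 1
            vault.withdraw(self)


vault = VulnerableVault({"0xabc": 100})
attacker = ReentrantAttacker("0xabc")
vault.withdraw(attacker)
print(attacker.stolen)  # 200 -- twice the 100 actually deposited
```

The standard fix is the checks-effects-interactions ordering: zero the balance before making the external call, so the re-entrant second withdrawal sees an amount of zero.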

Analysis
The compromise was a systemic demonstration rather than a single protocol hack: the AI agents were tasked with analyzing and exploiting contracts within a simulated blockchain environment. The agents leveraged advanced code understanding to identify logic flaws, craft the necessary transaction payloads, and execute the full attack chain to drain simulated funds. Their success, particularly in uncovering two novel zero-day vulnerabilities in contracts presumed secure, shows that AI can independently move from vulnerability discovery to profitable exploitation. The capability scales because it makes attack-surface analysis cheap and parallel, reducing the cost of finding a bug to a near-zero operational expense.
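The discover, craft, execute chain described above can be sketched as a simple loop. Everything here is a hypothetical illustration: the function names, the toy scanner heuristic, and the simulated-chain interface are assumptions, not the study's actual harness, which has not been published.

```python
# Hypothetical sketch of the discover -> craft -> execute attack chain.
# All names and interfaces are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Finding:
    contract: str
    flaw: str
    payload: dict  # the crafted transaction parameters


def scan(contract_source: str) -> list[Finding]:
    """Stand-in for an AI analysis pass that flags candidate logic flaws."""
    findings = []
    if "send_before_update" in contract_source:  # toy heuristic only
        findings.append(Finding("Vault", "reentrancy",
                                {"call": "withdraw", "reenter": True}))
    return findings


class SimChain:
    """Toy simulated chain: a successful reentrancy drains a fixed pot."""

    def apply(self, payload: dict) -> int:
        return 100 if payload.get("reenter") else 0


def execute(finding: Finding, chain: SimChain) -> int:
    """Replays the crafted payload against the simulated environment."""
    return chain.apply(finding.payload)


drained = sum(execute(f, SimChain())
              for f in scan("balance logic: send_before_update"))
print(drained)  # 100 in this toy run
```

The point of the sketch is the shape of the pipeline, not the heuristic: once each stage is automated, the marginal cost of trying one more contract is a single pass through the loop.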

Parameters
- Simulated Loss (Post-Cutoff) → $4.6 Million. This is the value of funds the AI agents successfully exploited in contracts deployed after their training data cutoff (March 2025).
- Zero-Day Discoveries → Two. The number of novel, previously unknown vulnerabilities the AI agents autonomously found in 2,849 recently deployed contracts.
- Exploitation Efficiency Doubling → 1.3 Months. The interval over which the AI agents’ exploitation capability has doubled over the past year, indicating an accelerating threat.
- Average Contract Analysis Cost → $1.22. The low API cost for the AI to scan a contract for vulnerabilities, making large-scale, low-cost attack campaigns economically viable.
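The attacker economics implied by these parameters can be checked with back-of-the-envelope arithmetic. Treating the simulated payoff and the 2,849-contract scan as a single campaign budget is a simplification, but it illustrates the asymmetry.

```python
# Back-of-the-envelope attacker economics from the parameters above.
cost_per_scan = 1.22          # USD, average API cost per contract
contracts_scanned = 2_849     # recently deployed contracts examined
simulated_payoff = 4_600_000  # USD, simulated value of successful exploits

campaign_cost = cost_per_scan * contracts_scanned
roi = simulated_payoff / campaign_cost
print(f"campaign cost: ${campaign_cost:,.2f}")   # ~$3,475.78
print(f"payoff/cost ratio: {roi:,.0f}x")         # on the order of 1,000x

# A doubling time of 1.3 months implies 12 / 1.3 ~= 9.2 doublings per
# year, i.e. roughly a 600x annual growth factor in exploitation capability.
doublings_per_year = 12 / 1.3
print(f"annual growth factor: {2 ** doublings_per_year:,.0f}x")
```

Even if only a small fraction of scanned contracts yield an exploit, a four-figure scan budget against a seven-figure payoff makes large-scale campaigns economically rational, which is the core of the threat-model shift.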

Outlook
Protocols must immediately shift their security posture from reactive auditing to proactive, AI-assisted defense, integrating autonomous vulnerability scanners into the continuous integration/continuous deployment (CI/CD) pipeline. The immediate ecosystem-wide mitigation is the adoption of “AI-proof” formal verification standards and a radical reduction in contract complexity, as this new threat vector introduces contagion risk for all unaudited or highly composable protocols. The research establishes a new security baseline: the absence of a known bug no longer implies safety, which necessitates a fundamental change in how smart contract code is written, verified, and deployed.
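One concrete form of that CI/CD integration is a deploy gate that blocks the pipeline when an automated scanner reports findings at or above a severity threshold. The gate below is a minimal sketch; the scanner report format and the sample findings are assumptions, and in practice the findings list would be parsed from whatever scanner the pipeline actually runs.

```python
# Hypothetical CI deploy gate. The report format is an assumption --
# substitute the output schema of the scanner your pipeline actually runs.
SEVERITY = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}


def gate(findings: list[dict], fail_at: str = "high") -> int:
    """Returns a process exit code: 0 = allow deploy, 1 = block it."""
    threshold = SEVERITY[fail_at]
    blocking = [f for f in findings if SEVERITY[f["severity"]] >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['contract']}: {f['title']} ({f['severity']})")
    return 1 if blocking else 0


if __name__ == "__main__":
    # Illustrative findings; in CI these come from the scanner's report.
    sample = [
        {"contract": "Vault.sol", "title": "reentrancy in withdraw",
         "severity": "critical"},
        {"contract": "Oracle.sol", "title": "stale price window",
         "severity": "medium"},
    ]
    print("exit code:", gate(sample))
```

Returning a nonzero exit code is the standard way to fail a CI step, so the same function can back a command-line entry point with no extra wiring.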

Verdict
The autonomous exploitation of smart contracts by commercial AI models is the definitive inflection point, marking the end of human-scale security and mandating an immediate, ecosystem-wide shift to AI-assisted defense.
