
Briefing
A new research disclosure confirms that frontier Artificial Intelligence models, specifically GPT-5 and Claude, can autonomously identify and exploit vulnerabilities in live smart contracts, fundamentally shifting the threat model for the DeFi ecosystem. The study, built on a benchmark of real-world exploits, showed AI agents reproducing attacks that yielded $4.6 million in simulated stolen funds, demonstrating the economic viability of AI-driven cyberattacks. Crucially, the models also uncovered two novel zero-day vulnerabilities in recently deployed contracts, showing a capability to proactively find and monetize unknown flaws.

Context
The prevailing security posture has historically relied on human-led auditing and formal verification to secure deterministic smart contract logic. The new vector introduces an autonomous, low-cost threat: exploit capabilities were observed to double every 1.3 months, dramatically outpacing traditional human-centric defense cycles. At the same time, the cost of running these AI-driven attacks dropped by 70% in six months, lowering the barrier to entry for sophisticated exploitation.
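For scale, the short calculation below is an illustration only, assuming exploit capability grows exponentially at the reported 1.3-month doubling rate; the study itself does not publish this extrapolation.

```python
# Minimal illustration: implied growth from a 1.3-month doubling time,
# assuming simple exponential scaling (an assumption, not a claim of the study).
DOUBLING_MONTHS = 1.3

def capability_multiplier(months: float) -> float:
    """Relative exploit capability after `months`, given the reported doubling period."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"after 6 months : ~{capability_multiplier(6):.0f}x")   # roughly 25x
print(f"after 12 months: ~{capability_multiplier(12):.0f}x")  # roughly 600x
```

Even if the true curve flattens, a multiplier anywhere near this range is far faster than quarterly or annual audit cycles can absorb.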

Analysis
The attack vector centers on the models' advanced control-flow reasoning and boundary analysis, which let them translate code-level flaws into profitable on-chain transactions. In one simulated case, the AI agent repeatedly called a token-accounting function that had mistakenly been left externally writable, inflating its own token balance before draining the contract's assets. In another, it exploited a logic flaw to withdraw funds by submitting a fake beneficiary address, demonstrating its ability to manipulate internal contract state and bypass access controls. This capacity to autonomously identify, test, and execute complex, multi-step exploits without human guidance marks a critical evolution in the threat landscape.
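To make the two flaw patterns concrete, the sketch below is a hypothetical toy model in plain Python rather than the audited contract code; the names TokenVault, recalculate_balance, and claim_for are illustrative assumptions, not identifiers from the study.

```python
# Toy model of the two flaw classes described above; simplified to pure Python
# for clarity, not the actual contracts examined in the research.

class TokenVault:
    """Toy contract holding per-account balances backed by a shared asset pool."""

    def __init__(self, pool_assets: float):
        self.pool_assets = pool_assets
        self.balances: dict[str, float] = {}

    def deposit(self, account: str, amount: float) -> None:
        self.balances[account] = self.balances.get(account, 0.0) + amount
        self.pool_assets += amount

    # Flaw 1: a balance-recalculation helper mistakenly left externally callable;
    # repeated calls credit the caller with tokens no deposit ever backed.
    def recalculate_balance(self, caller: str, amount: float) -> None:
        self.balances[caller] = self.balances.get(caller, 0.0) + amount

    # Flaw 2: eligibility is checked against a caller-supplied beneficiary,
    # but the payout goes to the caller, so submitting someone else's address
    # as "beneficiary" releases funds the caller does not own.
    def claim_for(self, caller: str, beneficiary: str, amount: float) -> float:
        if self.balances.get(beneficiary, 0.0) >= amount and self.pool_assets >= amount:
            self.balances[beneficiary] -= amount
            self.pool_assets -= amount
            return amount  # paid to `caller`, never verified against `beneficiary`
        return 0.0

# An agent that has inferred both flaws can chain them to drain the vault.
vault = TokenVault(pool_assets=0.0)
vault.deposit("victim", 1_000.0)

# Exploit pattern 1: inflate the attacker's recorded balance, then cash out.
for _ in range(5):
    vault.recalculate_balance("attacker", 100.0)
print(vault.claim_for("attacker", beneficiary="attacker", amount=500.0))  # 500.0

# Exploit pattern 2: pass the victim's address as beneficiary and take their funds.
print(vault.claim_for("attacker", beneficiary="victim", amount=500.0))    # 500.0
print(vault.pool_assets)  # 0.0 -- vault fully drained
```

The multi-step structure is the point: each call is individually valid, and the profit only materializes once the agent sequences them correctly, which is exactly the kind of control-flow reasoning the study attributes to the models.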

Parameters
- Simulated Loss Value → $4.6 Million (Total simulated funds stolen by AI models from contracts exploited after March 2025)
- Novel Vulnerabilities Found → Two (Zero-day flaws discovered by GPT-5 and Claude in contracts with no known issues)
- Capability Doubling Rate → Every 1.3 Months (The rate at which AI exploit capabilities increased throughout 2025)
- Cost Reduction → 70% (The drop in cost to run these AI-driven attacks over a six-month period)

Outlook
The immediate imperative for protocols is to integrate AI-powered defense mechanisms and accelerate the adoption of formal verification tools that can match the speed of autonomous exploit discovery. This research will set a new baseline for security best practices, shifting the focus from preventing known flaws to preemptively defending against AI-generated zero-day attacks. Protocols must also implement internal controls that assume adversarial AI is actively probing their entire attack surface, which will require sustained investment in proactive security research and red-teaming.
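As one hedged illustration of such an internal control, the sketch below shows a runtime solvency invariant that fails closed when recorded liabilities exceed backing assets; the design and the GuardedVault name are assumptions for illustration, not a mechanism described in the research.

```python
# Illustrative runtime invariant / circuit breaker (an assumption, not a
# mechanism from the study): pause withdrawals whenever total recorded
# liabilities exceed the assets actually held.

class GuardedVault:
    def __init__(self, pool_assets: float):
        self.pool_assets = pool_assets
        self.total_liabilities = 0.0
        self.paused = False

    def credit(self, amount: float) -> None:
        # Any path that mints claims, even an unexpectedly writable one,
        # passes through the same invariant check.
        self.total_liabilities += amount
        self._check_solvency()

    def withdraw(self, amount: float) -> float:
        self._check_solvency()
        if self.paused or amount > self.pool_assets:
            return 0.0
        self.pool_assets -= amount
        self.total_liabilities -= amount
        return amount

    def _check_solvency(self) -> None:
        # Invariant: user claims must never exceed assets held by the vault.
        if self.total_liabilities > self.pool_assets:
            self.paused = True
```

The specific check matters less than the posture it encodes: state-mutating paths are wrapped in invariants that halt the protocol automatically, rather than relying solely on pre-deployment audits to catch what an autonomous attacker may find first.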
