
Briefing
Groundbreaking research demonstrates that advanced Large Language Models (LLMs) such as GPT-5 and Claude Sonnet 4.5 can autonomously discover and exploit both known and zero-day vulnerabilities in live smart contracts. This capability fundamentally shifts the threat landscape: the low operational cost of AI-driven scanning makes finding and exploiting vulnerabilities economically viable for malicious actors at scale. In the study, AI agents successfully exploited 207 of 405 historically hacked contracts and independently discovered two exploitable zero-day flaws in unaudited code.

Context
Prior to this research, the prevailing risk model assumed that human expertise was the bottleneck in discovering complex, low-level logic flaws, and that most exploits would continue to rely on known patterns such as reentrancy or oracle manipulation. The industry’s security posture leaned heavily on human-led audits and bug bounties, an approach that often failed to address the systemic risk posed by the sheer volume of newly deployed, unaudited contracts. The new vector bypasses the human element entirely, turning the full body of deployed code into an immediate, persistent attack surface.

Analysis
The attack vector leverages the LLM’s code comprehension and reasoning to play the role traditionally filled by symbolic execution and fuzzing, simulating adversarial inputs against the contract’s logic. The AI agent first analyzes the contract code to identify critical state-changing functions, then reasons its way to a multi-step transaction payload that manipulates the contract’s state toward an unauthorized outcome, such as infinite token minting or asset draining. The process is also cheap: at an average cost of $1.22 to scan a single contract, it drastically lowers the barrier to entry for sophisticated exploitation.
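
A minimal sketch of such an agent loop is shown below. It assumes three hypothetical helpers (a block-explorer source fetch, an LLM API wrapper, and a forked-chain dry-run simulator); none of them reflect the researchers’ actual harness.

```python
# Illustrative sketch of the exploit-search loop described above. The three
# stubs below stand in for (1) a block-explorer source fetch, (2) a call to
# an LLM API, and (3) a dry-run simulator on a forked chain. They are
# placeholders, not the tooling used in the study.
import json


def fetch_source(address: str) -> str:
    raise NotImplementedError("pull verified source from a block explorer")


def llm_complete(prompt: str) -> str:
    raise NotImplementedError("call the model API of your choice")


def simulate_on_fork(txs: list[dict]) -> dict:
    raise NotImplementedError("replay transactions against a local fork")


def search_for_exploit(address: str, max_attempts: int = 5) -> dict | None:
    source = fetch_source(address)

    # Step 1: have the model flag state-changing functions worth attacking
    # (minting, withdrawals, price/oracle reads, access-control gaps).
    plan = llm_complete(
        "Identify functions in this contract whose state transitions could "
        "be abused, and explain why:\n" + source
    )

    # Step 2: iterate. Request a concrete multi-step transaction sequence,
    # dry-run it on the fork, and feed any revert reason back to the model.
    feedback = "none yet"
    for _ in range(max_attempts):
        candidate = llm_complete(
            "Return a JSON list of transactions (to, calldata, value) that "
            f"carries out the attack.\nAnalysis:\n{plan}\n"
            f"Previous attempt failed with: {feedback}"
        )
        txs = json.loads(candidate)
        result = simulate_on_fork(txs)  # never executed against a live chain
        if result.get("profit", 0) > 0:
            return {"target": address, "transactions": txs, "profit": result["profit"]}
        feedback = result.get("revert_reason", "unknown revert")
    return None
```

The feedback loop is an assumption of this sketch; the essential point from the analysis above is that the model itself plans and assembles the multi-step payload rather than relying on a human operator.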

Parameters
- AI Agent Success Rate → 207/405 (AI agents successfully exploited 207 of the 405 historically hacked contracts tested).
- Zero-Day Discoveries → 2 (previously unknown, exploitable vulnerabilities found by the AI agents in new, unaudited contracts).
- Average Contract Scan Cost → $1.22 (average API cost for an AI agent to scan a single smart contract; see the worked cost arithmetic below).
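
As a rough illustration of the economics these figures imply (simple arithmetic on the numbers above, not additional data from the study):

```python
# Back-of-the-envelope arithmetic on the reported figures.
exploited, tested = 207, 405
cost_per_scan = 1.22  # USD, average API cost per contract scan

success_rate = exploited / tested             # ~0.511, i.e. about 51%
cost_full_benchmark = tested * cost_per_scan  # ~$494 to scan all 405 targets
cost_per_10k = 10_000 * cost_per_scan         # ~$12,200 to sweep 10,000 contracts

print(f"{success_rate:.1%}, ${cost_full_benchmark:,.2f}, ${cost_per_10k:,.2f}")
```

At roughly a dollar per contract, exhaustive scanning is economically trivial for an attacker, which is the scale argument made in the Analysis section.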

Outlook
The immediate mitigation requires a fundamental shift in security standards: AI-powered security analysis must be integrated into the continuous integration and deployment pipeline for all smart contracts. The second-order effect is a massive increase in contagion risk, since the same vulnerability class can be quickly identified across all forks and similar protocols. These findings will establish a new auditing baseline in which formal verification and adversarial AI testing are no longer optional but essential to protocol resilience against automated threats.
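
A minimal sketch of such a pipeline gate, assuming a hypothetical contract-scanner command that emits JSON findings (the command name, flags, and output shape are placeholders for whatever AI-assisted or static analyzer a team adopts):

```python
#!/usr/bin/env python3
# Illustrative CI gate: fail the build if the automated analyzer reports
# findings at or above a severity threshold. The scanner command and its
# JSON output shape are placeholders, not a specific tool's interface.
import json
import subprocess
import sys

SEVERITY_THRESHOLD = "high"
SEVERITY_ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}


def run_scanner(target: str) -> list[dict]:
    # Placeholder invocation; substitute the real analyzer and its flags.
    proc = subprocess.run(
        ["contract-scanner", "--format", "json", target],
        capture_output=True, text=True, check=False,
    )
    return json.loads(proc.stdout or "[]")


def main() -> int:
    findings = run_scanner("contracts/")
    blocking = [
        f for f in findings
        if SEVERITY_ORDER.get(f.get("severity", "low"), 0)
        >= SEVERITY_ORDER[SEVERITY_THRESHOLD]
    ]
    for f in blocking:
        print(f"BLOCKING: {f.get('title')} ({f.get('severity')})")
    return 1 if blocking else 0  # non-zero exit fails the CI job


if __name__ == "__main__":
    sys.exit(main())
```

The important property is that the job exits non-zero on any blocking finding, so deployment never proceeds for a contract the analyzer has flagged.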

Verdict
The demonstrated capability of autonomous AI exploitation marks the end of security through obscurity for smart contracts, demanding an immediate and systemic pivot toward AI-augmented defense.
