
Briefing

A new research disclosure confirms that frontier Artificial Intelligence models, specifically GPT-5 and Claude, can autonomously identify and exploit vulnerabilities in live smart contracts, fundamentally shifting the threat model for the DeFi ecosystem. The study, utilizing a benchmark of real-world exploits, showed AI agents recreating attacks worth $4.6 million in simulated stolen funds, confirming the economic viability of AI-driven cyberattacks. Crucially, the models also uncovered two novel zero-day vulnerabilities in recently deployed contracts, demonstrating a capability to proactively find and monetize unknown flaws.


Context

The prevailing security posture has historically relied on human-led auditing and formal verification to secure deterministic smart contract logic. This new vector introduces an autonomous, low-cost threat where exploit capabilities are observed to double every 1.3 months, dramatically outpacing traditional human-centric defense cycles. The cost to run these AI-driven attacks has simultaneously dropped by 70% in six months, lowering the barrier to entry for sophisticated exploitation.
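The two growth figures compound quickly. As a rough illustration (assuming the reported rates hold as smooth exponential trends, which the study does not claim), one year of a 1.3-month capability doubling time and a 70% cost drop per six months works out to:

```python
# Illustrative compounding of the reported trend figures.
# Assumes smooth exponential growth; the study reports only the rates.
doubling_months = 1.3        # exploit capability doubles every 1.3 months
cost_drop_per_half_year = 0.70  # cost falls 70% every 6 months
months = 12

capability_multiplier = 2 ** (months / doubling_months)      # ~600x per year
cost_multiplier = (1 - cost_drop_per_half_year) ** (months / 6)  # 0.09x, ~11x cheaper

print(f"~{capability_multiplier:.0f}x capability, "
      f"~{1 / cost_multiplier:.0f}x cheaper over {months} months")
```

Even with generous error bars on the underlying measurements, attacker economics on this curve outrun an audit cycle measured in weeks.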


Analysis

The attack vector centers on the AI’s advanced control-flow reasoning and boundary analysis, enabling it to translate code-level flaws into profitable on-chain transactions. In one simulated case, the AI agent repeatedly called a mistakenly writable token calculator function to inflate its token balance and drain assets. Another vulnerability involved the AI exploiting a logic flaw to withdraw funds by submitting a fake beneficiary address, showcasing its ability to manipulate internal contract state and access controls. This ability to autonomously identify, test, and execute complex, multi-step exploits without human guidance marks a critical evolution in the threat landscape.
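The first flaw described above can be sketched as a minimal simulation. The actual contracts are not named in the disclosure, so all identifiers here are hypothetical, and Python stands in for on-chain contract logic:

```python
class VulnerableToken:
    """Toy stand-in for a contract whose balance-calculation
    function was mistakenly left externally writable (hypothetical
    reconstruction; not the actual exploited contract)."""

    def __init__(self, vault_funds: int):
        self.vault_funds = vault_funds
        self.balances: dict[str, int] = {}

    def set_calculated_balance(self, caller: str, amount: int) -> None:
        # BUG: intended to be internal-only, but any caller can
        # write an arbitrary balance for themselves.
        self.balances[caller] = amount

    def withdraw(self, caller: str, amount: int) -> int:
        # Withdrawal trusts the (attacker-inflated) recorded balance.
        if self.balances.get(caller, 0) >= amount and self.vault_funds >= amount:
            self.balances[caller] -= amount
            self.vault_funds -= amount
            return amount
        return 0

# The agent's exploit sequence: inflate its own balance, then drain.
token = VulnerableToken(vault_funds=1_000_000)
token.set_calculated_balance("attacker", 1_000_000)
stolen = token.withdraw("attacker", 1_000_000)
```

In Solidity terms this corresponds to a state-mutating function missing an access modifier; the fake-beneficiary flaw is the same class of failure, an unvalidated caller-supplied address substituted into a privileged withdrawal path.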


Parameters

  • Simulated Loss Value: $4.6 Million (Total simulated funds stolen by AI models from contracts exploited after March 2025)
  • Novel Vulnerabilities Found: Two (Zero-day flaws discovered by GPT-5 and Claude in contracts with no known issues)
  • Capability Doubling Rate: Every 1.3 Months (The rate at which AI exploit capabilities increased throughout 2025)
  • Cost Reduction: 70% (The drop in cost to run these AI-driven attacks over a six-month period)


Outlook

The immediate imperative for all protocols is to integrate AI-powered defense mechanisms and accelerate the adoption of formal verification tools that can match the speed of autonomous exploit discovery. This research will establish a new baseline for security best practices, shifting focus from preventing known flaws to preemptively defending against AI-generated zero-day attacks. Protocols must also implement new internal controls that assume adversarial AI is actively probing their entire attack surface, leading to a necessary investment in proactive security research and red-teaming.

The autonomous capability of frontier AI to discover and exploit zero-day vulnerabilities is the single most significant threat multiplier to the smart contract ecosystem in the coming year.

Signal Acquired from: beincrypto.com
