Briefing

A new research disclosure confirms that frontier artificial intelligence (AI) models, specifically GPT-5 and Claude, can autonomously identify and exploit vulnerabilities in live smart contracts, fundamentally shifting the threat model for the DeFi ecosystem. Using a benchmark of real-world exploits (SCONE-bench), the study showed AI agents recreating attacks worth $4.6 million in simulated stolen funds, confirming the economic viability of AI-driven cyberattacks. Crucially, the models also uncovered two novel zero-day vulnerabilities in recently deployed contracts, demonstrating a capability to proactively find and monetize unknown flaws.


Context

DeFi security has historically relied on human-led auditing and formal verification to secure deterministic smart contract logic. This new vector introduces an autonomous, low-cost threat: observed AI exploit capabilities double every 1.3 months, dramatically outpacing traditional human-centric defense cycles. The cost of running these AI-driven attacks has simultaneously dropped by 70% in six months, lowering the barrier to entry for sophisticated exploitation.
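To make these growth rates concrete, the arithmetic below extrapolates them. The 1.3-month doubling time and the 70% six-month cost drop are the study's figures; the one-year projection is this sketch's own extrapolation, not a number reported by the researchers.

```python
# Back-of-the-envelope growth implied by the reported rates.
DOUBLING_MONTHS = 1.3   # reported capability doubling time
COST_DROP_6MO = 0.70    # reported cost reduction over six months

# Compounded over a year, a 1.3-month doubling time implies roughly a
# 600x capability increase (2 ** (12 / 1.3)).
capability_growth_1yr = 2 ** (12 / DOUBLING_MONTHS)

# After the drop, an attack costs 30% of what it did six months earlier.
cost_multiplier_6mo = 1 - COST_DROP_6MO

print(f"Capability multiplier after 12 months: ~{capability_growth_1yr:.0f}x")
print(f"Attack cost after 6 months: {cost_multiplier_6mo:.2f}x of original")
```

The point of the extrapolation is the mismatch in time scales: a manual audit cycle measured in weeks faces an adversary whose capability curve compounds on a monthly basis.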


Analysis

The attack vector centers on the AI's advanced control-flow reasoning and boundary analysis, which let it translate code-level flaws into profitable on-chain transactions. In one simulated case, an agent repeatedly called a token-calculator function that was mistakenly left writable, inflating its token balance until it could drain the contract's assets. In another, the agent exploited a logic flaw to withdraw funds by submitting a fake beneficiary address, manipulating internal contract state and access controls. The ability to autonomously identify, test, and execute complex, multi-step exploits without human guidance marks a critical evolution in the threat landscape.
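The "mistakenly writable calculator" pattern can be sketched in a toy model. This is an illustrative Python analogue of the flaw class described above, not the actual vulnerable contract (which the study does not publish); all names here are hypothetical, and the real exploit targeted Solidity, where the equivalent bug is a state-mutating function that should have been declared `view`.

```python
# Toy model of the flaw: a "quote" function that should be read-only
# but instead persists its result into the caller's balance.

class ToyVault:
    def __init__(self, reserve: int) -> None:
        self.reserve = reserve               # assets held by the contract
        self.balances: dict[str, int] = {}   # per-account token balances

    def calculate_tokens(self, account: str, deposit: int) -> int:
        """Intended as a pure price quote for a prospective deposit.
        Bug: it also CREDITS the quoted amount to the account."""
        tokens = deposit * 2  # stand-in pricing formula
        # The flaw: a state write inside what should be a view function.
        self.balances[account] = self.balances.get(account, 0) + tokens
        return tokens

    def withdraw(self, account: str, amount: int) -> int:
        assert self.balances.get(account, 0) >= amount, "insufficient balance"
        self.balances[account] -= amount
        paid = min(amount, self.reserve)
        self.reserve -= paid
        return paid

# The attacker needs no capital: each "quote" inflates its balance
# without any funds changing hands, after which it drains the reserve.
vault = ToyVault(reserve=1_000)
for _ in range(10):
    vault.calculate_tokens("attacker", deposit=100)  # nothing deposited
drained = vault.withdraw("attacker", 1_000)          # empties the reserve
print(drained)
```

What makes this class of bug attractive to an autonomous agent is that it is reachable purely through control-flow reasoning: no off-chain data, no social engineering, just repeated calls to a public function whose state effects the author did not intend.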


Parameters

  • Simulated Loss Value → $4.6 Million (Total simulated funds stolen by AI models from contracts exploited after March 2025)
  • Novel Vulnerabilities Found → Two (Zero-day flaws discovered by GPT-5 and Claude in contracts with no known issues)
  • Capability Doubling Rate → Every 1.3 Months (The rate at which AI exploit capabilities increased throughout 2025)
  • Cost Reduction → 70% (The drop in cost to run these AI-driven attacks over a six-month period)


Outlook

The immediate imperative for protocols is to integrate AI-powered defense mechanisms and accelerate the adoption of formal verification tools that can match the speed of autonomous exploit discovery. This research will set a new baseline for security best practices, shifting the focus from preventing known flaws to preemptively defending against AI-generated zero-day attacks. Protocols must also adopt internal controls that assume adversarial AI is actively probing their entire attack surface, which requires sustained investment in proactive security research and red-teaming.

The autonomous capability of frontier AI to discover and exploit zero-day vulnerabilities is the single most significant threat multiplier to the smart contract ecosystem in the coming year.

Signal Acquired from → beincrypto.com

Micro Crypto News Feeds