A prompt-injection NFT attack tricked a Grok-linked Bankr wallet on Base into sending roughly $170,000 in DRB tokens, exposing a new kind of risk for AI agents that can control crypto wallets.
The incident happened earlier this month and involved Grok, Bankrbot, a Bankr Club Membership NFT, and a disguised instruction hidden inside a message. The wallet sent about 3 billion DRB tokens on Base, with value estimates ranging from roughly $150,000 to $174,000, depending on the price source at the time.
This was not a normal private key theft. The attacker did not need to break the wallet directly. Instead, they manipulated the AI system connected to the wallet into sending the funds itself.
How the Prompt-Injection NFT Attack Worked
The attack started with a Bankr Club Membership NFT.
Reports say the attacker sent the NFT to the Grok-linked wallet on Base. That mattered because the NFT appeared to expand what the wallet could do inside the Bankr system. Once the wallet had the membership NFT, the attacker used a prompt-injection trick to influence Grok’s output.
Prompt injection is a technique in which an attacker hides or phrases instructions so that an AI model follows a command it should ignore. In this case, the malicious instruction was reportedly disguised using Morse code and passed through Grok in a way that Bankrbot recognized as a transfer command.
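The disguise step can be illustrated with a toy sketch. Assuming a naive content filter that only scans plain text for sensitive keywords (a hypothetical simplification, not Bankr's or Grok's actual pipeline), a Morse-encoded instruction slips past the filter and only surfaces once something downstream decodes it:

```python
# Toy sketch: a hidden instruction encoded in Morse code slips past a
# naive keyword filter that inspects only plain text. All names here
# are hypothetical illustrations, not the real Bankrbot pipeline.

MORSE = {
    "D": "-..", "E": ".", "N": "-.", "S": "...",
}
TO_CHAR = {code: ch for ch, code in MORSE.items()}

def encode(text: str) -> str:
    return " ".join(MORSE[c] for c in text.upper())

def decode(morse: str) -> str:
    return "".join(TO_CHAR[tok] for tok in morse.split(" "))

def naive_filter(message: str) -> bool:
    """Blocks messages containing obvious transfer keywords."""
    return not any(word in message.upper() for word in ("SEND", "TRANSFER"))

hidden = encode("SEND")           # "... . -. -.."
assert naive_filter(hidden)       # filter sees no keyword, lets it through
assert decode(hidden) == "SEND"   # a downstream decoder recovers the command
```

The point of the sketch is that the dangerous meaning only exists after decoding, so any safety check that runs before the model interprets the content can be bypassed entirely.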
The result was simple and expensive. The Bankr-linked wallet sent 3 billion DRB tokens to the attacker’s address.
That is what makes the case important. The exploit did not only target smart contract code. It targeted the space between an AI assistant, a crypto bot, and a wallet with real assets.
Why AI Wallets Create a New Security Problem
Crypto wallets were already risky before AI entered the picture.
Users had to worry about phishing links, fake support accounts, malicious approvals, bridge exploits, and seed phrase theft. AI wallets add another layer because they can make decisions or pass instructions on behalf of users.
That can be useful. An AI agent might help swap tokens, check balances, pay invoices, or manage a wallet through natural language. But if the agent can act on a bad prompt, convenience quickly becomes dangerous.
The Bankr case shows why permissions matter. An AI system should not be able to move large sums just because it received a cleverly disguised instruction. There should be spending limits, human approvals, allowlists, time delays, and strict checks between the AI output and the wallet action.
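Those checks can be sketched as a deterministic policy layer sitting between the model's output and the wallet. The limits, names, and addresses below are hypothetical, a minimal illustration rather than any project's real implementation:

```python
# Minimal sketch of a policy layer between an AI agent's proposed
# transfer and wallet execution. Limits and addresses are hypothetical.

from dataclasses import dataclass

@dataclass
class Transfer:
    to_address: str
    amount: float  # token units

SPENDING_LIMIT = 100.0                 # per-transfer auto-approval cap
ALLOWLIST = {"0xTrustedTreasury"}      # known-good destinations

def check_transfer(t: Transfer) -> tuple[bool, str]:
    """Return (approved, reason). Anything unusual escalates to a human."""
    if t.to_address not in ALLOWLIST:
        return False, "destination not on allowlist: needs human approval"
    if t.amount > SPENDING_LIMIT:
        return False, "amount exceeds spending limit: needs human approval"
    return True, "auto-approved"

# A transfer like the one in the incident would never auto-execute:
ok, reason = check_transfer(Transfer("0xAttacker", 3_000_000_000))
assert not ok
```

The design choice that matters is that the check is plain code, not another prompt: a policy layer the model cannot talk its way around.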
A good comparison is giving a smart assistant access to your bank account. Asking it to check your balance is one thing. Letting it send thousands of dollars based only on a message it read online is a very different risk.
Why the Base Wallet Incident Matters
Base has become a popular home for consumer crypto apps, social tokens, and AI-linked experiments. That makes it a natural place for projects to test agent wallets and automated trading tools.
The Bankr incident shows how quickly those experiments can become real financial targets.
On-chain data made the transfer visible, but visibility did not stop the transaction. Once the AI and Bankrbot flow produced the command, the wallet moved the tokens. That is the hard lesson for teams building AI agents in crypto: monitoring is useful, but prevention matters more.
SlowMist described the case as permission-chain abuse involving an AI agent and an automated trading system on Base. Its analysis said the attacker used crafted content to make Grok produce transfer instructions that Bankrbot recognized.
That wording is important because it avoids treating the AI model as a magical hacker. The system failed because several connected parts trusted each other too much.
What This Means for AI Agents in Crypto
Some projects want agents that trade, rebalance portfolios, run wallets, post on social media, manage communities, or interact with DeFi. The idea is exciting, but the Bankr exploit shows that money-moving agents need much stronger guardrails than chatbots.
The main danger is not that AI “wants” to steal funds. The danger is that an AI can be tricked into treating hostile instructions as legitimate tasks.
That means crypto teams need to design agent wallets differently from normal bots. They should separate reading from acting. They should prevent social content from becoming wallet instructions. They should limit transfer sizes. They should require human approval for unusual transfers. They should make sure NFTs, messages, or membership passes cannot silently expand permissions without strict review.
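The last of those principles, that assets like a membership NFT must not silently expand permissions, can be sketched as an explicit-grant model. Everything here is a hypothetical illustration: receiving an asset only records custody, and capabilities grow solely through a separate, human-reviewed step:

```python
# Sketch of explicit permission grants: receiving an asset (such as a
# membership NFT) never changes what the agent wallet may do. Names
# and permission strings are hypothetical.

class AgentWallet:
    def __init__(self) -> None:
        self.permissions: set[str] = {"read_balance"}  # acting is opt-in
        self.held_assets: list[str] = []

    def receive_asset(self, asset: str) -> None:
        """Custody only; deliberately does NOT touch permissions."""
        self.held_assets.append(asset)

    def grant_permission(self, perm: str, human_approved: bool) -> None:
        if not human_approved:
            raise PermissionError("grants require explicit human review")
        self.permissions.add(perm)

wallet = AgentWallet()
wallet.receive_asset("Bankr Club Membership NFT")
assert "transfer_tokens" not in wallet.permissions  # the NFT changed nothing
```

Under this kind of model, the reported attack path, where an incoming NFT appeared to widen what the wallet could do, would stop at the custody step.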
The lesson is not that AI wallets are impossible. It is that financial AI systems need bank-level caution, not demo-level freedom.
What Users Should Watch For
Most users will not run an AI wallet today, but the risk will become more common as agent tools spread.
The first warning sign is any tool that can move crypto automatically after reading messages, posts, or prompts. If an AI can interact with public content and also control a wallet, attackers will try to bridge those two abilities.
The second warning sign is broad wallet permission. A small spending limit is one thing. Unlimited token movement is another.
The third warning sign is unclear responsibility. If an AI agent, a wallet, and a trading bot all interact, users need to know which part approves transfers and which part can stop them.
Bankr confirmed the attack, according to MEXC’s incident summary, and the wider security community has treated it as an early warning for AI-controlled wallets.
This kind of exploit will not be the last. As more crypto projects add AI agents, attackers will keep looking for ways to turn language into transactions.
Key Takeaway
The prompt-injection NFT attack against the Bankr wallet shows how AI agents can become a serious crypto security risk when they are allowed to control real funds.
The problem was not only the NFT, the prompt, or the wallet by itself. The danger came from connecting an AI assistant to a money-moving system without enough safeguards between instruction and execution.
Disclaimer: This article is for informational purposes only and does not constitute financial, investment, or legal advice. Always conduct your own research before making any investment decisions.