AI Agents Can Trade Markets While You Sleep — But Who Is Responsible When They Go Rogue?

2026-3-11 16:20

Autonomous artificial intelligence (AI) agents are beginning to manage inboxes, trade on prediction markets, and respond to messages, sometimes without needing their users to even be awake.

From “Clawdbot” to “OpenClaw,” the open-source AI agent capable of acting autonomously has generated significant buzz across social media. Users appear fascinated by the idea of handing over control to the system, giving the bot free rein over parts of their online lives. OpenClaw can respond to WhatsApp messages and even place bets on prediction markets like Polymarket while a user sleeps, with some users claiming the system has generated as much as $12,000 in weekly profit.

However, users are also claiming the bot can go beyond what it was programmed to do. A viral post by X user “borjitaea” recently claimed that their OpenClaw had signed them up to a $2,997 “build your personal brand” mastermind after watching three videos from entrepreneur Alex Hormozi.

My clawdbot just signed up for a $2,997 "build your personal brand" mastermind after watching 3 Alex Hormozi clips. pic.twitter.com/kJDGDasKgF

— Borja (@borjitaea) January 26, 2026

An investigation by the Edge & Node team recently uncovered how a multi-agent system accidentally burned more than $47,000 in application programming interface (API) costs after two AI agents spent 11 days stuck in a recursive loop asking each other for clarification.

Autonomous AI systems are taking over the internet and, in effect, people’s lives. But when these systems “go rogue” and start acting on their own accord, who is ultimately responsible for their downfall?

Autonomous AI agents stuck in a loop

In many instances, when autonomous AI agents “go rogue,” they are not acting with malicious intent.

Speaking with DeFi Rate, Edge & Node CEO Rodrigo Coelho explained that incidents like the $47,000 API bill are rarely the result of malicious AI behavior. Instead, they are often caused by reasoning failures or misconfigurations within the system. In these cases, an AI agent generates incorrect assumptions or actions and executes them because there are insufficient guardrails in place.

Coelho highlighted that the API key is a “corporate credit card with no spending policy, no approval chain, and no one watching the statement.” AI agents do not experience “cost” in the same way people do; instead, they experience task completion.

“When you give an agent unbounded API keys and rely on application-level logic to enforce limits, you’re betting that the bug causing the runaway behavior won’t also be the bug that breaks the budget counter. That’s a bad bet.”

In practice, this means that the safeguards meant to stop an AI agent from overspending or behaving unpredictably can fail at the exact moment they are needed most. If the same system responsible for tracking spending or enforcing limits is affected by the bug causing the runaway behavior, the agent can continue executing tasks unchecked.

Cybersecurity experts say this type of risk becomes even more serious once AI agents are granted access to sensitive systems such as email inboxes, internal tools, APIs, or cryptocurrency wallets.

“When an AI agent is granted persistent access to high-value systems, it effectively becomes a privileged insider that can be socially engineered, misdirected, or exploited through prompt injection and malicious data inputs,” said Daud Jawad, a security engineer at Fortra.

Jawad explained that, unlike traditional software, which typically follows fixed instructions, AI agents interpret language and external inputs dynamically. This makes them vulnerable to manipulation through techniques such as prompt injection, where hidden instructions embedded in emails, documents, or online content can influence how the system behaves.

AI agents in prediction and crypto trades

Despite the risks surrounding autonomous AI agents, it is easy to see why some users are drawn to them. Systems like OpenClaw can monitor cryptocurrency prices and predict markets around the clock, executing trades while a user sleeps and continuously refining their strategies based on new data.

However, prediction markets can be uniquely challenging environments. Archie Chaudhury, the CEO and co-founder of LayersLens, an AI evolution company building infrastructure that keeps AI accountable, said that a minor misinterpretation of a prompt could lead an AI agent to execute a “devastatingly” wrong trade, an error that cannot be reversed.

“It’s also important to note the high failure rate in this sector; with only about 30% of Polymarket wallets showing profitability, many automated strategies fail without public notice.”

Chaudhury added that simply tracking financial returns could be seen as “inadequate,” which is why gaining insight into an agent’s reasoning process, identifying potential vulnerabilities, and determining its robustness in novel, untrained scenarios becomes imperative.

Coinbase is also experimenting with Agentic Wallets, noting that the platform’s first wallet infrastructure, designed specially for AI agents, gives them the power to spend, earn, and trade autonomously “while maintaining enterprise-grade security and programmable guardrails.”

Speaking with DeFi Rate, Erik Reppel, Head of Engineering for Coinbase Developer Platform (CDP), noted that more of the company’s retail products are moving onchain to a “more programmable future where trades and payments flow seamlessly and globally.”

“By offering stocks, prediction markets, millions of crypto assets, and more on the Everything Exchange, Coinbase is building the financial operating system for the era of agentic AI – where your portfolio isn’t just managed, but can be autonomously optimized.”

Who is responsible for the loss of money?

As autonomous AI agents begin executing financial transactions and interacting with digital infrastructure on behalf of users, the question of responsibility becomes increasingly complex. Unlike traditional software tools, these systems can make decisions, interpret instructions, and carry out actions without direct human oversight.

According to Edge & Node’s Coelho, responsibility for the actions of autonomous agents currently falls largely on the individuals or teams deploying them.

“Right now, it’s the operator,” Coelho said. “The team that deployed the agents owns the outcome, that’s the current legal default, and it’s likely to stay that way for some time.”

Determining liability becomes significantly more complicated as AI systems interact with multiple services, developers, and platforms simultaneously. In multi-agent environments, one system may trigger another, creating chains of automated decisions that can be challenging to trace.

Coinbase’s Reppel highlighted that in the platform’s case, Agentic Wallets come equipped with programmable guardrails that allow developers to set spending limits, rate limits, and usage controls that help agents or apps safely make automated payments without risk of runaway spending.

Cybersecurity experts warn that without clearer identity frameworks and audit trails, assigning accountability after something goes wrong may be extremely difficult.

“…Responsibility in these situations remains a grey area. Organizations are pushing quickly toward AI adoption, but legal frameworks and accountability models are still catching up. Regulations are often region-specific, inconsistent, and still evolving rather than fully mature. In practice, responsibility is usually shared across several parties depending on the service model, the deployment architecture, and the circumstances of the incident,” Fortra’s Jawad said.

While AI vendors provide the underlying technology, the organizations that deploy these systems are still responsible for how they are configured, what access they are granted, and how they are monitored.

In practice, this means liability may be shared between several actors, including the user who grants an agent permission to act, the developer who designed the system, and the platform hosting the infrastructure it interacts with.

The internet’s next accountability challenge

While developers are building new guardrails, from spending limits and monitoring systems to agent-specific wallets and identity frameworks, the technology continues to evolve faster than the rules designed to govern it.

“Part of the challenge is that these tools are often trusted before organizations fully understand how they operate, what risks they introduce, and how they should be securely integrated into existing environments. In many cases, the focus is on functionality and speed of deployment rather than building the controls needed to safely operate them,” Jawad said.

For now, responsibility largely remains with the people and organizations deploying these systems. But as AI agents become more capable and more deeply embedded in digital infrastructure, determining who is ultimately accountable when something goes wrong may become far less straightforward.

What is clear is that the agentic internet is already taking shape. The challenge now is ensuring the systems designed to act on our behalf remain transparent, controllable, and ultimately accountable.

The post AI Agents Can Trade Markets While You Sleep — But Who Is Responsible When They Go Rogue? appeared first on DeFi Rate.

origin »

Bitcoin price in Telegram @btc_price_every_hour

POLY AI (AI) íà Currencies.ru

$ 9.75E-5 (+0.00%)
Îáúåì 24H $0
Èçìåíåèÿ 24h: 0.00 %, 7d: 0.00 %
Cåãîäíÿ L: $9.75E-5 - H: $9.75E-5
Êàïèòàëèçàöèÿ $223 Rank 99999
Äîñòóïíî / Âñåãî 2.282m AI

agents responsible when trade sleep investigation node

agents responsible → Ðåçóëüòàòîâ: 2