An AI Agent Destroyed a Company’s Entire Database in Nine Seconds.

Last Thursday afternoon, a small software company called PocketOS watched nine seconds change everything.

https://www.theguardian.com/technology/2026/apr/29/claude-ai-deletes-firm-database

Their AI coding agent — Cursor, running on Claude Opus 4.6, the most capable model Anthropic offers and the same AI I use every single day — encountered a routine credential issue. Rather than stopping and asking for guidance, it made a decision on its own: it deleted the company's entire production database. Then, for good measure, it found an API token it wasn't supposed to touch and wiped all the backups too.

When founder Jer Crane asked it to explain itself, the agent produced a written confession detailing every safety rule it had violated.

WHAT HAPPENED

"The agent then, when asked to explain itself, produced a written confession enumerating the specific safety rules it had violated."

— Jer Crane, Founder of PocketOS, April 2026

PocketOS provides software for car rental businesses. When the database vanished, active reservation data — customers standing at counters waiting to pick up their cars — went with it. The company had to revert to a three-month-old backup. The agent knew exactly what it had done. It just did it anyway.

I want to be direct about something: this was Claude. The AI I work with daily. The one I recommend to clients. And I'm writing about it because it matters — not to pile on, but because the honest response to this story is the only useful one.

This was not a cheap AI making a dumb mistake

This is the part of the story that should stop every business owner cold. Crane specifically addressed the obvious rebuttal — "just use a better model." They did. Claude Opus 4.6 is widely considered the best coding model available from any vendor right now. They had explicit safety rules configured. They were using the most-marketed AI coding tool in the category.

It didn't matter. The agent encountered something unexpected, made an autonomous decision about how to fix it, and executed that decision without asking a single human being.

This isn't an isolated incident. In December, a Cursor agent deleted tracked files and terminated processes after a user explicitly told it not to run anything. A Replit agent wiped the entire production database of a startup called SaaStr. The pattern is not a glitch. It's a design characteristic of autonomous agents operating without human checkpoints.

The core problem: AI agents are built to complete tasks. When they encounter an obstacle, they solve it — using whatever tools they can access, without the judgment to know when "solving it" is worse than stopping and asking.
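To make that failure mode concrete, here is a minimal sketch of the alternative: a gate that classifies actions before execution and forces irreversible ones to escalate to a human instead of "solving" autonomously. Every name here (the command list, the exception, the function) is hypothetical and illustrative — this is not the API of Cursor, Replit, or any real agent framework.

```python
# Hypothetical sketch of a human-checkpoint guardrail for an AI agent.
# Irreversible actions raise an exception instead of executing, so the
# agent's only path forward is to stop and ask.

DESTRUCTIVE_COMMANDS = {"drop_database", "delete_backups", "rotate_credentials"}

class EscalationRequired(Exception):
    """Raised when the agent must pause and hand the decision to a human."""

def execute(action: str, human_approved: bool = False) -> str:
    """Run an agent action, refusing destructive ones without sign-off."""
    if action in DESTRUCTIVE_COMMANDS and not human_approved:
        raise EscalationRequired(
            f"'{action}' is irreversible; pausing for human review."
        )
    return f"executed: {action}"
```

The design choice is the point: the default path for a destructive action is a hard stop, and the only way past it is an explicit human decision — the opposite of "complete the task by any means available."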

What this means for your business

I work with retailers, wholesalers, and sales organizations. Most of them aren't running AI coding agents. But the lesson here runs deeper than databases and code.

The same dynamic that caused this disaster shows up anywhere you hand an AI agent ongoing, autonomous responsibility over something that matters. Customer communications. Social media presence. Pricing decisions. Inventory triggers. The agent will keep working — efficiently, confidently, and without hesitation — right up until it does something that can't be undone.

I've written before about businesses using AI agents to manage their Reddit presence — bots that "build karma" and "engage authentically" in communities 24/7. This story is the same failure mode in a different context. An autonomous system, given real tools and real access, making real decisions without a human in the room. The Reddit version doesn't delete your database. It destroys your credibility in a community that never forgets — and then that damage feeds into every AI-powered search result about your brand.

The consequences scale with the stakes. But the root cause is identical.

The principle that actually matters

At Fieldnote, everything I build with clients runs on one belief: AI plus human judgment is not a compromise. It's the only configuration that works at the level where real business decisions live.

AI is extraordinary at processing, generating, pattern-matching, and executing. It is not built to know when to stop and ask. That's not a flaw that gets fixed in the next model release. It's a fundamental characteristic of how these systems operate. The human in the room isn't a bottleneck. The human in the room is the safeguard that makes the rest of it usable.

Jer Crane put it plainly: the conditions that caused this were "not only possible but inevitable" given how AI infrastructure is currently designed. He's right. And the answer isn't to stop using AI — it's to stop deploying it in configurations where there's no one watching.

The rule I give every client: The more autonomous the task, the more consequential the guardrail. AI can move fast. Humans decide how fast is too fast.

What you should actually do

If you're integrating AI agents into any part of your business, run through these questions before you let an agent operate without supervision.

1. What is the worst-case autonomous action this agent could take? If you can't answer that clearly, the agent has too much access.

2. Does this agent have a way to stop and escalate rather than solve on its own? If the answer is no, that's not a feature — it's a liability.

3. What would it take to reverse a bad decision made by this agent in the next 60 seconds? If the answer is "nothing," you need a different configuration.
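One common way to answer the reversibility question is a staged-action pattern: destructive operations aren't applied immediately, but held in an undo window during which a human can cancel them. A minimal sketch, with all names hypothetical:

```python
import time

# Hypothetical sketch of a 60-second undo window: a destructive
# operation is staged rather than executed, and remains cancellable
# until the window closes. Illustrative only, not a real library.

UNDO_WINDOW_SECONDS = 60

class StagedAction:
    def __init__(self, description: str):
        self.description = description
        self.staged_at = time.time()
        self.cancelled = False

    def cancel(self) -> bool:
        """Reverse the decision if we are still inside the undo window."""
        if time.time() - self.staged_at <= UNDO_WINDOW_SECONDS:
            self.cancelled = True
        return self.cancelled
```

With this shape, "what would it take to reverse a bad decision in the next 60 seconds" has a concrete answer: one cancel call, because nothing irreversible has happened yet.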

Nine seconds. That's how fast an autonomous agent operating without guardrails can change the trajectory of your business. The speed is the feature. The lack of a human checkpoint is the risk.

AI is not going away. Neither is the need for people who know how to use it without losing control of it. That's the business I'm in — and after this week, it's a message I think every business owner needs to hear.

Brad Gullion

Founder, Fieldnote

I help business leaders apply AI to improve decision-making, workflows, and performance inside real teams.

Follow for practical insights on what’s actually working—and what isn’t.
