AI Governance

AI Risk Is About Reversibility

April 28, 2026 · 4 min read · Kevin Cordeiro

It took nine seconds for an AI agent to delete a company's production database.

PocketOS’s founder, Jeremy Crane, said the agent didn't just violate its safety rules; afterward, it produced a written account of the rules it had ignored.

I've watched versions of this failure for twenty years.

In the mid-2000s, a batch job took down our SAP ECC instance for almost a day and cost millions of dollars. That was the worst, but there have been plenty of incidents since. Some were my fault. Almost all happened because work was automated without proper understanding or adequate controls.

Before building controls around a workflow you need to understand its purpose.

A PO processing workflow exists to ensure spend is authorized, recorded, and matched to deliveries. A missing receipt usually means data-entry lag, but may also signal fraud. A human reviewer can tell the difference because they’ve seen it before. An AI agent needs that context.
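To make that context concrete, here is a minimal sketch of a three-way match check. The record type and field names are hypothetical, not from any real ERP; the point is that the logic encodes the reviewer's judgment: a missing receipt alone means "wait," but an invoice with no receipt behind it gets escalated.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class POLine:
    po_id: str
    qty_ordered: int
    qty_received: Optional[int]  # None = no goods receipt posted yet
    qty_invoiced: int

def review(line: POLine) -> str:
    """Three-way match: PO vs. goods receipt vs. invoice."""
    if line.qty_received is None:
        # Missing receipt is usually data-entry lag, but an invoice
        # with no receipt at all deserves a human look.
        return "escalate" if line.qty_invoiced > 0 else "wait"
    if line.qty_invoiced > line.qty_received:
        return "escalate"  # billed for more than was delivered
    return "ok"
```

An agent running a check like this isn't smarter than the reviewer; it's just carrying the reviewer's distinctions into code.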

Risk sits on a spectrum, and the axis that matters is reversibility, not job complexity. Automate what's easy to undo (summarizing RFQ responses, chasing order updates); keep the hard-to-reverse calls with your team (supplier selection, contract renewals).
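That spectrum can be made operational with a simple dispatch gate. The action names below are illustrative, but the shape matters: reversible actions run autonomously, irreversible ones go to a human queue, and anything unclassified defaults to the safe side.

```python
# Hypothetical action catalog, split by reversibility rather than difficulty.
REVERSIBLE = {"summarize_rfq", "chase_order_update", "draft_email"}
IRREVERSIBLE = {"select_supplier", "renew_contract", "release_payment"}

def dispatch(action: str) -> str:
    if action in REVERSIBLE:
        return "auto"
    if action in IRREVERSIBLE:
        return "human_review"
    # Unknown actions default to the hard-to-undo side of the spectrum.
    return "human_review"
```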

Poorly scoped access creates more risk than bad model behavior because it defines the blast radius. An agent with read-only access can flag order changes and invoice errors. The same agent with delete access can wipe records and create an audit nightmare.

The permission boundary should be the system, not the agent.
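One way to see the difference, sketched with SQLite as a stand-in for a real database: hand the agent a connection the engine itself opened read-only. Then the write is refused at the system level, no matter what the model decides to do.

```python
import os
import sqlite3
import tempfile

# Create a throwaway database to demonstrate against.
path = os.path.join(tempfile.mkdtemp(), "erp.db")
rw = sqlite3.connect(path)
rw.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
rw.execute("INSERT INTO orders VALUES (1, 'open')")
rw.commit()
rw.close()

# The agent only ever receives this handle. SQLite's read-only URI
# mode makes the engine reject writes outright.
agent_conn = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
row = agent_conn.execute("SELECT status FROM orders").fetchone()  # reads work

try:
    agent_conn.execute("DELETE FROM orders")
    blocked = False
except sqlite3.OperationalError:
    blocked = True  # "attempt to write a readonly database"
```

The same idea scales up: database roles, scoped API tokens, IAM policies. The enforcement lives below the agent, not inside its prompt.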

Scoped access doesn’t reduce risk if the agents can still talk to each other. A multi-agent system that coordinates to create a PO, receive against it, and release payment is a SOX compliance issue waiting to happen.

That’s like one person wearing all three hats to move money.
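A separation-of-duties check is cheap to automate. The agent names and permission strings below are hypothetical; the rule is the SOX-style one from above: no single agent holds more than one leg of create-PO / receive / pay.

```python
# The three legs that must never sit with one agent.
CONFLICTING = {"create_po", "post_receipt", "release_payment"}

def sod_violations(grants: dict[str, set[str]]) -> list[str]:
    """Return agents holding more than one conflicting permission."""
    return [agent for agent, perms in grants.items()
            if len(perms & CONFLICTING) > 1]

grants = {
    "procure_bot": {"create_po", "chase_order_update"},
    "receiving_bot": {"post_receipt"},
    "super_bot": {"create_po", "post_receipt", "release_payment"},
}
```

A check like this belongs in whatever provisions agent credentials, run at grant time, so the conflicting combination can never exist rather than being caught in an audit later. Note it only covers direct grants; agents that can delegate to each other need the same analysis applied to the combined group.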

And the unglamorous stuff still matters. Backups should be isolated from what they back up, and recoveries should be tested before you need them. PocketOS recovered from a months-old backup; that's luck, not a plan.
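A restore drill can be this small. The sketch below stands in for real infrastructure (in practice the backup lives on separate storage with separate credentials): recover to a scratch path and verify a checksum before an incident ever forces the question.

```python
import hashlib
import os
import shutil
import tempfile

def checksum(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

work = tempfile.mkdtemp()
live = os.path.join(work, "live.db")
backup = os.path.join(work, "backup.db")  # stand-in for isolated storage
with open(live, "wb") as f:
    f.write(b"order records")
shutil.copy(live, backup)
expected = checksum(live)

# The drill: restore to a scratch location, never over the live copy,
# and verify the recovered data matches what was backed up.
restored = os.path.join(work, "restored.db")
shutil.copy(backup, restored)
recovered_ok = checksum(restored) == expected
```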

The honest read is that AI is being deployed faster than the infrastructure to deploy it safely. “We'll be careful” isn't a control; it's a good way to become the next case study.

[Image: a human hand handing keys to a robot hand, representing delegated access and AI risk]