AI Governance

OpenClaw got acquired. The real story is what happens next.

Yesterday, OpenAI hired Peter Steinberger, the creator of OpenClaw, the open source AI agent that went from weekend project to 100,000 GitHub stars in about two months. The internet is busy debating the acqui-hire. I think that debate misses the bigger picture.

OpenClaw isn’t going to show up in your enterprise any time soon. That’s not the point. The point is what it represents: autonomous AI agents that book flights, send emails, manage calendars, join social networks and execute code on your behalf. And the point is that millions of people are already running this stuff on their laptops with essentially zero governance.

I run AI platforms for a living. I deploy enterprise AI solutions at a Fortune 500 company. I’ve spent the last two years watching the gap between what AI agents can do and what enterprises are ready to govern grow wider by the month.

That gap is about to become a legal problem.

The 2026 regulatory cliff

Three things are converging this year that most enterprise technology leaders aren’t talking about enough.

Enforcement of the EU AI Act begins on August 2, 2026. High-risk autonomous systems will be legally required to have “effective oversight” and real-time interception capabilities. If you’re running AI agents that make decisions or take actions on behalf of your organisation, you need to prove you can monitor and stop them. Not after the fact. In real time.

NIST published AI Agent Identity standards in February 2026. These mandate that enterprises explicitly identify, authenticate and manage the authorisation boundaries of non-human identities. Your AI agents need an identity framework just like your employees do. How many organisations have that today?

SOC 2 Processing Integrity is being updated. Auditors are starting to ask for evidence that autonomous workflows are authorised, validated and continuously monitored. “We use Claude” or “we have Copilot” isn’t going to satisfy that requirement.

This isn’t hypothetical. These are dates on a calendar.

What OpenClaw actually exposed

Forget the hype about personal AI assistants for a second. Look at what the security community found when they actually tested OpenClaw.

Cisco’s AI security research team tested a third-party OpenClaw skill and found it was performing data exfiltration and prompt injection without any user awareness. The skill marketplace had over 3,000 community-built extensions with minimal vetting.

A high-severity vulnerability allowed one-click remote code execution through a crafted link. Clicking a single webpage was enough to steal gateway tokens and gain full control of someone’s agent. The creator patched it, but the architectural pattern that made it possible is common across agent frameworks.

One of OpenClaw’s own maintainers warned on Discord that if you can’t understand how to run a command line, the project is “far too dangerous” for you to use safely.

Now scale that picture up. Imagine it’s not hobbyists running agents on their personal laptops. Imagine it’s your engineering teams, your operations staff, your finance department. Each running their own agents, connected to company email, internal systems, customer data.

That’s not a technology problem. That’s a governance problem. And it’s coming whether you’re ready for it or not.

The missing layer

Here’s what I keep coming back to. We have AI models (GPT, Claude, Gemini, DeepSeek, open source alternatives). We have agent frameworks that let those models take action. We have MCP (Model Context Protocol) connecting agents to tools and data sources.

What we don’t have is a governance layer that sits across all of it.

Not governance tied to a specific model vendor. Not audit trails that tell you what happened after the damage is done. I’m talking about a control plane that can intercept agent actions before they execute, validate them against your enterprise rules and block non-compliant behaviour in real time. Regardless of which model or framework your teams are using.

Think about it like endpoint detection and response (EDR), but for AI agents. Your security team doesn’t care whether a threat came through Chrome or Firefox. They care that they can detect and stop it. The same principle applies here.

The enterprise needs a neutral layer that enforces company policy across any AI agent, any model and any framework. Something that knows about your approved libraries, your architecture decisions, your security requirements, your data classification rules. Something that validates agent output against those rules before it touches production.
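
A minimal sketch of what such a pre-execution hook could look like, assuming a framework-neutral action record. All the names here are hypothetical and the policies are toy examples, not any real product's API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass(frozen=True)
class AgentAction:
    agent_id: str   # which non-human identity is acting
    kind: str       # e.g. "shell", "file_write", "mcp_call"
    target: str     # the command, path, or MCP server involved

# A policy returns a human-readable deny reason, or None to let the action pass.
Policy = Callable[[AgentAction], Optional[str]]

def deny_credential_access(action: AgentAction) -> Optional[str]:
    # Toy rule: block anything that touches common credential locations.
    if ".aws/credentials" in action.target or ".ssh/" in action.target:
        return "access to credential files is blocked"
    return None

def default_deny_mcp(action: AgentAction) -> Optional[str]:
    # Toy rule: MCP servers are denied unless explicitly approved.
    approved = {"corp-jira", "corp-docs"}  # hypothetical allow-list
    if action.kind == "mcp_call" and action.target not in approved:
        return f"MCP server '{action.target}' is not on the approved list"
    return None

def evaluate(action: AgentAction, policies: list[Policy]) -> tuple[bool, list[str]]:
    """Run every policy BEFORE the action executes; any match blocks it."""
    reasons = [r for p in policies if (r := p(action)) is not None]
    return (not reasons, reasons)
```

The design choice that matters is that `evaluate` runs before the action, not after: the agent framework asks permission, gets back a decision plus reasons, and only then executes. That is the difference between a control plane and an audit log.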

Why I built the solution

I’ve spent nearly 20 years in technology, from manufacturing floors to Fortune 500 AI platforms. The pattern I keep seeing is the same: new technology arrives, adoption outpaces governance, something bad happens, regulation catches up. Cloud went through it. Mobile went through it. AI agents are next, except the timeline is compressed because the technology moves faster and the regulatory framework is already written.

This article has described the problem. Vectimus is what I built to solve it.

78 Cedar policies intercept every AI agent action before execution: shell commands, file operations, MCP server calls. Each rule traces back to a real incident. Default-deny on MCP servers. Credential leak detection. Infrastructure destruction prevention. Compliance evidence mapped to SOC 2, NIST AI RMF, NIST CSF 2.0, ISO 27001, the EU AI Act, CIS Controls and SLSA is generated as a byproduct of using the tool, not as a separate workstream.
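
For readers who haven't seen Cedar, a rule in this style might look something like the following. This is a hypothetical example I wrote for illustration, with made-up entity and attribute names, not one of the actual policies:

```cedar
// Hypothetical rule: forbid shell commands that touch credential files.
// Cedar denies by default unless a permit applies, and an explicit
// forbid always wins over any permit.
forbid (
    principal,
    action == Action::"ExecuteShellCommand",
    resource
) when {
    resource.command like "*credentials*"
};
```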

Two commands to install. Under 5ms evaluation. No network dependency. Open source under Apache 2.0.

The governance layer this article called for now exists. The organisations that adopt it will scale agent adoption without the incidents that force everyone else to pull back.