April 2, 2026

Your GRC Program Has No Mechanism to Govern AI Agents

Written by

Keith Peer, Chief Revenue Officer

Your organization has AI agents making decisions right now. Not just answering questions. Making decisions. Approving purchase orders, triaging support tickets, routing claims, flagging transactions, adjusting pricing. They operate inside your business processes, they touch regulated data, and they act with a degree of autonomy that your current governance model was never designed to handle.

Most GRC programs still treat AI as a tool. Something a person uses, like a spreadsheet or a query engine. The compliance question under that assumption is simple: did the person follow the policy? But when the AI itself is the actor, when it decides what to do, when to do it, and which systems to touch, that question stops making sense. There is no person in the loop to follow the policy. The agent is the loop.

This is not a future problem. It is a current exposure that most enterprises have not yet named.

The Governance Model That Doesn't Exist Yet

A consortium of eighteen academic and industry researchers, spanning sixteen institutions across Europe, the Middle East, Australia, and North America, recently published a paper on what they call Agentic Business Process Management. The paper, released in March 2026 (link at the end), lays out a conceptual architecture for how autonomous agents should be governed when they operate inside organizational processes.

Their central argument is worth paying attention to: agents that perceive, reason, and act autonomously within business processes require an explicit governance frame. Not guidelines. Not best practices. A concrete, enforceable structure that constrains what agents can do, ensures their goals align with organizational objectives, and makes their decisions explainable to the humans who remain accountable for them.

The paper calls this "framed autonomy," the principle that an agent's independence must operate within defined normative boundaries. Obligations, permissions, prohibitions. If that language sounds familiar to anyone who has spent time in regulatory compliance, it should. The researchers arrived at governance primitives that GRC professionals have been working with for decades. They just applied them to a class of actor that GRC programs have not yet learned to govern.

From Process Management to Risk Management: The Bridge That Matters

The research is framed in the language of Business Process Management, not GRC. That distinction matters, and collapsing it would be dishonest. BPM is concerned with how work gets executed. GRC is concerned with whether that execution creates unacceptable risk, violates regulatory obligations, or falls outside organizational policy.

But the bridge between them is short and load-bearing. Every business process that touches regulated data, financial transactions, customer information, or critical infrastructure is, by definition, a GRC concern. When the entity executing that process shifts from a human following a workflow to an AI agent making autonomous decisions within that workflow, the risk profile changes fundamentally. And the governance model has to change with it.

The paper identifies four capabilities that governed agents must possess. Each one maps directly to a problem that enterprise risk and compliance leaders are either already facing or will face within the next eighteen months.

Framed autonomy means the agent operates within explicit constraints: what it is permitted to do, what it is obligated to do, what it is prohibited from doing. In GRC terms, this is your control framework applied to a non-human actor. The uncomfortable question: does your control framework even contemplate non-human actors? For most organizations, the answer is no. Controls are written for people, enforced through training and access management, and validated through attestation. None of that works when the actor is software that reasons about its next action.
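To make that concrete, here is a minimal sketch, in Python, of what a normative frame for a non-human actor might look like: an explicit set of permissions, prohibitions, and obligations, evaluated before a proposed agent action is allowed to execute. The class, field, and action names are illustrative assumptions, not a schema from the paper or from any particular platform.

```python
from dataclasses import dataclass, field
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"


@dataclass
class NormativeFrame:
    """Explicit boundaries for a non-human actor: what it may do,
    must do, and must never do. Illustrative structure only."""
    permissions: set[str] = field(default_factory=set)   # actions the agent may take
    prohibitions: set[str] = field(default_factory=set)  # actions the agent must never take
    obligations: set[str] = field(default_factory=set)   # steps that must accompany any action

    def evaluate(self, action: str, steps_taken: set[str]) -> tuple[Verdict, str]:
        # Prohibitions are checked first, then scope, then obligations.
        if action in self.prohibitions:
            return Verdict.BLOCK, f"'{action}' is explicitly prohibited"
        if action not in self.permissions:
            return Verdict.BLOCK, f"'{action}' is outside the agent's permitted scope"
        missing = self.obligations - steps_taken
        if missing:
            return Verdict.BLOCK, f"obligations not met: {sorted(missing)}"
        return Verdict.ALLOW, "within frame"


# Hypothetical frame for a claims-triage agent.
frame = NormativeFrame(
    permissions={"route_claim", "request_documents"},
    prohibitions={"approve_payment"},
    obligations={"log_rationale", "verify_policy_number"},
)

verdict, reason = frame.evaluate("approve_payment", {"log_rationale"})
print(verdict, reason)  # Verdict.BLOCK 'approve_payment' is explicitly prohibited
```

The point of the sketch is not the code; it is that the constraint is machine-evaluated at the moment of action, rather than written in a policy document the agent never reads.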

Explainability means the agent can articulate why it made a given decision. The paper explicitly cites the EU AI Act, GDPR, and high-risk domains including finance and healthcare, regulatory contexts where "the algorithm decided" is not a defensible answer. This is not an abstract concern. Article 86 of the EU AI Act requires deployers of high-risk AI systems to be able to interpret outputs and explain decisions to affected individuals. The Act entered into force in August 2024, with most high-risk obligations phasing in through August 2026; the compliance clock is already running. If your AI agent denies a claim, flags a transaction, or alters a process flow, you need a traceable rationale. Today. Not after the audit.
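One way to read that requirement operationally: the rationale has to be captured at decision time, in a structure a reviewer can evaluate, not reconstructed from logs after the fact. A minimal sketch follows, with illustrative field names; no specific regulation or product prescribes this schema.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    """An audit-ready record of one agent decision. Field names are
    illustrative; the point is capturing the 'why' at decision time."""
    agent_id: str
    process: str
    action: str
    inputs_considered: dict
    rationale: str             # plain-language reason a reviewer can evaluate
    constraints_checked: list  # which frames/controls were evaluated
    timestamp: str


def record_decision(agent_id, process, action, inputs, rationale, constraints):
    rec = DecisionRecord(
        agent_id=agent_id,
        process=process,
        action=action,
        inputs_considered=inputs,
        rationale=rationale,
        constraints_checked=constraints,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    # In practice this would be written to an append-only audit store.
    print(json.dumps(asdict(rec), indent=2))
    return rec


record_decision(
    agent_id="claims-triage-01",
    process="claims_intake",
    action="flag_for_manual_review",
    inputs={"claim_amount": 18500, "policy_age_days": 42},
    rationale="Claim amount exceeds the auto-approval threshold for policies under 90 days old.",
    constraints=["framed_autonomy:claims_frame_v3", "sod:claims_approval"],
)
```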

Conversational actionability means agents must be able to interact with human principals, receive instructions, report status, negotiate constraints, and connect those conversations to actual process execution. For a GRC leader, this translates to a basic operational requirement: can you question your AI agents about what they are doing, in terms you understand, and direct them to change course when needed? Or are they black boxes that you discover have misbehaved only after the fact?

Self-modification means agents adapt their behavior over time based on experience. They learn. They evolve. And in doing so, they can drift from their original design and from the compliance posture they were initially configured to maintain. The researchers draw an explicit distinction between short-term adaptation (adjusting to an individual situation) and long-term evolution (permanently changing how the process works). Both create governance risk. The latter, if unmonitored, creates regulatory exposure that compounds silently.
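Detecting that long-term drift is itself a monitoring problem. As a rough illustration, one could compare the agent's recent mix of actions against the distribution it exhibited when its configuration was last reviewed and approved, and trigger a re-review when the gap exceeds a tolerance set by the risk owner. The metric, action names, and threshold below are assumptions made for the sketch, not something the paper specifies.

```python
# Sketch: flag behavioral drift by comparing the agent's recent action mix
# against the baseline captured at its last compliance review.

def action_distribution(actions: list[str]) -> dict[str, float]:
    total = len(actions)
    counts: dict[str, int] = {}
    for a in actions:
        counts[a] = counts.get(a, 0) + 1
    return {a: n / total for a, n in counts.items()}


def drift_score(baseline: dict[str, float], current: dict[str, float]) -> float:
    """Total variation distance between two action distributions
    (0 = identical behavior, 1 = completely disjoint)."""
    keys = set(baseline) | set(current)
    return 0.5 * sum(abs(baseline.get(k, 0.0) - current.get(k, 0.0)) for k in keys)


# Behavior approved at the last review.
baseline = {"auto_approve": 0.55, "manual_review": 0.40, "reject": 0.05}

# Behavior observed this week: the agent has learned to auto-approve far more.
recent_actions = ["auto_approve"] * 82 + ["manual_review"] * 15 + ["reject"] * 3
current = action_distribution(recent_actions)

DRIFT_TOLERANCE = 0.15  # illustrative threshold set by the risk owner
score = drift_score(baseline, current)
if score > DRIFT_TOLERANCE:
    print(f"Drift {score:.2f} exceeds tolerance; trigger re-review of agent configuration")
```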

What This Actually Demands From Enterprise GRC

Read those four capabilities together, and a clear picture emerges. Governing AI agents is not a feature you bolt onto an existing compliance tool. It is a structural capability that requires your GRC platform to do things it never had to do before.

It requires treating an AI agent as a governed entity. Not a tool used by a governed person, but an actor with its own control obligations, its own audit trail, and its own potential for policy violation. It requires real-time detection, not quarterly attestation. An agent that violates a segregation-of-duties control at 2 AM on a Tuesday will not wait for your next audit cycle to cause damage. It requires explainability infrastructure: not just logging what happened, but capturing why the agent chose one action over another, in terms a regulator can evaluate. And it requires the ability to frame the agent's autonomy within your specific regulatory context. Your frameworks, your policies, your risk appetite. Not a generic set of AI safety principles. Your governance, applied to your agents, in your environment.
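As one illustration of what real-time detection means in practice, here is a sketch of a segregation-of-duties check applied to a stream of agent actions: the same actor, human or software, is blocked from performing two conflicting duties on the same business object. The conflicting-duty pairs and event fields are assumptions made for the example, not a standard rule set.

```python
# Sketch: real-time segregation-of-duties enforcement over agent activity.

CONFLICTING_DUTIES = {
    ("create_purchase_order", "approve_purchase_order"),
    ("submit_claim", "approve_claim"),
}

# (actor_id, object_id) -> duties that actor has already performed on that object
history: dict[tuple[str, str], set[str]] = {}


def check_event(actor_id: str, object_id: str, duty: str) -> str | None:
    """Return a violation message if this action conflicts with one the
    same actor already performed on the same business object."""
    key = (actor_id, object_id)
    prior = history.setdefault(key, set())
    for a, b in CONFLICTING_DUTIES:
        if (duty == b and a in prior) or (duty == a and b in prior):
            return f"SoD violation: {actor_id} performed both '{a}' and '{b}' on {object_id}"
    prior.add(duty)
    return None


# 2 AM on a Tuesday: the procurement agent approves a PO it created itself.
check_event("procure-agent-7", "PO-10031", "create_purchase_order")
violation = check_event("procure-agent-7", "PO-10031", "approve_purchase_order")
if violation:
    print(violation)  # page the control owner now, not at the next audit cycle
```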

The gap between what this demands and what most GRC programs currently deliver is not incremental. It is architectural.

The Market Is Not Ready. Some of It Knows.

The GRC vendor market has embraced AI enthusiastically as a productivity feature. AI that drafts policies faster. AI that maps controls more efficiently. AI that summarizes audit findings. All useful. None of it addresses the problem the research describes.

The problem is not that your GRC program needs AI to work faster. The problem is that AI agents are creating a new category of governed entity inside your enterprise, and your GRC program has no mechanism to govern them. Different problem. Different architecture.

A small number of platforms are building toward this. LockThreat, for instance, is building agentic AI governance into its core GRC architecture: defining normative frames as enforceable controls, monitoring agent behavior in real time, and reporting directly into the enterprise risk and compliance framework. That is a fundamentally different proposition from using AI to automate compliance workflows; the distinction will matter increasingly as regulators catch up to what AI agents are actually doing inside regulated enterprises.

The academic community is now formalizing the same conclusion that a few practitioners arrived at independently: if AI agents are going to operate autonomously inside your business processes, someone has to govern them. Not advise them. Not hope they behave. Govern them, with the same rigor, traceability, and accountability you apply to every other actor in your organization that touches regulated operations.

The governance challenge is not coming; it is already here. The question for GRC leaders is whether your program will be ready when the regulator asks how you are managing it.

Can you answer that question? Today?

------------------------

Read the full research paper, "Agentic Business Process Management: A Research Manifesto".
