Your AI Agent Has No ID
Why Agentic AI Needs a Trust Score, Not a Checklist
This week, I was invited through one of the world’s largest expert networks to consult on a topic that stopped me in my tracks: the challenges and solutions for securely deploying autonomous AI agents in business environments, with a particular focus on something called “verifiable credentials” for AI agents and “trusted AI through intent binding.”
I have been building AI governance infrastructure for over two years now, and I spent the 25 years before that deploying AI and data systems across healthcare, fintech, insurance, legal technology, and education. But this invitation was different. It was not about compliance checklists or policy frameworks. It was about a question that the enterprise world is only now beginning to ask out loud: How do you prove that an AI agent acting on your behalf is actually authorized to do what it is doing?
Let that sink in for a moment.
The Problem Nobody Talks About
Right now, across every industry you can name, companies are racing to deploy AI agents that can act autonomously. These agents book meetings, process claims, triage legal inquiries, approve transactions, generate reports, and make decisions that used to require a human being in the loop. The promise is enormous: speed, scale, consistency, cost reduction. And the technology has reached a point where these agents can genuinely perform.
But here is what almost nobody is asking: What credentials does that AI agent carry?
When a human employee joins an organization, they go through background checks, they receive role-based access, they sign agreements about what they can and cannot do, and there is a paper trail connecting their identity to their authority. If that employee oversteps their boundaries, there is an audit trail. There is accountability. There is a verifiable chain that connects what they did to what they were authorized to do.
Now think about an AI agent. It gets deployed with an API key, maybe some prompt instructions, maybe a set of tool permissions configured by a developer who was moving fast to hit a sprint deadline. Where is the verifiable proof of what that agent is authorized to do? Where is the audit trail that connects its actions to a specific human decision about its scope? Where is the credential that another system, a partner, a regulator, or a customer can independently verify without simply trusting the agent’s own assertions?
It does not exist. Not in any standardized, quantitative, independently verifiable way.
Your AI agent has no ID.
Why Checklists Fail for Autonomous Systems
The traditional approach to AI governance has been borrowed from software compliance: create a checklist, assess against it annually or quarterly, produce a report, file it away. This works reasonably well for static systems where humans make the final decisions and the AI is just providing recommendations. You can audit the model, check the training data, review the outputs, and sign off.
But autonomous AI agents break this model completely.
An autonomous agent does not wait for a human to review its output before acting. It chains decisions together. It interprets ambiguous inputs in real time. It interacts with other systems, sometimes other agents, and the scope of its actions can shift based on context in ways that no static checklist can anticipate. A checklist that was accurate on Monday might be meaningless by Wednesday because the agent encountered a scenario that nobody tested for, and it made a judgment call.
This is not a theoretical concern. I have seen it firsthand. When I deployed an autonomous AI voice agent for a law firm handling workers’ compensation cases, the most dangerous moments were not when the agent got something wrong in a predictable way. They were when callers deviated from expected conversational paths and the agent had to decide, in real time, whether it was authorized to handle the new direction or whether it needed to escalate. A checklist cannot govern that decision. A quantitative, continuously updated trust boundary can.
The difference matters. A checklist says “this system was compliant when we last checked.” A trust score says “this system is operating within its verified boundaries right now, and here is the quantitative evidence.”
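To make the contrast concrete, here is a minimal sketch, in Python, of what a continuously evaluated trust boundary could look like at runtime. The class, field names, and thresholds are illustrative assumptions rather than a reference implementation; the point is that the allow-or-escalate decision is computed from the agent’s current scope, score, and assessment freshness, not read from a document filed last quarter.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class TrustBoundary:
    """Hypothetical runtime trust boundary for an autonomous agent."""
    authorized_actions: set[str]     # actions a human explicitly approved
    max_risk_score: int              # 0-1000 scale, lower is safer (illustrative)
    assessment_max_age: timedelta    # how stale the last assessment may be
    current_risk_score: int = 0
    last_assessed: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def permits(self, action: str) -> bool:
        """Allow only if the action is in scope, the score is within bounds,
        and the assessment is fresh. Anything else escalates to a human."""
        fresh = datetime.now(timezone.utc) - self.last_assessed <= self.assessment_max_age
        return (
            action in self.authorized_actions
            and self.current_risk_score <= self.max_risk_score
            and fresh
        )


# Example: a caller takes the conversation somewhere nobody tested for.
boundary = TrustBoundary(
    authorized_actions={"collect_intake_details", "schedule_callback"},
    max_risk_score=300,
    assessment_max_age=timedelta(minutes=30),
    current_risk_score=247,
)

if not boundary.permits("give_legal_advice"):
    print("Out of scope: escalating to a human.")
```

Escalation is the default whenever any check fails, which is exactly the judgment call a static checklist could not make for that voice agent.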
What Verifiable Credentials for AI Actually Means
When the consultation framed the topic around “verifiable credentials for AI agent deployment,” it pointed at something that I believe will become one of the defining infrastructure layers of the next decade.
Verifiable credentials for AI agents mean that an agent carries cryptographically provable attestations about what it is authorized to do, who authorized it, what compliance standards it has been assessed against, and what boundaries it operates within. Any party interacting with that agent, whether it is another system, a business partner, a regulator, or a customer, can independently verify those claims without having to trust the agent itself.
Think of it like a digital license. Not a static certificate that was issued once and sits in a drawer, but a living, scored, continuously updated credential that reflects the agent’s current risk posture and authorization scope. When a partner organization’s system interacts with your AI agent, it can check that credential and confirm: yes, this agent has been assessed at a risk score of 247 out of 1000, it is authorized for these specific actions, it has been evaluated against ISO 42001 and the EU AI Act, and its last assessment was 14 minutes ago.
That is fundamentally different from saying “we passed an audit last quarter.”
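As a rough illustration of what such a credential could look like as data, here is a short Python sketch. Every field name is made up for this example, and the symmetric HMAC is only a stand-in for a real signature; an actual deployment would use an asymmetric scheme and an existing standard such as the W3C Verifiable Credentials data model, so that verifiers never need to hold the signing key.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Toy signing key for the credential issuer. In practice this would be an
# asymmetric key pair, with only the public half shared with verifiers.
ISSUER_KEY = b"issuer-demo-key"


def issue_credential(agent_id: str, risk_score: int, scope: list[str],
                     frameworks: list[str]) -> dict:
    """Issue a signed attestation of what this agent is authorized to do."""
    claims = {
        "agent_id": agent_id,
        "risk_score": risk_score,          # e.g. 247 on a 0-1000 scale
        "authorized_actions": scope,
        "assessed_against": frameworks,    # e.g. ISO 42001, EU AI Act
        "assessed_at": datetime.now(timezone.utc).isoformat(),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}


def verify_credential(credential: dict) -> bool:
    """Check the issuer's signature over the claims, without trusting
    anything the agent says about itself."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])


cred = issue_credential(
    agent_id="claims-triage-agent-01",
    risk_score=247,
    scope=["triage_claim", "request_documents"],
    frameworks=["ISO 42001", "EU AI Act"],
)
assert verify_credential(cred)
```

Because the signature covers the claims, the agent cannot quietly widen its own scope or improve its own score; only the issuer can, and every change leaves a verifiable trail.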
The Market Is Moving Faster Than You Think
Here is what struck me about this invitation, and about the broader pattern I am seeing across the industry right now. This is not a topic that only governance nerds and compliance officers are thinking about. Investment firms are actively conducting diligence on companies in the AI security and governance tooling space. Major corporations are paying expert network rates to understand how verifiable trust for AI agents works. Professional services firms are benchmarking API governance frameworks in banking. The money is following the question.
And the question is the same everywhere: How do we trust AI that acts on its own?
For those of us who have been working in AI governance, this is the moment where the market catches up to the problem. For the past two years, I have heard enterprise leaders say they are “exploring” AI governance, that they know it matters but they are not ready to invest. That language is shifting. When investment firms start researching the competitive landscape for AI trust infrastructure, it means capital allocation decisions are being made. When enterprise clients specify “verifiable credentials” and “intent binding” as the topics they want to discuss, it means they have moved past awareness and into solution design.
The window between “people are asking the question” and “someone owns the answer” is open right now.
Why Measurement Beats Compliance
This brings me to the core thesis of everything I write about in this newsletter, and everything I am building.
The reason checklists and traditional compliance frameworks fail for autonomous AI is not that they are poorly designed. It is that they are qualitative instruments being applied to a quantitative problem. Asking whether an AI agent “meets” a compliance standard is like asking whether a bridge “meets” safety requirements without measuring the load it can bear. The answer is meaningless without a number.
What the market needs, and what it is beginning to demand, is quantitative measurement infrastructure for AI trust. A scoring system that can express, in a single interpretable number, how much risk an AI agent carries across multiple regulatory frameworks simultaneously. Not a binary pass/fail. Not a subjective assessment. A reproducible, auditable, continuously updated measurement that engineering teams can act on and compliance teams can report on and regulators can verify and business partners can trust.
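What might that roll-up look like? A heavily simplified sketch, with made-up weights, control names, and a 0-to-1000 scale chosen purely for illustration:

```python
# Minimal sketch: roll per-framework findings into one 0-1000 risk score.
# Weights, control names, and scale are illustrative assumptions only.

FRAMEWORK_WEIGHTS = {
    "ISO 42001": 0.4,
    "EU AI Act": 0.4,
    "Internal policy": 0.2,
}


def framework_risk(findings: dict[str, float]) -> float:
    """Average the per-control risk (each scored 0.0-1.0) for one framework."""
    return sum(findings.values()) / len(findings)


def composite_score(assessments: dict[str, dict[str, float]]) -> int:
    """Weighted roll-up across frameworks, scaled to 0-1000 (lower is safer)."""
    total = sum(
        FRAMEWORK_WEIGHTS[name] * framework_risk(findings)
        for name, findings in assessments.items()
    )
    return round(total * 1000)


score = composite_score({
    "ISO 42001": {"risk_management": 0.2, "logging": 0.1},
    "EU AI Act": {"human_oversight": 0.3, "transparency": 0.4},
    "Internal policy": {"data_retention": 0.25},
})
print(score)  # 250: one number, traceable to every underlying finding
```

The specific numbers are not the point. The point is that the same findings and the same weights always produce the same score, so engineering, compliance, and a regulator are all looking at one reproducible measurement instead of three interpretations of a checklist.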
This is the science of measurement applied to critical AI systems. It is what I call the work that happens “before the number”: the careful, rigorous thinking about what to measure, how to measure it, and why the measurement methodology matters as much as the result.
What Comes Next
If you are deploying AI agents in your organization today, or planning to, here is what I would encourage you to think about.
First, ask yourself whether you can prove, right now, what your AI agent is authorized to do. Not what it was designed to do. Not what the prompt says it should do. Can you prove it, verifiably and quantitatively, to a third party who has no reason to trust your assertions?
Second, ask yourself how you would know if your AI agent exceeded its authorized scope. Not after the fact, when a customer complains or a regulator asks. Right now. In real time. Do you have a continuous measurement of whether the agent is operating within its trust boundaries?
Third, ask yourself whether your current governance approach would survive the question: “Show me the score.” Not the checklist. Not the policy document. The score. The number that tells me, quantitatively, where this agent falls on the risk spectrum and what that number is based on.
If you cannot answer those questions today, you are not alone. Almost nobody can. But the market is telling us, loudly and with real dollars behind it, that the window to build this capability is right now.
That is what I am working on. That is what “Before the Number” is about. And I will have a lot more to say about it in the weeks ahead.
Suneeta Modekurty is the Founder and Chief AI Architect of SANJEEVANI AI, where she builds quantitative AI governance infrastructure. She is an ISO 42001 Lead Auditor and holds an O-1A visa for extraordinary ability in AI, bioinformatics, and data science. She publishes “Before the Number” on Substack, exploring the science of measurement in critical AI systems.


