The vanishing interface

AI agents are moving us beyond the familiar interface. Instead of clicking, swiping, or logging in, we increasingly rely on autonomous systems to anticipate and act on our behalf. What replaces the screen is not a better version of the interface, but its gradual disappearance.

Consider Globetrender, a hotel booking platform that replaced call handlers with AI agents. Within six months, these agents were processing over 45,000 calls per day, some handling up to 1,000 calls simultaneously, and the company expects revenue to double to £2.4 billion. Customers rarely realize they are speaking to AI.

This shift mirrors a broader change in digital behavior. Asking a large language model for a travel itinerary replaces dozens of searches and manual steps with a simple exchange. A single conversation now sets off a chain of automated actions, from price comparisons to bookings, with no visible interface.

Agents as proxies for human intent

Agents are rapidly taking over repetitive digital tasks such as filling forms, executing procurement contracts, and applying discounts at checkout. Unlike traditional chatbots, they break goals into steps, call APIs, gather data, and synthesize results without requiring human input at every stage.

But giving an agent capability is not enough. The more decisions it makes on our behalf, the more it exposes a critical gap: trust. These systems often require near-total access to our digital lives, including browsing history, payment details, and private messages. Researchers have shown that agents can be tricked into leaking sensitive data or misled into harmful actions, from exfiltrating code to activating smart devices. Without safeguards, delegation quickly becomes exposure.

This is where identity becomes the new interface. For agents to stand in as proxies for human intent, they need to prove not just what they can do, but who they represent and under what authority.

Identity as the foundation of permission

Verifiable credentials provide a way to close the trust gap. Instead of relying on central databases, agents can carry cryptographic credentials that prove their roles, permissions, and delegations. These can be scoped to tasks, time-bound, and revoked when no longer valid.

A travel agent might hold credentials tied to preferences, budgets, and loyalty memberships. A procurement agent could carry conditional authority to negotiate contracts up to a specific threshold. In both cases, identity evolves from a static attribute into a dynamic, portable permissioning layer.
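The properties above can be sketched in code. The following is a minimal, illustrative example, not a real verifiable-credential implementation: it uses a symmetric HMAC where production systems would use asymmetric signatures, and all names (agent IDs, scopes, the spend field) are hypothetical. It shows how a credential can be scoped to tasks, time-bound, and conditioned on a spending threshold.

```python
import hashlib
import hmac
import json
import time

# Hypothetical issuer key for illustration; real verifiable credentials
# would be signed with an asymmetric key pair (e.g. Ed25519).
ISSUER_KEY = b"demo-issuer-secret"

def issue_credential(agent_id, scopes, max_spend_gbp, ttl_seconds):
    """Issue a task-scoped, time-bound delegation credential."""
    claims = {
        "agent": agent_id,
        "scopes": sorted(scopes),              # e.g. ["book_hotel"]
        "max_spend_gbp": max_spend_gbp,        # conditional authority threshold
        "expires_at": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}

def verify_credential(credential, required_scope, amount_gbp):
    """Check signature, expiry, scope, and spend limit before acting."""
    payload = json.dumps(credential["claims"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, credential["signature"]):
        return False                           # tampered or forged
    claims = credential["claims"]
    if time.time() > claims["expires_at"]:
        return False                           # time-bound: expired
    if required_scope not in claims["scopes"]:
        return False                           # outside granted scope
    return amount_gbp <= claims["max_spend_gbp"]

cred = issue_credential("travel-agent-7", ["book_hotel"], 500, ttl_seconds=3600)
print(verify_credential(cred, "book_hotel", 320))     # True: in scope, under limit
print(verify_credential(cred, "sign_contract", 320))  # False: scope never granted
```

Revocation in this sketch would simply mean the issuer publishing the credential's identifier to a revocation list that verifiers consult; the point is that authority is carried by the credential itself, not by a central account database.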

Engineering trust into autonomous systems

If agents are to act without direct oversight, trust must be engineered into the system itself. That requires more than presenting credentials. It demands a framework that verifies who issued them, under what conditions, and whether they remain valid.

The way agents are being deployed today deepens the risks. By bypassing established APIs, they can scrape whatever is displayed on a user’s screen, consolidating data at the operating system level and tilting the playing field toward a handful of platforms. This erodes both privacy and competition, reinforcing the urgency of codified, verifiable trust layers.

Trust registries define which parties are authorized to issue credentials, under what rules, and with what constraints. By making this codified trust machine-readable, agents and services can instantly confirm whether an agent is legitimate.
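A machine-readable trust registry can be as simple as a signed, published mapping from issuer identifiers to the credential types they may issue. The sketch below assumes hypothetical DIDs and credential-type names, not any specific registry standard; it shows how a verifier can confirm legitimacy in one lookup.

```python
# Illustrative trust registry: which issuers may issue which credential
# types, and under what constraints. In practice this mapping would itself
# be published and cryptographically signed by a governance authority.
TRUST_REGISTRY = {
    "did:example:airline-alliance": {
        "may_issue": ["LoyaltyMembership"],
        "max_delegation_depth": 1,
    },
    "did:example:employer-acme": {
        "may_issue": ["ProcurementAuthority", "TravelBudget"],
        "max_delegation_depth": 2,
    },
}

def issuer_is_authorized(issuer_did, credential_type, delegation_depth=1):
    """One lookup answers: is this issuer allowed to issue this credential?"""
    entry = TRUST_REGISTRY.get(issuer_did)
    if entry is None:
        return False                       # unknown issuer: reject outright
    return (credential_type in entry["may_issue"]
            and delegation_depth <= entry["max_delegation_depth"])

print(issuer_is_authorized("did:example:employer-acme", "ProcurementAuthority"))  # True
print(issuer_is_authorized("did:example:airline-alliance", "TravelBudget"))       # False
```

Because the rules are data rather than policy documents, an agent encountering an unfamiliar counterparty can check its credentials against the registry automatically, with no human in the loop.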

At scale, delegation must be conditional, consent reversible, and accountability traceable. That means an agent can prove not only that it acted, but that it did so under specific authority, for a defined purpose, and within clear boundaries.
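These three requirements translate directly into mechanism. The sketch below is a simplified illustration under assumed names (not a specific standard): consent is reversible via revocation, and accountability is traceable because every action is logged with who acted, on whose behalf, and for what purpose.

```python
import time
from dataclasses import dataclass, field

@dataclass
class Delegation:
    """Illustrative delegation record: conditional, revocable, auditable."""
    principal: str
    agent: str
    purpose: str
    revoked: bool = False
    audit_log: list = field(default_factory=list)

    def revoke(self):
        # Consent is reversible: the principal can withdraw authority.
        self.revoked = True

    def act(self, action):
        # Accountability is traceable: each act records who, for whom, why.
        if self.revoked:
            raise PermissionError("delegation revoked")
        self.audit_log.append({
            "time": time.time(),
            "agent": self.agent,
            "on_behalf_of": self.principal,
            "purpose": self.purpose,
            "action": action,
        })

d = Delegation(principal="alice", agent="travel-agent-7",
               purpose="book summer trip")
d.act("compare_prices")
d.revoke()
# d.act("book_hotel") would now raise PermissionError
print(len(d.audit_log))  # one recorded, attributable action
```

The audit log is what lets an agent prove after the fact that it acted under specific authority, for a defined purpose, and within clear boundaries.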

The next phase of AI adoption will hinge not on speed, but on governance. As agents negotiate contracts, execute payments, and act across borders, every region will be forced to confront the same question: whose standards define legitimacy? Without a trusted identity backbone that transcends jurisdictions, the agent economy could fracture into incompatible silos, each enforcing its own definition of trust.

Trust as the new interface

As the interface disappears, the responsibility of trust only grows. We will need systems that can verifiably prove who acted, on whose behalf, and under what authority. Without this, delegation becomes a liability.

Different regions will take different paths. Europe may extend its data privacy frameworks into the governance of AI agents. Asia is already testing large-scale identity systems that could be adapted to agent ecosystems. Emerging markets in Africa have the opportunity to leapfrog, embedding verifiable identity into digital services from the start. These choices will shape not just how agents operate, but which regions lead in deploying them responsibly.

In the coming years, AI adoption will be defined less by how intelligent agents are, and more by how trustworthy they can prove themselves to be. In an era where the interface vanishes, identity becomes the foundation for every action taken on our behalf.


Fraser Edwards, Co-founder and CEO, is pioneering the development of the cheqd network that empowers businesses to create and leverage digital credentials while addressing the commercial challenges of self-sovereign identity (SSI). Under his leadership, cheqd is at the forefront of transforming digital identity solutions, making them more accessible and viable for organisations globally.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Dr. Ina Melny on Unsplash
