A recently exposed database containing 149 million stolen usernames and passwords was taken offline after a security researcher alerted the hosting provider. Headlines understandably highlighted its size. But the operational reality matters far more.

This was not a breach in the traditional sense, nor the result of a single system failure. It was the byproduct of an ecosystem that continuously harvests credentials from compromised endpoints. Infostealers do not target services – they target users. Credentials for consumer platforms, financial services and government systems appear side by side because attackers value correlation, reuse and lateral movement. The public exposure of such datasets is almost incidental. What’s more important is that defenders often treat these discoveries as isolated events rather than evidence of ongoing identity erosion. Taking a dataset offline does nothing to address the underlying issue, which is that many of these credentials remain valid and trusted long after they have been stolen.

That reality reshapes how we think about fraud and cyber defense – especially as Artificial Intelligence (AI) systems become autonomous actors inside enterprises.

Machines with agency, machines with access

AI agents have evolved from passive tools into autonomous systems capable of accessing data, triggering workflows, calling APIs and executing decisions continuously at machine speed. They operate persistently across cloud services, SaaS platforms and internal infrastructure.

This introduces a structural change to the threat model. Traditional security frameworks are designed around human users, who are relatively slow and constrained. AI agents break those assumptions entirely. Critically, they rarely introduce new access pathways; they inherit existing credentials, API keys, service accounts and secrets. In an environment where credential compromise is pervasive, this inheritance amplifies risk.

An attacker no longer needs to “break in.” Compromising or influencing an AI system with pre-existing access is enough. When such a system acts across multiple services in milliseconds, the window for detection and response collapses.

Why legacy security breaks against autonomous actors

Most organizations still rely on perimeter controls, endpoint protection and static identity systems. These remain important but are insufficient when the actor is autonomous.

Perimeter-based security assumes threats come from outside the network. AI agents routinely operate across multiple domains, cloud providers and third-party services where no clear perimeter exists. Endpoint controls protect devices, not distributed systems executing API calls at scale. Traditional identity and access management assumes predictable user behavior after authentication.

Autonomous agents can authenticate legitimately, request additional permissions, chain multiple services and execute individually authorized but collectively dangerous actions. Multi-Factor Authentication helps at login but cannot constrain what an authenticated entity does thereafter. In this environment, alert-based security is inherently reactive. The next generation of cybercrime will largely exploit excessive, inherited or poorly governed access.

From detection to control in an AI-driven environment

Defending autonomous systems requires treating identity as the control plane and behavior as the primary signal. Organizations need deep telemetry into actions at the session level: which systems are accessed, in what order and with what outcomes.
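
To make that concrete, here is a minimal sketch of what one session-level telemetry event could look like. The schema, field names and agent identifiers are hypothetical, chosen for illustration rather than taken from any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical session-level telemetry record for a single agent action.
# Field names are illustrative; real schemas vary by platform.
@dataclass
class AgentActionEvent:
    agent_id: str    # which AI agent acted
    session_id: str  # groups actions into one workflow run
    sequence: int    # order of the action within the session
    system: str      # target system or API, e.g. "crm", "payments"
    action: str      # what was done, e.g. "read", "export", "grant"
    outcome: str     # "success", "denied", "error"
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Example: the third action in a session, an export from the CRM.
event = AgentActionEvent(
    agent_id="invoice-agent-01",
    session_id="sess-7f3a",
    sequence=3,
    system="crm",
    action="export",
    outcome="success",
)
```

Capturing order and outcome, not just the fact of access, is what makes the later behavioral analysis possible.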

Behavioral analysis is essential. AI agents follow consistent workflows. Deviations, such as unexpected privilege requests, access to unrelated datasets or unusual system chaining, should immediately trigger high-confidence risk signals.
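
A minimal sketch of that idea, assuming a hypothetical allow-list of the (system, action) pairs an agent's approved workflow uses; real baselines would typically be learned from telemetry rather than hand-written:

```python
# Illustrative behavioral-baseline check: flag any action outside the
# systems and operations an agent's approved workflow is known to use.
APPROVED_WORKFLOW = {
    "invoice-agent-01": {
        ("crm", "read"),
        ("billing", "create"),
        ("email", "send"),
    }
}

def risk_signals(agent_id: str, session_events: list[tuple[str, str]]) -> list[str]:
    """Return risk signals for a session's (system, action) pairs."""
    allowed = APPROVED_WORKFLOW.get(agent_id, set())
    signals = []
    for system, action in session_events:
        if (system, action) not in allowed:
            signals.append(f"deviation: {agent_id} performed {action} on {system}")
        if action in {"grant", "escalate"}:
            signals.append(f"privilege request: {agent_id} attempted {action} on {system}")
    return signals

# An unrelated dataset access and a privilege request both raise signals.
print(risk_signals("invoice-agent-01",
                   [("crm", "read"), ("hr-records", "read"), ("iam", "escalate")]))
```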

Modern defense moves beyond detection toward automated enforcement. Outcome-based controls terminate or constrain activity in real time without waiting for human intervention. This is AI-versus-AI defense: autonomous systems monitored and constrained by equally autonomous controls.
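
Continuing the sketch above, enforcement might look like the following. The revoke_session and quarantine_agent functions are placeholders for whatever controls a given IAM or agent platform actually exposes:

```python
# Illustrative outcome-based enforcement: act on risk signals immediately,
# rather than queueing an alert for human review.
def revoke_session(session_id: str) -> None:
    print(f"[enforce] session {session_id} terminated")

def quarantine_agent(agent_id: str) -> None:
    print(f"[enforce] agent {agent_id} credentials suspended pending review")

def enforce(agent_id: str, session_id: str, signals: list[str]) -> None:
    if not signals:
        return
    # A privilege-escalation signal suspends the agent outright;
    # other deviations end the session but leave the identity intact.
    if any(s.startswith("privilege request") for s in signals):
        quarantine_agent(agent_id)
    revoke_session(session_id)

enforce("invoice-agent-01", "sess-7f3a",
        ["privilege request: invoice-agent-01 attempted escalate on iam"])
```

The design point is that the decision logic runs inline with the activity, so containment happens in the same time frame as the action itself.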

Fraud, identity and the UX myth

A commonly voiced concern is that stronger controls degrade user experience. Friction, however, typically occurs when security is bolted on, forcing users to navigate complex approval processes and manual reviews. Identity-first controls reduce that friction by making access more precise. Permissions that are task-based, time-bound and context-aware allow legitimate activity to flow while constraining risk.

For AI agents, convenience is secondary to control. Broad, persistent access is unnecessary; narrowly defined permissions aligned to specific tasks improve security without slowing operations. Automation and control are not mutually exclusive, and well-designed identity governance proves it.
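
As an illustration of what a task-based, time-bound, context-aware permission could look like in code (a sketch with hypothetical names and structure, not a specific product's API):

```python
from datetime import datetime, timedelta, timezone

# A grant tied to one task, one resource set, and a short expiry,
# replacing broad standing access.
def issue_grant(agent_id: str, task: str, resources: set[str],
                ttl_minutes: int = 15) -> dict:
    return {
        "agent_id": agent_id,
        "task": task,
        "resources": resources,
        "expires_at": datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes),
    }

def is_allowed(grant: dict, agent_id: str, resource: str) -> bool:
    # Context-aware check: right agent, right resource, still within the window.
    return (
        grant["agent_id"] == agent_id
        and resource in grant["resources"]
        and datetime.now(timezone.utc) < grant["expires_at"]
    )

grant = issue_grant("invoice-agent-01", "monthly-billing", {"billing", "crm"})
assert is_allowed(grant, "invoice-agent-01", "billing")         # permitted
assert not is_allowed(grant, "invoice-agent-01", "hr-records")  # out of scope
```

Because the grant expires automatically, legitimate work proceeds without approval queues, while a compromised agent's usefulness to an attacker decays within minutes.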

From best practice to baseline: APAC regulation

Across APAC, governments are converging on expectations for AI governance, even while pursuing different regulatory paths.

  • Japan relies on agile, non-binding frameworks that emphasize innovation, safety, transparency and human-centric principles. Compliance is voluntary, but organizations are expected to actively manage risks such as copyright, misinformation and security failures, particularly as AI autonomy grows.
  • Singapore has recently introduced structured guidance through its Model AI Governance Framework for Agentic AI, establishing expectations for explainability, accountability and human oversight. The framework is expected to be used increasingly by regulators, enterprises and auditors to assess high-impact use cases.
  • Australia adopts a risk-based enforcement model, extending existing privacy, cybersecurity and critical infrastructure obligations to automated decision-making. AI systems are now covered by incident reporting, breach notification and governance requirements with legal consequences.

Despite these differences, the conclusion is the same: organizations must demonstrate control, traceability and auditability over AI actions. Regulators want not only outcomes but visibility into how those outcomes were produced, including identities, permissions used, data sources accessed and decision sequences. Maintaining tamper-resistant logs and full audit trails is becoming a foundational requirement.
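
One common tamper-evidence technique is a hash-chained log, in which every entry commits to the hash of the previous one, so any retroactive edit breaks the chain. A minimal sketch, illustrative only:

```python
import hashlib
import json

# Each entry stores the previous entry's hash; recomputing the chain
# detects any modification of earlier records.
def append_entry(log: list[dict], record: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    log.append({"record": record, "prev": prev_hash,
                "hash": hashlib.sha256(payload.encode()).hexdigest()})

def verify(log: list[dict]) -> bool:
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps({"prev": prev_hash, "record": entry["record"]},
                             sort_keys=True)
        if (entry["prev"] != prev_hash
                or entry["hash"] != hashlib.sha256(payload.encode()).hexdigest()):
            return False
        prev_hash = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"agent": "invoice-agent-01", "action": "export", "system": "crm"})
append_entry(log, {"agent": "invoice-agent-01", "action": "send", "system": "email"})
assert verify(log)
log[0]["record"]["action"] = "delete"  # tampering...
assert not verify(log)                 # ...is detected
```

Production systems would add signing and external anchoring, but the principle is the same: the log itself proves whether the recorded decision sequence is intact.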

Leading in the age of autonomous AI

For enterprise and public sector leaders, the path forward is clear, even if implementation seems complex at the outset.

  • Recognize AI agents as privileged digital identities. Inventory, classify and govern them like high-impact human users.
  • Eliminate standing access wherever possible. Use task-specific, time-limited permissions that expire automatically.
  • Monitor behavior, not just authentication. Session-level visibility is essential for detecting misuse after login.
  • Enforce outcomes automatically. Controls must act in real time without waiting for alerts to be reviewed.
  • Demand auditability by design. If an AI system cannot explain what it accessed and why, it should not be trusted with sensitive operations.

Control is the foundation of trust

AI agents offer extraordinary potential: improving efficiency, reducing human error and unlocking new capabilities. But autonomy without control is risk accumulation. Organizations that embed security from the outset, treating identity governance as a prerequisite, will be the ones that succeed. Access control, behavioral monitoring and auditability should not be viewed as constraints but as the foundations for safe, scalable adoption. If organizations fail to control what AI can access, they will fail to control what it can do, and no amount of reactive incident response can compensate.


Shane Barney is Chief Information Security Officer (CISO) at Keeper Security. He joined Keeper in May 2025, bringing with him more than two decades of cybersecurity leadership in both the public and private sectors. Prior to joining Keeper, Mr. Barney dedicated 20 years to the Department of Homeland Security (DHS), serving within U.S. Citizenship and Immigration Services (USCIS) as both a contractor and federal employee. He began his career at USCIS in the Office of Security and Integrity (OSI), where he played a pivotal role in standing up the agency's national operations center and building DHS's first classified system. He also collaborated closely with the intelligence and law enforcement communities to strengthen national security efforts.

In 2014, Mr. Barney transitioned to the USCIS Office of Information Technology as Chief of Cyber Intelligence, where he established the agency’s first threat intelligence and insider threat programs. He was promoted to Deputy CISO in 2016 and named CISO in 2018.

As CISO of USCIS, Mr. Barney led transformative cybersecurity initiatives that prioritized identity security, risk intelligence and operational agility. Under his leadership, USCIS became the first federal agency to achieve 100% single sign-on across all systems. He spearheaded the deployment of role-based access across the agency, centralized secrets management and automated certificate lifecycle management across the enterprise. He also modernized the agency's approach to risk management, shifting to a more adaptive, mission-driven model.

Mr. Barney holds a Bachelor’s degree and two Master’s degrees from the University of Vermont. He currently resides in Maryland, just outside Washington, D.C.
