AI is closing technical vulnerabilities at a speed and scale that would have seemed impossible just a few years ago. For security teams, that should be good news. But it also comes with a less comfortable implication: as AI systematically patches the technical layer, adversaries are accelerating their focus on the one layer that cannot be patched — people, and increasingly, the AI agents acting on their behalf.
Across APAC, enterprise AI adoption is accelerating faster than the governance frameworks designed to manage it. Organizations are deploying agents that retrieve information, summarize documents, trigger workflows, and move data across systems, often without clear oversight of what those agents are doing or whose data they are touching.
Most enterprises are not prepared for this shift. While many have moved quickly to adopt AI, far fewer have established the controls needed to understand how these systems interact with sensitive data. The result is a growing visibility gap that traditional security models were never designed to address.
The human-AI interface is now a primary attack surface
For years, enterprise data security has focused on human behavior. An employee forwards a document to a personal account. A departing staff member downloads files before leaving. Someone pastes sensitive information into an external tool without realizing the implications. These scenarios have shaped how organizations think about data loss: as something driven by individual actions, whether careless or intentional.
Increasingly, however, the risk is no longer coming from an employee sitting at a keyboard.
An employee uses a chatbot to summarize internal reports, unknowingly exposing confidential information outside the organization. A team deploys an automation that pulls data from a production database without formal review. A workflow designed to improve efficiency connects multiple systems in ways that were never intended. None of these scenarios requires malicious intent. In many cases, they are simply part of getting work done.
AI agents were built by humans, operate under human credentials, and are deployed to perform tasks humans would otherwise do. They make mistakes just as humans do. What is different is the scale and speed at which those mistakes — and misuse scenarios — can unfold.
An agent with access to internal databases, customer records, and cloud storage does not pause to question whether it should share something. It acts.
The question organizations across the region are not yet asking clearly enough is this: who governs that action, and what happens when something goes wrong?
Now multiply that across every agent an organization has deployed, operating continuously, across every system it has been granted access to. That is where the new data risk lives.
The problem organizations can’t see
Traditional security tools tend to focus on defined environments such as endpoints, networks, and approved applications. They are built to monitor known behaviors within structured systems.
AI-driven workflows do not fit neatly into those boundaries.
Data now moves through browser-based tools, third-party integrations, user-built automations, and increasingly, AI systems that connect multiple services together. These interactions can span cloud platforms, SaaS applications, and internal databases, often within a single task.
In this environment, even answering basic security questions becomes difficult. Organizations may not know what data an AI agent has accessed, where that information has been transferred, or whether sensitive content has been exposed to external platforms.
Without clear visibility, organizations are left reacting to incidents after the fact rather than understanding risk as it emerges.
Why existing approaches fall short
Most traditional security strategies were designed around three assumptions: users are the primary source of risk, systems operate within defined boundaries, and data movement can be monitored at key control points.
AI disrupts all three assumptions.
Actions are no longer initiated solely by users. Systems are no longer confined to clearly defined environments. Data no longer moves in linear, observable paths.
As a result, reactive security approaches are becoming increasingly ineffective. By the time an issue is detected, the data may already have been exposed, replicated, or integrated into other workflows.
This is particularly challenging in environments where employees are adopting AI tools faster than organizations can establish formal governance policies. Shadow AI is no longer an isolated phenomenon. In many enterprises, it is rapidly becoming part of day-to-day operations.
Rethinking data security for the AI era
To manage this new reality, organizations need to rethink how they approach data risk.
Visibility must extend beyond user activity to include how AI systems interact with data across environments. It is no longer enough to know who has access. Organizations also need to understand how that access is being used and how it translates into downstream actions.
Context matters as well. The same behavior can carry very different levels of risk depending on the user, the system, and the type of data involved. Security strategies need to account for these nuances rather than applying static rules uniformly across all activity.
Speed matters too. In fast-moving environments, delays between detection and response can significantly increase exposure. Organizations need the ability to identify and respond to risk as it unfolds, not after the damage has already occurred.
Governance must evolve alongside AI adoption. That means treating AI agents with the same level of scrutiny applied to human employees. Organizations should define which tools are sanctioned, review how new agents are introduced into the environment, and ensure those agents have access only to the data they genuinely require.
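In concrete terms, that scrutiny can start with something as simple as an explicit registry of sanctioned agents and the data each one is permitted to touch, with everything else denied by default. The sketch below is purely illustrative; the agent names and data scopes are hypothetical, not a reference to any specific product or vendor:

```python
# Minimal sketch of least-privilege access checks for AI agents.
# All agent identifiers and data scopes here are hypothetical examples.

SANCTIONED_AGENTS = {
    # agent id         -> data scopes it genuinely requires
    "report-summarizer": {"internal_reports"},
    "ticket-triage":     {"support_tickets"},
}

def is_allowed(agent_id: str, scope: str) -> bool:
    """Deny by default: unknown agents and unlisted scopes are blocked."""
    return scope in SANCTIONED_AGENTS.get(agent_id, set())

# A sanctioned agent acting within its scope is permitted...
assert is_allowed("report-summarizer", "internal_reports")
# ...but the same agent reaching for customer records is not,
assert not is_allowed("report-summarizer", "customer_records")
# and an unreviewed, shadow-deployed agent is blocked entirely.
assert not is_allowed("quick-automation", "internal_reports")
```

The point is not the code but the posture it encodes: access is reviewed and granted per agent, scoped to what the agent genuinely requires, and anything not explicitly sanctioned is denied by default.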
Organizations that get ahead of this now will be better positioned as expectations from regulators, customers, and business stakeholders continue to sharpen.
People are not the vulnerability
There is a temptation, when confronted with a new category of risk, to frame the problem as one of human failure: people using the wrong tools, clicking the wrong links, or deploying the wrong agents.
That framing misses the point.
People are not the vulnerability to be patched. They are a core part of the security ecosystem, provided the infrastructure is designed to support them rather than work around them.
The same applies to the AI agents employees now operate on their behalf. Get the human layer right — the behaviors, governance, and guardrails — and organizations will be far better positioned to manage not just individual human risk, but also the growing ecosystem of agents acting in their name.
The race to adopt AI is moving faster than the race to govern it. The organizations that close that gap first — not by slowing AI adoption, but by building the visibility and governance to match it — will be the ones that turn AI from a liability into a genuine competitive advantage.

Nicky Choo is Vice President and General Manager for Asia Pacific (APAC) at Mimecast, where he leads the company’s regional strategy, operations, go-to-market execution, and customer success. Based in Singapore, he is responsible for driving Mimecast’s growth and expanding its footprint across the APAC region.
Choo brings extensive experience in cybersecurity and enterprise technology, with a strong track record of helping organizations protect against evolving cyber threats. Prior to joining Mimecast, he led APAC operations at Devo and held senior leadership roles at Pegasystems and IBM.
At Mimecast, Choo is focused on advancing the company’s mission to reduce human risk and strengthen cyber resilience, particularly as organizations across APAC face increasingly sophisticated, AI-driven threats. He is also spearheading the company’s regional expansion, including investments in Singapore such as a new ASEAN headquarters and an upcoming data center to support data sovereignty, threat protection, and AI innovation.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

