Southeast Asia’s businesses are moving quickly from AI experimentation to AI delegation. The next challenge is making sure autonomous systems do not become the weakest link in enterprise security.


For many Southeast Asian businesses, the promise of AI agents is easy to understand. A bank wants to resolve customer issues faster. A hospital wants to reduce administrative load. An e-commerce platform wants to automate repetitive inquiries from sellers. A logistics company wants systems that can coordinate routes, documents, and customer updates with less manual intervention.

The attraction is not simply that AI can answer questions. It is that AI agents can interpret a request, decide what needs to happen next, and in some cases take action across multiple systems.

That is where the security conversation needs to change.

For years, cybersecurity teams have focused on protecting users, devices, networks, cloud environments, and applications. AI agents introduce a different kind of risk because they do not behave like traditional software. They work through natural language, context, probability, and connected tools. They may read documents, summarize conversations, call APIs, trigger workflows, or pass information between systems.

In Southeast Asia, this matters because the region’s digital economy has reached a new level of scale. Google, Temasek, and Bain estimate that Southeast Asia’s digital economy will exceed $300 billion in gross merchandise value in 2025, with revenue reaching $135 billion. The same research notes that consumer interest in AI topics in the region is three times higher than the global average.

That combination of digital scale and AI enthusiasm creates a natural opening for agents. But it also means that the region could move from adoption to exposure very quickly if governance does not keep pace.

The risk is not that AI agents are inherently unsafe. The risk is that they are often given access before their boundaries are fully understood.

A traditional application usually follows predefined logic. A user clicks a button, submits a form, or enters structured data. Security teams can validate inputs, assign permissions, monitor transactions, and investigate logs with a relatively clear sense of what happened.

AI agents complicate that model. Their inputs are often unstructured. Their behavior can vary depending on context. Their outputs may influence another system. Their reasoning is not always easy to reconstruct. In practical terms, this means a security team may be able to see what an agent did, but not always why it did it.

This is why prompt injection has become one of the most important early warning signs. Prompt injection does not necessarily “break into” a system in the traditional sense. Instead, it manipulates the instructions or context that an AI system relies on. A malicious email, document, website, or user message can attempt to override the agent’s intended behavior.

For an AI agent connected to enterprise tools, the consequences can be serious. A customer support agent might reveal internal notes. A workflow agent might retrieve data that should have stayed restricted. A coding assistant might produce insecure code. A procurement agent might act on misleading instructions embedded in a document.

What makes this difficult is that the malicious input may not look like an attack. It may look like ordinary language.

OpenAI has described prompt injection as a “frontier security challenge” that is expected to evolve, comparing the need for public understanding to earlier waves of computer viruses. For enterprises, the implication is clear: this is not a risk that can be solved once and forgotten. It has to be managed continuously.

Data leakage is another concern. AI agents become more useful when they have more context, but context is also where sensitive information often lives. Customer records, internal documents, chat histories, account details, medical information, contracts, and operational data may all be useful for completing a task. They may also be inappropriate for a given user, conversation, or workflow.

Many incidents will not appear to be a dramatic breach. They may happen because an agent was given access to too much information, pulled in too much context, or misunderstood the boundaries of a request. In a region with rapidly growing digital services and varying levels of regulatory maturity, that kind of quiet leakage can be especially damaging.

The numbers suggest that organizations are still underprepared. IBM’s 2025 Cost of a Data Breach Report found that 13 percent of organizations experienced breaches involving AI models or applications. Among those incidents, 97 percent occurred at organizations that lacked proper AI access controls, while 63 percent had no AI governance policy or were still developing one. IBM also put the global average cost of a data breach at $4.44 million.

For Southeast Asian executives, the lesson is not to slow AI adoption to a halt. The more practical lesson is to stop treating AI agents as ordinary productivity tools.

An agent that can only draft text carries one level of risk. An agent that can access customer records, send messages, update systems, or trigger transactions carries another. The more autonomy an agent has, the more it should be treated like a privileged actor inside the organization.

That means asking different questions before deployment. What systems can this agent access? What data can it retrieve? What actions can it take without human approval? How does it behave when instructions conflict? What happens when the input is ambiguous, adversarial, or incomplete? Can the organization detect unusual behavior across sessions, not just individual outputs?

At Agora, this is how we think about the security of real-time AI systems: the focus cannot be only on whether the infrastructure is secure, but also on whether the system behaves within clear boundaries under real-world conditions. That requires limiting access, defining allowable actions, monitoring patterns over time, and building human checkpoints into higher-risk workflows.

The same principle applies across sectors.

In banking, an AI agent may help customers understand products or resolve simple issues, but account changes, loan approvals, or fraud-related decisions should require stronger controls. In healthcare, an agent may help with scheduling or documentation, but clinical judgment and sensitive patient data need strict boundaries. In e-commerce, an agent may support sellers or buyers, but refunds, disputes, and account actions need auditability. In government services, AI agents may improve responsiveness, but public agencies must ensure transparency, accessibility, and recourse.

This is also why regional governance matters. ASEAN’s Expanded Guide on AI Governance and Ethics for Generative AI recommends an ASEAN-wide approach that balances innovation, safety, economic growth, and regional harmonization. That balance is important. Southeast Asia should not copy regulatory models without adapting them to local needs, languages, infrastructure gaps, and institutional capacity.

Responsible adoption does not mean avoiding AI agents. It means deploying them with a realistic understanding of what makes them different. Enterprises need security models designed for systems that interpret language, use context, and take action. They need access controls for non-human actors. They need monitoring to detect behavioral drift. They need to be tested against prompt injection, adversarial inputs, and data leakage. They need clear escalation paths when the system is uncertain.

Most importantly, they need to accept that AI security is not only a technical issue. It is a governance issue, a business continuity issue, and increasingly a trust issue.

Southeast Asia has an opportunity to adopt AI agents in a way that supports growth without normalizing avoidable risk. The region’s businesses have moved quickly before, from mobile payments to digital commerce to online services. The next phase will require the same energy, but more caution.

AI agents will become part of how enterprises operate. Some will answer questions. Some will coordinate work. Some will act on behalf of users, employees, and organizations. The question is not whether they will become more capable. They will.

The harder question is whether they will remain accountable.

That is where the next cybersecurity battleground lies: not only in defending systems from outside attackers, but in ensuring that autonomous systems inside the enterprise do exactly what they are supposed to do, and nothing more.


Ramana Kapavarapu is Chief Information Security Officer at Agora.

As a Cybersecurity & Risk Management leader, I have over 20 years of experience driving security strategy, compliance, and risk mitigation for global enterprises and SaaS companies. With expertise spanning CISO, Deputy CISO, and Cyber Risk leadership roles, I specialize in building and scaling security programs that align with business goals while ensuring resilience against emerging threats.

I have led enterprise-wide risk management frameworks, achieving ISO 27001, PCI DSS, HIPAA, GDPR, FedRAMP, and ISO 42001 compliance. My experience in all three lines of defense (1st, 2nd, and 3rd) enables me to take a holistic approach to risk governance, incident response, and security operations. I thrive in fast-paced SaaS environments, working cross-functionally to embed security-by-design principles into engineering, product development, and cloud infrastructure.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Dmitry Khotsinskiy on Unsplash

Crypto in times of crisis: At home and abroad