Speed is the new competitive currency, but unprotected velocity is a liability. Nowhere is this more apparent than in the race to adopt agentic AI – AI capable of autonomous decision-making based on real-time data.

Across Asia, organizations are racing to deploy AI to drive efficiency and unlock competitive advantage. But in the pursuit of speed, one area is falling dangerously behind: security. As AI systems become more autonomous and embedded in decision-making processes, security cannot be treated as an afterthought – it must be a foundational requirement from the outset.

Security struggles to keep up with AI advancements

The problem isn’t just speed – it’s complexity. Unlike traditional software, agentic AI isn’t static; it can evolve as the model learns and adapts. It can interact with external APIs, access internal datasets, or even collaborate with other models, often in ways its developers may not fully anticipate. Once deployed, agentic AI systems may amplify risks through unsupervised actions, unexpected data access, or interactions that legacy security frameworks aren’t equipped to assess. Ultimately, what makes AI powerful is also what makes it unpredictable.

The real-world consequences of insecure AI

AI failures don’t just cause technical disruptions. They erode trust, invite regulatory scrutiny, and damage reputation. As agentic AI powers more customer-facing services and national infrastructure, the stakes rise. In Singapore, GovTech developed a multi-agent system prototype to extract deeper insights from customer relationship management (CRM) data. This setup allowed the team to glean contextualized, human-validated insights that previously required significant manual effort to produce. While such deployments have the potential to improve service delivery and enhance operational efficiency, a single oversight could still lead to regulatory complications or reputational damage.

Adding to the pressure is the growing attention on AI from governments and regulatory bodies, which are tightening expectations around privacy, safety, and accountability. Companies that fall behind on security face a greater likelihood of breaches, fines, and sanctions.

Ironically, lax security can also stifle innovation. Teams preoccupied with patching issues don’t have the bandwidth to develop new capabilities. Over time, a reactive posture widens the gap between innovation and security, leaving organizations perpetually playing catch-up.

Prioritizing security for agentic AI

Security should not be an afterthought addressed late in the development cycle. Rather, it must be a core principle integrated into every stage of the AI lifecycle. Deploying secure agentic AI systems requires rethinking how they are built, governed, and maintained. Key priorities include:

  • Start with security as a core requirement. Security considerations must be built into the initial planning and design phases of agentic AI projects. This requires new architectural norms, such as adversarial testing and defined trust boundaries that prevent unauthorized interactions.
  • Keep humans in the loop. While agentic AI can process and analyze data at scale, it still requires human judgment for ambiguous or novel situations. Critical decisions, especially in sectors like finance, healthcare, and national infrastructure, must involve human oversight to catch edge cases and ensure ethical outcomes (a minimal sketch of such an approval gate follows this list).
  • Tailor defenses to agentic AI. Traditional security tools often miss the threats unique to agentic AI. In a model inversion attack, for example, bad actors reverse-engineer an AI model to uncover the private data it was trained on. In data poisoning, they tamper with the model’s training data, subtly corrupting its future behavior without detection. Organizations should adopt tools like machine learning bills of materials (ML-BOMs), robust audit trails, and AI behavior monitoring to protect against such risks.
  • Evolve security with the technology. As AI models and threats evolve, so must security protocols. Organizations need adaptive frameworks, regular reviews, and the agility to respond to new vulnerabilities as they emerge.
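To make the human-in-the-loop principle concrete, here is a minimal illustrative sketch in Python. All names, thresholds, and the risk-scoring idea (such as Action, requires_human_approval, and RISK_THRESHOLD) are assumptions for illustration, not a reference to any specific product or the GovTech prototype described above:

```python
from dataclasses import dataclass

# Hypothetical example: gate an agent's actions behind human review
# when they exceed a risk threshold. Names and values are illustrative.

@dataclass
class Action:
    name: str          # e.g. "issue_refund"
    target: str        # e.g. a customer account ID
    risk_score: float  # 0.0 (benign) to 1.0 (critical), from a risk model

RISK_THRESHOLD = 0.7  # assumed cutoff: actions above this need sign-off

def requires_human_approval(action: Action) -> bool:
    """Decide whether an action is risky enough to pause for review."""
    return action.risk_score >= RISK_THRESHOLD

def human_review(action: Action) -> bool:
    """Stand-in for a real review queue; here we simply prompt on stdin."""
    answer = input(f"Approve {action.name} on {action.target}? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: Action) -> None:
    print(f"Executing {action.name} on {action.target}")

def run_agent_action(action: Action) -> None:
    # Every branch prints its outcome, so an audit trail would capture
    # both autonomous and human-approved decisions.
    if requires_human_approval(action):
        if human_review(action):
            execute(action)
        else:
            print(f"Blocked {action.name}: reviewer declined")
    else:
        execute(action)

if __name__ == "__main__":
    run_agent_action(Action("summarize_ticket", "case-123", risk_score=0.2))
    run_agent_action(Action("issue_refund", "acct-456", risk_score=0.9))
```

The design choice here is the important part: low-risk actions proceed autonomously so the agent retains its speed advantage, while high-risk actions are paused for explicit approval, keeping humans in control exactly where the consequences of an error are greatest.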

Thankfully, industry bodies are increasingly taking the lead in shaping responsible agentic AI development and deployment. In Singapore, GovTech’s AI Practice Group recently released an Agentic AI Primer, which outlines practical governance principles for securing and overseeing autonomous AI systems. It emphasizes auditable autonomy, human oversight, and adaptive safeguards – all critical foundations for building trust in agentic AI from the outset.

As AI becomes more autonomous, trust must be a deliberate outcome of design, not an afterthought. Organizations that embed resilience, accountability, and security into their AI initiatives from day one will not only stay compliant but also earn the confidence needed to innovate at scale and remain competitive.


Gareth Cox is Vice President, Asia Pacific & Japan at Exabeam. He leads the sales and go-to-market strategy for Exabeam across Asia Pacific and Japan (APJ). He helps organizations in Australia, New Zealand, Japan, and Southeast Asia defend against evolving cyber threats by augmenting their security operations with AI and automation.

Gareth is a valued advisor in cybersecurity and technology, known for building trusted relationships with clients and partners across APJ and consistently delivering exceptional service. A seasoned sales leader with over 25 years of experience, he has a proven track record of driving growth for top technology companies. Since joining Exabeam in 2018, Gareth has been instrumental in the company’s expansion in APJ, overseeing a more than tenfold growth in its customer base across the region.

Before joining Exabeam, Gareth served as the Regional Director of Cloud Security for APJ at Skyhigh Networks. There, he launched the company’s Cloud Access Security Broker business, successfully deploying solutions for Fortune 500 clients and establishing a robust partner ecosystem. Skyhigh Networks was later acquired by McAfee in January 2018. He also previously managed the Financial Services business for Check Point Software Technologies.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Marek Piwnicki on Unsplash
