The recent insights from Singapore’s Global AI Assurance Pilot underscore a critical juncture in AI operationalisation: the burgeoning adoption of Agentic AI and the imperative to define robust risk frameworks and testing protocols for these increasingly autonomous systems. While the promise of Agentic AI — with its capacity for proactive decision-making and goal-oriented execution — is immense, organisations must approach its integration with a keen awareness of inherent risks and a commitment to advanced safety measures.
The current hesitancy surrounding AI adoption, even for mature solutions like predictive credit risk models or computer vision for health records, often stems from a fundamental “semantic confusion” regarding the evolving AI landscape. This confusion is amplified with the rise of Agentic AI, where systems are designed to operate with greater autonomy, leading to heightened concerns about predictability, control, and accountability.
Achieving semantic clarity: The foundation for Agentic AI safety
At its core, Artificial Intelligence, including Agentic AI, is grounded in probabilistic and statistical modelling. These models, by their very nature, possess a margin of error due to training on imperfect or incomplete data. When an AI system becomes agentic, capable of initiating actions and making decisions independently, the implications of these inherent errors become more profound. It is impossible to capture every conceivable future scenario within historical training data, making occasional unpredictable behaviour, malfunction, or deviation from intended outcomes a persistent challenge. This necessitates not only rigorous data quality management and continuous model validation but also robust governance frameworks specifically designed to ensure Agentic AI operates reliably, ethically, and in alignment with organisational objectives, even in autonomous contexts.
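As a concrete illustration, the sketch below (in Python, with a hypothetical `model` object and illustrative thresholds) shows one way continuous model validation might be wired in: the live error rate of a deployed model is compared against its release-time baseline, and autonomous actions are paused when the margin of error drifts beyond the accepted tolerance.

```python
# Minimal sketch of continuous model validation: compare a deployed model's
# recent error rate against its release-time baseline and flag drift.
# The `model` object and thresholds are illustrative placeholders.
import numpy as np

BASELINE_ERROR = 0.08    # error rate measured and accepted at release time
DRIFT_TOLERANCE = 0.03   # how far live error may exceed the baseline

def validate_model(model, features: np.ndarray, labels: np.ndarray) -> bool:
    """Return True if the model still performs within its accepted margin."""
    predictions = model.predict(features)
    live_error = float(np.mean(predictions != labels))
    if live_error > BASELINE_ERROR + DRIFT_TOLERANCE:
        # In production this would raise an alert and pause autonomous actions.
        print(f"Drift detected: live error {live_error:.3f} vs baseline {BASELINE_ERROR:.3f}")
        return False
    return True
```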
Certain AI applications, particularly those involving agentic capabilities, are more susceptible to specific risk categories:
- Inaccuracies: Can lead to faulty KPI measurements and incorrect reporting, potentially cascading into misguided autonomous actions.
- Inference errors: Spurious correlations and incorrect cause-effect relationships can lead to flawed autonomous decisions, data misinterpretation, and a misunderstanding of business drivers.
- Prediction errors: Wide forecast error margins and costly misclassifications in agentic systems can result in significant operational disruptions and unintended consequences.
- Bias: Agentic AI algorithms can systematically perpetuate and amplify biases present in data, potentially leading to discriminatory autonomous actions and widespread impact on customers.
- Hallucinations: Large Language Models (LLMs) forming the basis of many Agentic AI systems can generate non-existent details and present them as facts, misleading customers and potentially leading to harmful autonomous interventions.
- Privacy & security violations: Concerns are amplified with Agentic AI, encompassing non-permissible data usage, data leakage to third parties or external AI models through autonomous interactions, and regulatory or compliance breaches from independent agent actions.
- Loss of control & unintended autonomy: A critical safety concern unique to Agentic AI, where the system deviates from its intended purpose or operates in ways not foreseen by its developers, leading to unforeseen or undesirable outcomes.
Minimising risks in Agentic AI adoption
Singapore IMDA’s planned Testing Starter Kit for GenAI applications will be instrumental in helping businesses deploy AI safely and with confidence. For Agentic AI, further targeted measures are crucial:
Data quality assurance for autonomous systems: Ensuring data integrity is paramount to minimise errors and hallucinations in Agentic AI. This demands accurate data capture, high-quality processing, and reliable, transparent pipelines. For agentic systems operating in sensitive domains like healthcare or finance, automated checks must guarantee data quality before model training and during real-time data ingestion for autonomous operations.
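A minimal sketch of such automated checks is shown below, assuming records arrive as a pandas DataFrame; the column names and thresholds are illustrative, not prescriptive. The same gate can run before model training and again on each real-time batch, blocking autonomous downstream actions whenever issues are returned.

```python
# A minimal sketch of automated data quality gates for an agentic pipeline.
# REQUIRED_COLUMNS and MAX_NULL_FRACTION are illustrative assumptions.
import pandas as pd

REQUIRED_COLUMNS = {"record_id", "timestamp", "amount"}
MAX_NULL_FRACTION = 0.01

def check_data_quality(df: pd.DataFrame) -> list:
    """Return a list of quality issues; an empty list means the batch may proceed."""
    issues = []
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        issues.append(f"missing columns: {sorted(missing)}")
    if "record_id" in df.columns and df["record_id"].duplicated().any():
        issues.append("duplicate record_id values")
    present = list(REQUIRED_COLUMNS & set(df.columns))
    if present and df[present].isna().mean().max() > MAX_NULL_FRACTION:
        issues.append("null fraction exceeds threshold")
    return issues

# Example batch with a duplicate ID and a missing amount:
batch = pd.DataFrame({"record_id": [1, 2, 2],
                      "timestamp": ["2024-01-01"] * 3,
                      "amount": [10.0, None, 5.0]})
print(check_data_quality(batch))
# ['duplicate record_id values', 'null fraction exceeds threshold']
```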
Robust analysis and modelling frameworks for agentic capabilities: Careful selection of analytical methods, modelling frameworks, and features is vital to enhance the accuracy and transparency of Agentic AI models. This includes frameworks designed to interpret and explain the decision-making processes of autonomous agents.
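As one example of such interpretability tooling, the sketch below uses scikit-learn's permutation importance on a stand-in classifier to surface which features actually drive the underlying model's decisions; the model and dataset here are placeholders rather than any particular production system.

```python
# Sketch: inspect which features drive a model's decisions via permutation importance.
# The synthetic dataset and RandomForestClassifier are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    # Features with near-zero importance contribute little; unexpectedly dominant
    # features can signal spurious correlations worth investigating.
    print(f"feature_{i}: {importance:.3f}")
```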
Optimised Agentic AI workflow: Hallucinations and unintended actions can be significantly reduced by consciously optimising each step in the Agentic AI workflow. This encompasses data processing, task-specific prompt engineering, Retrieval-Augmented Generation (RAG) for grounding agent knowledge, judicious LLM selection, domain-specific fine-tuning to align with organisational values, and rigorous evaluation methodologies that account for autonomous behaviour. For instance, an AI agent for employee onboarding can be optimised using a RAG workflow, leveraging employee handbooks and SOPs, combined with prompts specific to new employees and their typical onboarding questions, ensuring the agent provides accurate and relevant information autonomously.
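The sketch below illustrates, in simplified form, how such an onboarding RAG workflow might be wired together. The handbook snippets, the keyword-overlap retriever, and the `call_llm` stub are all illustrative placeholders; a production system would use embedding-based retrieval and the organisation's chosen, possibly fine-tuned, model.

```python
# A minimal RAG sketch for an onboarding agent: retrieve the most relevant
# handbook passages, then ground the prompt in them before calling the LLM.
HANDBOOK = [
    "New employees must complete security training within the first week.",
    "Laptops are issued by IT on day one; requests go through the IT portal.",
    "Annual leave accrues at 1.5 days per month during the first year.",
]

def retrieve(question: str, documents: list, top_k: int = 2) -> list:
    """Naive keyword-overlap retrieval; real systems would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def call_llm(prompt: str) -> str:
    # Placeholder: swap in the organisation's selected (ideally governed) LLM client.
    return "[LLM response grounded in the retrieved context]"

def answer_onboarding_question(question: str) -> str:
    context = "\n".join(retrieve(question, HANDBOOK))
    prompt = ("You are an onboarding assistant. Answer ONLY from the context below.\n"
              f"Context:\n{context}\n\nQuestion: {question}")
    return call_llm(prompt)

print(answer_onboarding_question("When do I get my laptop?"))
```

Grounding the prompt in retrieved handbook text, rather than relying on the model's general knowledge, is what keeps the agent's autonomous answers anchored to organisational policy.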
Input and output guardrails for autonomous agents: Implementing stringent controls on Agentic AI inputs, outputs, and customer-facing actions is essential to reduce bias and hallucinations, and prevent unintended autonomous actions. Input safeguards should include robust content filters and anonymisation. Output guardrails are even more critical for agentic systems, requiring confidence thresholds, bias detection, and “human-in-the-loop” intervention points to ensure fair, reliable, and safe autonomous responses.
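A minimal sketch of an output guardrail follows: responses that fall below a confidence threshold or contain flagged phrases are escalated to a human reviewer rather than released autonomously. The threshold and blocklist shown are illustrative assumptions.

```python
# Sketch of an output guardrail with a human-in-the-loop escalation path.
# CONFIDENCE_THRESHOLD and BLOCKED_TERMS are illustrative, not prescriptive.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85
BLOCKED_TERMS = {"guaranteed returns", "diagnosis confirmed"}

@dataclass
class AgentOutput:
    text: str
    confidence: float

def apply_output_guardrail(output: AgentOutput) -> str:
    flagged = any(term in output.text.lower() for term in BLOCKED_TERMS)
    if flagged or output.confidence < CONFIDENCE_THRESHOLD:
        # Human-in-the-loop: queue for review instead of acting autonomously.
        return "escalate_to_human"
    return "release"

print(apply_output_guardrail(AgentOutput("Your leave balance is 12 days.", 0.93)))  # release
print(apply_output_guardrail(AgentOutput("This plan offers guaranteed returns.", 0.97)))  # escalate_to_human
```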
Infrastructure and security protocols for agentic deployment: Selecting the appropriate infrastructure configuration (on-premises, cloud, or private cloud) is central to addressing privacy and security risks. For Agentic AI, data encryption, self-hosted LLMs, stringent data anonymisation, and filters that screen data before it reaches cloud LLM APIs are crucial to protect sensitive information and ensure data privacy, especially as agents interact with diverse data sources.
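The snippet below sketches one such screening filter: obvious identifiers are redacted so that only anonymised text leaves the trusted boundary for an external LLM API. The regex patterns are illustrative and far from an exhaustive PII catalogue.

```python
# Sketch: redact obvious identifiers before any text is sent to a cloud LLM API.
# The patterns below are illustrative examples, not a complete PII taxonomy.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
    "nric": re.compile(r"\b[STFG]\d{7}[A-Z]\b"),  # Singapore NRIC-style IDs
}

def redact(text: str) -> str:
    """Replace detected identifiers with typed placeholders before any API call."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(redact("Contact Tan at tan@example.com or +65 9123 4567, NRIC S1234567D."))
# Contact Tan at [EMAIL] or [PHONE], NRIC [NRIC].
```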
Data and model governance for Agentic AI: Robust data and model governance provides the structure and discipline to manage all types of AI risk, particularly the increased complexity introduced by agentic capabilities. This necessitates combining people, processes, and technology across multiple organisational functions. Financial services and healthcare organisations are maturing their governance practices by treating data and AI, especially agentic AI, as products. This involves establishing data and model stewardship with a focus on agent behaviour, continuously monitoring data flows and Agentic AI model parameters, completing formal reviews and sign-offs before releasing data and Agentic AI products, and designing “human-in-the-loop” AI processes for regular business operations, allowing for oversight and intervention in autonomous decision-making.
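As a simple illustration of treating an Agentic AI model as a governed product, the sketch below models a release record that captures stewardship and required sign-offs before an agent is approved for release; the roles, field names, and reviewers are illustrative.

```python
# Sketch: a release record for an Agentic AI "product", approved only once the
# required governance roles have signed off. All names and roles are illustrative.
from dataclasses import dataclass, field

@dataclass
class AgentReleaseRecord:
    model_name: str
    version: str
    data_steward: str
    model_steward: str
    intended_use: str
    review_signoffs: dict = field(default_factory=dict)  # role -> reviewer name
    approved: bool = False

    def sign_off(self, role: str, reviewer: str) -> None:
        self.review_signoffs[role] = reviewer
        # Release only once risk, compliance, and business owners have all signed.
        self.approved = {"risk", "compliance", "business"} <= set(self.review_signoffs)

record = AgentReleaseRecord("onboarding-agent", "1.2.0", "A. Lim", "B. Chen",
                            "Answer new-hire policy questions")
record.sign_off("risk", "C. Tan")
record.sign_off("compliance", "D. Ng")
record.sign_off("business", "E. Koh")
print(record.approved)  # True only after all required sign-offs exist
```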
Maximising Agentic AI’s benefits through a systematic safety framework
Agentic AI presents an exciting array of opportunities for enhanced efficiency and innovation, alongside inherent pitfalls that are amplified by increased autonomy. A prudent approach involves a systematic framework for weighing the benefits of Agentic AI when it functions correctly against the potential consequences if it goes wrong.
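One simple way to make that weighing explicit is an expected-value comparison, sketched below with illustrative probabilities and dollar values; the point is the structure of the calculation, not the specific numbers.

```python
# Sketch: weigh expected benefit against expected harm for a candidate agentic use case.
# All probabilities and values are illustrative assumptions.
def expected_net_value(benefit_when_right: float, p_right: float,
                       cost_when_wrong: float) -> float:
    """Expected value = benefit x P(correct) - cost x P(wrong)."""
    return benefit_when_right * p_right - cost_when_wrong * (1 - p_right)

# Low-stakes: routine customer questions.
print(expected_net_value(benefit_when_right=5, p_right=0.95, cost_when_wrong=20))     # 3.75
# High-stakes: financial advice, where errors are far costlier and demand stronger safeguards.
print(expected_net_value(benefit_when_right=50, p_right=0.95, cost_when_wrong=5000))  # -202.5
```

The same agent accuracy can yield a positive expected value in a low-stakes setting and a sharply negative one in a high-stakes setting, which is exactly why the safeguards described above must scale with the consequences of failure.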
While Singapore businesses are increasingly comfortable using AI for low-risk tasks like routine customer questions or personalised messages, adopting Agentic AI for high-stakes applications such as health, wellness, or financial advice demands significantly greater diligence in risk mitigation. The possibility of risk should not deter enterprises from adopting Agentic AI. Instead, proportionate and proactive investments in robust safeguards, particularly in the realm of AI safety, can maximise the transformative benefits of these intelligent agents while effectively managing potential drawbacks and ensuring responsible innovation.
Dr. Phong Nguyen is currently Vice President and Chief Artificial Intelligence Officer at FPT Software, leading the Center of Excellence in Generative AI, and has played an instrumental role in shaping the company’s AI strategy. During his tenure, Phong has driven strategic partnerships, most notably the 2023 collaboration with Landing AI, which brought renowned AI expert Andrew Ng on board for FPT’s K12 program, and FPT’s role as a founding member of the AI Alliance alongside Meta and IBM.
Recognized among the Top 150 AI Executives by Constellation, Phong holds advanced degrees from Carnegie Mellon and The University of Tokyo, complemented by research stints at renowned institutions such as Hitachi and Mila.