Singapore's Minister for Digital Development and Information, Josephine Teo, announced the launch of the new Model AI Governance Framework (MGF) for Agentic AI at the World Economic Forum (WEF) on Thursday.
In a statement, the Infocomm Media Development Authority (IMDA) said that this first-of-its-kind framework for reliable and safe agentic AI deployment, which it developed, builds upon the governance foundations of the MGF for AI introduced in 2020.
It provides guidance to organizations on how to deploy agents responsibly, recommending technical and non-technical measures to mitigate risks, while emphasizing that humans are ultimately accountable.
Initiatives such as the MGF for Agentic AI support the responsible development, deployment and use of AI, so that its benefits can be enjoyed by all in a trusted and safe manner.
This is in line with Singapore's practical and balanced approach to AI governance, which puts guardrails in place while providing space for innovation.
Unlike traditional and generative AI, AI agents can reason and take actions to complete tasks on behalf of users.
This allows organizations to automate repetitive tasks, such as those related to customer service and enterprise productivity, and drive sectoral transformation by freeing up employees' time for higher-value activities.
However, as AI agents may have access to sensitive data and the ability to make changes to their environment, such as updating a customer database or making a payment, their use introduces potential new risks, for example, unauthorized or erroneous actions.
The increased capability and autonomy of agents also create challenges for effective human accountability, such as greater automation bias, or the tendency to over-trust an automated system that has performed reliably in the past.
It is therefore crucial to understand the risks agentic AI could pose and ensure that organizations implement the necessary governance measures to harness it responsibly, including maintaining meaningful human control and oversight.
The MGF for Agentic AI offers a structured overview of the risks of agentic AI and emerging best practices in managing these risks.
It is targeted at organizations looking to deploy agentic AI, whether through developing AI agents in-house or using third-party agentic solutions.
According to the statement, the framework provides organizations with guidance on technical and non-technical measures they need to put in place to deploy agents responsibly, across four dimensions.
Firstly, assessing and bounding the risks upfront by selecting appropriate agentic use cases and placing limits on agents' powers, such as their autonomy and access to tools and data.
Secondly, making humans meaningfully accountable for agents by defining significant checkpoints at which human approval is required.
Thirdly, implementing technical controls and processes throughout the agent lifecycle, such as baseline testing and restricting access to whitelisted services (a simplified sketch of such controls follows below).
Fourthly, enabling end-user responsibility through transparency and education and training.
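To make these dimensions concrete, the following minimal Python sketch shows one way an organization might encode such guardrails around an agent's actions. It is illustrative only and not part of the MGF: the names (AgentPolicy, run_action, human_approves) and the specific tools are assumptions made for this example.

from dataclasses import dataclass, field


@dataclass
class AgentPolicy:
    """Bounds an agent's powers: allowed tools, approval checkpoints, autonomy limit."""
    allowed_tools: set[str] = field(default_factory=set)      # whitelisted services only
    approval_required: set[str] = field(default_factory=set)  # significant human checkpoints
    max_actions_per_task: int = 10                             # cap on agent autonomy


def human_approves(action: str, details: str) -> bool:
    """Placeholder human checkpoint; a real deployment would route this to a reviewer."""
    answer = input(f"Approve '{action}' ({details})? [y/N] ")
    return answer.strip().lower() == "y"


def run_action(policy: AgentPolicy, action: str, details: str, audit_log: list[str]) -> str:
    # Bound autonomy: refuse to act once the per-task action budget is spent.
    executed = sum(1 for entry in audit_log if entry.startswith("EXECUTED"))
    if executed >= policy.max_actions_per_task:
        audit_log.append(f"BLOCKED {action}: autonomy limit reached")
        return "blocked"

    # Dimension 1: bound the agent's powers with an allowlist of tools and data.
    if action not in policy.allowed_tools:
        audit_log.append(f"BLOCKED {action}: not on the allowlist")
        return "blocked"

    # Dimension 2: keep humans meaningfully accountable at significant checkpoints.
    if action in policy.approval_required and not human_approves(action, details):
        audit_log.append(f"REJECTED {action}: human reviewer declined")
        return "rejected"

    # Dimension 3: technical controls -- the real call would go through hardened,
    # whitelisted service clients and be baseline-tested; here it is only simulated.
    audit_log.append(f"EXECUTED {action}: {details}")
    return "executed"


if __name__ == "__main__":
    policy = AgentPolicy(
        allowed_tools={"lookup_order", "issue_refund"},
        approval_required={"issue_refund"},  # payments always need a human in the loop
    )
    log: list[str] = []
    run_action(policy, "lookup_order", "order #1234", log)
    run_action(policy, "issue_refund", "$40 refund for order #1234", log)
    run_action(policy, "update_customer_db", "change billing address", log)  # not allowlisted

    # Dimension 4: end-user transparency -- surface what the agent did and why.
    print("\n".join(log))

In this sketch, the database update is blocked because it is not on the allowlist, the refund proceeds only after explicit human approval, and the audit log provides the transparency that supports end-user responsibility.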
“As the first authoritative resource addressing the specific risks of agentic AI, the MGF fills a critical gap in policy guidance for agentic AI,
“The framework establishes critical foundations for AI agent assurance. For example, it helps organizations define agent boundaries, identify risks, and implement mitigations such as agentic guardrails,” said April Chin, Co-Chief Executive Officer, Resaro.
The MGF for Agentic AI is the latest initiative introduced by Singapore to build a global ecosystem where AI is trusted and reliable.
Singapore is working with other countries through its AI Safety Institute (AISI) and leading the ASEAN Working Group on AI Governance (WG-AI) to develop a trusted AI ecosystem within ASEAN, while fostering collaboration among Southeast Asian nations.
Closer to home, initiatives such as the MGFs, the AI Verify toolkit and the Starter Kit for Testing of LLM-Based Applications for Safety and Reliability have formed important stepping stones towards the goal of building a trustworthy AI ecosystem internationally.