A region racing ahead yet holding back
Southeast Asia is embracing artificial intelligence faster than almost anywhere else in the world. From financial institutions using predictive analytics to manufacturers automating production lines, AI has become shorthand for progress and competitiveness. Yet behind the optimism lies a quieter truth: trust in these systems has not kept pace with their adoption.
Across ASEAN, organisations are accelerating digital transformation, but concerns around data security, governance, and accountability continue to grow. The question is no longer whether AI can deliver value, but whether it can do so responsibly, in ways citizens and customers actually trust.
Organisations struggle with AI adoption because of unrealistic expectations and a lack of clarity. Many C-suite leaders have run AI pilots with cloud providers and global tech firms, yet around 85 percent of these initiatives have failed to deliver tangible business ROI; they simply miss the large, transformative outcomes needed for fundamental change. To bridge this gap, executives and governments must embrace change and pursue high-impact outcomes that can fundamentally transform their organisations with AI. Teams across Asia have already delivered billion-dollar outcomes for global Fortune 500 companies, and it is time to draw on that expertise to foster a culture rooted in confidence, credibility, and trust.
The trust gap in plain sight
A 2025 McKinsey survey found that over 70 percent of companies in ASEAN now use generative AI, yet only a fraction have frameworks to monitor accuracy, ethics, or bias. The region’s challenge is not enthusiasm but execution. For many organisations, AI implementation remains a black box where decisions are made, but few can explain how.
This lack of transparency fuels scepticism. In cultures where reputation and relationships shape business confidence, opacity can undermine adoption as quickly as any technical flaw. Trust, once lost, is difficult to rebuild. For Southeast Asia, where credibility drives both commerce and collaboration, explainability and accountability are now as critical as technical performance.
Why trust means more in Asia
In Southeast Asia, technology adoption does not happen in isolation. It intersects with history, hierarchy, and human connection. Many enterprises are family-led or state-linked, where credibility depends as much on perceived integrity as on performance.
Brand value has long been built on trust. Consumers choose products from brands like Unilever or Google because they expect safety, reliability, and authenticity. Over the past two years, however, that trust has eroded worldwide as large language models were trained on copyrighted content and major brands struggled to turn AI execution into meaningful returns, putting data privacy at risk along the way.
Here, “trusted AI” must mean more than compliance or regulation. It has to reflect cultural expectations of stewardship: being accountable to communities, employees, and partners. When data is shared across borders and ecosystems, governance cannot stop at code. It must extend to conduct.
Governance is only half the equation
Several countries have made commendable progress. Singapore introduced its Model AI Governance Framework in 2019, and Malaysia, Thailand, and Indonesia are building similar national guidelines. But compliance alone does not build confidence.
Too often, governance frameworks are treated as checklists rather than living practices. Real trust emerges not from policies but from consistent, transparent behaviour: how data is handled, how outcomes are communicated, and how risks are acknowledged when things go wrong.
In this sense, Southeast Asia’s trust deficit is less about missing rules and more about missing clarity. When organisations deploy AI without a shared understanding of accountability, they widen the very gap that technology promised to close.
Building AI for local trust and context
Asia needs AI implementations with local context, security, and governance built in, not bolted on. That requires leaders who understand both the technology and the human side of change: people who can anticipate bias, manage cultural nuances, and safeguard data with integrity.
When implemented well, AI can automate governance, improve operational efficiency by up to 40 percent, and free people to focus on innovation.
The credibility test for leaders
For founders, policymakers, and business leaders, trust is now a competitive advantage. Building it requires slowing down in order to move faster, aligning internal governance, cybersecurity, and sustainability goals before scaling outward.
The companies gaining traction are those that treat AI not as a tool but as a relationship, something that earns confidence through consistent reliability. Explainability, auditability, and ethical design are no longer technical niceties; they are business necessities.
Balancing acceleration with accountability
Southeast Asia’s economic potential depends on how responsibly it builds. The region’s growing AI workforce, combined with cross-border data flows and uneven governance maturity, means collaboration is essential. Governments can establish guardrails, but it is enterprises whose everyday decisions determine whether trust is earned.
Responsible AI now focuses on three key areas: data privacy, ethics, and sustainability. Companies creating AI tools must make sure their products benefit both people and the planet. True acceleration must therefore be paired with accountability: AI that is fast in delivery yet faithful to those it impacts. As companies pursue efficiency, they must also design for sustainability, privacy, and fairness. Without these, innovation risks becoming another race measured only by speed.
Trust as the next frontier
Every wave of technology faces a defining moment when capability outpaces credibility. For AI in Southeast Asia, that moment has arrived.
The region has no shortage of ambition or talent. What it needs next is conviction, the courage to build systems people can trust, not just use. Because in the end, progress is not defined by how intelligent our machines become but by how responsibly we choose to wield them.

Aanchal Gupta is Founder and CEO of Agents Stack.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.
Featured image: Google DeepMind on Unsplash