We worry about corrupt data. But what if the bigger risk is corrupted thinking?

Agentic artificial intelligence (AI) doesn’t just learn from historical data. It learns from how we define risk, assign responsibility, and reward behavior. If our default mindset is tactical and reactive, we’re hard-coding that worldview into the systems we build.

This isn’t just a data issue – it’s a design flaw. Unless we rethink our approach to uncertainty, we won’t just automate today’s blind spots. We’ll institutionalize them. Instead of building AI that is adaptive, predictive, and resilient, we’ll train it to mirror our limitations.

To manage risk in an age of autonomous systems, we need to rewire how we think about risk itself.

The illusion of control

Picture this. A generative AI model, designed to guide retail investors, scans market data and personalizes stock recommendations. One day, without warning, it starts issuing sell alerts on high-performing stocks. Investors panic. Markets shift. And someone, somewhere, makes a fortune.

Behind the scenes, the algorithm has been subtly corrupted. This is algorithmic poisoning, where attackers don’t just tamper with the data, but rewrite the decision logic itself. And in an age of agentic AI, where systems don’t just assist but act, these risks become harder to trace and faster to scale.

There are no more black swans. Only white ones. Events we once considered rare must now be expected.

Cybercriminals move faster, hit harder, and scale wider than ever before. AI is both the amplifier and the accelerant. Traditional risk models – built on linear assumptions and backward-looking probability – were never designed for this terrain.

Autonomous agents, exposed assumptions

Gartner named agentic AI its top strategic technology trend for 2025. By 2029, autonomous agents are expected to resolve 80 percent of customer service issues without human input, transforming cost structures and decision speed across industries.

But this efficiency exposes new risks. Agentic AI isn’t just a tool; it’s an actor. In the wrong hands, or with the wrong incentives, it can become a risk vector in its own right.

Data poisoning is already a problem. But algorithmic manipulation is the next frontier – where attackers exploit the model’s internal logic by rewriting ethical constraints, undermining safety protocols, or subtly steering decisions. These threats don’t require elite skills or state-level backing. Off-the-shelf tools and open-source code make them widely accessible.
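
To make the mechanics concrete, the sketch below is a minimal, hypothetical illustration (not any vendor's product or EY tooling) of the simplest form of data poisoning – label flipping. The dataset, model, and poisoning fractions are illustrative assumptions; the point is that corrupting a small share of training labels quietly degrades a model's decisions without touching the training code at all.

```python
# Illustrative only: label-flipping data poisoning on a toy classifier.
# Assumes scikit-learn and NumPy; dataset and model choices are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "clean" historical data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a randomly chosen fraction of training examples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    print(f"poisoned fraction {fraction:.0%}: "
          f"test accuracy {model.score(X_test, y_test):.2f}")
```

Real attacks on agentic systems are subtler than this toy example, but the principle is the same: the model's behavior drifts while its code, and often its outputs, still look superficially normal.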

The bigger issue is the growing disconnect between how fast AI is deployed and how slowly risk governance is evolving. Most organizations are still using yesterday’s frameworks to contain tomorrow’s threats.

We’re teaching systems to move faster without teaching them to reason better about risk.

The digital autobahn meets governance gridlock

Cybercriminals operate on a digital autobahn – fast, fluid, and increasingly automated. Deepfakes, phishing-as-a-service, synthetic identity fraud, and ransomware are deployed by networks that function like agile startups. They experiment, scale, and iterate faster than most organizations can update their firewalls.

Meanwhile, many businesses are stuck in governance gridlock – slowed by legacy systems, budget constraints, and fragmented oversight. AI and cybersecurity are treated as adjacent concerns, managed by disconnected teams.

This fragmentation creates systemic blind spots. Responsible AI cannot be retrofitted. It must be designed in – governing how data is sourced, how models are trained, how decisions are explained, and how systems are monitored. And it must be owned by more than the risk team.

At the EY organization, we’ve developed a Responsible AI framework that brings together governance, performance, transparency, security, fairness, and data quality. But the deeper shift is cultural: from compliance-driven thinking to cross-functional trust by design.

Asia isn’t just ground zero – it’s the proving ground

The Asia-Pacific is often described as “ground zero” for cyber attacks. But it may also be the most important testing ground for resilient, adaptive AI governance.

The region’s diversity in regulatory regimes, culture, and technological maturity creates a complex governance landscape. Some jurisdictions lead in AI adoption but are still developing legal guardrails. Others emphasize privacy and consumer protection but are taking a slower path on AI innovation.

There is no single standard or checklist that works across all markets. Businesses must adopt layered, adaptable approaches: aligning to global standards while responding to local expectations.

Firms operating across the region have a built-in stress test for their agentic AI systems. In effect, they are training their AI to handle ambiguity. That’s not a liability but a leadership advantage.

Confidence is the new velocity

In our latest EY/Institute of International Finance global risk survey, 75 percent of chief risk officers said cybersecurity was their top priority – outstripping all other concerns.

But boardroom action still lags boardroom awareness. Most directors understand that AI is reshaping the risk landscape. They’ve read the headlines about hallucinating chatbots, cloned voices, data breaches, and disappearing audit trails. But many still treat cybersecurity and AI governance as compliance issues, not strategic enablers.

That’s a mistake. AI now sits at the heart of customer experience, brand trust, supply chain operations, and investor confidence. The cost of failure isn’t just financial. It is potentially existential.

Boards shouldn’t be asking, “Are we compliant?” They should be asking, “Are we confident?”

In a world of white swans, hope is not a strategy. The organizations that succeed won’t be those who move the fastest. They’ll be the ones who design with foresight – embedding guardrails, empowering cross-functional governance, and building cultures where everyone understands and bears responsibility for the risks.


Chee Kong Wong is the Asia-Pacific (APAC) risk consulting and APAC ServiceNow consulting practice leader. He is a partner in the EY Oceania practice based in Melbourne.

He brings 30 years’ experience in management and technology consulting, including 20 years in leadership positions with consulting and technology organizations. He has worked on major business and technology transformation programs in both the financial services and public sectors across Asia-Pacific and the Middle East.

He works with his team of partners across ASEAN, Greater China, Japan, Korea, and Oceania to bring the best of EY capabilities, services, and solutions to help customers navigate both traditional and emerging risks. He works with customers to adopt a new risk mindset, leveraging data and technology to embrace disruption and manage enterprise resilience. As the APAC risk consulting leader, he is responsible for the total book of risk business in APAC, which covers solutions and services across internal audit and controls, integrated risk management, digital risk management, operational resilience, regulatory and compliance, and financial crime.

In addition to the APAC risk consulting leader role, he is the APAC ServiceNow consulting practice leader. He works with his team of partners to build the ServiceNow practice, focusing on using the ServiceNow platform to drive risk, employee, and customer experience transformation programs.

He has a Bachelor of Engineering (Electrical – Class 1 Honours), an MBA (Finance) with Distinction, and a PhD in Computing & Information Systems (Information Security).

Jeremy Pizzala is the lead Partner for EY’s Cybersecurity practice in the Asia Pacific region. Prior to this role, Jeremy led EY’s Hong Kong Financial Services Consulting practice across the Business and Technology Consulting service lines. Over the past 20 years, Jeremy has worked with clients in multiple industries, in particular Financial Services and Technology, Media & Communications, with deep understanding and experience across consulting, systems integration, and business transformation.

For the past 14 years, Jeremy has focused on Information Security and Technology Risk Management – ensuring that clients have appropriate, actionable strategies in place to protect their organizational assets from both internal and external risk, while ensuring compliance with regulations and industry standards. In doing this, Jeremy has established consulting-led organizations across the Asia Pacific region, developed market offerings, and led and motivated teams to ensure clients achieve tangible and sustainable business outcomes from their technology investments.

The views reflected in this article are the views of the authors and do not necessarily reflect the views of the global EY organization or its member firms.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Luke Jones on Unsplash
