For years, companies have treated geopolitical conflict as a source of familiar business risks. Oil prices rise, shipping routes tighten, inflation returns, and supply chains come under strain. That picture still holds, but it no longer captures the full extent of the disruption. In a more digitally dependent economy, conflict also travels through cloud systems, data infrastructure, cyber operations, and AI-enabled attack paths. That is why AI risk is starting to look less like a narrow technology concern and more like a business continuity issue.

Recent events have made that shift harder to ignore. Reuters has reported that the current Iran war is disrupting aviation, cargo, energy flows, manufacturing, and finance, while also damaging Gulf data centers and pushing major financial institutions to move some operations to remote work. AP, meanwhile, has reported that Iran-linked hackers are widening their targets and that experts are warning about attacks on critical infrastructure. The lesson is not simply that cyber risk rises during conflict. It is that the systems companies depend on for continuity are now more exposed to the overlap between physical disruption and digital escalation.

AI is increasing operational exposure

In the earlier phase of enterprise AI adoption, much of the focus was on use cases. Could AI improve customer service, accelerate software development, strengthen analytics, or lift productivity? Those questions still matter, but the operating context has changed. As AI becomes more embedded in workflows, data pipelines, and decision support, the number of systems tied to it grows. That makes disruptions harder to isolate. When something goes wrong, the issue may no longer stay inside one model or one application. It can ripple outward into access controls, cloud environments, internal knowledge systems, and business operations.

That wider exposure is now being acknowledged by major security players. In its Cybersecurity Forecast 2026, Google Cloud says threat actors are expected to use AI more fully to increase the speed, scope, and effectiveness of attacks, while targeted attacks on enterprise AI systems are also likely to rise. In the same report, Jon Ramsey, vice president and general manager of Google Cloud Security, says, “Organizations need to be prepared for threats and adversaries leveraging artificial intelligence.” It is a concise warning, but an important one. AI is no longer only something companies deploy. It is also part of the environment they have to defend against.

Continuity planning can no longer separate cyber from infrastructure

This is where the business continuity angle becomes more useful than a generic cyber warning. Traditional continuity planning often treats infrastructure disruption, cyber incidents, and software risk as related but distinct categories. That separation is becoming harder to maintain. If AI now sits inside everyday operations, then cloud resilience, visibility, data integrity, access governance, and incident response all become continuity measures.

The Microsoft Digital Defense Report 2025 makes a similar point from a broader strategic angle. It describes AI as both an accelerant for defenders and an advantage for adversaries, while warning that cyber threats are increasingly challenging economic stability and individual safety. The report argues that as digital transformation accelerates, traditional defenses have to be rethought. That language matters because it moves the discussion beyond technical controls and into the wider question of how businesses stay operational under stress.

The gap between awareness and readiness remains large. The World Economic Forum’s Global Cybersecurity Outlook 2025 found that 66 percent of organizations expected AI to have a major impact on cybersecurity, but only 37 percent had processes in place to assess the security of AI tools before deployment. The same report noted that geopolitical tensions influence cyber strategy in nearly 60 percent of organizations. Taken together, those figures point to a continuity problem hiding inside an AI problem. Companies are moving quickly to adopt AI, but many are still not set up to evaluate how those systems behave under conditions of pressure, disruption, or conflict.

Conflict now tests trust in digital systems

What makes this more urgent is that conflict can stress digital systems in more than one way at once. There is the direct risk of cyberattacks, but there is also the problem of operational trust. Can firms rely on cloud access, partner systems, data flows, and remote operations when geopolitical conditions deteriorate? Can AI-enabled tools continue to function safely if the data around them is disrupted, manipulated, or exposed?

That is one reason guidance like the joint AI Data Security advisory from CISA and partner agencies matters. It emphasizes the need to secure the data used to train and operate AI systems, with attention to integrity, provenance, confidentiality, and access. Those may sound like technical concerns, but in practice they are continuity concerns too. A business cannot rely on AI systems during periods of instability if it cannot trust the data, controls, and environments behind them.
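To make the integrity point concrete, here is a minimal sketch of the kind of check such guidance implies: recording a cryptographic digest of a dataset when it is ingested, then verifying it before the data feeds an AI system. This is an illustrative example, not a prescription from the CISA advisory; the file names and digests are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_dataset(path: Path, expected_digest: str) -> bool:
    """Compare a dataset file against a digest recorded at ingestion time.

    A mismatch signals that the data may have been altered. In a
    continuity context, AI systems built on that data should not be
    trusted until the change is explained.
    """
    return sha256_digest(path) == expected_digest
```

Even a check this simple turns "data integrity" from an abstract concern into an operational control: a tampered or corrupted dataset fails verification before it can silently degrade the systems that depend on it.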

What this means for Southeast Asia

For Southeast Asia, this matters because the region is becoming more dependent on digital infrastructure at the same time that global volatility is rising. Enterprises across the region are investing more heavily in cloud, AI deployment, and data infrastructure. That creates real opportunity, but it also increases exposure to shocks that may originate far beyond the region itself. For these enterprises, the continuity map now has to include not only fuel, freight, and finance, but also cyber escalation, cloud dependency, and AI-related risk pathways.

That is why business continuity planning needs a broader lens than it did even a few years ago. In a more entangled economy, geopolitical conflict does not just raise costs and delay shipments. It can also test the resilience of digital systems that enterprises increasingly treat as essential. As AI becomes more deeply embedded in operations, the ability to keep business running may depend less on whether companies are using AI and more on whether they have built the safeguards, visibility, and resilience to keep that use defensible under stress.


TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.

Featured image: Louis Hansel on Unsplash
