Attackers and defenders evolving at the speed of AI
From multimillion-dollar deepfake scams to AI-powered extortion campaigns, the past year has shown just how quickly AI can turn from breakthrough to blind spot.
As AI workloads multiply, they generate massive volumes of dynamic and often opaque data — much of it beyond the reach of today’s security tools. Cyber threats are evolving just as fast, exploiting growing visibility gaps across hybrid cloud infrastructure.
Recent research from the Gigamon 2025 Hybrid Cloud Security Survey reveals that 39 percent of organisations in Singapore have experienced a doubling of network traffic due to GenAI over the past two years. That surge isn’t just testing the limits of visibility; it’s overwhelming them.
This visibility gap has real consequences. Breaches are rising at an alarming rate, driven by blind spots in data in motion. Many organisations in the region lack visibility into East-West traffic — the lateral movement of data within and between systems — where threats often lurk undetected. Compounding this, most existing tools aren’t built to recognise AI-specific workloads or detect ‘shadow AI’: unapproved or unmonitored AI applications adopted by employees outside formal IT oversight.
The old adage rings truer than ever in the AI era: if you can’t see it, you can’t secure it.
Threat actors are exploiting AI noise
Attackers across APAC are capitalising on the explosive growth of AI to hide in plain sight. As AI traffic grows more dynamic and distributed across hybrid cloud environments, they are using the noise to mask their movements. Attacks targeting large language models (LLMs) are on the rise, alongside a wave of AI-powered threats.
Many of these threats are hidden within encrypted traffic or shadow AI activity, contributing to a 17 percent year-over-year rise in data breaches, according to the Hybrid Cloud Security Survey.
In Singapore, 56 percent of organisations report that employees use approved third-party GenAI tools, yet an equal share of security teams (56 percent) remain unaware of how employees are using them. This lack of oversight underscores how modern threats exploit what security teams cannot see. In fact, 61 percent of organisations lack confidence in detecting shadow AI or unregulated deployments, amplifying cybersecurity and data privacy risks. With more than half of Singaporean organisations reporting a breach, the gap between AI usage and visibility is becoming a critical point of exposure.
Security tools must evolve
Most existing security architectures weren’t built to monitor the scale, speed, and nature of AI activity. As a result, security and IT teams are struggling with patchy visibility and rising complexity, often made worse by tool overlap, especially in hybrid environments spanning on-premises data centres, public clouds, and containers. Visibility gaps are a barrier to centralised control, making it harder to manage risk as GenAI deployments grow.
In Singapore, 91 percent of companies use AI at least partially in threat intelligence. Yet even in threat detection, the most common area of AI deployment, only 28 percent have fully automated their defences.
With GenAI investments in APAC expected to hit $54.5 billion by 2028, growing at an annual rate of 59.2 percent, CISOs must evolve best practices and reassess their cybersecurity strategies. The answer isn’t to deploy more tools, but to adopt practices that start with observing AI traffic. With more than 99,000 companies developing AI today, the scale of the challenge is expanding rapidly.
The question now is how security teams can adapt their tool stacks to stay ahead.
A clearer path forward: Deep observability
Achieving complete visibility into all data in motion, including encrypted and lateral traffic, is no longer optional. It is a prerequisite for secure and scalable AI operations. That is where deep observability comes in. By combining network-derived telemetry like packets, flows, and metadata with traditional MELT data (metrics, events, logs, and traces), deep observability provides the full context needed to detect, investigate, and respond to emerging threats.
Solutions that can identify and classify GenAI traffic, especially in encrypted and agentless environments, will become crucial. Embedding this intelligence into observability pipelines will expose shadow AI and provide actionable data for governance and control.
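As a rough illustration of the idea, the sketch below labels flow-metadata records by matching the TLS server name against lists of known and IT-approved GenAI endpoints; anything hitting a known GenAI host outside the approved list is flagged as potential shadow AI. The record fields, endpoint lists, and approval policy here are hypothetical, not taken from any specific product or pipeline.

```python
# Hypothetical sketch: flagging potential shadow-AI traffic from
# network flow metadata. Field names and endpoint lists are
# illustrative assumptions, not a real product's schema.

KNOWN_GENAI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}
APPROVED_GENAI_HOSTS = {"api.openai.com"}  # tools sanctioned by IT (example)

def classify_flow(flow: dict) -> str:
    """Label a flow record as 'approved-genai', 'shadow-ai', or 'other'."""
    host = flow.get("server_name", "")  # e.g. taken from TLS SNI metadata
    if host not in KNOWN_GENAI_HOSTS:
        return "other"
    return "approved-genai" if host in APPROVED_GENAI_HOSTS else "shadow-ai"

# Example flow records as a pipeline stage might see them
flows = [
    {"src": "10.0.1.5", "server_name": "api.openai.com"},
    {"src": "10.0.2.9", "server_name": "api.anthropic.com"},
    {"src": "10.0.3.4", "server_name": "example.com"},
]

# Internal hosts generating unapproved GenAI traffic
shadow = [f["src"] for f in flows if classify_flow(f) == "shadow-ai"]
print(shadow)  # → ['10.0.2.9']
```

In practice this kind of classification would sit inside an observability pipeline and feed governance dashboards rather than a script, and hostname matching alone is a coarse signal; it is shown here only to make the "expose shadow AI from network metadata" idea concrete.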
Innovation and security can and must coexist
As AI becomes increasingly embedded in everything from national digital identity programmes to predictive healthcare and smart city infrastructure, the role of cybersecurity is shifting. No longer a siloed function, it must become a strategic enabler — designed not just to defend, but to empower trusted innovation.
Deep observability helps organisations eliminate blind spots, strengthen policy enforcement, and boost operational efficiency across complex hybrid cloud environments. It is no longer a nice-to-have. It is essential, with 9 in 10 Singaporean security and IT leaders stating that deep observability is a foundational element of cloud security today.
Cybersecurity leaders in Singapore and across APAC face a pivotal moment. The challenge ahead isn’t merely about keeping pace with AI innovation. It’s about anticipating where it leads and building resilient systems that can support it at scale. Investments in deep observability must now converge with a broader cultural shift: one that treats security as a shared responsibility across business and technology teams.
The question is no longer whether organisations should invest in securing AI activity. The real question is: how long can you afford to wait?
David Land is Vice President, APAC, Gigamon.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.
Featured image: Olga Kovalski on Unsplash