NASDAQ-listed Blaize Holdings, Inc., a global firm in programmable, energy-efficient AI computing, announced in January a strategic Memorandum of Understanding (MOU) with Nokia Solutions and Networks Singapore Pte. Ltd.
Targeting the Asia Pacific markets, the MOU establishes a framework for joint exploration, development, and deployment of Practical AI and Physical AI systems designed for real-world operation, combining Nokia’s leadership in networking, automation, and cloud infrastructure with Blaize’s programmable AI inference platform.
Together, the companies aim to enable Real World AI that operates reliably at the edge and across hybrid environments where latency, power efficiency, and operational resilience are critical.
TNGlobal recently talked to Dinakar Munagala, Co-Founder and Chief Executive Officer at Blaize, to understand more about why inference, not training, is the next frontier of AI adoption; how Hybrid AI architectures enable scalable, real-world deployment; and how edge and network-embedded AI is reshaping telecom, industrial automation, and smart infrastructure, among other topics. He also shared the opportunities and challenges he sees in the Asia Pacific region.
Below are the edited excerpts of the interview:
Why is inference, not training, the next frontier of AI adoption, and how do Hybrid AI architectures enable scalable, real-world deployment?
Inference is emerging as the next frontier of AI adoption because it applies trained models to real-world data to generate immediate, actionable outcomes. Unlike training, which is compute-intensive and typically centralized in hyperscale environments, inference delivers operational value in real time, enabling automation, optimization, and faster decision-making at scale.
As a semiconductor and AI inference platform company, we design our own purpose-built inference silicon and develop complete systems optimized for cost-efficient deployment at both the edge and in the data center. This vertical integration allows us to deliver practical, production-ready AI infrastructure tailored for industrial environments.
Hybrid AI architectures are essential to scaling inference effectively. Not every workload requires centralized GPU clusters. By placing the right compute in the right environment (edge inference for latency-sensitive tasks, centralized infrastructure for heavier processing), hybrid architectures improve performance, reduce power consumption, and optimize total infrastructure cost across distributed deployments.
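The placement decision described above can be sketched as a simple routing policy. This is purely an illustrative example with made-up thresholds and names, not Blaize's or Nokia's actual scheduling logic:

```python
# Illustrative hybrid AI workload placement: route each inference
# workload to edge or centralized compute based on its latency budget
# and model footprint. Thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class Workload:
    name: str
    latency_budget_ms: float  # maximum tolerable end-to-end latency
    model_params_b: float     # model size in billions of parameters


def place(w: Workload,
          edge_max_latency_ms: float = 50.0,
          edge_max_params_b: float = 8.0) -> str:
    """Return 'edge' for latency-sensitive workloads that fit on an
    edge accelerator, otherwise 'datacenter' for heavier processing."""
    if (w.model_params_b <= edge_max_params_b
            and w.latency_budget_ms <= edge_max_latency_ms):
        return "edge"
    return "datacenter"


for w in [Workload("video-analytics", 20, 1.5),
          Workload("report-generation", 5000, 70)]:
    print(f"{w.name}: {place(w)}")
# video-analytics: edge
# report-generation: datacenter
```

Real deployments would weigh many more factors (power budgets, data-sovereignty rules, node utilization), but the principle is the same: the policy, not a fixed data-center default, decides where each workload runs.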
How are edge and network-embedded AI reshaping telecom, industrial automation, and smart infrastructure?
Edge and network-embedded AI are reshaping industries by shifting intelligence from centralized data centers to distributed environments closer to where data is generated. As AI workloads become increasingly distributed, processing at the edge reduces latency, improves reliability, and enables real-time decision-making.

In telecommunications, this means embedding AI directly into network infrastructure to optimize traffic, predict equipment failures, and dynamically allocate bandwidth.
As networks evolve into distributed compute platforms, operators can move beyond connectivity and become AI infrastructure providers, delivering inference capabilities as a service to enterprises and public sector customers.
In industrial automation, edge AI enables real-time monitoring and control of production systems, reducing waste and downtime while improving operational efficiency. Not every workload requires centralized GPU clusters; hybrid architectures allow the right compute to be placed in the right environment, balancing performance, power efficiency, and cost.
For smart infrastructure, network-embedded AI supports traffic optimization, energy management, and public safety applications through low-latency processing. The result is more responsive, resilient, and scalable systems capable of operating reliably across distributed urban and industrial environments.
Targeting the Asia Pacific markets, the MOU establishes a framework for joint exploration, development, and deployment of Practical AI and Physical AI systems designed for real-world operation. How would you define “Practical AI and Physical AI systems for real-world operation”? Can you further explain and give an illustration or example? What do Practical AI and Physical AI look like in production, and how are Nokia and Blaize moving AI from pilots to live operations across APAC?
Our collaboration exemplifies how Practical AI and Physical AI can be effectively developed and deployed to meet the needs of rapidly evolving industries in the APAC market.
Practical AI refers to AI systems designed for live operational deployment, solving measurable business and operational challenges rather than remaining in pilot or experimental stages. It improves performance, reduces downtime, enhances safety, and optimizes resource allocation in production environments.
Physical AI extends that capability into the real world by embedding intelligence into physical systems — telecom networks, industrial equipment, IoT devices, or robotics — enabling them to perceive, analyze, and act autonomously in real time.
In production, this can mean AI embedded within telecom infrastructure to optimize network performance, or edge-based systems analyzing video or sensor data on-site without relying on centralized cloud processing.
Moving from pilots to scaled deployment requires carrier-grade integration, operational reliability, and distributed infrastructure.
Through this MOU, Nokia brings deep telecom integration expertise, while Blaize contributes purpose-built inference silicon and hybrid AI platform capabilities. As AI workloads become increasingly distributed, this combination enables operators to evolve beyond connectivity and deliver scalable AI inference services within sovereign, production-ready environments across APAC.
The MOU is non-binding and outlines a cooperative framework under which the parties may pursue specific projects through future definitive agreements in the Asia Pacific region. The collaboration will focus on enabling secure, scalable, and energy-efficient AI inference deployments that integrate seamlessly into existing network, cloud, and industrial environments. When do you expect a binding partnership between the two parties? What criteria need to be fulfilled?
Because the MOU establishes a cooperative framework rather than a commercial contract, the next phase typically focuses on joint solution validation, integration, and identifying specific deployment opportunities.
Progress toward definitive agreements generally depends on successful technical integration, validation through pilot deployments, and alignment on commercial use cases with customers in the region.
Our immediate focus is working closely with Nokia to demonstrate how hybrid AI infrastructure can support cloud service providers, telecom, enterprise, and public sector applications across APAC. As these opportunities mature, they can evolve into formal commercial deployments.
What are the opportunities you see in Asia Pacific?
APAC is one of the most compelling markets for AI growth because adoption is closely aligned with national development strategies and long-term infrastructure planning, from Singapore’s National AI Strategy and IndiaAI to South Korea’s AI Basic Act and Australia’s National AI Plan.
Governments across the region are embedding AI into smart city programs, industrial modernization, and digital public services, creating structural demand.
At the same time, rapid 5G expansion and continued investment in cloud and data center infrastructure are laying the physical foundation for distributed AI. As networks become more compute-capable, AI workloads are increasingly shifting closer to where data is generated.
This creates a unique opportunity for APAC carriers. With their extensive distributed edge footprint, they are well positioned to anchor national AI capability and deliver AI inference as a service to critical industries.
That evolution moves operators beyond connectivity and positions them as foundational infrastructure providers in the emerging AI economy.
What are the challenges to expanding in Asia Pacific? How would you address these challenges? Where does this region stand compared to other regions?
APAC presents a significant growth opportunity, but expansion requires navigating regulatory fragmentation, evolving data sovereignty requirements, and uneven infrastructure maturity across markets. Some countries operate highly advanced 5G and cloud ecosystems, while others are still building foundational digital capacity. In addition, distributed geographies and supply chain complexity make large-scale AI deployment more operationally demanding than in more centralized regions.
The core challenge emerges when deployments move to scale. Moving from a handful of centralized data centers to hundreds of distributed edge locations dramatically changes the economics — cost per node, power efficiency, and physical deployment constraints become critical. Not every workload requires large-scale GPU clusters, and overprovisioning compute can quickly undermine ROI.
This is where hybrid architecture becomes essential. By placing the right compute in the right environment — edge inference for real-time workloads and centralized GPU resources for heavier model processing — we optimize performance while materially improving energy efficiency and total infrastructure cost. That flexibility enables scalable, production-grade deployment across diverse APAC environments.
The Asia Pacific region is incredibly diverse. Which specific industries or countries in APAC do you anticipate will be the “early adopters” of these joint Hybrid AI solutions?
Early adoption will come from sectors where real-time intelligence and automation are already critical, including telecommunications, data centers, and manufacturing. Operators advancing AI-native 5G and private networks are particularly well positioned, as distributed infrastructure naturally supports embedded AI inference.
We also see strong momentum in smart city and public safety programs, where video analytics, traffic management, and urban monitoring require low-latency edge processing. Industrial automation and critical infrastructure are similarly well positioned as they modernize operations and strengthen resilience.
This momentum is already visible in markets like India, where our collaboration with Yotta supports large-scale video intelligence deployments, and through our MoU with the Government of Telangana to support sovereign AI research and infrastructure development.

