APAC organizations are facing mounting pressure to embrace bold generative AI strategies, prioritizing investments in areas like natural language processing and computer vision – the region’s AI investment is projected to explode fivefold, reaching US$117 billion by 2030. However, the successful deployment and operation of AI systems hinge significantly on the underlying infrastructure, which is often the least understood but most crucial component of an AI stack.

An AMD-commissioned Vanguard Report reveals that infrastructure is one of the toughest challenges for IT oversight – 61 percent of survey respondents cited infrastructure limitations as a barrier preventing their organizations from retraining models in production as frequently as desired, making it the most commonly identified factor contributing to AI project abandonment. The immense pressure on IT managers today, as they navigate and address differing considerations from various stakeholders, is compounded by the demand for more enterprise applications. Everything from traditional online transaction processing systems to highly interactive cloud-native applications is processing more data and demanding more CPU compute power.

AI adoption across an organization requires a collaborative and holistic approach to planning – the key focus should be on infrastructure suitability and whether upgrades offer a worthwhile return on investment.

Why AI needs to be on the table

AI does not comprise a single workload or use case; it encompasses a range of tasks, from routine inferencing to complex, data-intensive model training. It has become a vital tool for many organizations across industries, redefining operations by enriching decision-making, improving customer experiences, enabling new product development, and bolstering risk management.

This wide range of AI applications calls for varying infrastructure setups, making it essential for enterprise architecture teams to adopt a balanced approach that is customized for a specific purpose.

Our survey found a median of 125 models in use and more than a petabyte of data required to train those models in aggregate – and most expect workload requirements to increase. In this environment of AI workload expansion, infrastructure is emerging as a critical bottleneck.

Infrastructure is crucial for successful AI implementation

Essential ingredients for supporting AI include high-powered computing, efficient data handling, and reliable networking. But not every AI workload demands the same level of resources. Oftentimes, general-purpose processors (CPUs) can manage smaller AI workloads, while more specialized applications – like large-scale training models – require advanced accelerators (e.g., GPUs).

As AI workloads continue to proliferate, businesses need to prioritize cost-effective infrastructure strategies, given the substantial amount of energy consumed by data centers running AI workloads. Enterprise architecture teams should select energy-efficient processors, invest in cooling solutions, and implement sustainable practices to help manage operational costs.

A robust AI infrastructure needs visibility into compute, storage, and networking resources. Infrastructure and operations (I&O) teams will be responsible for equipping data centers with observability tools that help the business understand usage patterns and help ensure the infrastructure can scale as AI demands grow.

The cornerstones of an AI-ready infrastructure

Enterprises should take a pragmatic approach to creating an infrastructure environment that fits the evolving needs of their AI workloads by considering the following three-pillar framework designed to enhance data center efficiency and performance without the need for extensive new infrastructure:

  1. Modernize: Replace outdated servers with newer, more efficient systems to maximize space and energy savings. For instance, the new “Zen 5” core architecture provides up to 17% better instructions per clock (IPC) for enterprise and cloud workloads and up to 37% higher IPC in AI and high-performance computing (HPC) compared to “Zen 4.”
  2. Utilize a hybrid cloud strategy: For workloads that vary in intensity and scale, virtualized and containerized environments provide a flexible solution. By leveraging both private and public cloud resources, enterprises can scale AI applications while avoiding unnecessary resource allocation.
  3. Invest in balanced accelerator resources: Organizations should right-size their investments in coprocessors (GPUs) to match specific workload needs. Pairing accelerators with capable CPUs helps ensure maximum performance without breaking the bank.

To put this into perspective, our internal results have shown that by modernizing the data center with the latest generation of processors and accelerators, companies could use an estimated 71 percent less power and roughly 87 percent fewer servers. This gives CIOs the flexibility to either benefit from the space and power savings or add performance for day-to-day IT tasks while delivering impressive AI performance.

AI workloads and use cases are as diverse as they get: a combination of standalone workloads (both large and small), discrete use cases, and functions embedded within other workloads. The path to AI-readiness requires thoughtful planning. The best way to effectively manage this spread of AI workloads is to take a fit-for-purpose approach, matching processors and accelerators to the specific requirements of each task and to strategic investment priorities.


Peter Chambers is the Managing Director, Sales for the Asia Pacific Region as well as the Country Manager for Australia at AMD. In this role, Peter is responsible for developing and implementing the end-to-end engagement and sales strategy covering OEMs, add-in-board partners, distribution, resellers, VARs/SIs, retailers, and end-users. Peter’s team is accountable for revenues generated across AMD’s key product verticals, including Component, Consumer, Commercial, and Server.

With over 26 years in sales and management, Peter excels in creating innovative strategies for global brand development. His expertise and experience have allowed him to implement key solutions and partnerships to enable customers to effectively and competitively position their AMD platforms in the market.

Previously responsible for leading the Consumer Sales team for AMD APJ, Peter led the team to nine consecutive quarters of year-over-year growth. He is focused on challenging perceptions of AMD in the market and educating partners and consumers alike on the performance and value AMD products provide.

TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.