The recent rise of artificial intelligence (AI) and AI PCs has driven a tremendous amount of market buzz and raised a host of questions. Everyone from individual end-users to major corporate buyers is asking what an AI PC is, what hardware is required to take advantage of AI, which applications leverage it, and whether it is better to deploy such services locally, via the cloud, or in a hybrid environment that blends aspects of both.
This confusion is understandable; AI represents a new frontier for computing. In the long term, it may fundamentally change both how we interact with computers and the ways we integrate them into our everyday lives. Let’s tackle some common questions, starting with one of the most basic:
What is an AI PC?
An AI PC is a PC designed to optimally execute local AI workloads across a range of hardware, including the CPU (central processing unit), GPU (graphics processing unit), and NPU (neural processing unit). Each of these components has a different part to play when it comes to enabling AI workloads. CPUs offer maximum flexibility, GPUs are the fastest and most popular choice for running AI workloads, and NPUs are designed to execute AI workloads with maximum power efficiency. Combined, these capabilities allow AI PCs to run artificial intelligence and machine learning tasks more effectively than previous generations of PC hardware.
What is the difference between local and cloud computing?
If a workload is processed locally, it runs on hardware inside the user’s laptop or desktop. Depending on how the application is optimized, a local AI workload can execute on a dedicated NPU, on a discrete GPU (if present), on an integrated graphics solution, or directly on the CPU.
If a workload is processed in the cloud, information is relayed from the end-user’s PC to a remote service provided by a third party. Major AI services commonly discussed today, such as ChatGPT and Stable Diffusion, are cloud-based, for example. Cloud-based AI services typically rely on high-end server-class discrete GPUs or specialized data center accelerators.
If you ask a cloud-based generative AI service to draw you a landscape or a bouquet of flowers, your request is relayed to and processed by a remote server. Once complete, the image is returned to you. If you’ve experimented with any of the freely available generative AI services for text or speech, you know these cloud-based applications can take up to several minutes to return results, depending on the complexity of your request and how many other requests the service is processing.
What are the strengths and weaknesses of cloud vs. local computing?
Each of these approaches has its own merits and demerits. The chief advantage of local AI computing is responsiveness: it takes less time for the CPU, GPU, or NPU built into a system to spin up and start processing a task than it does to send that same task to a server located hundreds or thousands of kilometers away. Keeping data local may also improve user privacy, since sensitive information never has to leave the device to be inadvertently transmitted or shared.
Cloud computing is not without its own advantages, however. Sending data to a distant server takes a measurable amount of time, but a remote data center can run a given query on an array of hardware far more powerful than any single laptop, desktop, or workstation. The advantage of running a workload in the cloud is scale, which at times outweighs the need for quick response times or the desire to keep certain data private.
Whether local or cloud-based AI is better depends on the end-user’s needs and the characteristics of the application. Cloud and local AI services are complementary, creating an opportunity for future hybrid services. Cloud-based providers want to shrink the gap between question and response as much as possible, while the AI PC hardware available for local processing is improving rapidly. Imagine a conversational chatbot that relied on a cloud service for general background information on various topics, but switched to local processing any time it needed to reference documents or other files stored on your AI PC.
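A hybrid chatbot of that kind could be sketched roughly as follows. This is purely illustrative: the function names and the simple keyword-based routing rule are assumptions for the sake of the example, not any vendor’s actual API.

```python
# Illustrative sketch of hybrid AI routing (all names are hypothetical).
# Queries that reference on-device documents stay local; everything
# else is forwarded to a cloud service.

LOCAL_KEYWORDS = ("my document", "my file", "on my pc", "local folder")

def run_local_model(query: str) -> str:
    # Placeholder for on-device inference (e.g. on an NPU or GPU).
    return f"[local] answer for: {query}"

def run_cloud_model(query: str) -> str:
    # Placeholder for a network call to a remote AI service.
    return f"[cloud] answer for: {query}"

def route(query: str) -> str:
    """Handle the query locally if it touches on-device data."""
    q = query.lower()
    if any(keyword in q for keyword in LOCAL_KEYWORDS):
        return run_local_model(query)
    return run_cloud_model(query)
```

A real implementation would use a far more robust routing signal than keyword matching, but the division of labor is the point: privacy-sensitive, device-specific work stays local, while broad general-knowledge queries benefit from the scale of the cloud.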
The argument for adopting AI-enabled PCs as a way of preparing for future AI-powered software releases will only strengthen over time. As existing AI features evolve, and new capabilities emerge in the next few years, individuals and enterprises may want to look into adopting AI PCs now to ensure their devices are capable of running AI workloads by the time they intend to make use of them. Any professionals or organizations that want to be at the forefront of AI adoption should be thinking seriously about this topic.
Peter Chambers currently serves as the Managing Director, Sales for the Asia Pacific Region as well as the Country Manager, Australia. In this role, Peter is responsible for developing and implementing the end-to-end engagement and sales strategy covering OEMs, Add-in-Board partners, distribution, resellers, VAR/SI, retailers, and end-users. Peter’s team is accountable for revenues generated across AMD’s key product verticals, including Component, Consumer, Commercial, and Server.
With over 26 years in sales and management, Peter excels in creating innovative strategies for global brand development. His expertise and experience have allowed him to implement key solutions and partnerships to enable customers to effectively and competitively position their AMD platforms in the market.
Previously responsible for leading the Consumer Sales team for AMD APJ, he led the team to 9 consecutive quarters of YoY growth. He is focused on challenging the perceptions of AMD in the market and educating partners and consumers alike on the performance and value AMD products provide.
TNGlobal INSIDER publishes contributions relevant to entrepreneurship and innovation. You may submit your own original or published contributions subject to editorial discretion.