GGWPTECH

News

CPUs Take a Larger Role in AI Data Centers as Agentic AI Expands Workloads

Ira James · March 28, 2026 · 3 min read · Press Release

The rise of agentic AI is reshaping how modern data centers are designed, with CPUs now taking on a more critical role alongside GPUs in AI infrastructure.

At its Advancing AI event, AMD CEO Lisa Su described agentic AI as a new class of systems capable of continuously interacting with data, applications, and services to make decisions and execute complex tasks. Unlike traditional AI workloads, these systems operate as persistent agents, increasing the complexity of inference workflows.

This shift is driving renewed focus on CPUs, which are responsible for orchestrating these increasingly multi-step AI processes.

CPUs Move From Support to Coordination Layer

In traditional AI pipelines, GPUs handle parallel processing tasks such as training neural networks and running inference workloads. However, as AI systems become more dynamic, CPUs are increasingly responsible for coordinating the entire pipeline.

Within modern AI clusters, CPUs handle scheduling, memory management, data preparation, and I/O operations. They also manage control flow and ensure that GPUs remain fully utilized.

The relationship can be simplified as orchestration versus execution. GPUs process data at scale, while CPUs ensure that data arrives correctly, workflows remain synchronized, and system resources are efficiently allocated.
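The orchestration-versus-execution split can be illustrated with a minimal sketch. All names here are hypothetical, and a thread pool stands in for GPU workers; the point is only the division of labor the article describes:

```python
from concurrent.futures import ThreadPoolExecutor

def prepare_batch(raw):
    # CPU-side work: data preparation and validation before dispatch.
    return [x * 2 for x in raw]

def gpu_execute(batch):
    # Stand-in for a GPU kernel: the bulk parallel computation step.
    return sum(batch)

def orchestrate(batches):
    # The CPU coordinates: it prepares data, dispatches it to the
    # execution units, and collects results so those units stay busy.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(gpu_execute, prepare_batch(b)) for b in batches]
        return [f.result() for f in futures]

print(orchestrate([[1, 2], [3, 4]]))  # [6, 14]
```

In a real cluster the `gpu_execute` step would be a device kernel launch, but the CPU-side shape is the same: scheduling, data movement, and synchronization live in the orchestration layer.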

Inference Workloads Are Changing the Balance

The growing emphasis on inference is one of the main reasons CPUs are becoming more important.

During training, workloads are largely predictable and GPU-heavy. CPUs act primarily as support, feeding data and managing system operations. In contrast, inference in agentic AI environments involves multiple steps, decision-making loops, and interactions with external systems.

This shifts more responsibility to the CPU, which now handles tasks such as routing outputs, interpreting results, managing API calls, and coordinating iterative processing between models.

As a result, CPU utilization increases significantly in production AI environments, particularly those deploying autonomous or semi-autonomous agents.
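The kind of CPU-side control flow described above can be sketched as a simple agent loop. Everything here is hypothetical (the model and API are stubbed out); the sketch only shows where the iterative routing work lands:

```python
def call_model(prompt):
    # Stand-in for a GPU-side inference call (hypothetical stub).
    if prompt.endswith("?"):
        return {"action": "call_api", "target": "weather"}
    return {"action": "finish", "answer": prompt.upper()}

def call_external_api(target):
    # Stand-in for an external service the agent consults.
    return f"{target}: sunny"

def run_agent(task, max_steps=5):
    # CPU-side control flow: route each model output, decide whether
    # to call an external system, feed the result back into the next
    # inference step, and stop when the model signals completion.
    context = task
    for _ in range(max_steps):
        result = call_model(context)
        if result["action"] == "call_api":
            context = call_external_api(result["target"])
        else:
            return result["answer"]
    return None  # step budget exhausted

print(run_agent("what is the weather?"))  # WEATHER: SUNNY
```

Each iteration of that loop is CPU work, which is why multi-step agentic inference raises CPU utilization even when the heavy model math still runs on accelerators.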

Performance and Efficiency Remain Key Differentiators

AMD positions its EPYC processors as a foundation for these evolving workloads, emphasizing both performance and energy efficiency.

According to AMD, a 5th-generation EPYC CPU-based system can deliver up to 2.1 times higher performance per core compared to systems based on Nvidia’s Grace Superchip. The same configuration is also estimated to achieve up to 2.26 times better performance per watt, based on SPECpower benchmarks.
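For readers unfamiliar with how such ratios are derived, the arithmetic is straightforward. The figures below are invented for illustration and are not AMD's or SPECpower's actual results:

```python
# Hypothetical SPECpower-style figures (not real benchmark data).
system_a = {"ops_per_sec": 4_200_000, "avg_watts": 700, "cores": 128}
system_b = {"ops_per_sec": 2_100_000, "avg_watts": 790, "cores": 144}

def perf_per_core(s):
    return s["ops_per_sec"] / s["cores"]

def perf_per_watt(s):
    return s["ops_per_sec"] / s["avg_watts"]

ratio_core = perf_per_core(system_a) / perf_per_core(system_b)
ratio_watt = perf_per_watt(system_a) / perf_per_watt(system_b)
print(f"per-core ratio: {ratio_core:.2f}, per-watt ratio: {ratio_watt:.2f}")
```

Per-core comparisons normalize throughput by core count, while per-watt comparisons normalize by average power draw, so the two ratios can differ substantially for the same pair of systems.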

The company also highlights the advantage of x86 architecture, which benefits from a mature software ecosystem and broad compatibility across enterprise workloads without requiring code refactoring.

Balanced Systems Become the Priority

The broader takeaway is a shift toward balanced AI infrastructure.

Rather than focusing solely on GPU performance, data center operators are increasingly optimizing entire systems, including CPUs, networking, and software stacks. CPUs play a central role in this approach by enabling efficient GPU orchestration and supporting enterprise applications running alongside AI workloads.

AMD’s strategy reflects this direction, combining EPYC CPUs with Instinct GPUs, Pensando networking, and the ROCm software platform to deliver integrated AI systems.

What Comes Next

Looking ahead, AMD is preparing its next-generation EPYC processors, codenamed “Venice,” which are expected to power future AI platforms such as the “Helios” rack-scale architecture.

As AI workloads continue to evolve, the role of CPUs is expected to expand further, particularly in managing complex, real-time, and multi-agent systems.

The shift underscores a broader industry trend: AI performance is no longer defined by accelerators alone, but by how effectively the entire system operates as a cohesive unit.

Tags: Press Release


Related

  • NVIDIA and Google Optimize Gemma 4 Models for Local AI Across PCs and Edge Devices · News · Apr 7, 2026 · 3 min read
  • Fujisoft Showcases AI-Based Site Security System Built on AMD Embedded+ Platform · News · Apr 3, 2026 · 3 min read
  • Lenovo Expands Hybrid AI Push With NVIDIA, Targets Real-Time Enterprise Workloads · News · Mar 23, 2026 · 4 min read
