Samsung Unveils Next-gen AI Chips, Deepens NVIDIA Partnership at GTC 2026

Samsung Electronics on Monday unveiled its latest artificial intelligence (AI) semiconductor technologies, including its next-generation high-bandwidth memory (HBM), at NVIDIA GTC 2026, as it seeks to strengthen its position in the rapidly growing AI infrastructure market.

The company introduced its sixth-generation HBM4 chips, now in mass production, alongside a more advanced successor, HBM4E, marking a push to meet surging demand for high-performance AI computing.

Samsung said its HBM4 delivers processing speeds of up to 11.7 gigabits per second (Gbps), exceeding the current industry benchmark of 8Gbps, with potential to scale to 13Gbps. The upcoming HBM4E is expected to reach 16Gbps per pin and bandwidth of 4.0 terabytes per second (TB/s), targeting next-generation data centres.

The chips are designed for integration with NVIDIA’s forthcoming Vera Rubin AI platform, underscoring deepening ties between the two firms in building advanced AI systems.

Samsung, the only semiconductor player offering a full-stack AI solution spanning memory, logic, foundry and advanced packaging, also showcased hybrid copper bonding technology, which enables stacking of 16 or more memory layers while reducing thermal resistance.

Expanding AI infrastructure ecosystem

At the event, Samsung highlighted a range of AI infrastructure products tailored for NVIDIA systems, including SOCAMM2 server memory modules and its latest solid-state drives such as the PM1763 and PM1753.

The company said SOCAMM2, built on low-power DRAM, is already in mass production and offers improved bandwidth and flexible integration for AI servers.

Meanwhile, its PM1763 SSD, based on the PCIe 6.0 interface, is designed to deliver faster data transfer speeds and higher capacities for AI workloads, while the PM1753 SSD is optimised for energy-efficient inference systems under NVIDIA’s BlueField-4 architecture.

AI factory ambitions

Samsung also outlined its collaboration with NVIDIA on “AI Factory” initiatives, leveraging accelerated computing and NVIDIA Omniverse to build digital twin models of semiconductor manufacturing.

The initiative aims to optimise chip production processes across design, engineering and manufacturing, using AI-driven automation and simulation technologies.

A keynote session by Samsung executive Yong Ho Song at the conference detailed how agentic AI and digital twins are transforming semiconductor production, including applications in electronic design automation and computational lithography.

Pushing AI to the edge

Beyond data centres, Samsung presented memory solutions for on-device AI, including LPDDR5X and LPDDR6 DRAM designed for smartphones, tablets and wearables.

LPDDR5X offers speeds of up to 25Gbps per pin while cutting power consumption by up to 15%, and LPDDR6 is expected to deliver 30–35Gbps with advanced power management features for next-generation edge AI applications.

The company also showcased storage solutions such as the PM9E3 and PM9E1 SSDs for personal AI supercomputing systems like NVIDIA’s DGX platforms.

Strategic positioning

The announcements come as global demand for AI chips intensifies, with semiconductor companies racing to supply memory and compute solutions for large-scale AI models and infrastructure.

Samsung’s expanded collaboration with NVIDIA signals its ambition to compete more aggressively in the AI hardware stack, where high-bandwidth memory has become a critical component for training and deploying advanced AI systems.
