HBM2 Mass Production Begins At Samsung

by Tim Harmer | 20.01.2016 18:00:57

One of the core milestones for high-performance computing in the second half of 2016 now appears to be in place thanks to news from DRAM manufacturer Samsung. The Korean semiconductor behemoth has announced that production of new 4GB DRAM packages manufactured to the 2nd-generation High Bandwidth Memory standard (HBM2) has ramped up, a product intended to satisfy the requirements of 2016's GPU, networking and server markets.

You may recall that first-generation HBM debuted with AMD's R9 Fury family of GPUs (including the R9 Nano). These designs feature four 1GB stacks (each comprising four 2Gb core dies) for a total of 4GB across the R9 Fury, Fury X and Nano models. Although somewhat restrictive in terms of frame buffer size, the memory bandwidth available to the core is considerably higher than that of a comparable amount of GDDR5 memory, whilst the overall power required by the memory subsystem is also reduced. HBM2 is set to improve both memory bandwidth and memory density, boosting performance still further.

Samsung's HBM2 Implementation

The process of memory stacking is made possible by Through Silicon Via (TSV) DRAM technology, which passes signals through TSV 'holes' and microbumps between each DRAM layer. Stacking greatly reduces the physical space memory takes up on the PCB, a fact that AMD's Radeon R9 Nano in particular took full advantage of, and it also allows a far wider bus than GDDR5 memory interfaces.
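The width advantage is the key to HBM's throughput: each stack exposes a 1024-bit interface, versus 32 bits per GDDR5 chip, so HBM can run at far lower per-pin clocks for the same or greater bandwidth. A minimal sketch of the arithmetic (the per-pin data rates below are illustrative figures typical of the era, not Samsung-quoted numbers):

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak bandwidth in GB/s: pin count times per-pin rate, over 8 bits/byte."""
    return bus_width_bits * data_rate_gbps / 8

# One GDDR5 chip: 32-bit bus at a 7 Gbps effective data rate
gddr5_chip = bandwidth_gbs(32, 7.0)    # 28.0 GB/s

# One first-generation HBM stack: 1024-bit bus at only 1 Gbps per pin
hbm1_stack = bandwidth_gbs(1024, 1.0)  # 128.0 GB/s

print(gddr5_chip, hbm1_stack)
```

Despite the far slower per-pin rate, the wide interface gives a single HBM stack several times the bandwidth of a GDDR5 chip, which is also why it draws less power.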

Samsung's HBM2 memory packages stack four 8 Gigabit layers (known as 'core dies') on a single buffer die, which then communicates with the processor (GPU, CPU etc.) via a silicon interposer. Each stack therefore provides a total of 4GB of memory and, assuming next-generation GPUs adopt the same four-stack layout as AMD's Fury designs, allows for a total of 16GB of VRAM per GPU; the actual number of packages used can of course vary. Samsung's 4GB HBM2 memory package has a total available bandwidth of 256GB/s, twice that of first-generation HBM.
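The capacity and bandwidth figures above follow from straightforward arithmetic; a short sketch (the four-stack configuration is an assumption mirroring AMD's Fury boards, as noted):

```python
GBIT_PER_DIE = 8      # each HBM2 core die is 8 Gigabit
DIES_PER_STACK = 4    # four core dies atop one buffer die
STACKS_PER_GPU = 4    # assumed, matching AMD's Fury layout

stack_gb = GBIT_PER_DIE * DIES_PER_STACK / 8  # Gigabit -> Gigabyte
total_gb = stack_gb * STACKS_PER_GPU          # VRAM per GPU

STACK_BANDWIDTH_GBS = 256                     # per Samsung's HBM2 package
total_bandwidth = STACK_BANDWIDTH_GBS * STACKS_PER_GPU

print(stack_gb, total_gb, total_bandwidth)    # 4.0 16.0 1024
```

Four stacks would thus offer 16GB of VRAM and just over 1TB/s of aggregate peak bandwidth, under the assumed configuration.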

HBM2 memory is likely to be a cornerstone of both AMD's and NVIDIA's flagship GPU architectures for 2016 and 2017; NVIDIA in particular will probably leverage the technology in their Pascal-class server solutions. At present, mid-tier graphics solutions are likely to remain GDDR5-based, mainly due to the cost of the still-developing technology. Neither NVIDIA nor AMD have revealed their own source of HBM2 memory.

Samsung are also in the process of developing 8GB HBM2 packages, a design which they claim would reduce the amount of space taken up by memory on a graphics card's PCB by as much as 95% compared to GDDR5.

SOURCE: Samsung