Samsung has recently introduced HBM3E 12H DRAM with advanced TC NCF technology, which is sure to excite fans of acronyms. For those unfamiliar with the jargon, HBM stands for “high bandwidth memory” and delivers exactly what it promises.

Back in October, Samsung unveiled HBM3E Shinebolt, an upgraded version of the third generation HBM that can achieve a remarkable 9.8Gbps per pin (resulting in a total of 1.2 terabytes per second for the whole package).
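To see where the 1.2 terabytes per second comes from, here is a quick back-of-the-envelope check, assuming the standard 1,024-bit (1,024 data pin) interface that HBM stacks use:

```python
# Quick sanity check: per-pin data rate to whole-package bandwidth.
# Assumes the standard 1,024-bit-wide (1,024 data pin) HBM stack interface.
pin_rate_gbps = 9.8        # Gbps per pin, Samsung's quoted figure for Shinebolt
data_pins = 1024           # data pins per HBM stack (standard interface width)

bandwidth_gb_s = pin_rate_gbps * data_pins / 8   # divide by 8 to go from bits to bytes
print(f"{bandwidth_gb_s:.1f} GB/s")              # ~1254.4 GB/s, i.e. roughly 1.2 TB/s
```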

Now, let’s talk about 12H. This number simply indicates how many chips have been stacked vertically in each module – in this case, 12. By stacking more chips, Samsung has managed to reach an impressive 36GB capacity with its 12H design, which is 50% more than an 8H configuration. Despite the increase in capacity, the bandwidth remains constant at 1.2 terabytes per second.
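Those figures imply 3GB (24Gb) per DRAM die – a quick sketch of the arithmetic, derived only from the numbers quoted above:

```python
# Capacity arithmetic implied by the 36GB / 12H figure.
total_12h_gb = 36
dies_12h = 12

gb_per_die = total_12h_gb / dies_12h      # 3 GB (24 Gbit) per stacked DRAM die
total_8h_gb = 8 * gb_per_die              # 24 GB for an 8H stack of the same dies
print(gb_per_die)                         # 3.0
print(total_12h_gb / total_8h_gb - 1)     # 0.5 -> 12H offers 50% more capacity
```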

Finally, TC NCF stands for Thermal Compression Non-Conductive Film – the non-conductive film is the material layered between the stacked chips, and thermal compression is the bonding process used to attach them. Samsung has been working on making this film thinner, currently down to 7µm, allowing the 12H stack to have a similar height to an 8H stack. This enables the use of the same HBM packaging. Additionally, the TC NCF improves the thermal properties of the stack for better cooling and also helps improve yields.


What applications will this memory serve? It shouldn’t come as a surprise that AI is the target market. AI processing requires huge amounts of RAM, and the likes of Nvidia are using high-bandwidth memory in their most advanced designs.

For example, the Nvidia H200 Tensor Core GPU boasts 141GB of HBM3E running at a total of 4.8 terabytes per second, far beyond what consumer GPUs achieve with GDDR. The RTX 4090, for comparison, has 24GB of GDDR6X with about 1 terabyte per second of bandwidth.
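For a rough sense of the gap, the 4090’s bandwidth follows from its published memory specs (a 384-bit bus running GDDR6X at 21Gbps per pin); the comparison below is just that arithmetic:

```python
# Rough bandwidth comparison: RTX 4090 (GDDR6X) vs H200 (HBM3E).
# The 4090 figures are its published specs: 384-bit bus, 21 Gbps per pin.
rtx4090_gb_s = 384 * 21 / 8        # 1008 GB/s, i.e. about 1 TB/s
h200_gb_s = 4800                   # 4.8 TB/s quoted for the H200

print(rtx4090_gb_s)                # 1008.0
print(h200_gb_s / rtx4090_gb_s)    # ~4.8x the memory bandwidth
```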

Reportedly, the H200 uses six 24GB HBM3E 8H modules from Micron (144GB in total, of which only 141GB is usable). The same capacity could be achieved with just four 12H modules, while six 12H modules would bring the total to 216GB.
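The stack-count options work out like this – simple multiplication using the per-stack capacities mentioned above:

```python
# Capacity options from the stack configurations mentioned above.
stack_8h_gb = 24     # one 8H stack
stack_12h_gb = 36    # one 12H stack

print(6 * stack_8h_gb)    # 144 GB - six 8H stacks (141 GB usable on the H200)
print(4 * stack_12h_gb)   # 144 GB - the same capacity from only four 12H stacks
print(6 * stack_12h_gb)   # 216 GB - six 12H stacks
```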

According to Samsung’s estimates, the enhanced capacity of the new 12H design will accelerate AI training by 34% and allow inference services to handle “more than 11.5 times” the number of users.

The increasing demand for AI accelerators like the H200 ensures a thriving market for memory suppliers like Micron, Samsung, and SK Hynix, all of which are vying for a share of it.

Source