Early on Friday, Samsung revealed the world's first 32 Gb DDR5 DRAM die. The new memory die is made on the company's 12 nm-class DRAM fabrication process and not only offers increased density, but also lowers power consumption. The chip will allow Samsung to build record-capacity 1 TB RDIMMs for servers, as well as lower the cost of high-capacity memory modules.

“With our 12nm-class 32 Gb DRAM, we have secured a solution that will enable DRAM modules of up to 1 TB, allowing us to be ideally positioned to serve the growing need for high-capacity DRAM in the era of AI (Artificial Intelligence) and big data,” said SangJoon Hwang, executive vice president of DRAM product & technology at Samsung Electronics.

32 Gb memory dies not only enable Samsung to build a regular, single-rank 32 GB module for client PCs using just eight single-die memory chips, but they also allow for higher-capacity DIMMs that were not previously possible. We are talking about 1 TB memory modules using forty 8-Hi 3DS memory stacks, each based on eight 32 Gb memory devices. Such modules may sound like overkill, but for artificial intelligence (AI), big data, and database servers, more DRAM capacity can easily be put to good use. Eventually, 1 TB RDIMMs would allow for up to 12 TB of memory in a single-socket server (e.g. AMD's EPYC 9004 platform), something that cannot be done now.
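The capacity figures above can be sanity-checked with some quick arithmetic. The sketch below assumes a dual-rank x4 ECC RDIMM organization (two 40-bit DDR5 subchannels form an 80-bit module, so 20 x4 packages per rank, 40 in total, with 8 of every 10 bits carrying data and the rest ECC); those organization details are our assumption, not something Samsung has confirmed.

```python
# Back-of-the-envelope check of the quoted module capacities,
# assuming a dual-rank x4 ECC DDR5 RDIMM (40 packages, 8/10 data ratio).

DIE_GBIT = 32        # density of the new Samsung die, in gigabits
STACK_HEIGHT = 8     # 8-Hi 3DS package: eight dies stacked per chip
PACKAGES = 40        # packages on a dual-rank x4 ECC RDIMM

package_gb = DIE_GBIT * STACK_HEIGHT // 8      # GB per 8-Hi package
raw_gb = PACKAGES * package_gb                 # raw capacity, incl. ECC bits
data_gb = raw_gb * 8 // 10                     # usable data capacity

print(package_gb)            # 32 GB per package
print(data_gb)               # 1024 GB = 1 TB of data per module

# Twelve channels at one such DIMM each (e.g. EPYC 9004) gives:
print(12 * data_gb // 1024)  # 12 TB per socket
```

The same die also covers the client case: eight single-die x8 chips on a 64-bit non-ECC module yield the 32 GB single-rank DIMM mentioned above.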

With regards to power consumption, Samsung says that using the new dies it can build 128 GB DDR5 RDIMMs for servers that consume 10% less power than current-generation modules built around 16 Gb dies. This drop in power consumption can be attributed both to the 12 nm-class DRAM production node and to avoiding the use of 3D stacked (3DS) chips that pack two 16 Gb dies into a single package.

Samsung is not disclosing the speed bins of its 32 Gb memory dies, but finished modules based on 16 Gb dies made on the same 12 nm-class technology offer a 7200 MT/s data transfer rate.

Samsung intends to start mass production of 32 Gb memory dies by the end of 2023, but for now the company isn't detailing when it plans to offer finished chips to customers. It's likely that the company will start with client PCs first, though whether that translates into any cost savings remains to be seen.

Meanwhile, it usually takes a while for server platform developers and vendors to validate and qualify new memory components. So while Samsung has 1 TB RDIMMs in its future, it will take some time before we see them in shipping servers.

Source: Samsung

6 Comments

  • Soulkeeper - Saturday, September 2, 2023 - link

    So they are at parity with DDR4 now ...
    It seems like they are dragging their feet. DDR5 with 64 Gb memory dies would be more in line with a next-gen replacement.
  • nandnandnand - Sunday, September 3, 2023 - link

    "So they are at parity with DDR4 now ..."

    I'm going to need a fact check on that one. IIRC, the largest DDR4 dies are 16Gb, and DDR5 is at 24Gb. There is no 32 Gb DDR4 die.

    You may be confusing Gb (gigabit) with GB (gigabyte). This development means that instead of 48 GB consumer DDR5 modules, we can have 64 GB in 1-2 years.
  • Kevin G - Sunday, September 3, 2023 - link

    There are some 2-DIMM-per-channel Epyc 9004 motherboards, which would permit 24 TB per socket. A dual-socket board would double that to 48 TB. The Xeon side still supports quad- and octo-socket topologies, which boost maximum capacities higher even though each socket has fewer channels. An octo-socket Sapphire Rapids system can hit 128 TB in a single logical system via its own memory controllers. CXL memory expanders do work but are not officially supported by AMD and Intel on their current generation. The next wave of products should formalize CXL memory support to further boost memory capacities.

    While that seems like a lot, systems are quickly approaching the nested-page limitation of 256 TB of memory (48-bit addressing mode). On the hardware side of things, support for an additional paging layer for 57-bit addressing does exist, but I'm uncertain of its adoption.
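For readers checking the figure in the comment above, the 256 TB ceiling follows directly from 48-bit addressing; a quick sketch (using binary units, i.e. TiB/PiB):

```python
# 4-level paging on x86-64 covers 48-bit virtual addresses:
TIB = 2**40
print(2**48 // TIB)    # 256 TiB addressable

# The optional fifth paging level (x86-64 "LA57") extends virtual
# addresses to 57 bits:
PIB = 2**50
print(2**57 // PIB)    # 128 PiB addressable
```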
  • torbendalum - Monday, September 4, 2023 - link

    Epyc 9004 only supports 6 TB per socket, so bigger memory modules would not allow you to have more memory.
  • deil - Monday, September 4, 2023 - link

    It's a good option for expansion, and once the modules exist, AMD will target them in its next CPU.
    It's always good to see bigger and better, even if it outgrows what we can handle right now.
    I believe this enables us to use 6 TB of RAM in a 1U server, but I cannot double-check it.
    I hope PCs will get more as well, as 16 GB on everything is very low already...
  • Kevin G - Tuesday, September 5, 2023 - link

    That is the stated official maximum, but there doesn't appear to be any artificial limitation on the Epyc 9004 series. This is likely due to AMD not having access to 1 TB modules for validation/testing at the time of Epyc 9004's release. They should work, but definitely test before deployment, or see if they appear on the OEM's validated list for the motherboard (which should be done even for 6 TB/1 DIMM per channel setups for the same reason).
