News
TSMC announced last year that it would build a plant in Dresden, Germany. The plant was originally expected to break ground as early as Q4 this year, but it may now start sooner. According to a report from Deutsche Welle, construction of TSMC's Dresden plant will begin within a few weeks, meaning a start this fall, in line with the company's previously announced timeline.
The TSMC Germany plant was initially scheduled to begin construction in the second half of 2024 and to start production by late 2027. The new plant is expected to create approximately 2,000 direct high-tech jobs. TSMC will hold a 70% stake in the plant and operate the facility, with Bosch, Infineon, and NXP each holding a 10% stake. The EU and the German government are subsidizing about half of the plant's investment.
To ensure the plant can commence production smoothly in 2027, the city of Dresden is investing EUR 250 million to build an industrial water supply system and enhance the reliability of the local power grid.
The TSMC Germany plant is expected to use 28/22nm planar CMOS and 16/12nm FinFET process technologies, with a monthly production capacity of approximately 40,000 300mm (12-inch) wafers.
Meanwhile, another global semiconductor giant, Intel, is said to have delayed construction of Fab 29.1 and 29.2 in Magdeburg, Germany, with the new timeline pushing the start of construction to May 2025, according to a report by Tom's Hardware citing German media outlet Volksstimme.
Read more
(Photo credit: TSMC)
News
Amidst the tide of artificial intelligence (AI), new types of DRAM, represented by HBM, are embracing a new round of development opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have emerged as the memory industry's new sought-after products.
According to a report from WeChat account DRAMeXchange, the rapid development of AI and big data is driving an increase in the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, memory system bandwidth must increase significantly. In this context, new high-bandwidth memory modules for servers, MRDIMM/MCRDIMM, have emerged.
On July 22, JEDEC announced that it will soon release the DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMM) and the next-generation LPDDR6 Compression-Attached Memory Module (CAMM) advanced memory module standards, and introduced key details of these two types of memory, aiming to support the development of next-generation HPC and AI. These two new technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.
As a follow-up to JEDEC's JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, with a target maximum speed of over 14.4GT/s. Under the plan, this module will also provide 24-bit wide subchannels and 48-bit wide channels, and support a "connector array" to meet the needs of future HPC and mobile devices.
DDR5 MRDIMM supports multiplexed ranks, combining and transmitting multiple data signals over a single channel, effectively increasing bandwidth without additional physical connections. JEDEC has reportedly planned multiple generations of DDR5 MRDIMM, with the ultimate goal of raising its bandwidth to 12.8Gbps, doubling the 6.4Gbps pin speed of current DDR5 RDIMM memory.
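As a rough illustration of what that doubling means at the module level, peak per-channel bandwidth scales with pin speed times the standard 64-bit DDR5 channel width; the sketch below uses these standard parameters, not figures from JEDEC's announcement:

```python
# Rough peak-bandwidth arithmetic for one 64-bit DDR5 channel.
# Illustrative only: real modules add ECC bits and protocol overhead.

CHANNEL_WIDTH_BITS = 64  # data width of a DDR5 channel, excluding ECC

def peak_bandwidth_gb_s(pin_speed_gtps: float) -> float:
    # transfers/s x bits per transfer / 8 bits per byte
    return pin_speed_gtps * CHANNEL_WIDTH_BITS / 8

for name, speed in [("DDR5 RDIMM", 6.4), ("DDR5 MRDIMM target", 12.8)]:
    print(f"{name}: {speed} GT/s -> {peak_bandwidth_gb_s(speed):.1f} GB/s")
# DDR5 RDIMM: 6.4 GT/s -> 51.2 GB/s
# DDR5 MRDIMM target: 12.8 GT/s -> 102.4 GB/s
```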
In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.
JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.
In March 2023, AMD announced at the Memcom 2023 event that it was collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom's Hardware at that time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s and the third generation 17600 MT/s.
MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.
On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.
According to Micron, MRDIMM offers the highest bandwidth, largest capacity, and lowest latency in its class, along with better performance per watt, outperforming current TSV RDIMMs in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.
Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.
MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall form factor (TFF) specifications, suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM outperforms a TSV RDIMM of similar capacity by 35%.
This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.
As one of the world’s largest memory manufacturers, SK hynix already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.
MCRDIMM, short for "Multiplexer Combined Ranks Dual In-line Memory Module," is a module product that combines multiple DRAMs on a substrate and operates the module's two ranks (its basic information processing units) simultaneously.
In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.
Traditional DRAM modules can only transfer 64 bytes of data to the CPU at a time, while SK hynix's MCRDIMM module can transfer 128 bytes by running two memory ranks simultaneously. Doubling the data delivered to the CPU per transfer boosts the data rate to over 8Gbps, twice that of a single DRAM.
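A minimal sketch of that multiplexing idea follows; the 4.2Gbps per-rank rate is a hypothetical placeholder (not an SK hynix figure), chosen only so the doubled rate lands above 8Gbps as described:

```python
# Toy model of MCRDIMM-style rank multiplexing, as described above.
# Two ranks are accessed in parallel and their outputs are multiplexed
# onto one host channel; numbers are illustrative, not SK hynix specs.

CACHELINE_BYTES = 64  # data one rank delivers per access

def bytes_per_transfer(ranks_multiplexed: int = 2) -> int:
    # Each rank contributes a 64-byte block in the same transfer window.
    return CACHELINE_BYTES * ranks_multiplexed

def host_channel_gbps(per_rank_gbps: float, ranks_multiplexed: int = 2) -> float:
    # The host-side data rate doubles because two ranks' data share the bus.
    return per_rank_gbps * ranks_multiplexed

print(bytes_per_transfer())    # 128 bytes instead of 64
print(host_channel_gbps(4.2))  # hypothetical 4.2 Gbps/rank -> 8.4 Gbps
```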
At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.
MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, increasing the capacity and bandwidth of the DIMM module by a large margin.
Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.
MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which serve different application scenarios than HBM and occupy their own independent market space. As an industry-standard packaged memory, HBM achieves higher bandwidth and energy efficiency at a given capacity with a smaller size. However, due to its high cost, small capacity, and lack of scalability, its application is limited to a few fields. From an industry perspective, memory modules thus remain the mainstream solution for large-capacity, cost-effective, and scalable memory.
Montage Technology believes that, based on its high bandwidth and large capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, the future new high-bandwidth memory modules for servers, MRDIMM, will support even higher memory bandwidth, further matching the bandwidth demands of HPC and AI application scenarios.
Read more
(Photo credit: SK hynix)
News
South Korean memory giant SK hynix announced on July 26 that it has decided to invest about 9.4 trillion won (approximately USD 6.8 billion) in building the first fab and business facilities of the Yongin Semiconductor Cluster, following a board resolution the same day, according to its press release.
The company plans to start construction of the fab in March next year and complete it in May 2027, with the investment period planned to run from August 2024 to the end of 2028, SK hynix states.
The company will produce next-generation DRAMs, including HBM, at the 1st fab, and will prepare to produce other products in line with market demand at the time of completion.
“The Yongin Cluster will be the foundation for SK hynix’s mid- to long-term growth and a place for innovation and co-prosperity that we are creating with our partners,” said Vice President Kim Young-sik, Head of Manufacturing Technology at SK hynix. “We want to contribute to revitalizing the national economy by successfully completing the large-scale industrial complex and dramatically enhancing Korea’s semiconductor technology and ecosystem competitiveness,” according to the press release.
The Yongin Cluster, which will be built on a 4.15 million square meter site in Wonsam-myeon, Yongin, Gyeonggi Province, is currently under site preparation and infrastructure construction. SK hynix has decided to build four state-of-the-art fabs that will produce next-generation semiconductors, and a semiconductor cooperation complex with more than 50 small local companies.
After the construction of the 1st fab, the company aims to complete the remaining three fabs sequentially to grow the Yongin Cluster into a “Global AI semiconductor production base,” the press release notes.
The 9.4 trillion won investment approved this time includes various construction costs necessary for the initial operation of the cluster, covering auxiliary facilities, business support buildings, and welfare facilities along with the 1st fab.
In addition, SK hynix plans to build a "Mini-fab" within the first phase to help small businesses develop, demonstrate, and evaluate technologies. Through the Mini-fab, the company will provide small business partners with an environment similar to the actual production site so that they can refine their technologies as much as possible.
Read more
(Photo credit: SK hynix)
News
Due to challenges in exporting high-performance processors based on x86 and Arm architectures to China, the country is gradually adopting domestically designed operating systems.
According to industry sources cited by Tom's Hardware, Tencent Cloud recently launched the TencentOS Server V3 operating system, which supports China's three major processors: Huawei's Kunpeng CPUs based on Arm, Sugon's Hygon CPUs based on x86, and Phytium's FeiTeng CPUs based on Arm.
The operating system optimizes CPU usage, power consumption, and memory usage. To optimize the operating system and domestic processors for data centers, Tencent has collaborated with Huawei and Sugon to develop a high-performance domestic database platform.
Reportedly, TencentOS Server V3 can run GPU clusters, aiding Tencent’s AI operations. The latest version of the operating system fully supports NVIDIA GPU virtualization, enhancing processor utilization for resource-intensive services such as Optical Character Recognition (OCR). This innovative approach reduces the cost of purchasing NVIDIA products by nearly 60%.
TencentOS Server is already running on nearly 10 million machines, making it one of the most widely deployed Linux operating systems in China. Other companies, such as Huawei, have also developed their own operating systems, like openEuler.
Read more
(Photo credit: Tencent Cloud)
News
According to a report from Wccftech, soaring market demand has significantly increased the shipment volume of NVIDIA's Blackwell architecture GB200 AI servers.
As NVIDIA claims, the Blackwell series is expected to be its most successful product. Industry sources cited by Wccftech indicate that NVIDIA’s latest GB200 AI servers are drawing significant orders, with strong demand projected to continue beyond 2025. This ongoing demand is enabling NVIDIA to secure additional orders as its newest AI products remain dominant.
The increasing demand for NVIDIA GB200 AI servers has led Taiwanese suppliers such as Quanta, Foxconn, and Wistron to deliver revenue performances exceeding expectations. Reportedly, NVIDIA is expected to ship 60,000 to 70,000 GB200 AI servers. With each server estimated to cost between USD 2 million and USD 3 million, Blackwell servers alone could generate up to approximately USD 210 billion in annual revenue at the high end of those ranges.
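As a quick sanity check of those figures (our back-of-the-envelope arithmetic, not a calculation from the cited report):

```python
# Back-of-the-envelope check of the GB200 shipment and revenue figures above.
units = (60_000, 70_000)            # estimated servers shipped
price_usd = (2_000_000, 3_000_000)  # estimated cost per server

low = units[0] * price_usd[0]
high = units[1] * price_usd[1]
print(f"Revenue range: USD {low/1e9:.0f}B to {high/1e9:.0f}B")
# Revenue range: USD 120B to 210B -> the ~USD 210 billion figure is the high end
```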
NVIDIA's GB200 AI servers are available in NVL72 and NVL36 specifications. The less powerful NVL36 has seen greater preference, as a growing number of AI startups choose the more financially feasible option.
With Blackwell debuting in the market by Q4 2024, NVIDIA is projected to achieve significant revenue figures, potentially surpassing the performance of the previous Hopper architecture. Furthermore, NVIDIA has reportedly placed orders for around 340,000 CoWoS advanced packaging units with TSMC for 2025.
Notably, according to the industry sources previously cited in a report from Economic Daily News, TSMC is gearing up to start production of NVIDIA’s latest Blackwell platform architecture graphics processors (GPU) on the 4nm process.
The same report further cited sources revealing that international giants such as Amazon, Dell, Google, Meta, and Microsoft will adopt the NVIDIA Blackwell architecture GPU for AI servers. As demand exceeds expectations, NVIDIA has reportedly increased its orders with TSMC by approximately 25%.
Read more
(Photo credit: NVIDIA)