News
According to industry sources cited in a report from Economic Daily News, TSMC is gearing up to start production of NVIDIA’s latest Blackwell-architecture graphics processing units (GPUs) on its 4nm process. In response to strong customer demand, NVIDIA has reportedly increased its orders to TSMC by 25%.
This surge not only underscores the unprecedented boom in the AI market but also provides substantial momentum for TSMC’s performance in the second half of the year, setting the stage for an optimistic annual outlook adjustment, the report notes.
TSMC is set to hold an earnings conference call on July 18, in which it is expected to release the financial results of the second quarter as well as the guidance for the third quarter.
As TSMC reportedly prepares to commence production of NVIDIA’s Blackwell platform architecture GPU, regarded as one of the most powerful AI chips to date, the topic is anticipated to be a focal point of discussion at the upcoming earnings call.
Packed with 208 billion transistors, NVIDIA’s Blackwell-architecture GPUs are manufactured using a custom-built TSMC 4NP process, with two reticle-limit GPU dies connected by a 10 TB/s chip-to-chip link into a single, unified GPU.
The report further cited sources revealing that international giants such as Amazon, Dell, Google, Meta, and Microsoft will adopt NVIDIA’s Blackwell-architecture GPUs for AI servers. As demand exceeds expectations, NVIDIA has been prompted to increase its orders with TSMC by approximately 25%.
As NVIDIA ramps up production of its Blackwell-architecture GPUs, shipments of terminal server cabinets, including the GB200 NVL72 and GB200 NVL36 models, have risen sharply in tandem. The combined shipment forecast, initially 40,000 units, has surged to 60,000 units, a 50% increase. Among them, the GB200 NVL36 accounts for the majority with 50,000 units.
Estimates cited in the report suggest that the average selling price of the GB200 NVL36 server cabinet is USD 1.8 million, while the GB200 NVL72 commands an even higher USD 3 million. As each GB200 superchip pairs one Grace CPU with two B200 GPUs, the GB200 NVL36 features 18 GB200 superchips (18 Grace CPUs and 36 B200 GPUs), whereas the GB200 NVL72 packs 36 GB200 superchips (36 Grace CPUs and 72 B200 GPUs), all of which contributes to TSMC’s momentum.
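As a quick sanity check on the scale involved, the shipment and pricing figures above can be combined into a back-of-the-envelope revenue estimate. This is only an illustrative sketch using the reported numbers; the 10,000-unit NVL72 figure is inferred here as the remainder of the 60,000-unit total and is not stated explicitly in the report.

```python
# Back-of-the-envelope estimate of GB200 cabinet shipment value,
# using only the figures reported above (illustrative, not official).

NVL36_UNITS = 50_000                 # reported majority of the 60,000-unit total
NVL72_UNITS = 60_000 - NVL36_UNITS   # inferred remainder: 10,000 units

NVL36_ASP = 1_800_000                # USD, reported average selling price
NVL72_ASP = 3_000_000                # USD

total_value = NVL36_UNITS * NVL36_ASP + NVL72_UNITS * NVL72_ASP
print(f"Implied cabinet value: USD {total_value / 1e9:.0f} billion")
```

At the reported prices, the revised 60,000-unit forecast implies roughly USD 120 billion in cabinet value, which helps explain why the order increase matters so much for TSMC’s second-half outlook.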
TSMC’s former Chairman Mark Liu, before handing over the reins in June, had already noted that demand for AI applications looks more optimistic than it did a year ago. Current Chairman C.C. Wei has likewise indicated that AI applications are just beginning, and that he is as optimistic as everyone else.
(Photo credit: TSMC)
News
According to a report from BusinessKorea, memory giant SK hynix is deepening its collaboration with TSMC and NVIDIA, and will announce a closer partnership at the Semicon Taiwan exhibition in September.
SK hynix has been collaborating with TSMC for many years. In 2022, TSMC announced the establishment of the OIP 3DFabric Alliance at its North America Technology Symposium, incorporating partners in memory and packaging.
At that time, Kangwook Lee, Senior Vice President and PKG Development Lead at SK hynix, revealed that the company had been working closely with TSMC on previous and current generations of high-bandwidth memory (HBM) technology, supporting compatibility with the CoWoS process and HBM interconnectivity.
After joining the 3DFabric Alliance, SK hynix reportedly plans to deepen its collaboration with TSMC to develop solutions for the next generation of HBM, looking to achieve innovations in system-level products.
SK hynix President Justin Kim is reportedly set to deliver a keynote speech at the International Semiconductor Exhibition in Taipei in September, marking SK hynix’s first such keynote. Following the speech, Kim will reportedly hold discussions with senior executives from TSMC, and possibly NVIDIA CEO Jensen Huang, on collaborative plans for the next generation of HBM. This move is expected to further solidify the trilateral alliance between SK hynix, TSMC, and NVIDIA.
Notably, collaboration among the three giants was hinted at in the first half of this year: on April 25, SK Group Chairman Chey Tae-won traveled to Silicon Valley to meet with NVIDIA CEO Jensen Huang, potentially in connection with these strategies.
Reportedly, SK hynix will adopt TSMC’s logic process to manufacture the base die for HBM. Reports indicate that SK hynix and TSMC have agreed to collaborate on the development and production of HBM4, scheduled for mass production in 2026.
HBM stacks core memory dies vertically on a base die, with the layers interconnected through the stack. While SK hynix currently manufactures the base die for HBM3e using its own process, it will switch to TSMC’s advanced logic process for HBM4. The same report further suggested that SK hynix will highlight its achievements at forums, including a reduction in HBM4 power consumption of more than 20% beyond initial targets.
(Photo credit: SK hynix)
News
According to a report from Tom’s Hardware, one of GPU giant NVIDIA’s key advantages in data centers is not only its leading GPUs for AI and HPC computing but also its effective scaling of data center processors using its own hardware and software. To compete with NVIDIA’s CUDA ecosystem, Chinese GPU manufacturer Moore Threads has developed networking technology aimed at achieving the same horizontal scaling of GPU compute power across its clusters, addressing market demand.
Moore Threads was founded in 2020 by former senior executives from NVIDIA China. After being blacklisted under U.S. export restrictions, the company lost access to advanced manufacturing processes but continued to develop gaming GPUs.
Per another report from South China Morning Post, Moore Threads has upgraded the AI KUAE data center servers, with a single cluster connecting up to 10,000 GPUs. The KUAE data center server integrates eight MTT S4000 GPUs, designed for training and running large language models and interconnected using MTLink network technology, similar to NVIDIA’s NVLink.
These GPUs use the MUSA architecture, each featuring 128 tensor cores and 48GB of GDDR6 memory with 768GB/s of bandwidth. A cluster of 10,000 GPUs thus offers up to 1,280,000 tensor cores, though actual performance depends on various factors.
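The per-GPU specifications above scale linearly into the headline cluster figure; a minimal sketch using only the numbers reported here shows how the 1,280,000-core total arises, along with the implied aggregate memory capacity.

```python
# Aggregate specs of a 10,000-GPU KUAE cluster from the per-GPU MTT S4000
# figures reported above. These are peak paper numbers; real performance
# depends on the interconnect, software stack, and workload.

TENSOR_CORES_PER_GPU = 128
MEMORY_GB_PER_GPU = 48
GPUS = 10_000

total_cores = TENSOR_CORES_PER_GPU * GPUS          # 1,280,000 tensor cores
total_memory_tb = MEMORY_GB_PER_GPU * GPUS / 1000  # 480 TB of GDDR6

print(f"{total_cores:,} tensor cores, {total_memory_tb:.0f} TB aggregate memory")
```

The linear scaling is exactly why the interconnect (MTLink, in this case) matters: the raw core count only translates into usable compute if the GPUs can exchange data fast enough during training.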
However, Moore Threads’ products still lag behind NVIDIA’s GPUs in performance. Even NVIDIA’s 2020 A100 80GB GPU significantly outperforms the MTT S4000 in computing power.
Moore Threads has established strategic partnerships with major telecommunications companies like China Mobile and China Unicom, as well as with China Energy Engineering Group and Big Data Technology Co., Ltd., to develop three new computing clusters aimed at boosting China’s AI development.
Recently, Moore Threads completed a new round of financing, raising CNY 2.5 billion (roughly USD 343.7 million) to support its expansion plans and technological development. However, the inability to access advanced processes from TSMC, Intel, and Samsung presents significant challenges for developing next-generation GPUs.
(Photo credit: Moore Threads)
News
In order to address the growing demand for high-performance memory solutions fueled by the expansion of the artificial intelligence (AI) market, Samsung Electronics has formed a new “HBM Development Team” within its Device Solutions (DS) Division to enhance its competitive edge in high-bandwidth memory (HBM), according to the latest report from Business Korea. The new team will concentrate on advancing the progress on HBM3, HBM3e, and the next-generation HBM4 technologies, the report noted.
This initiative comes shortly after the Korean memory giant changed its semiconductor business leader in May. Citing industry sources, the report stated that Samsung’s DS Division carried out an organizational restructuring centered on the establishment of the HBM Development Team.
The move also attracts attention because on July 4th, a report from Korean media outlet Newdaily indicated that Samsung had finally obtained NVIDIA’s approval for qualification of its 5th-generation HBM, HBM3e, though the company subsequently denied the market rumor.
Samsung has a long history of dedication to HBM development: since 2015, it has maintained an HBM development organization within its Memory Business Division. Earlier this year, the tech heavyweight also created a task force (TF) to boost its HBM competitiveness, and the new team will unify and enhance these ongoing efforts, the report noted.
According to the report, Samsung reached a significant milestone in February by developing the industry’s first HBM3e 12-layer stack, which offers the industry’s largest capacity of 36 gigabytes (GB). Samples of the HBM3e 8-layer and 12-layer stacks have already been sent to NVIDIA for quality testing.
Regarding the latest development, TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.
According to TrendForce’s latest analysis on the HBM market, HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50–60% and a wafer area 60% larger than DRAM products mean a higher proportion of wafer input is required. Based on the TSV capacity of each company, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
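The yield and die-size figures above imply how much extra wafer input HBM consumes relative to conventional DRAM. The following sketch illustrates the multiplier under stated assumptions: the midpoint 55% of the reported 50–60% HBM yield range, and an assumed 90% baseline yield for mature DRAM products (the baseline yield is not given in the analysis above).

```python
# Rough wafer-input multiplier for HBM vs. conventional DRAM, based on the
# figures above: ~50-60% HBM yield and a die area ~60% larger than DRAM.
# Assumptions (illustrative only): 55% midpoint HBM yield, 90% DRAM yield.

HBM_AREA_FACTOR = 1.6   # HBM die area ~60% larger than a DRAM product
HBM_YIELD = 0.55        # midpoint of the reported 50-60% range (assumption)
DRAM_YIELD = 0.90       # assumed baseline yield for mature DRAM (assumption)

# Wafer input needed per good die, relative to conventional DRAM
multiplier = (HBM_AREA_FACTOR / HBM_YIELD) / (1 / DRAM_YIELD)
print(f"HBM needs roughly {multiplier:.1f}x the wafer input of DRAM")
```

Under these assumptions the multiplier comes out to roughly 2.6x, which is consistent with the analysis above: prioritizing HBM inevitably crowds out wafer capacity that would otherwise go to LPDDR5(X) and DDR5.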
(Photo credit: Samsung)
News
In early June, NVIDIA CEO Jensen Huang revealed that Samsung’s High Bandwidth Memory (HBM) was still undergoing the certification process but was one step away from beginning supply. On July 4th, a report from Korean media outlet Newdaily indicated that Samsung has finally obtained the GPU giant’s approval for qualification of its 5th-generation HBM, HBM3e. Samsung is expected to soon proceed with the subsequent procedures to officially start mass production for HBM supply, the report noted.
Citing sources from the semiconductor industry, the report stated that Samsung recently received the HBM3e quality test PRA (Product Readiness Approval) notification from NVIDIA. It is expected that negotiations for supply will commence afterward.
However, just one hour after the news that Samsung’s HBM3e had passed NVIDIA’s tests, another media outlet, Hankyung, reported that Samsung had denied the rumor, clarifying that it is “not true” and that the company is continuously undergoing quality assessments.
TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. The successful qualification mentioned in the article was only an internal achievement for Samsung. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.
Per a previous report from Reuters, Samsung has been attempting to pass NVIDIA’s tests for HBM3 and HBM3e since last year, and a test of its 8-layer and 12-layer HBM3e chips was said to have failed in April.
According to the latest report from Newdaily, Samsung’s breakthrough came about a month after the memory heavyweight sent executives in charge of HBM and memory development to the U.S. at NVIDIA’s request. Previously, it was reported that Samsung had failed to pass the quality test as scheduled due to issues such as overheating.
The report further stated that while supplying HBM to NVIDIA is crucial from Samsung’s perspective, NVIDIA is also eager to secure more HBM, given the overwhelming demand for AI semiconductors and the impending mass production of its next-generation Rubin GPUs, which significantly increase HBM usage.
According to the report, Samsung is expected to eliminate uncertainties in HBM and start full-scale mass production, giving a significant boost to its memory business. There are also suggestions that its HBM performance could see a quantum leap starting in the second half of the year.
On the other hand, Samsung’s major rival SK hynix is the primary supplier of NVIDIA’s HBM3 and HBM3e. According to an earlier TrendForce analysis, NVIDIA’s upcoming B100 and H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix.
According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50%, and is close to achieving its target yield of 80%.
(Photo credit: Samsung)