Nvidia


2024-07-16

[News] Samsung’s HBM3e Rumored to be Certified by NVIDIA, Boosting DDR5 Price Increases in Q3

Though Samsung has denied the rumor that its HBM3e passed NVIDIA’s qualification tests, multiple Taiwanese companies in the supply chain reportedly learned that the product is expected to receive certification soon, and will start shipping in Q3. As memory manufacturers are said to shift at least 20-30% of their production capacity to HBM, tightening supply further, DDR5 prices in Q3 will reportedly be on the rise.

It is reported that some of Samsung’s supply chain partners have recently received information to place orders and reserve capacity as soon as possible, which indicates the memory giant’s HBM may begin shipments smoothly in the second half of the year. The move may also imply that the internal capacity allocation within Samsung will accelerate, shifting the focus of production lines to HBM.

Taiwanese memory supply chain sources reportedly believe the news of Samsung’s HBM certification is likely to be confirmed at Samsung’s upcoming financial report meeting on July 31. Memory manufacturers are said to be shifting at least 20-30% of their production capacity to HBM, driving DDR5 prices higher.

TrendForce notes that a recovery in demand for general servers—coupled with an increased production share of HBM by DRAM suppliers—has led suppliers to maintain their stance on hiking prices. As a result, the ASP of DRAM in Q3 is expected to continue rising, with an anticipated increase of 8–13%. Due to high average inventory levels of DDR4 among buyers, purchasing momentum will be focused on DDR5.

On the other hand, regarding NAND prices in Q3, TrendForce reports that while the enterprise sector continues to invest in server infrastructure, the consumer electronics market remains lackluster. This, combined with NAND suppliers aggressively ramping up production in the second half of the year, is likely to curb the blended price hike to a modest 5–10%.

According to TrendForce’s latest analysis, Samsung’s initial plan to pass NVIDIA’s certification in Q2 was delayed, causing it to fall behind SK hynix and Micron. Meanwhile, some HBM suppliers also faced lower-than-expected production yields, raising concerns about a shortage of HBM3e 8hi stacks for the H200 GPU shipments starting in Q2 2024.

However, Samsung adjusted its 1alpha nm front-end production process and back-end stacking process in the first half of 2024, leading the industry to expect that sample production could be completed in Q3 2024, followed by product certification.


(Photo credit: Samsung)

2024-07-15

[News] TSMC Reportedly Sees Strong 4nm Demand, with NVIDIA’s Order Up by 25%

According to industry sources cited in a report from Economic Daily News, TSMC is gearing up to start production of NVIDIA’s latest Blackwell platform architecture graphics processors (GPUs) on the 4nm process. In response to strong customer demand, NVIDIA has reportedly increased its orders to TSMC by 25%.

This surge not only underscores the unprecedented boom in the AI market but also provides substantial momentum for TSMC’s performance in the second half of the year, setting the stage for an optimistic annual outlook adjustment, the report notes.

TSMC is set to hold an earnings conference call on July 18, in which it is expected to release the financial results of the second quarter as well as the guidance for the third quarter.

As TSMC reportedly prepares to commence production of NVIDIA’s Blackwell platform architecture GPU, regarded as one of the most powerful AI chips to date, the ramp is anticipated to be a focal point of discussion at TSMC’s upcoming earnings call.

Packed with 208 billion transistors, NVIDIA’s Blackwell-architecture GPUs are manufactured on a custom-built TSMC 4NP process, with two reticle-limit GPU dies connected by a 10 TB/second chip-to-chip link into a single, unified GPU.

The report further cited sources revealing that international giants such as Amazon, Dell, Google, Meta, and Microsoft will adopt NVIDIA’s Blackwell architecture GPUs for AI servers. As demand exceeds expectations, NVIDIA has been prompted to increase its orders with TSMC by approximately 25%.

As NVIDIA ramps up production of its Blackwell architecture GPUs, shipments of end server cabinets, including the GB200 NVL72 and GB200 NVL36 models, have risen sharply in tandem. The combined shipment forecast, initially 40,000 units, has surged to 60,000 units, a 50% increase. Among them, the GB200 NVL36 accounts for the majority with 50,000 units.

The report’s estimates suggest that the average selling price of the GB200 NVL36 server cabinet is USD 1.8 million, while the GB200 NVL72 server cabinet commands an even higher price of USD 3 million. The GB200 NVL36 features 18 GB200 superchips, comprising 18 Grace CPUs and 36 B200 GPUs, whereas the GB200 NVL72 integrates 36 GB200 superchips, with 36 Grace CPUs and 72 B200 GPUs, all of which contribute to TSMC’s momentum.
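Taking the report’s shipment and pricing estimates at face value, the implied cabinet-level value can be sanity-checked with a few lines of arithmetic (a rough sketch; the unit counts and ASPs are the report’s estimates, not official NVIDIA figures):

```python
# Shipment and ASP figures as cited in the report (estimates, not official data).
NVL36_UNITS = 50_000           # GB200 NVL36 cabinets
NVL72_UNITS = 10_000           # GB200 NVL72 cabinets (60,000 total minus NVL36)
NVL36_ASP_USD = 1_800_000      # average selling price per NVL36 cabinet
NVL72_ASP_USD = 3_000_000      # average selling price per NVL72 cabinet

total_units = NVL36_UNITS + NVL72_UNITS
implied_value_usd = NVL36_UNITS * NVL36_ASP_USD + NVL72_UNITS * NVL72_ASP_USD

print(total_units)                 # 60000 cabinets in total
print(implied_value_usd / 1e9)     # 120.0 (USD billions, implied)
```

At these estimates, the two models would imply roughly USD 120 billion in cabinet-level value, which illustrates why the 25% order increase matters for the supply chain.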

Before handing over the reins in June, TSMC’s former Chairman Mark Liu had already predicted that demand for AI applications looked more promising than it did a year earlier. Current Chairman C.C. Wei has likewise indicated that AI applications are only just beginning, and that he shares the industry’s optimism.


(Photo credit: TSMC)

Please note that this article cites information from Economic Daily News.

2024-07-11

[News] SK hynix, TSMC, and NVIDIA Reportedly Forge Alliance to Develop Next-Generation HBM

According to a report from BusinessKorea, memory giant SK hynix is deepening its collaboration with TSMC and NVIDIA, and will announce a closer partnership at the Semicon Taiwan exhibition in September.

SK hynix has been collaborating with TSMC for many years. In 2022, TSMC announced the establishment of the OIP 3DFabric Alliance at its North America Technology Symposium, incorporating partners in memory and packaging.

At that time, Kangwook Lee, Senior Vice President and PKG Development Lead at SK hynix, revealed that the company has been closely working with TSMC on previous generations and current high-bandwidth memory (HBM) technologies, supporting compatibility with the CoWoS process and HBM interconnectivity.

After joining the 3DFabric Alliance, SK hynix reportedly plans to deepen its collaboration with TSMC to develop solutions for the next generation of HBM, looking to achieve innovations in system-level products.

SK hynix President Justin Kim is reportedly set to deliver a keynote speech at Semicon Taiwan in Taipei in September, marking SK hynix’s first such keynote address. Following the speech, Kim will engage in discussions with senior executives from TSMC, possibly including NVIDIA CEO Jensen Huang, on collaborative plans for the next generation of HBM. This move is expected to further solidify the trilateral alliance between SK hynix, TSMC, and NVIDIA.

Notably, the collaboration among the three giants was hinted at in the first half of this year. On April 25th, SK Group Chairman Chey Tae-won traveled to Silicon Valley to meet with NVIDIA CEO Jensen Huang, potentially in connection with these strategies.

Reportedly, SK hynix will adopt TSMC’s logic process to manufacture the base die for HBM. Reports indicate that the two companies have agreed to collaborate on the development and production of HBM4, scheduled for mass production in 2026.

HBM stacks core DRAM dies vertically on a base die, with the layers interconnected. While SK hynix currently produces the base die for HBM3e using its own process, it will switch to TSMC’s advanced logic process for HBM4. The same report further suggested that SK hynix will highlight achievements at upcoming forums, including a reduction in power consumption of more than 20% beyond its initial targets for HBM4.


(Photo credit: SK hynix)

Please note that this article cites information from BusinessKorea.

2024-07-11

[News] China’s Moore Threads Develops MTLink to Challenge NVIDIA’s NVLink

According to a report from Tom’s Hardware, one of GPU giant NVIDIA’s key advantages in data centers lies not only in its leading GPUs for AI and HPC computing but also in how effectively it scales data center processors using its own hardware and software. To compete with NVIDIA’s NVLink, Chinese GPU manufacturer Moore Threads has developed networking technology aimed at achieving the same horizontal scaling of GPU compute power across its clusters, addressing market demands.

Moore Threads was founded in 2020 by former senior executives from NVIDIA China. After being blacklisted under U.S. export restrictions, the company lost access to advanced manufacturing processes but has continued to develop gaming GPUs.

Per another report from South China Morning Post, Moore Threads has upgraded the AI KUAE data center servers, with a single cluster connecting up to 10,000 GPUs. The KUAE data center server integrates eight MTT S4000 GPUs, designed for training and running large language models and interconnected using MTLink network technology, similar to NVIDIA’s NVLink.

These GPUs use the MUSA architecture, each featuring 128 tensor cores and 48GB of GDDR6 memory with 768GB/s of bandwidth. A cluster of 10,000 GPUs can therefore offer 1,280,000 tensor cores in aggregate, though actual performance depends on various other factors.
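The cluster-level core count follows directly from the per-GPU spec; a minimal check (assuming, as the report does, 128 tensor cores per MTT S4000 and eight GPUs per KUAE server):

```python
TENSOR_CORES_PER_GPU = 128     # MTT S4000, per the report
GPUS_PER_SERVER = 8            # MTT S4000 GPUs integrated in one KUAE server
CLUSTER_GPUS = 10_000          # maximum GPUs in a single KUAE cluster

servers_in_cluster = CLUSTER_GPUS // GPUS_PER_SERVER
cluster_tensor_cores = CLUSTER_GPUS * TENSOR_CORES_PER_GPU

print(servers_in_cluster)      # 1250 eight-GPU servers
print(cluster_tensor_cores)    # 1280000 tensor cores in aggregate
```

The aggregate figure is a nominal count only; interconnect overhead and software efficiency determine how much of it is usable in practice.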

However, Moore Threads’ products still lag behind NVIDIA’s GPUs in performance. Even NVIDIA’s 2020 A100 80GB GPU significantly outperforms the MTT S4000 in computing power.

Moore Threads has established strategic partnerships with major telecommunications companies like China Mobile and China Unicom, as well as with China Energy Engineering Group and Big Data Technology Co., Ltd., to develop three new computing clusters aimed at boosting China’s AI development.

Recently, Moore Threads completed a new round of financing, raising CNY 2.5 billion (roughly USD 343.7 million) to support its expansion plans and technological development. However, the inability to access advanced processes from TSMC, Intel, and Samsung presents significant challenges for developing next-generation GPUs.


(Photo credit: Moore Threads)

Please note that this article cites information from Tom’s Hardware and South China Morning Post.

2024-07-05

[News] Samsung Establishes New HBM Team to Advance HBM3, HBM3e and HBM4 Development

In order to address the growing demand for high-performance memory solutions fueled by the expansion of the artificial intelligence (AI) market, Samsung Electronics has formed a new “HBM Development Team” within its Device Solutions (DS) Division to enhance its competitive edge in high-bandwidth memory (HBM), according to the latest report from Business Korea. The new team will concentrate on advancing the progress on HBM3, HBM3e, and the next-generation HBM4 technologies, the report noted.

This initiative comes shortly after the Korean memory giant changed its semiconductor business leader in May. Citing industry sources, the report stated that Samsung’s DS Division carried out an organizational restructuring centered on the establishment of the HBM Development Team.

The move has also attracted attention because on July 4th, a report from Korean media outlet Newdaily indicated that Samsung had finally obtained approval from NVIDIA for qualification of its 5th-generation HBM, HBM3e, though the company subsequently denied the market rumor.

Samsung has a long history of dedication to HBM development. Since 2015, it has maintained an HBM development organization within its Memory Business Division. Earlier this year, the tech heavyweight also created a task force (TF) to boost its HBM competitiveness, and the new team will unify and enhance these ongoing efforts, the report noted.

According to the report, Samsung reached a significant milestone in February by developing the industry’s first HBM3e 12-layer stack, which offers the industry’s largest capacity of 36 gigabytes (GB). Samples of the HBM3e 8-layer and 12-layer stacks have already been sent to NVIDIA for quality testing.

Regarding the latest development, TrendForce reports that Samsung is still collaborating with NVIDIA and other major customers on the qualifications for both 8-hi and 12-hi HBM3e products. Samsung anticipates that its HBM3e qualification will be partially completed by the end of 3Q24.

According to TrendForce’s latest analysis of the HBM market, HBM production will be prioritized due to its profitability and increasing demand. However, limited yields of around 50–60% and a wafer area 60% larger than that of standard DRAM products mean a higher proportion of wafer input is required. Based on each company’s TSV capacity, HBM is expected to account for 35% of advanced process wafer input by the end of this year, with the remaining wafer capacity used for LPDDR5(X) and DDR5 products.
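The yield and die-area figures above can be combined into a rough wafer-input multiplier (a simplified sketch: it normalizes DRAM yield to 1.0 for comparison and ignores TSV and stacking losses):

```python
# Rough wafer-input multiplier for HBM vs. commodity DRAM, using the
# figures TrendForce cites: ~50-60% yield and ~60% larger die area.
def wafer_input_multiplier(area_ratio: float, hbm_yield: float,
                           dram_yield: float = 1.0) -> float:
    """Relative wafer area consumed per good HBM die vs. a DRAM die."""
    return area_ratio * (dram_yield / hbm_yield)

# At 1.6x area and 50% yield, each good HBM die effectively consumes
# about 3.2x the wafer area of an equivalent DRAM die; at 60% yield, ~2.7x.
print(round(wafer_input_multiplier(1.6, 0.5), 2))   # 3.2
print(round(wafer_input_multiplier(1.6, 0.6), 2))   # 2.67
```

This multiplier is why even a modest HBM output share translates into a disproportionately large claim on advanced-process wafer capacity.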


(Photo credit: Samsung)

Please note that this article cites information from Business Korea and Newdaily.
