News
According to a report from TechNews, Samsung’s HBM has yet to pass certification by GPU giant NVIDIA, causing the company to fall behind its competitor SK Hynix. As a result, the head of Samsung’s semiconductor division was replaced. Although Samsung denies any issues with its HBM and emphasizes close collaboration with partners, TechNews, citing market sources, indicates that Samsung has indeed suffered a setback.
Samsung invested early in HBM development and collaborated with NVIDIA on HBM and HBM2, but sales were modest. According to TechNews’ report, the HBM team eventually moved to SK Hynix to develop HBM products. Unexpectedly, the surge in generative AI led to a sharp increase in HBM demand, and SK Hynix, benefitting from the trend, seized the opportunity with the help of the team.
Yet, in response to the rumors about changes in the HBM team, SK Hynix has denied the claims that it developed HBM with the help of a former Samsung team, as well as reports that Samsung’s HBM team transferred to SK Hynix, emphasizing that its HBM was developed solely by its own engineers.
Samsung’s misfortune is evident; despite years of effort, they faced setbacks just as the market took off. Samsung must now find alternative ways to catch up. The market still needs Samsung, as noted by Wallace C. Kou, President of memory IC design giant Silicon Motion.
Kou reportedly stated that Samsung remains the largest memory producer, and as NVIDIA faces a supply shortage for AI chips, the GPU giant is keen to cooperate with more suppliers. Therefore, it’s only a matter of time before Samsung supplies HBM to NVIDIA.
Furthermore, Samsung indicated in a recent statement that it is conducting HBM tests with multiple partners to ensure quality and reliability.
In the statement, Samsung indicates that it is optimizing its products through close collaboration with customers, with testing proceeding smoothly and as planned. As HBM is a customized memory product, it requires optimization processes in line with customers’ needs.
Samsung also states that it is currently partnering closely with various companies to continuously test technology and performance, and to thoroughly verify the quality and performance of its HBM.
On the other hand, NVIDIA has various GPUs adopting HBM3e, including H200, B200, B100, and GB200. Although all of them require HBM3e stacking, their power consumption and heat dissipation requirements differ. Samsung’s HBM3e may be more suitable for H200, B200, and AMD Instinct MI350X.
(Photo credit: SK Hynix)
News
In response to US export bans, NVIDIA, the global leader in AI chips, began selling the H20, its AI chip tailored for the Chinese market, earlier this year. However, an oversupply has caused the chip to be priced lower than its rival from Huawei, in some cases at a discount of over 10%, according to the latest report by Reuters.
In late 2022, the US Department of Commerce restricted the export of NVIDIA AI chips to China due to concerns about their potential military use. In response, NVIDIA has repeatedly reduced product performance to comply with US regulations. The H20 chip, derived from the H800, is specifically designed as a ‘special edition’ for the Chinese market.
However, citing sources familiar with the matter, Reuters noted that due to abundant supply in the market, H20 chips are being sold at a discount of over 10% compared to Huawei’s Ascend 910B, the most powerful AI chip from the Chinese tech giant.
The H20 is reportedly being sold at approximately 100,000 yuan per unit, while Huawei’s 910B sells for over 120,000 yuan per unit.
The decreasing prices underscore the difficulties NVIDIA encounters in its China operations amid U.S. sanctions on AI chip exports and rising competition from local rivals.
According to a previous report by The Information, major tech companies such as Alibaba, Baidu, ByteDance, and Tencent have been instructed to reduce their spending on foreign-made chips like NVIDIA’s.
(Photo credit: Huawei)
News
To alleviate the capacity constraints of CoWoS advanced packaging, NVIDIA is reportedly planning to accelerate the introduction of its GB200 into panel-level fan-out packaging. According to a report from Economic Daily News, this shift, originally scheduled for 2026, has been moved up to 2025, sparking opportunities in the panel-level fan-out packaging sector.
Taiwanese companies like Powertech Technology Inc. (PTI) and AU Optronics (AUO) are said to have prepared the necessary capabilities and are expected to seize this market opportunity.
The sources cited by the report from Economic Daily News explain that fan-out packaging has two branches: wafer-level fan-out packaging (FOWLP) and panel-level fan-out packaging (FOPLP). Among Taiwanese packaging and testing companies, PTI is reportedly the fastest in deploying panel-level fan-out packaging.
To capture the high-end logic chip packaging market, PTI has fully dedicated its Hsinchu Plant 3 to panel-level fan-out packaging and TSV CIS (CMOS image sensors) technologies, emphasizing that fan-out packaging can achieve heterogeneous integration of ICs.
PTI previously expressed optimism about the opportunities presented by the era of panel-level fan-out packaging, noting that it can produce chip areas two to three times larger than wafer-level fan-out packaging.
Innolux, a major panel manufacturer, is also optimistic, forecasting that 2024 will be the group’s inaugural year of advanced packaging mass production. The first phase capacity of its fan-out panel-level packaging (FOPLP) production line has already been fully booked, with mass production and shipments scheduled to begin in the third quarter of this year.
Innolux Chairman Jim Hung emphasized that panel-level packaging (PLP) technology connects chips through redistribution layers (RDL), meeting the requirements for high reliability, high power output, and high-quality packaging products. The technology has secured process and reliability certifications from top-tier customers, its yield rates have been well received, and mass production is set to commence this year.
(Photo credit: NVIDIA)
News
Samsung’s latest high bandwidth memory (HBM) chips have reportedly failed Nvidia’s tests, with the reasons revealed for the first time. According to the latest report by Reuters, the failure was said to be due to issues with heat and power consumption.
Citing sources familiar with the matter, Reuters noted that the issues may affect Samsung’s HBM3 chips, as well as its next-generation HBM3e chips, which the company and its competitors, SK hynix and Micron, plan to launch later this year.
In response to the concerns raised over heat and power consumption of its HBM chips, Samsung stated that its HBM testing is proceeding as planned.
In an official statement, Samsung noted that it is in the process of optimizing products through close collaboration with customers, with testing proceeding smoothly and as planned. The company said that HBM is a customized memory product, which requires optimization processes in tandem with customers’ needs.
According to Samsung, the tech giant is currently partnering closely with various companies to continuously test technology and performance, and to thoroughly verify the quality and performance of HBM.
Nvidia, on the other hand, declined to comment.
As Nvidia currently dominates the global GPU market for AI applications with an 80% share, meeting Nvidia’s standards would doubtlessly be critical for HBM manufacturers.
Reuters reported that Samsung has been attempting to pass Nvidia’s tests for HBM3 and HBM3e since last year, while a test of Samsung’s 8-layer and 12-layer HBM3e chips was said to have failed in April.
According to TrendForce’s analysis earlier, NVIDIA’s upcoming B100 or H200 models will incorporate advanced HBM3e, while the current HBM3 supply for NVIDIA’s H100 solution is primarily met by SK hynix. SK hynix has been providing HBM3 chips to Nvidia since 2022, Reuters noted.
According to a report from the Financial Times in May, SK hynix has successfully reduced the time needed for mass production of HBM3e chips by 50% and is close to achieving the target yield of 80%.
Another US memory giant, Micron, stated in February that its HBM3e consumes 30% less power than its competitors, meeting the demands of generative AI applications. Moreover, the company’s 24GB 8H HBM3e will be part of NVIDIA’s H200 Tensor Core GPUs, breaking the previous exclusivity of SK hynix as the sole supplier for the H100.
Considering major competitors’ progress on HBM3e, if Samsung fails to meet Nvidia’s requirements, the industry and investors may grow more concerned about whether the Korean tech heavyweight will fall further behind its rivals in the HBM market.
(Photo credit: Samsung)
News
The market was originally concerned that NVIDIA might face a demand lull during the transition from its Hopper series GPUs to the Blackwell series. However, during the latest earnings release, company executives clearly stated that this is not the case.
According to reports from MarketWatch and CNBC, NVIDIA CFO Colette Kress stated on May 22 that NVIDIA’s data center revenue for the first quarter (February to April) surged 427% year-over-year to USD 22.6 billion, primarily due to shipments of Hopper GPUs, including the H100.
On May 22, during the earnings call, Kress also mentioned that Facebook’s parent company, Meta Platforms, announced the launch of its latest large language model (LLM), Llama 3, which utilized 24,000 H100 GPUs, calling it the highlight of Q1. She also noted that major cloud computing providers contributed approximately “mid-40%” of NVIDIA’s data center revenue.
NVIDIA CEO Jensen Huang also stated in the call, “We see increasing demand of Hopper through this quarter,” adding that he expects demand to outstrip supply for some time as NVIDIA transitions to Blackwell.
As per a report from MoneyDJ, Wall Street had previously been concerned that NVIDIA’s customers might delay purchases while waiting for the Blackwell series. Sources cited by the report predict that the Blackwell chips will be delivered in the fourth quarter of this year.
NVIDIA’s Q1 (February to April) financial results showed that revenue soared 262% year-over-year to USD 26.04 billion, with adjusted earnings per share at USD 6.12.
During Q1, revenue from networking products (mainly InfiniBand) more than tripled year-over-year to USD 3.2 billion. Revenue from gaming-related products increased by 18% year-over-year to USD 2.65 billion. Looking ahead to this quarter (May to July), NVIDIA predicts revenue will reach USD 28 billion, plus or minus 2%.
NVIDIA’s adjusted gross margin for Q1 was 78.9%. The company predicts that this quarter’s adjusted gross margin will be 75.5%, plus or minus 50 basis points. In comparison, competitor AMD’s gross margin for the first quarter was 52%.
(Photo credit: NVIDIA)