Nvidia


2024-05-07

[News] South Korea Reportedly Develops AI Chips for Autonomous Vehicles, Challenging NVIDIA

Recently, a report from South Korean media outlet BusinessKorea has indicated that the South Korean government is actively advancing new research and development (R&D) projects, including the development of AI chips for autonomous vehicles, with the aim of surpassing the American semiconductor giant NVIDIA.

The report stated that on May 2nd, the South Korean Ministry of Trade, Industry, and Energy announced that the "Second Strategic Planning and Investment Council," comprising representatives from research institutes, universities, and other organizations, approved 62 new R&D projects for 2025, including flagship projects and roadmaps spanning more than 11 domains.

The council prioritizes investments in high-end strategic industries to achieve technological sovereignty and breakthrough growth, while also increasing funding for innovative research that carries a risk of failure. Rather than subsidizing individual companies, it focuses investments on core technologies shared across industries, such as artificial intelligence and compliance with global environmental regulations.

Following this investment strategy, the review council has selected 62 projects. Among them, 12 flagship projects are designed to be world-first and best-in-class, aiming to seize opportunities in next-generation technologies.

In line with this, the review council plans to develop a universal, open next-generation artificial intelligence chip for Software-Defined Vehicles (SDVs), with a processing speed of up to 10 trillion operations per second (TOPS).

Currently, NVIDIA is advancing the development and commercialization of its next-generation autonomous driving chip rated at 1,000 TOPS. Meanwhile, South Korea is developing autonomous driving chips with performance ranging from tens to 300 TOPS.

The Ministry’s goal is to develop the world’s first commercially viable high-speed autonomous driving vehicle network system and a core semiconductor with a processing speed of 10 gigabits per second (Gbps), enabling full Level 4 and above autonomous driving.


(Photo credit: Pixabay)

Please note that this article cites information from BusinessKorea.

2024-05-06

[News] TSMC’s Advanced Packaging Capacity Fully Booked by NVIDIA and AMD Through Next Year

With the flourishing of AI applications, the two major AI giants, NVIDIA and AMD, are fully committed to the high-performance computing (HPC) market. According to the Economic Daily News, the two have secured TSMC's advanced packaging capacity for CoWoS and SoIC through this year and next, bolstering TSMC's AI-related orders.

TSMC holds a highly positive outlook on the momentum brought by AI-related applications. During the April earnings call, CEO C.C. Wei extended the visibility of AI orders and their revenue contribution from the original expectation of 2027 to 2028.

TSMC anticipates that revenue contribution from server AI processors will more than double this year, accounting for a low-teens percentage of the company’s total revenue in 2024. It also expects a 50% compound annual growth rate for server AI processors over the next five years, with these processors projected to contribute over 20% to TSMC’s revenue by 2028.

Industry sources cited by the same Economic Daily News report indicate that strong AI demand has triggered fierce competition among the four global cloud service giants, Amazon AWS, Microsoft, Google, and Meta, to bolster their AI server arsenals. This has resulted in a supply shortage of AI chips from major manufacturers like NVIDIA and AMD.

Consequently, these companies have heavily invested in TSMC’s advanced process and packaging capabilities to meet the substantial order demands from cloud service providers. TSMC’s advanced packaging capacity, including CoWoS and SoIC, for 2024 and 2025 has been fully booked.

To address the massive demand from customers, TSMC is actively expanding its advanced packaging capacity. Industry sources cited by the report estimate that by the end of this year, TSMC's CoWoS monthly capacity could reach between 45,000 and 50,000 units, a significant increase from 15,000 units in 2023. By the end of 2025, CoWoS monthly capacity is expected to reach a new peak of 50,000 units.

Regarding SoIC, monthly capacity by the end of this year is anticipated to reach 5,000 to 6,000 units, a multiple-fold increase from the 2,000 units at the end of 2023. By the end of 2025, monthly capacity is expected to surge to a scale of 10,000 units.

It is understood that NVIDIA's mainstay H100 chip, currently in mass production, utilizes TSMC's 4-nanometer process and adopts CoWoS advanced packaging, integrating SK Hynix's High Bandwidth Memory (HBM) in a 2.5D packaging configuration.

As for NVIDIA's next-generation Blackwell architecture AI chips, including the B100, B200, and the GB200 with its Grace CPU, they also utilize TSMC's 4-nanometer process, but are produced on an enhanced version known as N4P. Production of the B100, per a previous report from TechNews, is slated to begin in the fourth quarter of this year, with mass production expected in the first half of next year.

Additionally, these chips are equipped with higher-capacity HBM3e high-bandwidth memory with updated specifications. Consequently, their computational capabilities will see a multiple-fold increase compared to the H100 series.

On the other hand, AMD's MI300 series AI accelerators are manufactured using TSMC's 5-nanometer and 6-nanometer processes. Unlike NVIDIA, AMD adopts TSMC's SoIC advanced packaging to vertically integrate CPU and GPU dies before employing CoWoS advanced packaging with HBM. Hence, production involves an additional advanced packaging step with the SoIC process.


(Photo credit: TSMC)

Please note that this article cites information from Economic Daily News and TechNews.

2024-05-03

[News] NVIDIA Reportedly Fueling Samsung and SK Hynix Competition, Impacting HBM Pricing?

According to South Korean media outlet BusinessKorea’s report on May 2nd, NVIDIA is reported to be fueling competition between Samsung Electronics and SK Hynix, possibly in an attempt to lower the prices of High Bandwidth Memory (HBM).

The report cited sources indicating that prices of the third-generation "HBM3 DRAM" have soared more than fivefold since 2023. For NVIDIA, such a significant increase in the price of a critical component is bound to affect research and development costs.

The report from BusinessKorea thus alleged that NVIDIA is intentionally leaking information to pit current and potential suppliers against each other, aiming to lower HBM prices. On April 25th, SK Group Chairman Chey Tae-won traveled to Silicon Valley to meet with NVIDIA CEO Jensen Huang, a visit potentially related to these strategies.

Although NVIDIA has been testing Samsung's industry-leading 12-layer stacked HBM3e for over a month, it has yet to indicate willingness to collaborate. BusinessKorea's report cited sources suggesting this is a strategic move aimed at motivating Samsung Electronics, which only recently announced that it will commence mass production of 12-layer stacked HBM3e in the second quarter.

SK Hynix CEO Kwak Noh-Jung announced on May 2nd that the company's HBM capacity for 2024 has already been fully sold out, and 2025's capacity is also nearly sold out. He mentioned that samples of the 12-layer stacked HBM3e will be sent out in May, with mass production expected to begin in the third quarter.

Kwak Noh-Jung further pointed out that although AI is currently primarily centered around data centers, it is expected to rapidly expand to on-device AI applications in smartphones, PCs, cars, and other end devices in the future. Consequently, the demand for memory specialized for AI, characterized by “ultra-fast, high-capacity and low-power,” is expected to skyrocket.

Kwak Noh-Jung also noted that SK Hynix possesses industry-leading technological capabilities in various product areas such as HBM, TSV-based high-capacity DRAM, and high-performance eSSD. Going forward, SK Hynix looks to provide globally top-tier memory solutions tailored to customers' needs through strategic partnerships with global collaborators.


(Photo credit: SK Hynix)

Please note that this article cites information from BusinessKorea.

2024-04-30

[News] Luxshare Reportedly Enters NVIDIA’s Chain, Eyeing AI Chip Business

According to a report from Economic Daily News, Luxshare, a crucial player in Apple's Chinese supply chain, is said to be entering NVIDIA's supply chain for the GB200, having announced the development of various components tailored for NVIDIA's GB200 AI servers.

These components encompass connectors, power-related items, and cooling products. Sources cited by the same report note that Luxshare's focus areas align closely with Taiwanese expertise, setting the stage for another direct showdown with Taiwanese manufacturers.

Luxshare, previously not prominent in the server domain, has now reportedly made its move into NVIDIA's top-tier AI products, attracting market attention, especially given its earlier swift entry into the iPhone supply chain, where it aggressively competed for orders with Taiwanese Apple suppliers.

As per the same report, Luxshare has revealed in its investor conference records that it has developed solutions corresponding to the NVIDIA GB200 AI server architecture, including products for electrical connection, optical connection, power management, and cooling. The company reportedly expects to offer solutions priced at approximately CNY 2.09 million and anticipates a total market size of hundreds of billions of CNY.

If Luxshare adopts a similar strategy of leveraging its latecomer advantage in entering the NVIDIA AI supply chain, it will undoubtedly encounter intense competition.

Industry sources cited by the report also point out that Luxshare’s claim to supply components for NVIDIA’s GB200 is in areas where Taiwanese suppliers excel.

For instance, while connectors are Luxshare's core business, Taiwanese firms like JPC Connectivity and Lintes Tech also serve as connector suppliers for NVIDIA's GB200 AI servers. They are poised to compete directly with Luxshare in the future.

In terms of power supply, Delta Electronics leverages its expertise in integrating power, cooling, and passive components to provide a comprehensive range of AI power integration solutions, from the grid to the chip. It caters to power supply orders for NVIDIA's Blackwell architecture B100, B200, and GB200 servers, and will also compete with Luxshare in the future.

When it comes to thermal management, Asia Vital Components and Auras Technology are currently the anticipated players in the market, and they are also poised to compete with Luxshare.


(Photo credit: Luxshare)

Please note that this article cites information from Economic Daily News.

2024-04-30

[News] Rumored Sharp Drop in H100 Server Black Market Prices in China Raises Concerns Over Market Stability

Black market prices in China for AI servers equipped with NVIDIA's highest-tier AI chip, the H100, have fallen rapidly in recent weeks, attracting attention, as per a report from Economic Daily News. This fluctuation, triggered by US sanctions, has reportedly prompted concerns about its impact on overall supply and demand dynamics, and whether it will further squeeze normal market mechanisms.

Industry sources cited by the same report have revealed that the prices of AI servers equipped with the H100 chip have recently plummeted on the Chinese black market. This is primarily due to the imminent launch of NVIDIA’s next-generation high-end AI chip, the H200. With the transition between old and new products, scalpers who previously hoarded H100 chips to drive up prices are now offloading their large inventories.

As per a report from Reuters, despite the US expanding its ban on AI technology-related exports to China last year, some dealers are still taking risks. H100 chips are still traded in the Huaqiangbei electronics market in northern Shenzhen, though the trade has gone entirely underground. The chips are said to be imported into China mainly through purchasing agents or shell companies set up overseas, reaching Chinese universities, research institutions, and even companies through special dealer channels.

Due to the US ban, both the H100 chip and AI servers equipped with it can only be traded on the black market, not openly. Scalpers have significantly inflated prices, with servers featuring the H100 chip reaching over CNY 3 million (over USD 420,000) in China, compared to the official price of USD 280,000 to USD 300,000, resulting in profits of over 10% for some middlemen after deducting logistics and tariffs.

With the H200 set to launch in the second quarter, the H100 will become the “previous generation” product. Consequently, middlemen who had hoarded H100 chips are eager to sell their inventory, leading to a rapid correction in prices.

Recently, servers with the H100 chip on the Chinese black market have dropped to around CNY 2.7 to 2.8 million, with spot prices in Hong Kong falling to around CNY 2.6 million, a decline of over 10%.

According to a previous report from Reuters, in response to reports that Chinese universities and research institutions had acquired high-end AI chips from NVIDIA through distributors, an NVIDIA spokesperson stated that the report does not imply that NVIDIA or any of its partners violated export control regulations, and that these products represent a negligible proportion of global sales. NVIDIA complies with U.S. regulatory standards.


(Photo credit: NVIDIA)

Please note that this article cites information from Economic Daily News and Reuters.
