H100


2024-09-09

[News] China Bypasses Restrictions to Acquire NVIDIA Chips, with Cloud Service Costs Even Lower Than in the U.S.

Despite U.S. export controls aimed at preventing Chinese companies from acquiring advanced AI chips, small cloud service providers in China have reportedly found ways to obtain NVIDIA’s A100 and H100 chips. The cost of renting cloud services in China is even lower than in the U.S.

According to a report from the Financial Times, four small-scale Chinese cloud providers are offering servers equipped with eight A100 chips each, charging around USD 6 per hour. In comparison, similar services from U.S. cloud providers cost approximately USD 10 per hour.
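
As a rough back-of-envelope illustration of that price gap, the sketch below converts the reported server rates into per-GPU-hour costs. It assumes the quoted hourly prices apply to full eight-A100 servers, which the report implies but does not state in per-GPU terms.

```python
# Back-of-envelope comparison of the quoted 8x A100 server rental rates.
# Assumes the hourly prices cited in the report apply to a full 8-GPU server.
CN_SERVER_RATE = 6.0   # USD per hour, Chinese small cloud providers (reported)
US_SERVER_RATE = 10.0  # USD per hour, comparable U.S. offerings (reported)
GPUS_PER_SERVER = 8

cn_per_gpu = CN_SERVER_RATE / GPUS_PER_SERVER   # ~0.75 USD per GPU-hour
us_per_gpu = US_SERVER_RATE / GPUS_PER_SERVER   # ~1.25 USD per GPU-hour
discount = 1 - CN_SERVER_RATE / US_SERVER_RATE  # ~40% cheaper

print(f"China: ${cn_per_gpu:.2f}/GPU-hr, U.S.: ${us_per_gpu:.2f}/GPU-hr, "
      f"discount: {discount:.0%}")
```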

As Chinese companies are reportedly bypassing U.S. export controls, industry sources cited by the Financial Times have further noted that the lower prices in China may hint at a robust local supply of NVIDIA chips.

Since the fall of 2022, the U.S. has banned NVIDIA from supplying A100 chips to China, and the more powerful H100 chips have not been approved for sale there.

However, industry sources and startups have revealed that these chips are still available in China. Ads for A100 and H100 have appeared on social media platforms like Xiaohongshu and e-commerce sites such as Taobao, with prices higher than those abroad.

At the Huaqiangbei electronics market in Shenzhen, industry sources have reportedly indicated that NVIDIA's H100 is quoted at USD 23,000 to USD 30,000, while Chinese online sellers list it at USD 31,000 to USD 33,000.

Meanwhile, larger Chinese cloud providers such as Alibaba and ByteDance emphasize service stability and security in the local market. For servers equipped with A100 chips, they charge two to four times more than smaller cloud providers.

According to another source cited by the Financial Times, large companies must consider regulatory compliance, which puts them at a disadvantage because they are reluctant to use smuggled chips. In contrast, smaller providers are less concerned.

The same report also indicates that after the U.S. government tightened export controls in October last year, servers from Supermicro equipped with eight H100 chips were priced as high as approximately CNY 3.2 million. However, as supply constraints eased, the price has dropped to around CNY 2.5 million.

Several sources cited by the report claim that merchants from Malaysia, Japan, and Indonesia frequently ship Supermicro servers or NVIDIA chips to Hong Kong, from where they are then transported to Shenzhen.

In response to these issues, NVIDIA reportedly stated that it primarily sells chips to well-known partners, ensuring that all sales comply with U.S. export regulations.

NVIDIA also mentioned that its used products can be obtained through various channels and, although they cannot track products after sale, they will take appropriate action if they determine a customer is violating U.S. export controls.

(Photo credit: iStock)

Please note that this article cites information from Financial Times.

2024-08-14

[News] Huawei Rumored to Launch New High-End AI Chip, Potentially Rivaling NVIDIA’s H100

According to a report from The Wall Street Journal on August 13 citing sources, Chinese internet companies and telecom operators have been testing Huawei’s latest processor, the “Ascend 910C,” in recent weeks. Reportedly, Huawei has informed potential customers that the new chip is comparable to NVIDIA’s H100 GPU, which cannot be directly sold in China.

Huawei’s ability to continue advancing its chip technology is a sign of its efforts to counter U.S. sanctions. However, the report also indicated that Huawei is already experiencing production delays with its current chips. The company faces additional U.S. restrictions, limiting its access to parts for production equipment and the latest memory used in AI hardware.

The sources cited by the same report point out that TikTok’s parent company ByteDance, search giant Baidu, and state-owned telecom operator China Mobile are in preliminary talks with Huawei to secure the Ascend 910C chip. These negotiations suggest that Huawei could secure orders for more than 70,000 chips, valued at approximately USD 2 billion.
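
Taking the reported figures at face value, a quick calculation gives the implied average price per chip; this is illustrative arithmetic on the cited numbers, not a figure from the report.

```python
# Rough implied average price per chip from the figures cited in the report:
# more than 70,000 chips valued at approximately USD 2 billion.
order_value_usd = 2_000_000_000
chip_count = 70_000

avg_price = order_value_usd / chip_count
print(f"Implied average price per Ascend 910C: ~${avg_price:,.0f}")  # ~$28,571
```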

Reportedly, Huawei aims to begin shipping the Ascend 910C in October, but the final delivery schedule might differ from the initial plan and could be subject to adjustments.

Under U.S. sanctions, customers in China are forced to purchase the H20 from NVIDIA, which is a “downgraded” version of the AI chip designed specifically for the Chinese market.

Per a previous report from the South China Morning Post, Chinese tech giants are expected to be considering a shift toward local AI products, which could pose a challenge to NVIDIA. China accounted for 17% of NVIDIA’s revenue in the 2024 fiscal year, making competition in the Chinese market increasingly fierce for NVIDIA.

Unlike its customers in China, NVIDIA’s U.S. customers, such as OpenAI, Amazon, and Google, will soon have access to its latest Blackwell architecture chips, including new products like the GB200, which NVIDIA claims offer significantly improved performance over existing products.

Meanwhile, The Wall Street Journal has also cited sources pointing out that NVIDIA is working on another China-oriented chip called the B20, but the design might have trouble getting U.S. approval for export to China if regulations are further tightened.

(Photo credit: NVIDIA)

Please note that this article cites information from The Wall Street Journal and South China Morning Post.

2024-08-06

[News] Intel and NVIDIA’s New Platform Orders Rolling Out, TSMC Unaffected by Market Turbulence

According to a report from Commercial Times, despite ongoing turbulence in the semiconductor industry, including Intel’s capital expenditure cuts and reported bottlenecks in NVIDIA’s B-series GPU, TSMC’s leading position in the industry may remain unshaken.

The sources cited in the report note that the issues with the B-series GPU, stemming from mask replacements to enhance chip stability, have been quickly resolved by the foundry.

The sources cited in the report believe that NVIDIA’s Blackwell started production at the end of the second quarter. To improve stability, NVIDIA replaced some masks, causing about a two-week production delay. The redesign has been completed, and large-scale production will proceed in the fourth quarter.

The same sources do not believe it will affect TSMC’s CoWoS revenue, as the two weeks of idle capacity will be filled by equally strong demand for the H100.

On the other hand, Intel’s CPUs are reportedly facing issues as well. As per the company’s statement, the 13th and 14th generation Intel Core desktop systems are experiencing instability due to a microcode algorithm resulting in incorrect voltage requests to the processor.

Although the company has provided a two-year warranty extension and real-time updates to fix the errors, concerns about design flaws and manufacturing process issues still exist.

In 2024, Intel’s new platforms, Arrow Lake and Lunar Lake, will have their CPU tiles produced using TSMC’s 3nm process, accelerating the production schedule. Lunar Lake and Arrow Lake are expected to ship officially by the end of the third and fourth quarters of this year, respectively.

With the support of the 3nm technology, these measures are expected to alleviate market concerns.

The sources cited by Commercial Times estimate that TSMC’s competitor Intel has begun to strictly cut costs, reducing capital expenditures by 20%. This could affect key capabilities in mass production and defect resolution in wafer manufacturing.

Therefore, sources cited by the report believe that TSMC’s leading position remains difficult to challenge in the short term.

(Photo credit: TSMC)

Please note that this article cites information from Commercial Times and Intel.

2024-05-23

[News] No Demand Lull? NVIDIA Reportedly Points Out Hopper Will Remain in Short Supply for Some Time

The market was originally concerned that NVIDIA might face a demand lull during the transition from its Hopper series GPUs to the Blackwell series. However, company executives clearly stated during the latest earnings release that this is not the case.

According to reports from MarketWatch and CNBC, NVIDIA CFO Colette Kress stated on May 22 that NVIDIA’s data center revenue for the first quarter (February to April) surged 427% year-over-year to USD 22.6 billion, primarily due to shipments of Hopper GPUs, including the H100.

On May 22, during the earnings call, Kress also mentioned that Facebook’s parent company, Meta Platforms, announced the launch of its latest large language model (LLM), “Llama 3,” which utilized 24,000 H100 GPUs; this was the highlight of the quarter. She also noted that major cloud computing providers contributed approximately “mid-40%” of NVIDIA’s data center revenue.

NVIDIA CEO Jensen Huang also stated in the call, “We see increasing demand of Hopper through this quarter,” adding that he expects demand to outstrip supply for some time as NVIDIA transitions to Blackwell.

As per a report from MoneyDJ, Wall Street had previously been concerned that NVIDIA’s customers might delay purchases while waiting for the Blackwell series. Sources cited by the report predict that the Blackwell chips will be delivered in the fourth quarter of this year.

NVIDIA’s Q1 (February to April) financial result showed that revenue soared 262% year-over-year to USD 26.04 billion, with adjusted earnings per share at USD 6.12. Meanwhile, NVIDIA’s data center revenue surged 427% year-over-year to USD 22.6 billion.

During Q1, revenue from networking products (mainly Infiniband) surged more than threefold to USD 3.2 billion compared to the same period last year. Revenue from gaming-related products increased by 18% year-over-year to USD 2.65 billion. Looking ahead to this quarter (May to July), NVIDIA predicts revenue will reach USD 28 billion, plus or minus 2%.

NVIDIA’s adjusted gross margin for Q1 was 78.9%. The company predicts that this quarter’s adjusted gross margin will be 75.5%, plus or minus 50 basis points. In comparison, competitor AMD’s gross margin for the first quarter was 52%.
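
For readers who want the explicit ranges, the sketch below works out the bounds implied by the quoted guidance (USD 28 billion in revenue plus or minus 2%, and a 75.5% adjusted gross margin plus or minus 50 basis points); it is simple arithmetic on the reported figures, not additional disclosure.

```python
# Explicit ranges implied by NVIDIA's guidance as quoted in the report.
revenue_mid = 28_000_000_000      # USD, guidance for the May-July quarter
revenue_tol = 0.02                # plus or minus 2%
margin_mid = 0.755                # adjusted gross margin guidance
margin_tol = 0.005                # plus or minus 50 basis points

rev_low, rev_high = revenue_mid * (1 - revenue_tol), revenue_mid * (1 + revenue_tol)
gm_low, gm_high = margin_mid - margin_tol, margin_mid + margin_tol

print(f"Revenue guidance: ${rev_low/1e9:.2f}B to ${rev_high/1e9:.2f}B")  # 27.44B to 28.56B
print(f"Gross margin guidance: {gm_low:.1%} to {gm_high:.1%}")           # 75.0% to 76.0%
```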

(Photo credit: NVIDIA)

Please note that this article cites information from CNBC, NVIDIA, and MoneyDJ.

2024-05-06

[News] TSMC’s Advanced Packaging Capacity Fully Booked by NVIDIA and AMD Through Next Year

With the flourishing of AI applications, two major AI giants, NVIDIA and AMD, are fully committed to the high-performance computing (HPC) market. The Economic Daily News reports that they have secured TSMC’s CoWoS and SoIC advanced packaging capacity through this year and the next, bolstering TSMC’s AI-related business orders.

TSMC holds a highly positive outlook on the momentum brought by AI-related applications. During the April earnings call, CEO C.C. Wei revised the visibility of AI orders and their revenue contribution, extending the visibility from the original expectation of 2027 to 2028.

TSMC anticipates that revenue contribution from server AI processors will more than double this year, accounting for a low-teens percentage of the company’s total revenue in 2024. It also expects a 50% compound annual growth rate for server AI processors over the next five years, with these processors projected to contribute over 20% to TSMC’s revenue by 2028.

Industry sources cited by the same Economic Daily News report have indicated that the strong demand for AI has led to fierce competition among the four global cloud service giants, namely Amazon AWS, Microsoft, Google, and Meta, to bolster their AI server arsenals. This has resulted in a supply shortage of AI chips from major manufacturers like NVIDIA and AMD.

Consequently, these companies have heavily invested in TSMC’s advanced process and packaging capabilities to meet the substantial order demands from cloud service providers. TSMC’s advanced packaging capacity, including CoWoS and SoIC, for 2024 and 2025 has been fully booked.

To address the massive demand from customers, TSMC is actively expanding its advanced packaging capacity. Industry sources cited by the report estimate that by the end of this year, TSMC’s CoWoS monthly capacity could reach between 45,000 and 50,000 units, a significant increase from the 15,000 units in 2023. By the end of 2025, CoWoS monthly capacity is expected to reach a new peak of 50,000 units.

Regarding SoIC, it is anticipated that the monthly capacity by the end of this year could reach five to six thousand units, representing a multiple-fold increase from the 2,000 units at the end of 2023. Furthermore, by the end of 2025, the monthly capacity is expected to surge to a scale of 10,000 units.
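
Purely as arithmetic on the capacity figures cited above, the implied growth multiples for CoWoS and SoIC come out roughly as follows (end-of-year monthly capacity in units, as reported).

```python
# Growth multiples implied by the capacity figures cited in the report
# (end-of-year monthly capacity in units, as reported).
cowos = {"2023": 15_000, "2024": (45_000, 50_000)}
soic  = {"2023": 2_000,  "2024": (5_000, 6_000)}

for name, series in (("CoWoS", cowos), ("SoIC", soic)):
    base = series["2023"]
    lo, hi = series["2024"]
    print(f"{name} end-2024 vs end-2023: {lo / base:.1f}x to {hi / base:.1f}x")

# CoWoS end-2024 vs end-2023: 3.0x to 3.3x
# SoIC  end-2024 vs end-2023: 2.5x to 3.0x
```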

It is understood that NVIDIA’s mainstay H100 chip, currently in mass production, utilizes TSMC’s 4-nanometer process and adopts CoWoS advanced packaging. Additionally, it is paired with SK Hynix’s High Bandwidth Memory (HBM) in a 2.5D packaging configuration.

As for NVIDIA’s next-generation Blackwell architecture AI chips, including the B100, B200, and the GB200 with Grace CPU, although they also utilize TSMC’s 4-nanometer process, they are produced using an enhanced version known as N4P. The production for the B100, per a previous report from TechNews, is slated for the fourth quarter of this year, with mass production expected in the first half of next year.

Additionally, they are equipped with higher-capacity, updated-specification HBM3e high-bandwidth memory. Consequently, their computational capabilities will see a multiple-fold increase compared to the H100 series.

On the other hand, AMD’s MI300 series AI accelerators are manufactured using TSMC’s 5-nanometer and 6-nanometer processes. Unlike NVIDIA, AMD adopts TSMC’s SoIC advanced packaging to vertically integrate CPU and GPU dies before employing CoWoS advanced packaging with HBM. Hence, the production process involves an additional step of advanced packaging complexity with the SoIC process.

(Photo credit: TSMC)

Please note that this article cites information from Economic Daily News and TechNews.
