GPU


2024-04-01

[News] Intel Unveils Two New GPUs, Manufactured on TSMC’s 4nm Process and Reportedly Targeting NVIDIA’s RTX 40 Series

According to wccftech, Intel’s new GPUs will come in two models: the Battlemage-G10 (BMG-G10) and the Battlemage-G21 (BMG-G21).

These two new GPUs were revealed in an internal Intel document. According to the document, the BMG-G10, targeted at enthusiasts, has a TDP of less than 225W, while the BMG-G21 is designed as a mid-range product with a maximum TDP not exceeding 150W.

As for specifications and performance, the enthusiast-grade BMG-G10 is expected to be equipped with up to 64 Xe2 cores, competing directly with NVIDIA’s RTX 4070, while the mid-range BMG-G21 takes aim at the RTX 4060. Both GPUs will continue to be manufactured on TSMC’s 4nm process.

Therefore, previous rumors suggesting that Intel had canceled development of the BMG-G10 and retained only a 40-Xe2-core BMG-G21 appear to be untrue. Moreover, the BMG-G10’s core count is higher than the 56 Xe2 cores initially reported, indicating it is poised to deliver even higher performance.
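
Pulling the leaked figures together, here is a minimal side-by-side sketch in Python; the core counts and TDP limits are those of the report above and are not confirmed by Intel:

    # Rumored Battlemage figures as reported above (not confirmed by Intel).
    battlemage = {
        "BMG-G10": {"xe2_cores": 64, "tdp_watts": 225},  # "up to" 64 cores; TDP below 225 W
        "BMG-G21": {"xe2_cores": 40, "tdp_watts": 150},  # maximum TDP not exceeding 150 W
    }

    core_ratio = battlemage["BMG-G10"]["xe2_cores"] / battlemage["BMG-G21"]["xe2_cores"]
    print(f"BMG-G10 carries {core_ratio:.1f}x the Xe2 cores of BMG-G21")  # 1.6x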

Recently, per a report from Reuters, Intel, Qualcomm, Google, and other major tech companies are teaming up to challenge NVIDIA’s market dominance and make inroads into the AI software sector. They are reportedly seeking to steer developers away from NVIDIA’s CUDA software platform, a parallel computing platform tailored for GPU acceleration.


(Photo credit: Intel)

Please note that this article cites information from wccftech and Reuters.

2024-03-21

[News] Blackwell Enters the Scene – A Closer Look at TSMC’s CoWoS Variants

NVIDIA unveiled its Blackwell architecture and its much-touted AI powerhouse chip, the GB200, at GTC 2024 in San Jose, California, on March 19th. Manufactured using TSMC’s 4-nanometer (4NP) process, the chip is expected to ship later this year.

According to a report from TechNews, TSMC’s CoWoS technology comes in various forms, including CoWoS-R, CoWoS-L, and CoWoS-S, each differing in cost due to variations in the interposer material. Customers can choose the appropriate technology based on their specific requirements.

CoWoS-R, for instance, integrates InFO technology, using RDL (redistribution layer) wiring in the interposer to connect chips, making it suitable for integrating high-bandwidth memory (HBM) and SoCs.

On the other hand, CoWoS-L combines the advantages of CoWoS-S and InFO technologies, offering a cost-effective solution with the use of LSI (Local Silicon Interconnect) chips as the interposer for dense chip-to-chip connections. According to market reports, the Blackwell platform adopts CoWoS-L, as this technology is better suited for larger chiplets.

CoWoS-S, utilizing silicon as the interposer material, represents the highest cost variant and is currently the mainstream choice. Notably, NVIDIA’s H100, H200, and AMD’s MI300 chips all employ CoWoS-S.
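
As a compact recap of the three variants described above, here is a minimal sketch in code form; the characterizations follow the report cited here rather than TSMC’s official documentation:

    # CoWoS variants as characterized in the TechNews report above.
    cowos = {
        "CoWoS-R": {"interposer": "RDL wiring (InFO-based)",
                    "relative_cost": "lower",
                    "noted_use": "HBM and SoC integration"},
        "CoWoS-L": {"interposer": "LSI (Local Silicon Interconnect) chips",
                    "relative_cost": "cost-effective",
                    "noted_use": "larger chiplets, e.g. the Blackwell platform"},
        "CoWoS-S": {"interposer": "silicon",
                    "relative_cost": "highest (current mainstream)",
                    "noted_use": "NVIDIA H100/H200, AMD MI300"},
    }

    for variant, traits in cowos.items():
        print(variant, "->", traits)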

NVIDIA’s latest Blackwell architecture encompasses AI chips including the B100, B200, and the GB200 with Grace CPU, all manufactured on TSMC’s 4-nanometer process. Industry sources cited by the report suggest that production of the B100 is slated for the fourth quarter of this year, with mass production expected in the first half of next year.

Meanwhile, the B200 and GB200 are set to follow suit with mass production next year. As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
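
For a rough sense of the generational leap implied by those figures, a quick back-of-envelope sketch using only the numbers quoted above:

    # Back-of-envelope comparison of the quoted B200 and H100 figures.
    b200_ai_pflops = 20.0   # per the Tom's Hardware figure above
    h100_ai_pflops = 4.0
    b200_hbm3e_gb = 192     # HBM3e capacity
    b200_bw_tbps = 8.0      # memory bandwidth

    print(f"B200 vs H100 AI compute: {b200_ai_pflops / h100_ai_pflops:.0f}x")             # 5x
    print(f"HBM3e capacity per AI petaflop: {b200_hbm3e_gb / b200_ai_pflops:.1f} GB")     # 9.6 GB
    print(f"Bandwidth per AI petaflop: {b200_bw_tbps / b200_ai_pflops * 1000:.0f} GB/s")  # 400 GB/s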

TSMC is expected to continue benefiting from this demand through its advanced manufacturing processes and CoWoS packaging technology, particularly with Blackwell’s adoption of CoWoS-L packaging.


(Photo credit: TSMC)

Please note that this article cites information from TechNews.

2024-03-19

[News] TSMC’s 4nm Process Powers NVIDIA’s Blackwell Architecture GPU, Delivering Several Times the AI Performance of the Previous Generation

Chip giant NVIDIA kicked off its annual Graphics Processing Unit (GPU) Technology Conference (GTC) today, with CEO Jensen Huang announcing the launch of the new artificial intelligence chip, Blackwell B200.

According to a report from TechNews, the new Blackwell architecture boasts a massive GPU, crafted using TSMC’s 4-nanometer (4NP) process technology and integrating two independently manufactured dies totaling 208 billion transistors. These dies are bound together like a zipper through a 10 TB/sec die-to-die link, officially termed the NV-HBI interface.

The Blackwell complex’s external NVLink 5.0 interface, in turn, provides 1.8 TB/sec of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.
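
A quick sanity check of these transistor and interconnect figures, using only the numbers above (Hopper’s published NVLink 4.0 bandwidth is 900 GB/s):

    # Sanity check of the Blackwell figures quoted above.
    total_transistors = 208e9   # across both dies
    dies = 2
    nvlink5_tbps = 1.8          # per-GPU NVLink 5.0 bandwidth

    per_die = total_transistors / dies
    implied_nvlink4_tbps = nvlink5_tbps / 2  # the report says NVLink 5.0 doubles NVLink 4.0

    print(f"Transistors per die: {per_die / 1e9:.0f} billion")             # 104 billion
    print(f"Implied NVLink 4.0 bandwidth: {implied_nvlink4_tbps} TB/sec")  # 0.9, matching Hopper's 900 GB/s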

As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.

NVIDIA’s HBM supplier, South Korean chipmaker SK Hynix, also issued a press release today announcing the start of mass production of its new high-performance DRAM product, HBM3e, with shipments set to begin at the end of March.

Source: SK Hynix

Recently, global tech companies have been heavily investing in AI, leading to increasing demands for AI chip performance. SK Hynix points out that HBM3e is the optimal product to meet these demands. As memory operations for AI are extremely fast, efficient heat dissipation is crucial. HBM3e incorporates the latest Advanced MR-MUF technology for heat dissipation control, resulting in a 10% improvement in cooling performance compared to the previous generation.

Per the press release, Sungsoo Ryu, head of HBM Business at SK Hynix, said that the mass production of HBM3e completes the company’s lineup of industry-leading AI memory products.

“With the success story of the HBM business and the strong partnership with customers that it has built for years, SK hynix will cement its position as the total AI memory provider,” he stated.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews, Tom’s Hardware and SK Hynix.

2024-03-04

[News] Dell Leak Reveals NVIDIA’s Potential B200 Launch Next Year 

NVIDIA has yet to officially announce release dates for its next-generation Blackwell GPU architecture and the B100 chip. However, Dell’s Chief Operating Officer, Jeff Clarke, recently let slip during Dell’s Q4 FY2024 earnings call that NVIDIA is set to introduce the Blackwell architecture next year, with plans to release not only the B100 chip but also another variant, the B200.

Following Dell’s recent financial report, Clarke disclosed in a press release that NVIDIA is set to unveil the B200 product featuring the Blackwell architecture in 2025.

Clarke also mentioned that Dell’s flagship product, the PowerEdge XE9680 rack server, utilizes NVIDIA GPUs, making it the fastest solution in the company’s history. He expressed anticipation for NVIDIA’s release of the B100 and B200 chips. This news has sparked significant market interest, as NVIDIA has yet to publicly mention the B200 chip.

Clarke further stated that the B200 chip will showcase Dell’s engineering expertise in high-end servers, especially in liquid cooling systems. As for the progress of the B100 chip, NVIDIA has yet to disclose its specific parameters and release date.

NVIDIA’s current flagship in the high-performance computing market, the H200, pairs the Hopper GPU architecture with HBM3e memory and is considered the industry’s most capable chip for AI computing.

However, NVIDIA continues to accelerate the development of its next-generation AI chip architectures. According to NVIDIA’s previously disclosed development roadmap, the next product after the H200 is the B100, which was therefore expected to be the highest-specification chip based on the Blackwell GPU architecture. Nevertheless, the emergence of the B200 has sparked further speculation.

Previously, media speculation cited by Commercial Times, extrapolating from the scale of the H200, put the B100’s computational power at no less than twice that of the H200 and four times that of the H100.
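
Taken at face value, those multiples also pin down the implied H200-to-H100 ratio; a quick sketch of the speculated arithmetic (speculation, not confirmed specifications):

    # The speculated multiples, taken at face value (not confirmed by NVIDIA).
    b100_vs_h200 = 2.0   # B100 at least 2x the H200
    b100_vs_h100 = 4.0   # B100 about 4x the H100

    # If B100 = 2 x H200 and B100 = 4 x H100, then H200 = 2 x H100.
    implied_h200_vs_h100 = b100_vs_h100 / b100_vs_h200
    print(f"Implied H200 vs H100: {implied_h200_vs_h100:.0f}x")  # 2x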


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times.

2024-02-28

[News] NVIDIA’s H100 AI Chip No Longer Out of Reach, Inventory Pressure Reportedly Forces Customers to Resell

The previously elusive NVIDIA data center GPU, H100, has seen a noticeable reduction in delivery lead times amid improved market supply conditions, as per a report from Tom’s Hardware. As a result, customers who previously purchased large quantities of H100 chips are reportedly starting to resell them.

The report further points out that delivery wait times for the H100 data center GPU, once stretched to a peak of 8-11 months by the surge in artificial intelligence applications, have fallen to 3-4 months, indicating an easing of supply pressure.

Additionally, with major cloud providers such as AWS, Google Cloud, and Microsoft Azure making AI computing services easier to access, enterprises that previously purchased large quantities of H100 GPUs have begun reselling them.

For instance, AWS introduced a new service allowing customers to rent GPUs for shorter periods, resolving previous chip demand issues and shortening the waiting time for artificial intelligence chips.

The report attributes these resales to the chips’ reduced scarcity and their high maintenance costs, a situation that contrasts starkly with the market shortage a year ago.

However, even though H100 GPUs have become significantly easier to obtain, the artificial intelligence market remains robust overall. Demand for large-scale AI model computation persists among some enterprises, keeping overall demand greater than supply and preventing a significant drop in H100 prices.

The report emphasizes that the current ease of purchasing H100 GPUs has also brought about some changes in the market. Customers now prioritize price and practicality when leasing AI computing services from cloud service providers.

Additionally, alternatives to the H100 GPU have emerged in the current market, offering comparable performance and software support at potentially more affordable prices, which could contribute to a more balanced market.

TrendForce’s newest projections spotlight a 2024 landscape where demand for high-end AI servers—powered by NVIDIA, AMD, or other top-tier ASIC chips—will be heavily influenced by North America’s cloud service powerhouses.

Microsoft (20.2%), Google (16.6%), AWS (16%), and Meta (10.8%) are predicted to collectively command over 60% of global demand, with NVIDIA GPU-based servers leading the charge.
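
The "over 60%" figure follows directly from summing the four projected shares; a quick check:

    # Summing TrendForce's projected 2024 demand shares for high-end AI servers.
    shares = {"Microsoft": 20.2, "Google": 16.6, "AWS": 16.0, "Meta": 10.8}
    combined = sum(shares.values())
    print(f"Combined share: {combined:.1f}%")  # 63.6%, i.e. over 60% of global demand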

However, NVIDIA still faces ongoing hurdles in development as it contends with US restrictions.

TrendForce has pointed out that, despite NVIDIA’s stronghold in the data center sector—thanks to its GPU servers capturing up to 70% of the AI market—challenges continue to loom.

Three major challenges are set to limit the company’s future growth. Firstly, the US ban on advanced technology exports has spurred China toward self-reliance in AI chips, with Huawei emerging as a noteworthy adversary. NVIDIA’s China-specific solutions, like the H20 series, might not match the cost-effectiveness of its flagship models, potentially dampening its market dominance.

Secondly, the trend toward proprietary ASIC development among US cloud behemoths, including Google, AWS, Microsoft, and Meta, is expanding annually due to scale and cost considerations.

Lastly, AMD presents competitive pressure with its cost-effective strategy, offering products at just 60–70% of the prices of comparable NVIDIA models. This allows AMD to penetrate the market more aggressively, especially with flagship clients. Microsoft is expected to be the most enthusiastic adopter of AMD’s high-end GPU MI300 solutions in 2024.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews and Tom’s Hardware.
