2024-03-21

[News] Blackwell Enters the Scene – A Closer Look at TSMC’s CoWoS Branch

NVIDIA unveiled its Blackwell architecture and its much-touted AI powerhouse chip, the GB200, at GTC 2024, held in San Jose, California, on March 19th. Manufactured using TSMC’s 4-nanometer (4NP) process, it is expected to ship later this year.

According to a report from TechNews, TSMC’s CoWoS technology comes in various forms, including CoWoS-R, CoWoS-L, and CoWoS-S, each differing in cost due to variations in the interposer material. Customers can choose the appropriate technology based on their specific requirements.

CoWoS-R, for instance, integrates InFO technology, using RDL (redistribution layer) wiring in the interposer to connect chips, making it suitable for integrating high-bandwidth memory (HBM) and SoCs.

On the other hand, CoWoS-L combines the advantages of CoWoS-S and InFO technologies, offering a cost-effective solution that uses LSI (Local Silicon Interconnect) chips in the interposer for dense die-to-die connections. According to market reports, the Blackwell platform adopts CoWoS-L, as this technology is better suited for larger chiplets.

CoWoS-S, utilizing silicon as the interposer material, represents the highest cost variant and is currently the mainstream choice. Notably, NVIDIA’s H100, H200, and AMD’s MI300 chips all employ CoWoS-S.

NVIDIA’s latest Blackwell architecture features AI chips including the B100, B200, and the GB200 with Grace CPU, all manufactured on TSMC’s 4-nanometer process. As per industry sources cited by the report, the B100 is slated to enter production in the fourth quarter of this year, with mass production expected in the first half of next year.

Meanwhile, the B200 and GB200 are set to follow suit with mass production next year. As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.

TSMC’s advanced manufacturing processes and CoWoS packaging technology are expected to continue benefiting from this demand, particularly with the adoption of CoWoS-L packaging.


(Photo credit: TSMC)

Please note that this article cites information from TechNews.

2024-03-20

[News] Taiwanese Supply Chain Players Showcase Next-Gen AI Server Solutions at NVIDIA’s GTC

Following NVIDIA’s launch of the new computing platform GB200, as per a report from Commercial Times, Taiwanese supply chain players including Quanta, Pegatron, Wiwynn, Wistron, Gigabyte, and Foxconn’s subsidiary Ingrasys have showcased their solutions and related cooling technologies based on the GB200 at the latest GTC conference, aiming to capture opportunities in the next-generation AI server market.

Quanta Cloud Technology (QCT), a subsidiary of Quanta Computer, demonstrated its systems and AI applications based on the NVIDIA MGX architecture, announcing support for the upcoming NVIDIA GB200 superchip and NVIDIA GB200 NVL72.

QCT’s NVIDIA MGX architecture systems on display featured the NVIDIA GH200 chip and employ a modular reference design. System manufacturers can use the NVIDIA MGX architecture to tailor models for applications like generative AI, high-performance computing (HPC), and edge deployments.

Pegatron, on the other hand, has become one of NVIDIA’s global partners in advanced GPU computing technology, particularly with the latest NVIDIA GB200 chip. Reportedly, Pegatron is actively developing the GB200 NVL36, designed as a multi-node, liquid-cooled, rack-level platform dedicated to compute-intensive workloads. Equipped with the NVIDIA BlueField-3 data processing unit, it enables network acceleration in ultra-scale AI cloud environments and supports a range of GPU computing workloads.

GIGABYTE, a key supplier of high-end AI GPU servers for NVIDIA last year, showcased its latest offerings at this year’s GTC exhibition. The company unveiled the GIGABYTE XH23-VG0, a 2U server featuring the NVIDIA H100 GPU and the GH200 architecture, capable of transferring data at speeds of up to 900 GB per second. It also announced that its product line is ready for the next-generation Blackwell platform, including HGX boards, superchips, and PCIe expansion cards, which will be released gradually over the coming months.

Meanwhile, Wiwynn, included in the first wave of suppliers for the NVIDIA GB200 NVL72 system, showcased its latest AI server cabinet solution based on the NVIDIA GB200 NVL72 at the GTC exhibition. They also presented their newest comprehensive liquid-cooled management system, the UMS100.

Ingrasys also showcased a range of innovations at the exhibition, including NVIDIA MGX architecture servers and the GB200 NVL72 solution. They also demonstrated advanced liquid cooling technologies such as the liquid-to-gas Sidecar technology and liquid-to-liquid Cooling Distribution Unit (CDU).


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times.

2024-03-19

[News] TSMC’s 4nm Process Powers NVIDIA’s Blackwell Architecture GPU, AI Performance Surpasses Previous Generations by Multiples

Chip giant NVIDIA kicked off its annual GPU Technology Conference (GTC) today, with CEO Jensen Huang announcing the launch of the new artificial intelligence chip, the Blackwell B200.

According to a report from TechNews, the new Blackwell architecture features a massive GPU crafted using TSMC’s 4-nanometer (4NP) process technology, integrating two independently manufactured dies with a total of 208 billion transistors. The two dies are stitched together like a zipper through a high-speed die-to-die link.

NVIDIA connects the two dies with a 10 TB/s interconnect, officially termed the NV-HBI interface. The Blackwell GPU’s external NVLink 5.0 interface provides 1.8 TB/s of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.

As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.

NVIDIA’s HBM supplier, South Korean chipmaker SK Hynix, also issued a press release today announcing the commencement of mass production of its high-performance DRAM new product, HBM3e, with shipments set to begin at the end of March.

Source: SK Hynix

Recently, global tech companies have been heavily investing in AI, leading to increasing demands for AI chip performance. SK Hynix points out that HBM3e is the optimal product to meet these demands. As memory operations for AI are extremely fast, efficient heat dissipation is crucial. HBM3e incorporates the latest Advanced MR-MUF technology for heat dissipation control, resulting in a 10% improvement in cooling performance compared to the previous generation.

Per SK Hynix’s press release, Sungsoo Ryu, the head of HBM Business at SK Hynix, said that mass production of HBM3e has completed the company’s lineup of industry-leading AI memory products.

“With the success story of the HBM business and the strong partnership with customers that it has built for years, SK hynix will cement its position as the total AI memory provider,” he stated.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews, Tom’s Hardware and SK Hynix.

2024-03-18

[News] TSMC Boosts Investment in Advanced Packaging with NTD 500 Billion Plan to Build Six Plants in Chiayi Science Park

The Executive Yuan and TSMC have reportedly reached a consensus on the investment project for the new advanced packaging plant at the TSMC Science Park in Chiayi. According to a report from Economic Daily News, six new plant sites will be allocated to TSMC in the Science Park, two more than originally anticipated, with a total investment exceeding NTD 500 billion. The expansion is expected to increase CoWoS advanced packaging capacity and to be announced to the public in early April.

TSMC has refrained from commenting on the matter. Regarding the news, however, the Executive Yuan stated that it has actively coordinated with TSMC on establishing the advanced packaging plant in the Chiayi Science Park in Taibao, that the related environmental assessments and water and electricity facilities have been processed, and that construction is expected to commence in April, indirectly confirming the rumors.

As per sources cited by the report, Chiayi Science Park is poised to become a new hub for TSMC’s advanced packaging capacity. Among the six new scheduled plants, construction will begin on two this year, aligning with the Executive Yuan’s statement of construction commencement in April.

TSMC’s extensive expansion is primarily driven by the high demand for advanced packaging. For instance, in the case of the NVIDIA H100, each wafer yields approximately 28 chips after components are integrated via CoWoS. For the upcoming B100, with its larger size and higher integration, the yield per wafer drops to just 16 chips.

On the other hand, TSMC’s advanced processes, per a previous report from Commercial Times, remained fully utilized, with capacity utilization exceeding 90% in February, driven by sustained AI demand. The same report also noted that NVIDIA’s orders to TSMC are robust, pushing TSMC’s 3 and 4-nanometer production capacity to nearly full utilization.

With each new generation of NVIDIA’s AI chips packaged via CoWoS, the number of chips per wafer nearly halves, yet demand for AI servers continues to soar. With terminal demand skyrocketing while per-wafer output dwindles, there is a “cliff-like gap” in CoWoS advanced packaging capacity, and TSMC must ramp up CoWoS production swiftly to ensure uninterrupted customer supply.
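As a rough back-of-the-envelope sketch of this capacity squeeze, the per-wafer yields above (28 chips for the H100, 16 for the B100, as cited in the report) can be translated into required CoWoS wafer capacity; note that the monthly chip-demand figure used here is purely hypothetical, chosen only to illustrate the ratio:

```python
# Per-wafer CoWoS yields cited in the report.
H100_CHIPS_PER_WAFER = 28
B100_CHIPS_PER_WAFER = 16

def wafers_needed(chips: int, chips_per_wafer: int) -> int:
    """Wafers required to package a given number of chips (rounded up)."""
    return -(-chips // chips_per_wafer)  # ceiling division

monthly_chip_demand = 100_000  # hypothetical demand figure

h100_wafers = wafers_needed(monthly_chip_demand, H100_CHIPS_PER_WAFER)
b100_wafers = wafers_needed(monthly_chip_demand, B100_CHIPS_PER_WAFER)

# For the same chip demand, B100 needs 28/16 = 1.75x the wafer capacity,
# i.e. roughly 75% more CoWoS wafers than H100.
print(h100_wafers, b100_wafers)
```

Even before any growth in chip demand, the move from 28 to 16 chips per wafer alone implies roughly 75% more CoWoS wafer capacity for the same output, which is consistent with the “cliff-like gap” described above.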


(Photo credit: TSMC)

Please note that this article cites information from Economic Daily News and Commercial Times.

2024-03-15

[News] Following February’s Advance Production of HBM3e, Micron Reportedly Secures Order from NVIDIA for H200

According to a report from the South Korean newspaper “Korea Joongang Daily,” following Micron’s initiation of mass production of the latest high-bandwidth memory, HBM3e, in February 2024, the company has recently secured an order from NVIDIA for the H200 AI GPU. It is understood that NVIDIA’s upcoming H200 processor will utilize the latest HBM3e, which is more powerful than the HBM3 used in the H100 processor.

The same report further indicates that Micron secured the H200 order due to its adoption of 1b nanometer technology in its HBM3e, which is equivalent to the 12-nanometer technology used by SK Hynix in producing HBM. In contrast, Samsung Electronics currently employs 1a nanometer technology, which is equivalent to 14-nanometer technology, reportedly lagging behind Micron and SK Hynix.

The report from Commercial Times indicates that Micron’s ability to secure the NVIDIA order for H200 is attributed to the chip’s outstanding performance, energy efficiency, and seamless scalability.

As per a previous report from TrendForce, starting in 2024, the market’s attention will shift from HBM3 to HBM3e, with expectations for a gradual ramp-up in production through the second half of the year, positioning HBM3e as the new mainstream in the HBM market.

TrendForce reports that SK Hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter. With Samsung having already made significant strides in HBM3 and its HBM3e validation expected to be completed soon, the company is poised to significantly narrow the market share gap with SK Hynix by the end of the year, reshaping the competitive dynamics in the HBM market.


(Photo credit: Micron)

Please note that this article cites information from Korea Joongang Daily and Commercial Times.
