Nvidia


2024-04-01

[News] Intel Unveils Two New GPUs, Manufactured on TSMC’s 4nm Process while Reportedly Targeting NVIDIA RTX 40 Series

According to wccftech, Intel’s new GPUs will come in two models, namely Battlemage-G10 (abbreviated as BMG-G10) and Battlemage-G21 (abbreviated as BMG-G21).

These two new GPUs from Intel were revealed in an internal document. According to the document, the BMG-G10, targeted at enthusiasts, is a GPU with a TDP of less than 225W, while the BMG-G21 is designed as a mid-range performance product with a maximum TDP not exceeding 150W.

As for specific parameters and performance, the enthusiast-grade BMG-G10 is expected to be equipped with up to 64 Xe2 cores, directly competing with NVIDIA’s RTX 4070, while the mid-range BMG-G21 aims at the RTX 4060. Both GPUs will continue to be manufactured on TSMC’s 4nm process.

Therefore, previous rumors suggesting that Intel had canceled the development of the BMG-G10 and retained only the BMG-G21 with 40 Xe2 cores appear to be untrue. Moreover, the BMG-G10’s core count is larger than the 56 Xe2 cores initially reported, indicating it is poised to deliver even higher performance.

Recently, per a report from Reuters, Intel, Qualcomm, Google, and other major tech companies are teaming up to challenge NVIDIA’s market dominance and make inroads into the AI software sector. They are expected to steer developers away from NVIDIA’s CUDA software platform, a parallel computing platform tailored for GPU acceleration.


(Photo credit: Intel)

Please note that this article cites information from wccftech and Reuters.

2024-03-28

[News] Memory Manufacturers Vie for HBM3e Market

Recently, South Korean media Alphabiz reported that Samsung may exclusively supply 12-layer HBM3e to NVIDIA.

The report indicates that NVIDIA is set to commence large-scale purchases of Samsung Electronics’ 12-layer HBM3e as early as September, with Samsung exclusively supplying the 12-layer HBM3e to NVIDIA.

As per the Alphabiz report, NVIDIA CEO Jensen Huang left his signature “Jensen Approved” on a physical 12-layer HBM3e product from Samsung Electronics at GTC 2024, which seems to suggest NVIDIA’s recognition of Samsung’s HBM3e product.

HBM is characterized by its high bandwidth, high capacity, low latency, and low power consumption. With the surge in the artificial intelligence (AI) industry, the acceleration of large-scale AI model applications has driven continuous growth in demand for high-performance memory.

According to TrendForce’s data, HBM accounted for approximately 8.4% of overall DRAM industry value in 2023, and this share is projected to expand to 20.1% by the end of 2024.

TrendForce Senior Vice President Avril Wu notes that by the end of 2024, the DRAM industry is expected to allocate approximately 250K wafers per month, or 14% of total capacity, to producing HBM TSV, with estimated annual HBM supply bit growth of around 260%.

HBM3e: Three Major Memory Manufacturers Kick Off Fierce Rivalry

Following the debut of the world’s first TSV HBM product in 2014, HBM memory technology has now iterated to HBM3e after nearly 10 years of development.

Among the memory manufacturers, competition in the HBM3e market primarily revolves around Micron, SK hynix, and Samsung. These three manufacturers reportedly provided 8-hi (24GB) samples in late July, mid-August, and early October 2023, respectively. It is worth noting that this year they have kicked off fierce competition in the HBM3e market by introducing their latest products.

On February 27th, Samsung announced the launch of its first 12-layer stacked HBM3e DRAM, the HBM3e 12H, which marks Samsung’s largest-capacity HBM product to date, boasting a capacity of up to 36GB. Samsung stated that it has begun offering samples of the HBM3e 12H to customers and anticipates starting mass production in the second half of this year.

In early March, Micron announced that it had commenced mass production of its HBM3e solution. The company stated that the NVIDIA H200 Tensor Core GPU will adopt Micron’s 8-layer stacked HBM3e memory with 24GB capacity, with shipments set to begin in the second quarter of 2024.

On March 19th, SK hynix announced the successful large-scale production of its new ultra-high-performance memory product, HBM3e, designed for AI applications. The company says this marks the world’s first customer supply of HBM3e, currently the highest-performance DRAM in existence.

A previous report from TrendForce has indicated that, starting in 2024, the market’s attention will shift from HBM3 to HBM3e, with expectations for a gradual ramp-up in production through the second half of the year, positioning HBM3e as the new mainstream in the HBM market.

TrendForce reports that SK hynix led the way with its HBM3e validation in the first quarter, closely followed by Micron, which plans to start distributing its HBM3e products toward the end of the first quarter, in alignment with NVIDIA’s planned H200 deployment by the end of the second quarter.

Samsung, slightly behind in sample submissions, is expected to complete its HBM3e validation by the end of the first quarter, with shipments rolling out in the second quarter. With Samsung having already made significant strides in HBM3 and its HBM3e validation expected to be completed soon, the company is poised to significantly narrow the market share gap with SK Hynix by the end of the year, reshaping the competitive dynamics in the HBM market.


(Photo credit: SK Hynix)

Please note that this article cites information from DRAMeXchange.

 

2024-03-26

[News] Intel, Qualcomm, Google Reportedly Form Alliance, Posing Challenges to NVIDIA’s Dominance in AI?

Intel, Qualcomm, Google, and other tech giants are reportedly joining forces with over a hundred startups to challenge NVIDIA’s dominance in the market, as per a report from Reuters. Their goal is reportedly to collectively penetrate the artificial intelligence (AI) software domain, guiding developers to migrate away from NVIDIA’s CUDA software platform.

NVIDIA’s CUDA is a parallel computing platform and programming model designed specifically to accelerate GPU computing. It allows GPU users to fully leverage their chips’ computational power in AI and other applications. As per a previous report from TrendForce, NVIDIA introduced the CUDA architecture in 2006, and it has since become nearly ubiquitous in educational institutions. Thus, almost all AI engineers encounter CUDA during their academic tenure.
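To illustrate what the CUDA programming model looks like in practice, below is a minimal, hypothetical vector-addition sketch (not taken from the cited reports; the function and variable names are illustrative only). A __global__ kernel runs one thread per array element, and the host launches it with the <<<blocks, threads>>> syntax using standard CUDA runtime calls such as cudaMallocManaged and cudaDeviceSynchronize.

#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                     // 1M elements
    float *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(float));  // unified memory keeps the sketch short
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;                         // threads per block
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover n elements
    vectorAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);             // expected: 3.0
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}

Alternatives promoted by the UXL effort, such as oneAPI/SYCL, target the same pattern of writing a kernel and launching it over a grid of work-items, but without tying the code to NVIDIA hardware.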

However, tech giants are now reportedly aiming to disrupt the current status quo. According to a report from Reuters on March 25th, Intel, Qualcomm, and Google are teaming up to challenge NVIDIA’s dominant position. They plan to provide alternative solutions for developers to reduce dependence on NVIDIA, encourage application migration to other platforms, and thereby break NVIDIA’s software monopoly and weaken its market influence.

The same report from Reuters further indicated that several tech companies have formed the “UXL Foundation,” named after the concept of “Unified Acceleration” (UXL), which aims to harness the power of accelerated computing on any hardware.

The project plans to leverage Intel’s oneAPI technology to develop software and tools supporting multiple AI accelerator chips. The goal is to reduce the technical barriers developers face when dealing with different hardware platforms, streamline the development process, enhance efficiency, and accelerate innovation and application of AI technology.

Vinesh Sukumar, Head of AI and Machine Learning Platform at Qualcomm, stated, “We’re actually showing developers how you migrate out from an NVIDIA platform.”

Bill Magro, Head of High-Performance Computing at Google, expressed, “It’s about specifically – in the context of machine learning frameworks – how do we create an open ecosystem, and promote productivity and choice in hardware.” The foundation reportedly aims to finalize technical specifications in the first half of this year and to refine the technical details by the end of the year.

However, CUDA software has established a solid foundation in the AI field, making it unlikely to be shaken overnight. Jay Goldberg, CEO of financial and strategic advisory firm D2D Advisory, believes that CUDA’s importance lies not only in its software capabilities but also in its 15-year history of usage. A vast amount of code has been built around it, deeply ingraining CUDA in numerous AI and high-performance computing projects. Changing this status quo would require overcoming significant inertia and dependency.


(Photo credit: NVIDIA)

Please note that this article cites information from Reuters.

2024-03-25

[News] The Intense Battle of 2-Nanometer Technology Set to Escalate Next Year

As demand for AI becomes urgent, according to industry sources cited by ChinaTimes News, TSMC’s Fab 20 P1 plant in Hsinchu’s Baoshan area will undergo equipment installation in April to warm up for mass production of the GAA (gate-all-around) architecture.

Baoshan P1, P2, and the three fabs scheduled for advanced process production in Kaohsiung are all reportedly expected to commence mass production in 2025, attracting customers such as Apple, NVIDIA, AMD, and Qualcomm to compete for production capacity.

Regarding this rumor, TSMC declined to comment.

Per the industry sources cited by the same report, whether wafer manufacturing is profitable depends on the yield after mass production begins. The key lies in how quickly the yield improves; the longer it takes and the higher the cost, the more challenging it becomes.

As per the same report, TSMC is said to be accelerating its entry into the 2-nanometer realm in April, aiming to shorten the time required for yield improvement in advanced processes. This move not only poses a continuous threat to Samsung and Intel but also widens TSMC’s leading edge.

Industry sources cited by the ChinaTimes’ report have revealed that TSMC has prepared for first tool-in at P1, with trial production expected in the fourth quarter this year and mass production in the second quarter of next year. Equipment manufacturers indicate that they have already deployed personnel and conducted preparatory training in response to TSMC’s customized demands.

As a new milestone in chip manufacturing processes, the 2-nanometer node will provide higher performance and lower power consumption. It adopts a nanosheet transistor structure and further develops backside power rail technology. TSMC believes that the 2-nanometer node will enable it to maintain its technological leadership and seize the growth opportunities in AI.

In fact, the cost of producing 2-nanometer chips is exceptionally high. Per the report citing sources, compared to the 3-nanometer node, costs are expected to increase by 50%, with the per-wafer cost reaching USD 30,000. Therefore, the initial adopters are expected to be smartphone chip clients, notably Apple.

Previously, per a report from the media outlet wccftech, Apple’s iPhone, Mac, iPad, and other devices will be the first users of TSMC’s 2nm process. Apple will leverage TSMC’s 2nm process technology to enhance chip performance and reduce power consumption. This advancement is expected to result in longer battery life for future Apple products, such as the iPhone and MacBook.

Unlike with the 3-nanometer node, the complexity of the design means customers must start collaborating with TSMC earlier in the development process. Market speculations suggest that many clients such as MediaTek, Qualcomm, AMD, and NVIDIA have already begun cooperation. TSMC’s earnings call also emphasized that the number of customers for N2 is higher than that for N3 at the same stage of development.

The Fab 20 facility is expected to begin receiving related equipment for 2nm production as early as April, with plans to transition to GAA (Gate-All-Around) technology from FinFET for 2nm mass production by 2025.

The competition in 2-nanometer technology development is fierce. ASML plans to produce 10 EUV lithography machines for 2-nanometer production this year, with Intel having already reserved six of them. Additionally, Japan has mobilized national efforts to establish Rapidus, which also aims to compete at the 2-nanometer node.


Please note that this article cites information from ChinaTimes and wccftech.

2024-03-21

[News] NVIDIA CEO Jensen Huang Estimates Blackwell Chip Price Around USD 30,000 to USD 40,000

With the Blackwell series chips making a splash in the market, pricing becomes a focal point. According to Commercial Times citing sources, Jensen Huang, the founder and CEO of NVIDIA, revealed in a recent interview that the price range for the Blackwell GPU is approximately USD 30,000 to USD 40,000. However, this is just an approximate figure.

Jensen Huang emphasizes that NVIDIA customizes pricing based on individual customer needs and different system configurations. NVIDIA does not sell individual chips but provides comprehensive services for data centers, including networking and software-related equipment.

Reportedly, Jensen Huang stated that the global data center market is currently valued at USD 1 trillion, with total expenditures on data center hardware and software upgrades reaching USD 250 billion last year alone, a 20% increase from the previous year. He noted that NVIDIA stands to benefit significantly from this USD 250 billion investment in data centers.

According to documents recently released by NVIDIA, 19% of last year’s revenue came from a single major customer, while more than USD 9.2 billion in revenue came from a few large cloud service providers in the last quarter alone. Adjusting the pricing of the Blackwell chip series could attract more businesses from various industries to become NVIDIA customers.

As per the report from Commercial Times, Jensen Huang is said to be optimistic about the rapid expansion of the AI application market, emphasizing that AI computing upgrades are just beginning. Reportedly, he believes that future demand will only accelerate, allowing NVIDIA to capture more market share.

According to a previous report from TechNews, the new Blackwell architecture features a massive GPU crafted using TSMC’s 4-nanometer (4NP) process technology, integrating two independently manufactured dies totaling 208 billion transistors. These dies are then bound together like a zipper through the NVLink 5.0 interface.

NVIDIA utilizes a 10 TB/s NVLink 5.0 connection, officially termed the NV-HBI interface, to link the two dies. The NVLink 5.0 interface of the Blackwell complex provides 1.8 TB/s of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.

Per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
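As a rough illustration of where such peak-bandwidth figures come from, the sketch below queries two standard CUDA runtime attributes, memory clock and memory bus width, and multiplies them to estimate the theoretical memory bandwidth of whatever GPU it runs on. This is a generic estimate, not NVIDIA’s published methodology or B200-specific data.

#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int dev = 0;
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, dev);

    int memClockKHz = 0, busWidthBits = 0;
    // Standard CUDA runtime attribute queries.
    cudaDeviceGetAttribute(&memClockKHz, cudaDevAttrMemoryClockRate, dev);
    cudaDeviceGetAttribute(&busWidthBits, cudaDevAttrGlobalMemoryBusWidth, dev);

    // Rough theoretical peak: clock (Hz) x 2 (double data rate) x bus width (bytes).
    double gbPerSec = 2.0 * (memClockKHz * 1e3) * (busWidthBits / 8.0) / 1e9;
    printf("%s: ~%.0f GB/s theoretical peak memory bandwidth\n", prop.name, gbPerSec);
    return 0;
}

The B200 and H100 figures quoted above come from NVIDIA’s and Tom’s Hardware’s published specifications rather than from such a runtime query.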

NVIDIA invested significantly in the development of the GB200. Jensen Huang revealed that developing the GB200 was a monumental task, with expenditures exceeding USD 10 billion on the modern GPU architecture and design alone.

Given the substantial investment, Huang reportedly confirmed that NVIDIA has priced the Blackwell GPU GB200, tailored for AI and HPC workloads, at USD 30,000 to USD 40,000. Industry sources cited by the report from Commercial Times point out that NVIDIA is particularly keen on selling supercomputers or DGX B200 SuperPODS, as the average selling price (ASP) is higher in situations involving large hardware and software deployments.


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times, Tom’s Hardware and TechNews.
