2024-06-04

[News] Jensen Huang Disclosed NVIDIA’s Plan for Establishing R&D Center in Taiwan, with at Least 1,000 Engineers Recruited

The 2024 Computex Taipei has kicked off, with NVIDIA CEO Jensen Huang delivering a speech on the industry’s prospects and future amidst the AI wave. According to a report from Commercial Times, during a media interview on the evening of June 3, Huang revealed plans for NVIDIA to establish an R&D center in Taiwan within the next five years.

Jensen Huang pointed out that NVIDIA already has a great AI research team. He affirmed the importance of Taiwanese partners, stating that TSMC is vital to NVIDIA's operations, and expressed gratitude to partners such as Foxconn, Quanta, and ASUS for their support.

Huang further mentioned that within the next five years, NVIDIA will set up a large design center in Taiwan, indicating that the GPU giant is looking for a very spacious location and will hire at least 1,000 engineers.

When asked by the media about speculation regarding a meeting with AMD CEO Lisa Su, Huang revealed that he did not attend her speech but acknowledged that AMD is a great company. He mentioned that he doesn't expect to meet Su but didn't rule out the possibility completely, adding that if it happens, he would welcome it.


(Photo credit: AMD)

Please note that this article cites information from Commercial Times.

2024-06-03

[News] AMD Unveils MI325X, Claiming 30% Faster Computing Power than NVIDIA’s H200

AMD Chairman and CEO Lisa Su unveiled the company's latest AI chip, the MI325X, at the opening of Computex Taipei on June 3. She emphasized that the MI325X boasts 30% faster computing speed than NVIDIA's H200. According to a report from CNA, Su also announced that AMD plans to release a new generation of AI chips every year, hinting at a strong competitive stance against NVIDIA.

Lisa Su noted that the MI300 series is AMD's fastest-ramping product to date. The tech giant's next-generation AI chip, the MI325X, features HBM3e and is built on the CDNA 3 architecture.

According to Su, the AMD MI325X outperforms NVIDIA's H200 in memory capacity and bandwidth, offering more than twice that of the H200, while delivering 30% faster computing speed.

Furthermore, Su announced that AMD will release the MI350 in 2025, which will be manufactured on a 3nm process, with the MI400 expected to follow in 2026.

On June 3, Lisa Su stated that AMD will continue its collaboration with TSMC, advancing process technology to the 3nm and even 2nm nodes. Yet, Su did not directly address the previous market rumors suggesting that AMD might switch to Samsung’s 3nm technology.

Previously, a report on May 29th from The Korea Economic Daily speculated that AMD is likely to become a customer of Samsung Electronics' 3nm GAA process. Reportedly, during AMD CEO Lisa Su's appearance at the 2024 ITF World, hosted by the Belgian microelectronics research center imec, Su revealed that AMD plans to use the 3nm GAA process for mass-producing next-generation chips.

Per the same report, Lisa Su stated that 3nm GAA transistors can enhance efficiency and performance, with improvements in packaging and interconnect technology, making AMD products more cost-effective and power-efficient. The report further noted that, as Samsung is currently the only chip manufacturer with commercialized 3nm GAA process technology, Su's comments were interpreted as indicating that AMD will officially collaborate with Samsung for 3nm production.


(Photo credit: AMD)

Please note that this article cites information from CNA and The Korea Economic Daily.

2024-06-03

[News] A Recap of NVIDIA’s Four Major Application Areas of New Technology from Jensen Huang

During NVIDIA founder and CEO Jensen Huang’s keynote speech on June 2, he shared insights on how the AI era is driving the development of a new global industrial revolution.

According to a report from TechNews, he covered various technologies and application areas, including advancements in accelerated computing, microservices, industrial digitalization, and consumer devices, which are expected to become key focus areas in the evolving AI market.

  • Accelerated Computing

1. Collaboration between the computer industry and NVIDIA to build AI factories and data centers: NVIDIA and leading computer manufacturers worldwide announced the launch of a series of systems based on the NVIDIA Blackwell architecture. These systems feature Grace CPUs, NVIDIA networking technologies, and infrastructure to help enterprises establish AI factories and data centers.

2. Foxconn utilizes NVIDIA artificial intelligence and Omniverse technology to train robots and streamline assembly operations: Foxconn operates over 170 plants worldwide, with its newest being a virtual plant driving the latest developments in industrial automation technology.

Foxconn's newest plant is a digital twin model of a new factory in Guadalajara, Mexico, a hub for the electronics industry. Engineers at Foxconn define processes and train robots in this virtual environment, enabling the physical factory to efficiently produce the next generation of accelerated computing engines, the NVIDIA Blackwell HGX system.

3. NVIDIA significantly strengthens Ethernet networks for generative artificial intelligence: NVIDIA announced widespread adoption of the NVIDIA Spectrum-X Ethernet platform and will accelerate the release of new products. CoreWeave, GMO Internet Group, Lambda, Scaleway, STPX Global, and Yotta are the first batch of AI cloud service providers to adopt NVIDIA Spectrum-X, bringing ultimate network performance to their AI infrastructure.

Additionally, NVIDIA’s partners have also released products utilizing the Spectrum platform, including ASRock Rack, ASUS, GIGABYTE Technology, Ingrasys Inc., Inventec, Quanta Cloud Technology, Wistron and Wiwynn. Moreover, Dell Technologies, Hewlett Packard Enterprise, Lenovo, and Super Micro Computer have collaborated with NVIDIA to incorporate the Spectrum platform into their respective products.

  • Microservices

NVIDIA NIM revolutionizes model deployment: NVIDIA has announced that its inference microservice, NVIDIA NIM, optimized in container form, is now available for download by 28 million developers worldwide.

This allows deployment to the cloud, data centers, or workstations, enabling developers to easily build generative artificial intelligence applications such as copilots and chatbots within minutes, a process that previously took several weeks.

  • Industrial Digitalization

1. Electronics manufacturers adopt NVIDIA AI and Omniverse to drive robotic factories and accelerate industrial digitization: NVIDIA announced that major Taiwanese electronics manufacturers, including Delta Electronics, Foxconn, Pegatron Corporation, and Wistron Corporation, are using NVIDIA’s technology to transform their factories into more autonomous production facilities through new reference workflows.

This workflow combines NVIDIA Metropolis visual artificial intelligence (AI) technology, NVIDIA Omniverse's physically accurate rendering and simulation technology, and NVIDIA Isaac's AI robot development and deployment technology.

2. Industry leaders adopt NVIDIA's robotics technology to develop tens of millions of AI-enabled autonomous machines: more than ten global leaders in the robotics industry, including BYD Electronics, Siemens, Teradyne Robotics, and Alphabet's Intrinsic, are embracing the technology.

These companies integrate NVIDIA Isaac acceleration libraries, physics-based simulation content, and AI models into their software frameworks and robot models to enhance efficiency in factories, warehouses, and distribution centers. This allows human colleagues to work in safer environments, with the machines serving as intelligent assistants for repetitive or ultra-precise tasks.

3. NVIDIA introduces NVIDIA IGX with Holoscan support, enabling enterprise software to run medical, industrial, and scientific artificial intelligence applications in real-time at the edge: To meet the growing demand for real-time artificial intelligence computing technology at the industrial edge, NVIDIA announces the comprehensive launch of NVIDIA AI Enterprise-IGX software with Holoscan on the NVIDIA IGX platform.

  • Consumer Devices

1. NVIDIA utilizes GeForce RTX AI PCs to deliver a true AI assistant experience: NVIDIA announced the launch of new NVIDIA RTX technology designed to support AI assistants and digital human platforms running on new GeForce RTX AI laptops.

2. NVIDIA introduces Digital Human Microservices to lay the foundation for future generative AI digital avatars: NVIDIA announced the comprehensive rollout of NVIDIA ACE generative artificial intelligence microservices to accelerate the development of the next wave of digital humans, with numerous generative AI breakthroughs soon to be introduced on the platform.

Companies in customer service, gaming, and healthcare sectors are among the first to adopt ACE technology, making it easier to create, personalize, and interact with realistic digital humans. These microservices have broad applications in customer service, telehealth, gaming, and entertainment.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews and NVIDIA.

2024-06-03

[News] NVIDIA CEO Jensen Huang Announces the Latest Rubin Architecture – Rubin Ultra GPU to Feature 12 HBM4

NVIDIA CEO Jensen Huang delivered a keynote speech at the NTU (National Taiwan University) Sports Center on June 2. As per a report from TechNews, during the speech, he unveiled the new generation Rubin architecture, showcasing NVIDIA’s accelerated rollout of new architectures, which became the highlight of the evening.

While discussing NVIDIA’s next-generation architecture, Huang mentioned the Blackwell Ultra GPU and indicated that it may continue to be upgraded. He then revealed that the next-generation architecture following Blackwell will be the Rubin architecture.

The Rubin GPU will feature 8 HBM4, while the Rubin Ultra GPU will come with 12 HBM4 chips, he noted.

The new architecture has been named after American astronomer Vera Rubin, who made significant contributions to our understanding of dark matter in the universe and conducted pioneering work on galaxy rotation rates.

Notably, despite the fact that NVIDIA has just launched the new Blackwell platform, it appears that NVIDIA keeps accelerating its roadmap. According to the latest information announced by Jensen Huang, the Rubin GPU will become part of the R series products and is expected to go into mass production in the fourth quarter of 2025. The Rubin GPU and its corresponding platform are anticipated to be released in 2026, followed by the Ultra version in 2027. NVIDIA also confirmed that the Rubin GPU will use HBM4.

Per a report from global media outlet Wccftech, NVIDIA’s Rubin GPU is expected to adopt a 4x reticle design and utilize TSMC’s CoWoS-L packaging technology, along with the N3 process. Moreover, NVIDIA will use next-generation HBM4 DRAM to power its Rubin GPU.

According to Wccftech, currently, NVIDIA employs the fastest HBM3e in its B100 GPU, and it plans to update these chips with the HBM4 version when HBM4 solutions are produced in large quantities by the end of 2025.


(Photo credit: NVIDIA)

Please note that this article cites information from TechNews and Wccftech.

2024-06-03

[News] Heated Competition Driven by the Booming AI Market: A Quick Glance at HBM Giants’ Latest Moves, and What’s Next

To capture the booming demand for AI processors, memory heavyweights have been aggressively expanding HBM (High Bandwidth Memory) capacity, as well as striving to improve its yield and competitiveness. The latest development is Micron's reported new plant in Hiroshima Prefecture, Japan.

The fab, targeted to begin producing chips, including HBM, as early as 2027, will reportedly manufacture DRAM with the most advanced “1γ” (gamma; 11-12 nanometer) process, using extreme ultraviolet (EUV) lithography equipment.

Why is HBM such a hot topic, and why is it so important?

HBM: Solution to High Performance Computing; Perfectly Fitted for AI Chips

HBM applies 3D stacking technology, in which multiple layers of memory chips are stacked on top of each other and connected by TSVs (through-silicon vias). This packs more memory into a smaller space and shortens the distance data needs to travel, making HBM perfectly suited to high-performance computing applications, which require fast data transfer. Additionally, replacing GDDR SDRAM or DDR SDRAM with HBM helps control energy consumption.
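To illustrate why HBM's wide, stacked interface matters, here is a rough peak-bandwidth comparison in Python; the bus widths and per-pin data rates below are typical published figures, not numbers from this article.

```python
def peak_bandwidth_gbs(bus_width_bits: int, pin_speed_gbps: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bits / 8 bits per byte) * per-pin data rate in Gb/s."""
    return bus_width_bits / 8 * pin_speed_gbps

# One HBM3e stack: a 1024-bit interface at roughly 9.2 Gb/s per pin (illustrative)
hbm3e_stack = peak_bandwidth_gbs(1024, 9.2)   # ~1177.6 GB/s per stack

# One GDDR6 chip: a 32-bit interface at 16 Gb/s per pin (illustrative)
gddr6_chip = peak_bandwidth_gbs(32, 16.0)     # 64.0 GB/s per chip

print(f"HBM3e stack: {hbm3e_stack:.1f} GB/s, GDDR6 chip: {gddr6_chip:.1f} GB/s")
```

The wide interface, made practical by stacking dies over a base die with TSVs, is what lets a single HBM stack deliver bandwidth that would otherwise require many discrete GDDR chips.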

It is thus not surprising that AMD, the GPU heavyweight, collaborated with memory leader SK hynix to develop HBM in 2013. In 2015, AMD launched Fiji, the world's first high-end consumer GPU with HBM, and in 2016, NVIDIA introduced the P100, its first AI server GPU with HBM.

Entering the Era of HBM3e

Years after the first AI server GPU with HBM was launched, NVIDIA has now incorporated HBM3e (the 5th generation of HBM) in its Blackwell B100/Hopper H200 models. The GPU giant's GB200 and B100, which will also adopt HBM3e, are on the way, expected to launch in 2H24.

The current HBM3 supply for NVIDIA's H100 is primarily met by SK hynix. In March, the company reportedly started mass production of HBM3e and secured an order from NVIDIA. In May, yield details regarding HBM3e were revealed for the first time: according to the Financial Times, SK hynix has achieved a target yield of nearly 80%.

Samsung, on the other hand, made it into NVIDIA's supply chain with its 1Znm HBM3 products in late 2023 and received AMD MI300 certification in 1Q24. In March, Korean media outlet Alphabiz reported that Samsung may exclusively supply its 12-layer HBM3e to NVIDIA as early as September. However, rumors have it that the product failed NVIDIA's tests, though Samsung denied the claims, noting that testing is proceeding smoothly and as planned.

According to Korea Joongang Daily, Micron has moved aggressively to catch up in the heated HBM3e competition. Following the start of mass production in February, it has recently secured an order from NVIDIA for the H200.

Regarding demand, TrendForce notes that HBM3e may become the market mainstream in 2024, and is expected to account for 35% of advanced process wafer input by the end of 2024.

HBM4 Coming Soon? Major Players Gear up for Rising Demand

As for the higher-spec HBM4, TrendForce expects its potential launch in 2026. With the push for higher computational performance, HBM4 is set to expand from the current 12-layer (12hi) to 16-layer (16hi) stacks. HBM4 12hi products are set for a 2026 launch, with 16hi in 2027.

The Big Three have all revealed product roadmaps for HBM4. According to reports from Wccftech and TheElec, SK hynix plans to commence large-scale production of HBM4 in 2026. The chip will reportedly be the first from SK hynix made with its 10nm-class Gen 6 (1c) DRAM process.

As the current market leader in HBM, SK hynix is showing its ambition in capacity expansion as well as industrial collaboration. According to Nikkei, it is considering expanding investment to Japan and the US to increase HBM production and meet customer demand.

In April, it disclosed details of its collaboration with TSMC, under which SK hynix plans to adopt TSMC's advanced logic process (possibly CoWoS) for HBM4's base die so that additional functionality can be packed into limited space.

Samsung, on the other hand, plans to introduce HBM4 in 2025, according to the Korea Economic Daily. The memory heavyweight stated at CES 2024 that its HBM chip production volume will increase 2.5 times compared to last year and is projected to double again next year. To meet booming demand, the company spent KRW 10.5 billion to acquire the plant and equipment of Samsung Display located in Cheonan, South Korea, for HBM capacity expansion. It also plans to invest KRW 700 billion to 1 trillion in building new packaging lines.

Meanwhile, Micron anticipates launching 12-layer and 16-layer HBM4 with capacities of 36GB to 48GB between 2026 and 2027. After 2028, HBM4e will be introduced, pushing maximum bandwidth beyond 2 TB/s and increasing stack capacity to 48GB to 64GB.
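Those capacity figures are consistent with simple per-die arithmetic: stack capacity is the layer count times the per-die density. A quick sketch, assuming 24Gb (3GB) DRAM dies, which is an inference on our part rather than a figure from the reports:

```python
def stack_capacity_gb(layers: int, die_gb: int) -> int:
    """HBM stack capacity = number of stacked DRAM dies * capacity per die (GB)."""
    return layers * die_gb

# With assumed 3GB (24Gb) dies, the quoted 36GB and 48GB points fall out directly:
print(stack_capacity_gb(12, 3))  # 36 (12-layer HBM4)
print(stack_capacity_gb(16, 3))  # 48 (16-layer HBM4)
```

The same arithmetic explains the 48GB-64GB range cited for HBM4e, which would follow from denser dies, more layers, or both.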

Looking back at history, as market demand for AI chips keeps its momentum, GPU companies tend to diversify their sources, while memory giants vie for their favor by improving yields and product competitiveness.

In the era of HBM3, the supply for NVIDIA's H100 solution was at first primarily met by SK hynix. Samsung then entered NVIDIA's supply chain with its 1Znm HBM3 products in late 2023; though initially minor, this signified its breakthrough in the segment. The trend of diversifying suppliers may continue with HBM4. Who will claim the lion's share of the next-gen HBM market? Time will tell.


(Photo credit: Samsung)
