Articles


2024-03-22

[News] AMD Hosts Innovation Summit, CEO Lisa Su Highlights the Beginning of AIPC Era

Following NVIDIA’s GTC 2024, AMD hosted its own AI PC Innovation Summit on March 21st, with CEO Lisa Su leading the top executives in attendance. As per a report from Commercial Times, AMD showcased its AI PC applications in collaboration with partners including ASUS, MSI, and Acer.

AMD highlights that future language models will evolve in two directions. The first is the large-scale models introduced by tech giants, which use ever more parameters, grow more complex to operate, and are characterized by closed architectures.

The other direction is small open-source models, which are gaining wider public acceptance. With fewer parameters, these models can run smoothly on edge devices, especially AI PCs, and are expected to attract a significant influx of developers.

Furthermore, the AI compute requirements for large and small language models are entirely different, and AMD has positioned different hardware to meet each of these demands.

Lisa Su emphasizes that artificial intelligence is driving a revolution, reshaping every aspect of the tech industry, from data centers to AI PCs and edge computing. AMD is excited about the opportunities presented by this new era of computing.

TrendForce previously issued an analysis in a press release, indicating that the AI PC market is propelled by two key drivers: Firstly, demand for terminal applications, mainly dominated by Microsoft through its Windows OS and Office suite, is a significant factor. Microsoft is poised to integrate Copilot into the next generation of Windows, making Copilot a fundamental requirement for AI PCs. Secondly, Intel, as a leading CPU manufacturer, is advocating for AI PCs that combine CPU, GPU, and NPU architectures to enable a variety of terminal AI applications.

Introduced around the end of 2023, Qualcomm’s Snapdragon X Elite platform is set to be the first to meet Copilot standards, with shipments expected in the second half of 2024. This platform is anticipated to deliver around 45 TOPS.

Following closely behind, AMD’s Ryzen 8000 series (Strix Point) is also expected to meet these requirements. Intel’s Meteor Lake, launched in December 2023 with a combined CPU+GPU+NPU power of 34 TOPS, falls short of Microsoft’s standards. However, Intel’s upcoming Lunar Lake might surpass the 40 TOPS threshold by the end of the year.

The race among Qualcomm, Intel, and AMD in the AI PC market is set to intensify the competition between the x86 and Arm CPU architectures in the Edge AI market. Qualcomm’s early compliance with Microsoft’s requirements positions it to capture the initial wave of AI PC opportunities, as major PC OEMs like Dell, HPE, Lenovo, ASUS, and Acer develop Qualcomm CPU-equipped models in 2024, presenting a challenge to the x86 camp.


(Photo credit: AMD)

Please note that this article cites information from Commercial Times and Bloomberg.

2024-03-22

[News] Micron’s Financial Report Reveals High Demand for HBM in 2025, Capacity Nears Full Allocation

Micron, the major memory manufacturer in the United States, has benefited from AI demand, returning to profitability last quarter (ended in February) and issuing an optimistic financial forecast.

During its earnings call on March 20th, Micron CEO Sanjay Mehrotra stated that the company’s HBM (High Bandwidth Memory) capacity for this year has been fully allocated, with most of next year’s capacity already booked. HBM products are expected to generate hundreds of millions of dollars in revenue for Micron in the current fiscal year.

Per a report from Washington Post, Micron expects revenue for the current quarter (ending in May) to be between USD 6.4 billion and USD 6.8 billion, with a midpoint of USD 6.6 billion, surpassing Wall Street’s expectation of USD 6 billion.

Last quarter, Micron’s revenue surged 58% year-on-year to USD 5.82 billion, exceeding Wall Street’s expectation of USD 5.35 billion. The company posted a net profit of USD 790 million last quarter, a turnaround from a loss of USD 2.3 billion in the same period last year. Excluding one-time charges, Micron’s EPS reached USD 0.42 last quarter. Mehrotra reportedly attributed Micron’s return to profitability last quarter to the company’s efforts in pricing, product, and operational costs.

Over the past year, memory manufacturers have cut production. Meanwhile, the explosive growth of the AI industry has driven a surge in demand for NVIDIA AI processors, benefiting upstream memory manufacturers.

Mehrotra stated, “We believe Micron is one of the biggest beneficiaries in the semiconductor industry of the multiyear opportunity enabled by AI.”

Projected bit demand growth in 2024 is close to 15% for DRAM and in the mid-teens for NAND Flash, while bit supply growth for both is expected to fall below demand growth.

Micron utilizes 176- and 232-layer processes for over 90% of its NAND Flash production. As for HBM3e, it is expected to contribute to revenue starting from the second quarter.

Per a previous TrendForce press release, the three major HBM manufacturers held market shares as follows in 2023: SK hynix and Samsung were both around 46-49%, while Micron stood at roughly 4-6%.

In terms of capital expenditures, the company maintains a budget of USD 7.5 to 8 billion (taking U.S. government subsidies into account), primarily allocated to expanding HBM-related capacity.

Micron stated that due to HBM’s more complex packaging, it consumes three times the wafer capacity of DDR5 for the same bit output, indirectly constraining the capacity available for non-HBM products and thereby improving the overall DRAM supply-demand balance.
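As an illustrative back-of-the-envelope sketch, the 3:1 trade ratio implies that every unit of HBM output displaces three units of conventional DRAM output from a fixed wafer supply. The wafer and bit figures below are hypothetical; only the 3x ratio comes from the article.

```python
# Sketch of the HBM trade ratio described above. Only the ~3x
# DDR5-equivalent capacity consumption per HBM bit is from the article;
# all absolute numbers are hypothetical, for illustration only.

TRADE_RATIO = 3  # DDR5-equivalent wafer capacity consumed per unit of HBM bits

def conventional_bits_left(total_capacity, hbm_bits):
    """Conventional DRAM bits still producible after allocating wafer
    capacity to `hbm_bits` of HBM output (in DDR5-equivalent units)."""
    return total_capacity - TRADE_RATIO * hbm_bits

# Hypothetical: 100 units of DDR5-equivalent capacity, 10 units of HBM bits
print(conventional_bits_left(100, 10))  # 70: each HBM bit displaces 3 conventional bits
```

The point of the sketch is the direction of the effect: diverting even a modest share of capacity to HBM removes a disproportionate amount of conventional DRAM supply, which is why Micron frames HBM growth as tightening the broader DRAM market.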

As per Micron’s report, regarding growth outlooks for various end markets in 2024, the annual growth rate for the data center industry has been revised upward from mid-single digits to mid-to-high single digits, while the PC industry’s annual growth rate remains at low to mid-single digits. AI PCs are expected to capture a certain market share in 2025. The annual growth rate for the mobile phone industry has been adjusted upward from modest growth to low to mid-single digits.


(Photo credit: Micron)

Please note that this article cites information from Micron and Washington Post.

2024-03-22

[News] Samsung Reportedly Commits to Advanced Packaging, Targets Over USD 100 Million in Related Revenue This Year

Amid the AI boom driving a surge in demand for advanced packaging, South Korean semiconductor giant Samsung Electronics is aggressively entering the advanced packaging arena. On the 20th, it announced its ambitions to achieve record-high revenue in advanced packaging this year, aiming to surpass the USD 100 million mark.

According to reports from Reuters and The Korea Times, Samsung’s annual shareholders’ meeting took place on March 20th.

During the meeting, company Vice Chairman Han Jong-hee stated: “Although the macroeconomic environment is expected to be uncertain this year, we see an opportunity for increased growth through next-generation technology innovation.”

“Samsung plans to apply AI to all devices, including smartphones, foldable devices, accessories and extended reality (XR), to provide customers with a new experience where generative AI and on-device AI unfold,” Han added.

Samsung established the Advanced Package Business Team under the Device Solutions business group in December last year. Samsung Co-CEO Kye-Hyun Kyung stated that he expects the results of Samsung’s investment to come out in earnest from the second half of this year.

Kyung further noted that for a future generation of HBM chips called HBM4, likely to be released in 2025 with more customised designs, Samsung will take advantage of having memory chips, chip contract manufacturing and chip design businesses under one roof to satisfy customer needs.

According to a previous report from TrendForce, Samsung led the pack with the highest revenue growth among the top manufacturers in Q4, jumping 50% QoQ to USD 7.95 billion, largely due to a surge in 1-alpha nm DDR5 shipments that boosted server DRAM shipments by over 60%. In the fourth quarter of last year, Samsung held a 45.5% share of the DRAM market.


(Photo credit: Samsung)

Please note that this article cites information from Reuters and The Korea Times.


2024-03-21

[News] MediaTek Partners with Ranovus to Enter Niche Market, Expands into Heterogeneous Integration Co-Packaged Optics Industry

MediaTek has reportedly made its foray into the booming field of Heterogeneous Integration Co-Packaged Optics (CPO), announcing on March 20th a partnership with optical communications firm Ranovus to launch a customized Application-Specific Integrated Circuit (ASIC) design platform for CPO. This platform is reported to provide advantages such as low cost, high bandwidth density, and low power consumption, expanding MediaTek’s presence in the thriving markets of AI, Machine Learning (ML), and High-Performance Computing (HPC).

According to its press release, on the eve of the 2024 Optical Fiber Communication Conference (OFC 2024), MediaTek announced the launch of a new-generation customized chip design platform, offering heterogeneous integration solutions for high-speed electronic and optical signal transmission interfaces (I/O).

MediaTek stated that it will be demonstrating a serviceable socketed implementation that combines 8x800G electrical links and 8x800G optical links for a more flexible deployment. It integrates both MediaTek’s in-house SerDes for electrical I/O as well as co-packaged Odin® optical engines from Ranovus for optical I/O.

As per the same release, leveraging a heterogeneous solution that includes both 112G LR SerDes and optical modules, this CPO demonstration is said to reduce board space and device costs, boost bandwidth density, and lower system power by up to 50% compared to existing solutions.

MediaTek emphasizes that its ASIC design platform covers all aspects from design to production, offering a comprehensive solution with the latest industry technologies such as MLink, UCIe’s Die-to-Die Interface, InFO, CoWoS, Hybrid CoWoS advanced packaging technologies, PCIe high-speed transmission interfaces, and integrated thermals and mechanical design.

“The emergence of Generative AI has resulted in significant demand not only for higher memory bandwidth and capacity, but also for higher I/O density and speeds. Integration of electrical and optical I/O is the latest technology that allows MediaTek to deliver the most flexible leading-edge data center ASIC solutions,” said Jerry Yu, Senior Vice President at MediaTek.

Industry sources cited by Economy Daily News predict that as next-generation optical communication transitions to 800G transmission speeds, the physical limitations of materials will necessitate using optical rather than electrical signals to achieve high-speed data transmission. This is expected to drive demand for CPO solutions with optical-to-electrical conversion capabilities, making them a new focal point for semiconductor manufacturers.


(Photo credit: MediaTek)

Please note that this article cites information from MediaTek and Economy Daily News.

2024-03-21

[News] NVIDIA CEO Jensen Huang Estimates Blackwell Chip Price Around USD 30,000 to USD 40,000

With the Blackwell series chips making a splash in the market, pricing becomes a focal point. According to Commercial Times citing sources, Jensen Huang, the founder and CEO of NVIDIA, revealed in a recent interview that the price range for the Blackwell GPU is approximately USD 30,000 to USD 40,000. However, this is just an approximate figure.

Jensen Huang emphasizes that NVIDIA customizes pricing based on individual customer needs and different system configurations. NVIDIA does not sell individual chips but provides comprehensive services for data centers, including networking and software-related equipment.

Reportedly, Jensen Huang stated that the global data center market is currently valued at USD 1 trillion, with total expenditures on data center hardware and software upgrades reaching USD 250 billion last year alone, a 20% increase from the previous year. He noted that NVIDIA stands to benefit significantly from this USD 250 billion investment in data centers.

According to documents recently released by NVIDIA, 19% of last year’s revenue came from a single major customer, and more than USD 9.2 billion in revenue came from a few large cloud service providers in the last quarter alone. Adjusting the pricing of the Blackwell chip series could attract more businesses from various industries to become NVIDIA customers.

As per the report from Commercial Times, Jensen Huang is said to be optimistic about the rapid expansion of the AI application market, emphasizing that AI computing upgrades are just beginning. Reportedly, he believes that future demand will only accelerate, allowing NVIDIA to capture more market share.

According to a previous report from TechNews, the new Blackwell architecture features a massive GPU crafted using TSMC’s 4-nanometer-class (4NP) process technology, integrating two independently manufactured dies with a total of 208 billion transistors. The dies are then bound together like a zipper through a high-bandwidth die-to-die interface.

NVIDIA utilizes a 10 TB/sec link, officially termed the NV-HBI interface, to connect the two dies. Separately, the Blackwell GPU’s NVLink 5.0 interface provides 1.8 TB/sec of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.

Per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.

NVIDIA invested heavily in the development of the GB200. Jensen Huang revealed that it was a monumental task, with expenditures exceeding USD 10 billion on the GPU architecture and design alone.

Given the substantial investment, Huang reportedly confirmed that NVIDIA has priced the Blackwell GPU GB200, tailored for AI and HPC workloads, at USD 30,000 to USD 40,000. Industry sources cited by the report from Commercial Times point out that NVIDIA is particularly keen on selling supercomputers or DGX B200 SuperPODS, as the average selling price (ASP) is higher in situations involving large hardware and software deployments.


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times, Tom’s Hardware and TechNews.
