News
According to a report from Bloomberg, Jun Young-hyun, head of Samsung’s chip business, recently sent a stern warning to employees about the need to reform the company’s culture to avoid falling into a vicious cycle.
Jun stated that the recent improvement in Samsung’s performance was due to a rebound in the memory market. To sustain this progress, Samsung must take measures to eliminate communication barriers between departments and stop concealing or avoiding problems.
Earlier this week, Samsung announced its Q2 earnings, showcasing the fastest net profit growth since 2010. However, Jun Young-hyun highlighted several issues which may undermine Samsung’s long-term competitiveness.
He emphasized the need to rebuild the semiconductor division’s culture of vigorous debate, warning that relying solely on market recovery without restoring fundamental competitiveness would lead to a vicious cycle and repeating past mistakes.
Samsung is still striving to close the gap with its competitors. The company is working to improve the maturity of its 2nm process to meet the high-performance, low-power demands of advanced nodes. Samsung’s first-generation 3nm GAA process has achieved yield maturity and is set for mass production in the second half of the year.
In memory, Samsung is beginning to narrow the gap with SK Hynix in high-bandwidth memory (HBM). According to Bloomberg, Samsung has received certification for HBM3 chips from NVIDIA and expects to gain certification for the next-generation HBM3e within two to four months.
Jun emphasized that although Samsung is in a challenging situation, he is confident that with accumulated experience and technology, the company can quickly regain its competitive edge.
(Photo credit: Samsung)
According to a report from Commercial Times, IC design giant MediaTek is making its move into the AI accelerator sector. MediaTek CEO Rick Tsai emphasized that MediaTek aims to be the best partner for edge AI, focusing on advanced technologies and the 3nm process to optimize the power consumption and efficiency of its SoCs (system-on-chips).
Tsai mentioned that MediaTek will unveil its Dimensity 9400 flagship series of chips in October, designed to fully support most of the large language models on the market. He expressed confidence in achieving a more than 50% year-over-year revenue increase for flagship devices.
For the ASIC (Application-Specific Integrated Circuit) market, MediaTek has confirmed its entry into the AI accelerator field and will integrate CPUs as needed. ASICs are expected to begin contributing to revenue in the second half of next year.
Additionally, the TAM (Total Addressable Market) for generative AI is still in its early stages. MediaTek focuses on providing leading interconnect capabilities, such as SerDes IP and Ethernet PHY.
Regarding AI technology in the smartphone sector, Rick Tsai believes that flagship smartphones are seeing an increase in ASP (average selling price), and there is a gradual shift toward high-end smartphones in the Chinese market. He noted that Chinese brands are actively developing AI, particularly in model development, with open-source models such as LLaMA 3.
At MediaTek’s earnings call on July 31st, Rick Tsai noted that the company anticipates a return to normal seasonal patterns in the second half of the year. The outlook for the fourth quarter will largely depend on consumer product demand.
For the third-quarter outlook, MediaTek expects revenue to be roughly flat, ranging from NTD 123.5 billion to NTD 132.4 billion, compared with NTD 127.27 billion last quarter. Gross margin is projected to slide to 45.5-48.5%, from 48.8% last quarter (a figure that was itself down 3.6 percentage points quarter over quarter and up 1.3 percentage points year-over-year).
Aside from the boost expected from the Dimensity 9400 flagship release, the company’s fourth-quarter market demand is projected to be relatively moderate, which is why the annual outlook remains unchanged. MediaTek expects its full-year gross margin to be between 46% and 48%.
Regarding TSMC’s pricing adjustments, Rick Tsai remains unperturbed, noting that all industry peers face similar cost pressures. MediaTek aims to reflect cost increases through pricing, with a gross margin target of 47% for new products. Progress has also been made in 3nm and 2nm processes, and MediaTek has secured its capacity needs for 2025 with TSMC.
Moreover, MediaTek and NVIDIA are collaborating on automotive chips, with plans to launch their first chip in early 2025. Rick Tsai mentioned that details are yet to be disclosed, but significant advancements in the automotive sector are expected between 2027 and 2028.
(Photo credit: MediaTek)
NVIDIA’s next-generation Blackwell architecture AI superchip is about to ship. According to a report from Commercial Times, on July 29 during the SIGGRAPH conference in Denver, USA, NVIDIA announced a series of software updates and revealed that samples of the new AI chip architecture Blackwell have been distributed, sparking optimism about the company’s continued record-breaking performance.
Industry sources cited by the report have indicated that the Blackwell series is regarded by Jensen Huang as the most successful product in history. It is expected to drive a new wave of AI server data center construction by cloud service providers (CSPs).
The report further notes that, in addition to TSMC’s 4nm process being in high demand, the increasing penetration of water cooling technology, projected to reach up to 10%, is likely to benefit cooling distribution unit (CDU) suppliers such as Vertiv, as well as companies like Asia Vital Components, AURAS Technology, Delta Electronics, and CoolIT.
Furthermore, the new AI superchip is expected to start shipping to clients in the fourth quarter, with full-scale production set for 2025. Assembly plants will also benefit, including Wistron and Foxconn (through its subsidiary Ingrasys), which are involved in front-end manufacturing of substrates, computing boards, and switch boards.
Companies such as Wiwynn, Quanta (Quanta Cloud Technology), Inventec, GIGABYTE, ASUS, and ASRock are also expected to see increased orders for their rack-mounted systems. Among these, Quanta, Wiwynn, and Inventec have indicated that their related products are expected to start shipping in the fourth quarter, with further increases in volume anticipated in the first half of next year.
The NVIDIA Blackwell platform is set to become the main solution for NVIDIA’s high-end GPUs. TrendForce estimates that GB200 NVL36 shipments are expected to reach 60,000 units in 2025, with Blackwell GPU usage between 2.1 to 2.2 million units, making Blackwell the mainstream platform and accounting for over 80% of NVIDIA’s high-end GPUs.
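The cited figures can be cross-checked with simple arithmetic: assuming each GB200 NVL36 rack integrates 36 Blackwell GPUs (as the "NVL36" designation suggests), the estimated rack shipments imply a GPU count consistent with TrendForce's 2.1-2.2 million unit range.

```python
# Back-of-envelope check of the TrendForce estimates cited above.
# Assumption: each GB200 NVL36 rack integrates 36 Blackwell GPUs,
# as the "NVL36" designation suggests.
nvl36_shipments = 60_000   # estimated 2025 rack shipments
gpus_per_rack = 36         # assumed GPUs per NVL36 rack

implied_gpus = nvl36_shipments * gpus_per_rack
print(f"Implied Blackwell GPUs: {implied_gpus:,}")  # 2,160,000

# Falls within the cited 2.1-2.2 million unit range.
assert 2_100_000 <= implied_gpus <= 2_200_000
```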
TrendForce observes that the GB200 NVL36 architecture will initially utilize a combination of air and liquid cooling solutions, while the NVL72, due to higher cooling demands, will primarily employ liquid cooling.
(Photo credit: NVIDIA)
On July 30, AMD announced its second-quarter financial results (ending June 29), with profits exceeding Wall Street expectations. According to a report from TechNews, the most notable highlight is that nearly half of AMD’s sales now come from data center products, rather than from PC chips, gaming consoles, or industrial and automotive embedded chips.
AMD’s growth this quarter may be attributed largely to the MI300 accelerator. AMD CEO Lisa Su highlighted that the company’s MI300 sales for the quarter surpassed USD 1 billion, with contributions also coming from EPYC CPUs.
As per a report from The Verge, AMD is following a path similar to NVIDIA’s, producing new AI chips annually and accelerating all R&D efforts to maintain a competitive edge. During the earnings call, AMD reaffirmed that the MI325X will launch in Q4 of this year, followed by the next-generation MI350 next year, and the MI400 in 2026.
Lisa Su emphasized that the MI350 should be very competitive compared to NVIDIA’s Blackwell. NVIDIA launched its most powerful AI chip, Blackwell, in March of this year and has recently started providing samples to buyers.
Regarding the MI300, Su noted that while AMD is striving to sell as many products as possible and the supply chain is improving, supply is still expected to be tight until 2025.
Per a report from TechNews, despite AMD’s data center business doubling this year, it still constitutes only a small fraction of NVIDIA’s scale. NVIDIA’s latest quarterly revenue reached USD 22.6 billion, with its data center performance also hitting new highs.
A report from anue further indicates that AMD’s core business remains CPUs for laptops and servers. PC sales, categorized under the company’s Client segment, rose 49% year-over-year to USD 1.5 billion. Sales of AMD’s AI chips continue to grow, and with strong demand expected to persist, the company forecasts that third-quarter revenue will exceed market expectations.
Additionally, AMD produces chips for gaming consoles and GPUs for 3D graphics, which fall under the company’s Gaming segment. Although sales for PlayStation and Xbox have declined, leading to a 59% drop in revenue from this segment compared to last year, totaling USD 648 million, AMD notes that sales of its Radeon 6000 GPUs have actually been growing year over year.
(Photo credit: AMD)
Apple’s latest technical document reveals that the two main AI models behind Apple Intelligence are trained using Google’s Tensor Processing Units (TPUs) instead of NVIDIA GPUs. According to a report from Commercial Times, this suggests that the demand for NVIDIA chips has outstripped supply, prompting some tech giants to seek alternatives.
Apple first released an AI technical document in June, briefly stating that its AI models were trained using TPUs. The latest technical document, which spans 47 pages, provides a detailed explanation of how Apple’s foundational models (AFM) and AFM servers are trained on Cloud TPU clusters. This indicates that Apple rents cloud servers from cloud service providers to train its AI models.
In the document, Apple stated: “This system allows us to train the AFM models efficiently and scalably, including AFM-on-device, AFM-server, and larger models.”
Apple further mentioned that the on-device AFM models for iPhones and iPads are trained using a total of 2,048 TPUv5p chips, which are currently the most advanced TPU chips on the market. The AFM servers are trained using a total of 8,192 TPUv4 chips.
Google initially launched TPUs in 2015 for internal training use only and started offering TPU rental services to external clients in 2017. These TPUs are currently the most mature custom chips used for AI training. According to Google’s official website, the rental cost of their most advanced TPUs is approximately USD 2 per hour based on a three-year contract.
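For a sense of scale, the figures above can be combined into a rough rental-cost estimate. The ~USD 2 per chip-hour rate and the 8,192-chip AFM-server cluster come from the report; the one-month training duration is purely a hypothetical assumption for illustration.

```python
# Illustrative rental-cost estimate from figures cited above.
# The ~USD 2/chip-hour rate and the 8,192 TPUv4 chip count come
# from the report; the 30-day run length is a hypothetical
# assumption, since the actual training duration is not disclosed.
rate_usd_per_chip_hour = 2.0
chips_afm_server = 8_192          # TPUv4 chips for AFM-server
hours = 30 * 24                   # assumed one-month training run

cost = rate_usd_per_chip_hour * chips_afm_server * hours
print(f"Estimated rental cost: USD {cost:,.0f}")  # USD 11,796,480
```

Even under this conservative assumption, a single full training run would cost on the order of USD 10 million in rented TPU time, which helps explain why chip availability shapes these build-versus-rent decisions.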
Though NVIDIA’s GPUs are currently dominating the high-end AI chip market, the enormous number of chips required for AI model training has led to a severe shortage. This is because major tech companies like OpenAI, Microsoft, Google, Meta, Oracle, and Tesla all use NVIDIA chips to develop their AI technologies.
Since the rise of ChatGPT at the end of 2022, which spurred the generative AI market, Silicon Valley tech giants have been racing to invest in AI research and development. In contrast, Apple has lagged behind its competitors and now has to intensify its efforts to bolster Apple Intelligence. On July 29th, Apple released a preview version of Apple Intelligence for certain devices.
(Photo credit: NVIDIA)