Articles


2023-08-28

How Are Autotech Giants Revving Up Their R&D Game Amid the Downturn?

In the face of adversity in the autonomous vehicle market, car manufacturers are not hitting the brakes. Rather, they're zeroing in, adopting more focused and streamlined strategies deeply rooted in core technologies.

Eager to expedite the mass-scale rollout of robotaxis, Tesla recently announced an acceleration in the development of its Dojo supercomputer. The company is committing an investment of $1 billion and is set to have 100,000 NVIDIA A100 GPUs ready by early 2024, potentially placing it among the top five global computing powerhouses.

While Tesla already operates a supercomputer built on NVIDIA GPUs, it remains committed to crafting a highly efficient one in-house. This move signals that computational capability is becoming an essential weapon in automakers' arsenals, underscoring the importance of mastering R&D in this area.

HPC Fosters Collaboration in the Car Ecosystem

According to TrendForce forecasts, the global high-performance computing (HPC) market is expected to reach US$42.6 billion in 2023 and expand to US$56.8 billion by 2027, an annual growth rate of over 7%. The automotive sector is anticipated to be the primary force propelling this growth.
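A quick back-of-the-envelope check (illustrative, not from the report) confirms the stated growth rate: moving from US$42.6 billion in 2023 to US$56.8 billion in 2027 implies a compound annual growth rate just under 7.5%, consistent with "over 7%".

```python
# Compound annual growth rate implied by TrendForce's HPC forecast.
start, end = 42.6, 56.8   # US$ billions, 2023 and 2027
years = 2027 - 2023
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 7.5%
```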

Feeling the heat of industry upgrades, major automakers like BMW, Continental, General Motors, and Toyota aren’t just investing in high-performance computing systems; they’re also forging deep ties with ecosystem partners, enhancing cloud, edge, chip design, and manufacturing technologies.

For example, BMW is joining forces with EcoDataCenter to extend its high-performance computing footprint, aiming to elevate its autonomous driving and driver-assist systems.

On another front, Continental, a leading Tier-1 supplier, is betting on its cross-domain integration and scalable CAEdge (Car Edge) framework. Set to debut in the first half of 2023, this smart cockpit solution offers automakers a far more flexible development environment.

In-house Tech Driving Towards Level 3 and Beyond

To successfully roll out autonomous driving on a grand scale, three pillars are paramount: extensive real-world data, neural network training, and in-vehicle hardware/software. None can be overlooked, thereby prompting many automakers and Tier 1 enterprises to double down on their tech blueprints.

Tesla has already made significant strides across related products. Beyond its supercomputer plan, its repertoire includes the D1 chip, the Full Self-Driving (FSD) computer, multi-camera neural networks, and automated labeling, with cross-platform data serving as the backbone of its supercomputer's operations.

In a similar vein, General Motors’ subsidiary, Cruise, while being mindful of cost considerations, is gradually phasing out NVIDIA GPUs, opting instead to develop custom ASIC chips to power its vehicles.

Another front-runner, Valeo, unveiled its Scala 3 in the first half of 2023, nudging LiDAR technology closer to Level 3 and laying a foundation for robotaxi (Level 4) deployment.

All this paints a clear picture: even in a subdued auto market, car manufacturers' commitment to autonomous tech R&D hasn't waned. In the long run, those who steadfastly stick to their tech strategies while nimbly adjusting to market fluctuations are poised to lead the next market resurgence and become beacons of the industry.

For more information on reports and market data from TrendForce’s Department of Semiconductor Research, please click here, or email Ms. Latte Chung from the Sales Department at lattechung@trendforce.com

(Photo credit: Tesla)

2023-08-25

TrendForce Dives into NVIDIA’s Product Positioning and Supply Chain Shifts Post Earnings Release

NVIDIA’s latest financial report for FY2Q24 reveals that its data center business reached US$10.32 billion—a QoQ growth of 141% and YoY increase of 171%. The company remains optimistic about its future growth. TrendForce believes that the primary driver behind NVIDIA’s robust revenue growth stems from its data center’s AI server-related solutions. Key products include AI-accelerated GPUs and AI server HGX reference architecture, which serve as the foundational AI infrastructure for large data centers.
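As a consistency check on the reported figures (an illustrative calculation, not from the report), the US$10.32 billion quarter combined with 141% QoQ and 171% YoY growth lets us back out the implied prior-quarter and year-ago baselines:

```python
# Baselines implied by NVIDIA's FY2Q24 data center figures.
fy2q24_dc_revenue = 10.32                        # US$ billions
prior_quarter = fy2q24_dc_revenue / (1 + 1.41)   # implied FY1Q24, ~US$4.28B
year_ago = fy2q24_dc_revenue / (1 + 1.71)        # implied FY2Q23, ~US$3.81B
print(f"FY1Q24 ~ ${prior_quarter:.2f}B, FY2Q23 ~ ${year_ago:.2f}B")
```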

TrendForce further anticipates that NVIDIA will integrate its software and hardware resources. Utilizing a refined approach, NVIDIA will align its high-end, mid-tier, and entry-level GPU AI accelerator chips with various ODMs and OEMs, establishing a collaborative system certification model. Beyond accelerating the deployment of CSP cloud AI server infrastructures, NVIDIA is also partnering with entities like VMware on solutions including the Private AI Foundation. This strategy extends NVIDIA’s reach into the edge enterprise AI server market, underpinning steady growth in its data center business for the next two years.

NVIDIA’s data center business surpasses 76% of revenue due to strong demand for cloud AI

In recent years, NVIDIA has been actively expanding its data center business. In FY4Q22, data center revenue accounted for approximately 42.7% of the total, trailing the gaming segment by about 2 percentage points. By FY1Q23, however, the data center business had surpassed gaming, accounting for over 45% of revenue. Starting in 2023, as major CSPs invested heavily in chatbots and various AI services for public cloud infrastructure, NVIDIA reaped significant benefits. By FY2Q24, data center revenue share had skyrocketed to over 76%.

NVIDIA targets both Cloud and Edge Data Center AI markets

TrendForce observes and forecasts a shift in NVIDIA’s approach to high-end GPU products in 2H23. While the company has primarily focused on top-tier AI servers equipped with the A100 and H100, given positive market demand, NVIDIA is likely to prioritize the higher-priced H100 to effectively boost its data-center-related revenue growth.

NVIDIA is currently positioning the L40S as its flagship mid-tier GPU, a move with several strategic implications. First, the high-end H100 series is constrained by the limited production capacity of current CoWoS and HBM technologies. In contrast, the L40S primarily utilizes GDDR memory; without the need for CoWoS packaging, it can be rapidly introduced to the mid-tier AI server market, filling the gap left by the A100 PCIe interface in meeting the needs of enterprise customers.

Second, the L40S also targets enterprise customers who don't require large-parameter models like ChatGPT, focusing instead on more compact AI training applications in specialized fields, with parameter counts ranging from tens of billions to under a hundred billion. It can also handle edge AI inference and image analysis tasks. Additionally, in light of potential geopolitical issues that might disrupt the supply of the high-end H series for Chinese customers, the L40S can serve as an alternative. For lower-tier GPUs, NVIDIA highlights the L4 and T4 series, designed for real-time AI inference and image analysis in edge AI servers, emphasizing affordability and a high cost-performance ratio.

HGX and MGX AI server reference architectures are set to be NVIDIA’s main weapons for AI solutions in 2H23

TrendForce notes that recently, NVIDIA has not only refined its product positioning for its core AI chip GPU but has also actively promoted its HGX and MGX solutions. Although this approach isn’t new in the server industry, NVIDIA has the opportunity to solidify its leading position with this strategy. The key is NVIDIA’s absolute leadership stemming from its extensive integration of its GPU and CUDA platform—establishing a comprehensive AI ecosystem. As a result, NVIDIA has considerable negotiating power with existing server supply chains. Consequently, ODMs like Inventec, Quanta, FII, Wistron, and Wiwynn, as well as brands such as Dell, Supermicro, and Gigabyte, are encouraged to follow NVIDIA’s HGX or MGX reference designs. However, they must undergo NVIDIA’s hardware and software certification process for these AI server reference architectures. Leveraging this, NVIDIA can bundle and offer integrated solutions like its Arm CPU Grace, NPU, and AI Cloud Foundation.

It’s worth noting that for ODMs or OEMs, given that NVIDIA is expected to make significant achievements in the AI server market for CSPs from 2023 to 2024, there will likely be a boost in overall shipment volume and revenue growth of AI servers. However, with NVIDIA’s strategic introduction of standardized AI server architectures like HGX or MGX, the core product architecture for AI servers among ODMs and others will become more homogenized. This will intensify the competition among them as they vie for orders from CSPs. Furthermore, it’s been observed that large CSPs such as Google and AWS are leaning toward adopting in-house ASIC AI accelerator chips in the future, meaning there’s a potential threat to a portion of NVIDIA’s GPU market. This is likely one of the reasons NVIDIA continues to roll out GPUs with varied positioning and comprehensive solutions. They aim to further expand their AI business aggressively to Tier-2 data centers (like CoreWeave) and edge enterprise clients.

2023-08-25

[News] TSMC Partners with ASE and Siliconware to Boost CoWoS Packaging Capacities

According to Liberty Times Net, NVIDIA's Q2 financials and Q3 guidance have astounded the market, driven by substantial growth in its AI-centric data center business. NVIDIA is addressing CoWoS packaging supply constraints by working with additional suppliers to boost future capacity and meet demand. The move is echoed by South Korea's own push into advanced packaging.

South Korea’s Swift Pursuit on Advanced Packaging

The semiconductor industry highlights that the rapid development of generative AI has outpaced expectations, causing a shortage of advanced packaging production capacity. Faced with this supply-demand gap, TSMC has outsourced some of its capacity, with silicon interposer production shared by facilities under United Microelectronics Corporation (UMC) and Siliconware Precision Industries (SPIL). UMC has also strategically partnered with SPIL, and Amkor's Korean facilities have joined the ranks of suppliers to augment production capacity.

Due to equipment limitations, TSMC’s monthly CoWoS advanced packaging capacity is expected to increase from 10,000 units to a maximum of 12,000 units by the end of this year. Meanwhile, other suppliers could potentially raise their CoWoS monthly capacity to 3,000 units. TSMC aims to boost its capacity to 25,000 units by the end of next year, while other suppliers might elevate theirs to 5,000 units.
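Taken together, the capacity figures above imply that industry-wide CoWoS output would double over the coming year (a rough tally of the reported numbers, not a TrendForce projection):

```python
# Combined monthly CoWoS capacity implied by the reported figures.
tsmc_now, others_now = 12_000, 3_000     # units/month, end of this year
tsmc_next, others_next = 25_000, 5_000   # units/month, end of next year
total_now = tsmc_now + others_now        # 15,000
total_next = tsmc_next + others_next     # 30,000
print(f"Industry total: {total_now:,} -> {total_next:,} units/month")
```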

According to South Korean media, Samsung has entered the scene, competing for advanced packaging orders from NVIDIA. South Korea has also initiated a strategic research project aimed at rapidly narrowing its gap in packaging technology within five to seven years, targeting leaders such as TSMC, Amkor, and China's JCET.

(Source: https://ec.ltn.com.tw/article/paper/1601162)
2023-08-25

[News] NVIDIA Establishes Non-TSMC CoWoS Supply Chain, UMC Doubles Interposer Capacity

According to a report from Taiwan’s Commercial Times, NVIDIA is aggressively establishing a non-TSMC CoWoS supply chain. Sources in the supply chain reveal that UMC is proactively expanding silicon interposer capacity, doubling it in advance, and now planning to further increase production by over two times. The monthly capacity for silicon interposers will surge from the current 3 kwpm (thousand wafers per month) to 10 kwpm, potentially aligning its capacity with TSMC’s next year, significantly alleviating the supply strain in the CoWoS process.

A prior report from Nomura Securities highlighted NVIDIA's efforts since the end of Q2 this year to construct a non-TSMC supply chain. Key players include UMC for silicon interposer fabrication, with Amkor and SPIL handling packaging and testing. NVIDIA aims to add suppliers to meet the surging demand for CoWoS solutions.

The pivotal challenge in expanding CoWoS production lies in insufficient silicon interposer supply. Going forward, UMC will provide silicon interposers for the front-end CoW (chip-on-wafer) process, while Amkor and SPIL will take charge of the back-end WoS (wafer-on-substrate) packaging. Together, these collaborations will establish a non-TSMC CoWoS supply chain.

UMC states that its current silicon interposer capacity stands at 3 kwpm. The company has decided to double capacity at its Singapore plant, targeting around 6 kwpm. The additional capacity is expected to come online progressively within six to nine months, with the first quarter of next year as the earliest projection.

Yet, given persistently robust market demand, even UMC's expansion to 6 kwpm may not fully meet market needs. Industry sources therefore suggest UMC has opted to further expand silicon interposer capacity to 10 kwpm, more than tripling its current output. Addressing these expansion rumors, UMC affirmed that growth in advanced packaging demand is a clear long-term trend and a future focus, stating that it is evaluating capacity options and not ruling out further enlargement of its silicon interposer capabilities.

(Photo credit: Amkor)

2023-08-24

[News] Foxconn Rumored to Secure Significant Orders for NVIDIA’s New GH200, L40S Module

According to a report by Taiwan's Economic Daily, assembly orders for NVIDIA's newly released GH200 module are being handled exclusively by Foxconn, which also manages all assembly orders for the L40S.

Foxconn has traditionally refrained from commenting on individual business and order dynamics. It is believed that AI chip modules constitute the highest-margin product within the entire server supply chain.

Foxconn has been a longstanding partner of NVIDIA, providing an end-to-end solution across chip modules, baseboards, motherboards, servers, and chassis. Foxconn’s capabilities have facilitated the creation of a comprehensive solution for NVIDIA’s AI server supply chain.

Previously, Foxconn held an exclusive assembly partnership with NVIDIA for the H100 and H800 modules, not only retaining those orders but also securing a substantial portion of HGX module orders. Now, reports indicate that Foxconn will also exclusively supply NVIDIA's newly unveiled GH200 and L40S.

Industry sources indicate that due to severe constraints on TSMC’s advanced CoWoS packaging capacity, the scaling up of NVIDIA’s AI chip production has been hindered. However, with new CoWoS production capacity set to gradually open up in the late third quarter to the fourth quarter, shipments of Foxconn’s AI chip modules are anticipated to rapidly increase.

Industry sources reveal that in business negotiations, NVIDIA is known for being demanding of its suppliers, but also generous in return. As long as suppliers deliver products that meet or even exceed expectations, NVIDIA is willing to pay reasonable prices, fostering mutually beneficial partnerships.

(Photo credit: NVIDIA)

