2023-09-01

[News] Inventec’s AI Strategy Set to Boost Growth of AI Servers Built on Both NVIDIA and AMD Chips

According to Liberty Times Net, Inventec, a prominent player in digital technology, is making significant strides in research and development across various domains, including artificial intelligence, automotive electronics, 5G, and the metaverse. The company has recently introduced a new all-aluminum liquid-cooled module for its general-purpose GPU (GPGPU) servers powered by NVIDIA’s A100 chips. The same technology is also being applied to AI server products featuring dual AMD 4th Gen EPYC processors, marking a significant step in the company’s AI push.

Inventec has announced that its Rhyperior general-purpose graphics processing servers previously offered two cooling options: pure air cooling and a hybrid of air and liquid cooling. The new all-aluminum liquid-cooled module reduces material costs by more than 30% compared with traditional copper cold plates, and the platform carries 8 GPUs linked by 6 NVIDIA NVSwitch nodes. Because the open-loop cooling system needs no external refrigeration unit, it also cuts fan power consumption by approximately 50%.

Moreover, Inventec’s K885G6 AI server, equipped with dual AMD 4th Gen EPYC processors, has demonstrated a reduction in data center air-conditioning energy consumption of approximately 40% after adopting the new cooling solution. Using water as the coolant, rather than costlier and environmentally damaging chemical fluids, further enhances the product’s appeal, and the platform supports a variety of hardware configurations to meet the diverse needs of AI customers.

Inventec’s new facility in Mexico has commenced mass production and plans to begin shipping motherboards for NVIDIA’s high-end H100 AI chips in September, with output set to increase further in the fourth quarter. In the coming year, the company will also release more Application-Specific Integrated Circuit (ASIC) products alongside new NVIDIA and AMD offerings. Orders for server system assembly (L11) from U.S. customers are growing steadily. The management team plans to showcase these innovations at the Taiwan Excellence Exhibition in Dongguan, China, starting on October 7, as it continues to deepen collaboration with international customers.

(Source: https://ec.ltn.com.tw/article/breakingnews/4412765)

2023-09-01

[News] Rumored AI Chip Demand Spurs Price Hikes at TSMC, UMC, ASE

TSMC’s CoWoS advanced packaging capacity shortage is causing limitations in NVIDIA’s AI chip output. Reports are emerging that NVIDIA is willing to pay a premium for alternative manufacturing capacity outside of TSMC, setting off a surge in massive overflow orders. UMC, the supplier of interposer materials for CoWoS, has reportedly raised prices for super hot runs and initiated plans to double its production capacity to meet client demand. ASE, an advanced packaging provider, is also seeing movement in its pricing.

In response to this, both UMC and ASE declined to comment on pricing and market rumors. In addressing the CoWoS advanced packaging capacity issue, NVIDIA previously confirmed during its financial report conference that it had certified other CoWoS packaging suppliers for capacity support and would collaborate with them to increase production, with industry speculation pointing towards ASE and other professional packaging factories.

TSMC’s CEO, C.C. Wei, openly stated that their advanced packaging capacity is at full utilization, and as the company actively expands its capacity, they will also outsource to professional packaging and testing factories.

It’s understood that the overflow effect from the inadequate CoWoS advanced packaging capacity at TSMC is gradually spreading. As the semiconductor industry as a whole adjusts its inventory, advanced packaging has become a market favorite.

Industry insiders point out that the interposer, acting as a communication medium within small chips, is a critical material in advanced packaging. With a broad uptick in demand for advanced packaging, the market for interposer materials is growing in parallel. Faced with high demand and limited supply, UMC has raised prices for super-hot-run interposer components.

UMC revealed that it offers a comprehensive solution in the interposer field, spanning carriers, custom ASICs, and memory, with cooperation across multiple fabs forming a substantial advantage. Competitors entering this space now may lack UMC’s quick responsiveness and abundant peripheral resources.

UMC emphasized that, compared with competitors, its edge in the interposer field lies in its open architecture. UMC’s interposer production currently takes place primarily at its Singapore plant, with a capacity of about 3,000 units, and the company aims to double that to 6,000–7,000 to meet customer demand.

Industry analysts attribute TSMC’s tight CoWoS advanced packaging capacity to a sudden surge in NVIDIA’s orders. TSMC’s CoWoS packaging had primarily catered to long-term partners, with production schedules already set, making it unable to provide NVIDIA with additional capacity. Moreover, even with tight capacity, TSMC won’t arbitrarily raise prices, as it would disrupt existing client production schedules. Therefore, NVIDIA’s move to secure additional capacity support through a premium likely involves temporary outsourced partners.

(Photo credit: NVIDIA)

2023-09-01

[News] With US Expanding AI Chip Control, the Next Chip Buying Frenzy Looms

According to a report by Taiwan’s Commercial Times, NVIDIA is facing repercussions from the US chip restrictions, which now control the export of its high-end AI GPU chips to certain countries in the Middle East. NVIDIA claims the controls won’t have an immediate impact on its performance, and industry insiders in the Taiwanese supply chain believe the initial effects are minimal. Judging from the earlier export bans on China, however, the move could trigger another wave of preemptive stockpiling.

Industry sources from the supply chain note that following the US restrictions on exporting chips to China last year, the purchasing power of Chinese clients increased rather than decreased, resulting in a surge in demand for secondary-level and below chip products, setting off a wave of stockpiling.

Take NVIDIA’s previous generation A100 chip for instance. After the US implemented export restrictions on China, NVIDIA replaced it with the lower-tier A800 chip, which quickly became a sought-after product in the Chinese market, driving prices to surge. It’s reported that the A800 has seen a cumulative price increase of 60% from the start of the year to late August, and it remains one of the primary products ordered by major Chinese CSPs.

Furthermore, the recently launched L40S GPU server by NVIDIA in August has become a market focal point. While it may not match the performance of systems like HGX H100/A100 in large-scale AI algorithm training, it outperforms the A100 in AI inference or small-scale AI algorithm training. As the L40S GPU is positioned in the mid-to-low range, it is currently not included in the list of chips subject to export controls to China.

Supply chain insiders suggest that even if the controls on exporting AI chips to the Middle East are tightened further, local clients are likely to turn to alternatives such as the A800 and L40S. With uncertainty over whether the US will extend the scope of controlled chip categories, this could trigger another wave of purchasing and stockpiling.

The primary direct beneficiaries in this scenario are still the chip manufacturers. Within the Taiwanese supply chain, Wistron, which supplies chip brands in the AI server front-end GPU board sector, stands to gain. Taiwanese supply chain companies producing A800 series AI servers and the upcoming L40S GPU servers, such as Quanta, Inventec, Gigabyte, and ASUS, have the opportunity to benefit as well.

(Photo credit: NVIDIA)

2023-08-28

[News] Taiwanese Computer Brand Manufacturers Rush into the AI Server Market

According to a report by Taiwan’s Economic Daily, a trend is taking shape as computer brand manufacturers venture into the AI server market. Notably swift on this path are Taiwan’s ASUS, Gigabyte, MSI, and MITAC. All four companies hold a positive outlook on the potential of AI server-related business, with expectations of reaping benefits starting in the latter half of this year and further enhancing their business contributions next year.

Presently, significant bulk orders for AI servers are stemming from large-scale cloud service providers (CSPs), which has also presented substantial opportunities for major electronic manufacturing services (EMS) players like Wistron and Quanta that have an early foothold in server manufacturing. As the popularity of generative AI surges, other internet-based enterprises, medical institutions, academic bodies, and more are intensifying their procurement of AI servers, opening doors for brand server manufacturers to tap into this burgeoning market.

ASUS asserts that with the sustained growth of data center/CSP server operations in recent years, the company’s internal production capacity is primed for action, with AI server business projected to at least double in growth by next year. Having established a small assembly plant in California, USA, and repurposing their Czech Republic facility from a repair center to a PC manufacturing or server assembly line, ASUS is actively expanding its production capabilities.

In Taiwan, investments are also being made to bolster server manufacturing capabilities. ASUS’s Shulin factory has set up a dedicated server assembly line, while the Luzhu plant in Taoyuan is slated for reconstruction to produce low-volume, high-complexity servers and IoT devices, expected to come online in 2024.

Gigabyte covers the spectrum of server products from L6 to L10, with a focus this year on driving growth in HPC and AI servers. Gigabyte previously stated that servers contribute to around 25% of the company’s revenue, with AI servers already in delivery and an estimated penetration rate of approximately 30% for AI servers equipped with GPUs.

MSI’s server revenue stands at around NT$5 billion, constituting roughly 2.7% of the company’s total revenue. While MSI primarily targets small and medium-sized customers with security and networking servers, the company has ventured into the AI server market with servers equipped with GPUs such as the NVIDIA RTX 4080/4090. In response to the surging demand for NVIDIA A100 and H100 AI chips, MSI plans to invest resources, with server revenue expected to grow by 20% to NT$6 billion in 2024, with AI servers contributing 10% to server revenue.
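As a quick arithmetic sketch, the revenue figures quoted above are internally consistent; the snippet below simply replays the report’s numbers (NT$ billions) and is not official company guidance.

```python
# Back-of-the-envelope check of the MSI server-revenue figures cited in
# the article (all values in NT$ billions).
server_rev_2023 = 5.0           # roughly NT$5 billion today
growth = 0.20                   # expected 20% growth in 2024

server_rev_2024 = server_rev_2023 * (1 + growth)
ai_share = 0.10                 # AI servers at 10% of server revenue

ai_server_rev_2024 = server_rev_2024 * ai_share

print(f"2024 server revenue: NT${server_rev_2024:.1f}B")     # NT$6.0B
print(f"of which AI servers: NT${ai_server_rev_2024:.1f}B")  # NT$0.6B
```

The 20% growth target lands exactly on the NT$6 billion figure, with AI servers implying roughly NT$600 million of that total.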

MITAC’s server business encompasses both OEM work and its own brand. With MITAC’s takeover of Intel’s Data Center Solutions Group (DSG) business in July, the company inherited numerous small and medium-sized clients previously managed by Intel.

(Photo credit: ASUS)

2023-08-28

How Are Autotech Giants Revving Up Their R&D Game Amid the Downturn?

In the face of adversities within the autonomous vehicle market, car manufacturers are not hitting the brakes. Rather, they’re zeroing in, adopting more focused and streamlined strategies, deeply rooted in core technologies.

Eager to expedite the mass-scale rollout of robotaxis, Tesla recently announced an acceleration in the development of its Dojo supercomputer. The company is now committing an investment of $1 billion and is set to have 100,000 NVIDIA A100 GPUs ready by early 2024, potentially placing it among the top five global computing powerhouses.

While Tesla already boasts a supercomputer built on NVIDIA GPUs, it remains intent on crafting a highly efficient one in-house. This move signals that computational capability is becoming an essential weapon in automakers’ arsenals, underscoring the importance of mastering R&D in this area.

HPC Fosters Collaboration in the Car Ecosystem

According to forecasts from TrendForce, the global high-performance computing (HPC) market could reach $42.6 billion in 2023 and expand to $56.8 billion by 2027, an annual growth rate of over 7%. The automotive sector is anticipated to be the primary force propelling this growth.
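A quick sanity check of that forecast: the compound annual growth rate (CAGR) implied by the two market-size figures can be computed directly, and it agrees with the “over 7%” claim.

```python
# Growth rate implied by TrendForce's HPC market forecast:
# $42.6B in 2023 growing to $56.8B by 2027.
start, end = 42.6, 56.8   # market size in billions of USD
years = 2027 - 2023       # four-year horizon

# Compound annual growth rate: (end/start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # about 7.5%, consistent with "over 7%"
```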

Feeling the heat of industry upgrades, major automakers like BMW, Continental, General Motors, and Toyota aren’t just investing in high-performance computing systems; they’re also forging deep ties with ecosystem partners, enhancing cloud, edge, chip design, and manufacturing technologies.

For example, BMW, which is joining forces with EcoDataCenter, is currently seeking to extend its high-performance computing footprint, aiming to elevate its autonomous driving and driver-assist systems.

On another front, Continental, a leading Tier 1 supplier, is betting on cross-domain integration and its scalable CAEdge (Car Edge) framework. Set to debut in the first half of 2023, this smart-cockpit solution offers automakers a far more flexible development environment.

In-house Tech Driving Towards Level 3 and Beyond

To successfully roll out autonomous driving on a grand scale, three pillars are paramount: extensive real-world data, neural network training, and in-vehicle hardware/software. None can be overlooked, thereby prompting many automakers and Tier 1 enterprises to double down on their tech blueprints.

Tesla has already made significant strides in various related products. Beyond their supercomputer plan, their repertoire includes the D1 chip, Full Self-Driving (FSD) computation, multi-camera neural networks, and automated tagging, with inter-platform data serving as the backbone for their supercomputer’s operations.

In a similar vein, General Motors’ subsidiary, Cruise, while being mindful of cost considerations, is gradually phasing out NVIDIA GPUs, opting instead to develop custom ASIC chips to power its vehicles.

Another front-runner, Valeo, unveiled its Scala 3 in the first half of 2023, nudging LiDAR technology closer to Level 3 and laying a foundation for robotaxi (Level 4) deployment.

All this paints a clear picture: even with a subdued auto market, car manufacturers’ commitment to autonomous-tech R&D hasn’t waned. In the long run, those who steadfastly stick to their technology strategies and nimbly adjust to market fluctuations are poised to lead the next market resurgence, becoming beacons in the industry.


(Photo credit: Tesla)
