News
According to a report from Taiwan’s TechNews, despite running at a loss in the first half of the year, BOE believes its target of shipping 120 million OLED panels for the year can still be met. As the industry enters the traditional off-season in the fourth quarter, strict production capacity control by panel manufacturers is expected to help stabilize panel prices.
BOE mentioned that the price increases witnessed over the past few months were primarily driven by panel manufacturers, who maintained their bargaining power in price negotiations. This upward pricing trend continued into the third quarter. However, as the industry enters the off-season in the fourth quarter, some price fluctuations may occur. Nevertheless, panel manufacturers are expected to adjust their production capacity downward, making any potential price reductions manageable.
Furthermore, next year holds a brighter outlook for panel demand: the average TV size is projected to increase to 51.6 inches, up from 50.2 inches in 2023, driving TV demand growth of 8% next year, compared with 3% this year. Replacement cycles are also expected to lift overall demand for laptops and monitors next year.
In the first half of this year, BOE shipped approximately 50 million flexible OLED panels, marking a 76% year-on-year increase. The largest customer accounted for 43% of the total shipments. BOE believes there is still an opportunity to achieve the annual target of shipping 120 million panels. While the OLED business experienced an overall loss in the first half of this year due to increased capacity from second-tier suppliers and intensified entry-level competition, recent months have seen signs of a bottoming-out rebound in prices. Coupled with seasonal order increases, profitability is expected to continue to improve next year.
(Photo credit: BOE)
News
According to Liberty Times Net, Inventec, a prominent player in the realm of digital technology, is making significant strides in research and development across various domains, including artificial intelligence, automotive electronics, 5G, and the metaverse. The company has recently introduced a new all-aluminum liquid-cooled module for its general-purpose graphics processing units (GPGPU) powered by NVIDIA’s A100 chips. Additionally, this innovative technology is being applied to AI server products featuring AMD’s 4th Gen EPYC dual processors, marking a significant step towards the AI revolution.
Inventec has stated that its Rhyperior general-purpose graphics processing platform previously offered two cooling solutions: air cooling, and air cooling combined with liquid cooling. The new all-aluminum liquid-cooled module reduces material costs by more than 30% compared with traditional copper cold plates. The platform carries 8 graphics processors (GPUs) and 6 NVIDIA NVSwitch nodes, and its open-loop cooling design eliminates the need for external refrigeration units while cutting fan power consumption by approximately 50%.
Moreover, Inventec’s AI server product, the K885G6, equipped with AMD’s 4th Gen EPYC dual processors, has demonstrated a significant reduction in data center air conditioning energy consumption of approximately 40% after implementing this new cooling solution. The use of water as a coolant, rather than environmentally damaging and costlier chemical fluids, further enhances the product’s appeal, as it can support a variety of hardware configurations to meet the diverse needs of AI customers.
Inventec’s new facility in Mexico has commenced mass production and plans to begin supplying motherboards for NVIDIA’s high-end H100 AI chips in September, with further production increases slated for the fourth quarter. In the coming year, the company will also release more Application-Specific Integrated Circuit (ASIC) products alongside new offerings based on NVIDIA and AMD platforms. Orders from U.S. customers for L11 (full-system) server assembly are growing steadily. The management team plans to showcase its innovations at the Taiwan Excellence Exhibition in Dongguan, China, starting October 7th, as it deepens collaboration with international customers.
News
TSMC’s CoWoS advanced packaging capacity shortage is limiting NVIDIA’s AI chip output. Reports are emerging that NVIDIA is willing to pay a premium for alternative manufacturing capacity outside of TSMC, setting off a wave of overflow orders. UMC, the supplier of interposer materials for CoWoS, has reportedly raised prices for super hot runs and initiated plans to double its production capacity to meet client demand. ASE, an advanced packaging provider, is also seeing movement in its pricing.
In response to this, both UMC and ASE declined to comment on pricing and market rumors. In addressing the CoWoS advanced packaging capacity issue, NVIDIA previously confirmed during its financial report conference that it had certified other CoWoS packaging suppliers for capacity support and would collaborate with them to increase production, with industry speculation pointing towards ASE and other professional packaging factories.
TSMC’s CEO, C.C. Wei, openly stated that their advanced packaging capacity is at full utilization, and as the company actively expands its capacity, they will also outsource to professional packaging and testing factories.
It’s understood that the overflow effect from the inadequate CoWoS advanced packaging capacity at TSMC is gradually spreading. As the semiconductor industry as a whole adjusts its inventory, advanced packaging has become a market favorite.
Industry insiders point out that the interposer, which serves as the communication medium between chiplets, is a critical material in advanced packaging. With demand for advanced packaging broadly rising, the market for interposer materials is growing in parallel. Faced with high demand and limited supply, UMC has raised prices for super-hot-run interposer orders.
UMC revealed that it offers a comprehensive solution in the interposer field, covering carriers, custom ASICs, and memory, with cooperation across multiple factories forming a substantial advantage. Competitors entering this space now may lack UMC’s quick responsiveness and abundant peripheral resources.
UMC emphasized that, compared to competitors, its competitive advantage in the interposer field lies in its open architecture. Currently, UMC’s interposer production takes place primarily at its Singapore plant, with a capacity of about 3,000 units; it aims to roughly double that to 6,000–7,000 to meet customer demand.
Industry analysts attribute TSMC’s tight CoWoS advanced packaging capacity to a sudden surge in NVIDIA’s orders. TSMC’s CoWoS packaging had primarily catered to long-term partners, with production schedules already set, making it unable to provide NVIDIA with additional capacity. Moreover, even with tight capacity, TSMC won’t arbitrarily raise prices, as it would disrupt existing client production schedules. Therefore, NVIDIA’s move to secure additional capacity support through a premium likely involves temporary outsourced partners.
(Photo credit: NVIDIA)
News
According to a report by Taiwan’s Commercial Times, NVIDIA is facing repercussions from the US chip restrictions, which now control the export of high-end AI GPU chips to certain countries in the Middle East. NVIDIA claims the controls won’t have an immediate impact on its performance, and Taiwanese supply chain insiders believe the initial effects are minimal. However, judging from the earlier export bans on China, the move could trigger another wave of preemptive stockpiling.
Industry sources from the supply chain note that following the US restrictions on exporting chips to China last year, the purchasing power of Chinese clients increased rather than decreased, resulting in a surge in demand for secondary-level and below chip products, setting off a wave of stockpiling.
Take NVIDIA’s previous generation A100 chip for instance. After the US implemented export restrictions on China, NVIDIA replaced it with the lower-tier A800 chip, which quickly became a sought-after product in the Chinese market, driving prices to surge. It’s reported that the A800 has seen a cumulative price increase of 60% from the start of the year to late August, and it remains one of the primary products ordered by major Chinese CSPs.
Furthermore, the recently launched L40S GPU server by NVIDIA in August has become a market focal point. While it may not match the performance of systems like HGX H100/A100 in large-scale AI algorithm training, it outperforms the A100 in AI inference or small-scale AI algorithm training. As the L40S GPU is positioned in the mid-to-low range, it is currently not included in the list of chips subject to export controls to China.
Supply chain insiders suggest that even if the control measures on exporting AI chips to the Middle East are further enforced, local clients are likely to turn to alternatives like the A800 and L40S. However, with uncertainty about whether the US will extend the scope of controlled chip categories, this could potentially trigger another wave of purchasing and stockpiling.
The primary direct beneficiaries in this scenario are still the chip manufacturers. Within the Taiwanese supply chain, Wistron, which supplies chip brands in the AI server front-end GPU board sector, stands to gain. Taiwanese supply chain companies producing A800 series AI servers and the upcoming L40S GPU servers, such as Quanta, Inventec, Gigabyte, and ASUS, have the opportunity to benefit as well.
(Photo credit: NVIDIA)
News
According to news from Chinatimes, Asus, a prominent technology company, announced on the 30th of this month the release of AI servers equipped with NVIDIA’s L40S GPUs, which are now available for order. NVIDIA introduced the L40S GPU in August to address the shortage of H100 and A100 GPUs. Asus unveiled these AI server products less than two weeks later, underscoring its optimism about the imminent surge in AI applications and its eagerness to seize the opportunity.
Solid AI Capabilities of Asus Group
Apart from being among the first manufacturers to introduce the NVIDIA OVX server system, Asus has leveraged resources from its subsidiaries, such as TaiSmart and Asus Cloud, to establish a formidable AI infrastructure. This spans in-house innovation in Large Language Model (LLM) technology as well as the provision of AI computing power and enterprise-level generative AI applications. These strengths position Asus as one of the few all-encompassing providers of generative AI solutions.
Projected Surge in Server Business
Regarding server business performance, Asus envisions a yearly compounded growth rate of at least 40% until 2027, with a goal of achieving a fivefold growth over five years. In particular, the data center server business catering primarily to Cloud Service Providers (CSPs) anticipates a tenfold growth within the same timeframe, driven by the adoption of AI server products.
Asus’s CEO recently emphasized that the company’s foray into AI server development was prompt and involved collaboration with NVIDIA from the outset. While its product lineup may be more streamlined than those of other OEM/ODM manufacturers, Asus secured numerous GPU orders ahead of the AI server demand surge. The company is optimistic about the shipping momentum and order visibility for the new generation of AI servers in the latter half of the year.
Embracing NVIDIA’s Versatile L40S GPU
The NVIDIA L40S GPU, built on the Ada Lovelace architecture, stands out as one of the most powerful general-purpose GPUs in data centers. It offers groundbreaking multi-workload computations for large language model inference, training, graphics, and image processing. Not only does it facilitate rapid hardware solution deployment, but it also holds significance due to the current scarcity of higher-tier H100 and A100 GPUs, which have reached allocation stages. Consequently, businesses seeking to repurpose idle data centers are anticipated to shift their focus toward AI servers featuring the L40S GPU.
Asus’s newly introduced L40S GPU servers include the ESC8000-E11/ESC4000-E11 models with built-in Intel Xeon processors, as well as the ESC8000A-E12/ESC4000A-E12 models utilizing AMD EPYC processors. These servers can be configured with four or up to eight NVIDIA L40S GPUs, helping enterprises accelerate training, fine-tuning, and inference workloads and facilitating AI model creation. This positions Asus’s platforms as a preferred choice for multi-modal generative AI applications.