2021-11-16

Explosive Growth of Commercial Opportunities Driven by Metaverse Projected to Generate Improvements in Computing Cores, Telecommunications, and Display Technologies, Says TrendForce

Even in the infancy of its development, the metaverse will leverage advantages such as lifelike interaction and virtual simulation to enable the growth of various applications, ranging from virtual meetings, digital modeling and analysis, to virtual communities, gaming, and content creation. According to TrendForce’s latest investigations, constructing the metaverse, which is more complex than the existing internet, requires more powerful data processing cores, networking environments capable of transferring enormous amounts of data, and user-side AR/VR devices with improved display performance. These requirements will further drive the development of memory products, advanced process technologies, 5G telecommunications, and display technologies.

Regarding memory products, the conceptual framework of the metaverse is heavily contingent on the support provided by compute nodes. The data center industry will therefore gain additional catalysts from the metaverse, with a corresponding growth in micro-servers and edge processing applications. The metaverse will also require higher-performance storage devices, meaning that SSDs, which are substantially faster than HDDs at writing data, will become an indispensable storage solution. On the DRAM front, take VR devices as an example: most existing devices are equipped with 4GB of LPDRAM, which has the dual advantages of low power consumption and high performance. In the short run, manufacturers do not plan to massively upgrade the application processors in these devices, which also operate in relatively simple processing environments, so growth in the DRAM density of VR devices will remain relatively stable. In terms of storage, on the other hand, because most AR/VR devices are equipped with Qualcomm chips whose specifications closely resemble those of flagship smartphone SoCs, AR/VR devices will also feature UFS 3.1 solutions.

Regarding advanced process technologies, the integration of AI and the increase in demand for computing power have resulted in a corresponding demand for high-performance chips, which enable improved graphics rendering and computation of massive amounts of data. Advanced process technologies allow the production of high-performance chips that deliver enhancements in performance, power consumption, and chip size. The realization of the metaverse requires high-performance chips for data and graphics processing, so high-performance CPUs and GPUs will assume key roles in this regard. TrendForce’s investigations indicate that, with respect to CPUs, the current mainstream products from Intel and AMD are manufactured at the Intel 7 node (equivalent to the 10nm node) and TSMC’s 7nm node, respectively, and the two companies will migrate to TSMC’s 3nm and 5nm nodes in 2022. With regard to GPUs, AMD’s wafer input plans are basically in lockstep with its plans for CPUs, whereas Nvidia has been inputting wafers at TSMC’s 7nm node and Samsung’s 8nm node. Nvidia is currently planning to input wafers at the 5nm node, and the resultant GPUs will likely be released to market in early 2023.

Regarding networking and telecommunications, the metaverse’s demand for virtual interactions that are instant, lifelike, and stable will direct greater attention to the bandwidth and latency of data transmissions. 5G communication is able to meet this demand thanks to its high bandwidth, low latency, and support for a greater number of connected devices. Hence, the arrival of the metaverse will likely accelerate the commercialization of 5G-related technologies. Notably, the 5G technologies set to become the backbone of the network environments powering the metaverse include SA (standalone) 5G networks, which deliver greater flexibility via network slicing; MEC (multi-access edge computing), which extends the computing capabilities of the cloud to the network edge; and TSN (time-sensitive networking), which improves the reliability of data transmissions. In addition, 5G networks will be combined with Wi-Fi 6 in order to extend the range of indoor wireless connections. In light of their importance in enabling the metaverse, all of these technologies have become major drivers of network service development in recent years.

Regarding display technologies, the immersive experience of VR/AR devices depends on the integration of higher resolutions and higher refresh rates. In particular, resolution increases will receive much more market attention now that Micro LED and Micro OLED technologies are being gradually adopted as displays shrink in physical size. Moreover, the traditional 60Hz refresh rate can no longer satisfy the visual demands of advanced display applications, meaning that display solutions with refresh rates higher than 120Hz will become mainstream going forward. In addition, the metaverse’s emphasis on interactivity demands display technologies that are not limited by traditional physical designs; the market for flexible display panels, which allow for free form factors, is expected to benefit as a result. At the same time, the metaverse is also expected to generate some demand for transparent displays, which serve as an important interface between the virtual world and real life.
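To put the refresh rates quoted above in concrete terms, the short Python sketch below computes the per-frame time budget at several refresh rates. The 60Hz and 120Hz figures come from the paragraph above; the 90Hz entry is an added illustrative data point, not a figure from the text.

```python
# Per-frame time budget at different display refresh rates.
# 60Hz and 120Hz are cited in the text; 90Hz is an illustrative extra point.
for refresh_hz in (60, 90, 120):
    frame_time_ms = 1000 / refresh_hz  # milliseconds available per frame
    print(f"{refresh_hz} Hz -> {frame_time_ms:.1f} ms per frame")

# Output:
# 60 Hz -> 16.7 ms per frame
# 90 Hz -> 11.1 ms per frame
# 120 Hz -> 8.3 ms per frame
```

Roughly halving the per-frame time budget is one way to see why panels in the 120Hz class are positioned as the target for smoother, lower-latency AR/VR viewing than today's 60Hz displays.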

(Image credit: Ready Player One)

2021-11-15

HBM/CXL Emerge in Response to Demand for Optimized Hardware Used in AI-driven HPC Applications, Says TrendForce

According to TrendForce’s latest report on the server industry, not only have emerging applications accelerated the pace of AI and HPC development in recent years, but the complexity of machine learning models and of inferences involving increasingly sophisticated calculations has also grown correspondingly, resulting in more data to be processed. Confronted with an ever-growing volume of data and the constraints of existing hardware, users must make tradeoffs among performance, memory capacity, latency, and cost. HBM (High Bandwidth Memory) and CXL (Compute Express Link) have thus emerged in response to this conundrum. In terms of functionality, HBM is a new type of DRAM that addresses more diverse and complex computational needs via its high I/O speed, whereas CXL is an interconnect standard that allows different processors, or xPUs, to more easily share the same memory resources.

HBM breaks through the bandwidth limitations of traditional DRAM solutions by vertically stacking DRAM dies

Memory suppliers developed HBM in order to break free from the bandwidth constraints of traditional memory solutions. Architecturally, HBM consists of a base logic die with DRAM dies stacked vertically on top of it. The 3D-stacked DRAM dies are interconnected with TSVs (through-silicon vias) and microbumps, thereby enabling HBM’s high-bandwidth design. Mainstream HBM stacks contain four or eight DRAM die layers, referred to as “4-hi” and “8-hi”, respectively. Notably, the latest HBM product currently in mass production is HBM2e. This generation contains four or eight layers of 16Gb DRAM dies, resulting in a memory capacity of 8GB or 16GB per HBM stack, respectively, with a bandwidth of 410-460GB/s. Samples of the next generation, named HBM3, have already been submitted to the relevant organizations for validation, and these products will likely enter mass production in 2022.
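As a sanity check on the figures above, the short Python sketch below reproduces the per-stack capacity from the stated die density and layer counts, and the bandwidth range from assumed interface parameters. The 1024-bit per-stack interface and 3.2-3.6Gbps per-pin data rates are typical HBM2e values assumed here rather than stated in the text.

```python
# Back-of-the-envelope check of the HBM2e figures quoted above.
# Stated in the text: 16Gb dies, 4-hi / 8-hi stacks, 410-460GB/s per stack.
# Assumed (typical HBM2e values, not stated in the text): a 1024-bit
# interface per stack and per-pin data rates of 3.2-3.6 Gbps.

GBIT_PER_DIE = 16
BUS_WIDTH_BITS = 1024  # assumed interface width per HBM2e stack

def stack_capacity_gb(die_layers: int) -> float:
    """Capacity of one HBM stack in gigabytes (8 Gb = 1 GB)."""
    return die_layers * GBIT_PER_DIE / 8

def stack_bandwidth_gbs(pin_rate_gbps: float) -> float:
    """Peak per-stack bandwidth in GB/s for a given per-pin data rate."""
    return BUS_WIDTH_BITS * pin_rate_gbps / 8

for layers in (4, 8):
    print(f"{layers}-hi stack: {stack_capacity_gb(layers):.0f} GB")
for rate in (3.2, 3.6):
    print(f"{rate} Gbps/pin: {stack_bandwidth_gbs(rate):.0f} GB/s")

# Output:
# 4-hi stack: 8 GB
# 8-hi stack: 16 GB
# 3.2 Gbps/pin: 410 GB/s
# 3.6 Gbps/pin: 461 GB/s
```

Under these assumptions, the computed values line up with the 8GB/16GB per-stack capacities and the roughly 410-460GB/s bandwidth range quoted in the text.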

TrendForce’s investigations indicate that HBM accounts for less than 1% of total DRAM bit demand for 2021, for two primary reasons. First, the vast majority of consumer applications have yet to adopt HBM due to cost considerations. Second, the server industry allocates less than 1% of its hardware to AI applications; more specifically, servers equipped with AI accelerators account for less than 1% of all servers currently in use, and most of these AI accelerators still use GDDR5(X) and GDDR6 memory, rather than HBM, to support their data processing needs.

Although HBM remains in an early phase of adoption, as applications become increasingly reliant on AI (more precise AI must be supported by more complex models), computing hardware will require the integration of HBM to run these applications effectively. In particular, FPGAs and ASICs are the two hardware categories most closely related to AI development, with Intel’s Stratix and Agilex-M as well as Xilinx’s Versal HBM being examples of FPGAs with onboard HBM. Regarding ASICs, on the other hand, most CSPs are gradually adopting their own self-designed ASICs, such as Google’s TPU, Tencent’s Enflame DTU, and Baidu’s Kunlun – all of which are equipped with HBM – for AI deployments. In addition, Intel will release a high-end version of its Sapphire Rapids server CPU equipped with HBM by the end of 2022. Taking these developments into account, TrendForce believes that an increasing number of HBM applications will emerge going forward, given HBM’s critical role in overcoming hardware-related bottlenecks in AI development.

A new interconnect standard born out of the demand for high-speed computing, CXL will be more effective at integrating the resources of a whole system

Built on PCIe Gen5, CXL is an interconnect standard that provides high-speed, low-latency connections between the CPU and accelerators such as GPUs and FPGAs. It enables memory virtualization so that different devices can share the same memory pool, thereby raising the performance of the whole computer system while reducing its cost. Hence, CXL can effectively handle the heavy workloads associated with AI and HPC applications.
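The memory-sharing idea is easiest to see with a toy model. The Python sketch below is a conceptual illustration only, not CXL software: it models a single pool of memory that several devices draw from on demand, in contrast to fixed per-device partitions that can strand capacity. The device names and capacities are illustrative assumptions.

```python
# Conceptual illustration of memory pooling (not actual CXL code):
# several devices allocate from one shared pool instead of each owning a
# fixed, private partition. Capacities and device names are illustrative.

class SharedMemoryPool:
    def __init__(self, capacity_gb: int):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # device name -> GB currently allocated

    def used_gb(self) -> int:
        return sum(self.allocations.values())

    def allocate(self, device: str, gb: int) -> None:
        free = self.capacity_gb - self.used_gb()
        if gb > free:
            raise MemoryError(f"pool exhausted: requested {gb} GB, {free} GB free")
        self.allocations[device] = self.allocations.get(device, 0) + gb

    def release(self, device: str, gb: int) -> None:
        self.allocations[device] -= gb

pool = SharedMemoryPool(capacity_gb=512)
pool.allocate("CPU", 256)   # a burst of CPU-side demand...
pool.allocate("GPU", 192)   # ...still leaves room for the GPU and FPGA.
pool.allocate("FPGA", 32)   # With three fixed ~170 GB partitions instead,
                            # the 256 GB CPU request would have failed even
                            # though the system as a whole had memory free.
print(pool.allocations)     # {'CPU': 256, 'GPU': 192, 'FPGA': 32}
```

The point of the sketch is simply that pooling lets whichever device is busiest use otherwise idle capacity, which is the system-level efficiency gain the paragraph above attributes to CXL.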

CXL is just one of several interconnect technologies that feature memory sharing. Other examples already on the market include NVLink from NVIDIA and Gen-Z, backed by AMD and Xilinx. Their existence indicates that the major ICT vendors are increasingly attentive to the integration of the various resources within a computer system. TrendForce currently believes that CXL will come out on top in this competition, mainly because it was introduced and is promoted by Intel, which holds an enormous advantage in CPU market share. With Intel’s support on the processor side, CXL advocates and the hardware providers that back the standard will be able to organize themselves into a supply chain for the related solutions. The major ICT companies that have joined the CXL Consortium include AMD, ARM, NVIDIA, Google, Microsoft, Facebook (Meta), Alibaba, and Dell. All in all, CXL appears to be the most favored among these memory protocols.

The consolidation of memory resources among the CPU and other devices can reduce communication latency and boost the computing performance needed for AI and HPC applications. For this reason, Intel will provide CXL support for its next-generation server CPU Sapphire Rapids. Likewise, memory suppliers have also incorporated CXL support into their respective product roadmaps. Samsung has announced that it will be launching CXL-supported DDR5 DRAM modules that will further expand server memory capacity so as to meet the enormous resource demand of AI computing. There is also a chance that CXL support will be extended to NAND Flash solutions in the future, thus benefiting the development of both types of memory products.

Synergy between HBM and CXL will contribute significantly to AI development; their visibility will increase across different applications starting in 2023

TrendForce believes that the market penetration rate of CXL will rise going forward as this interface standard is built into more and more CPUs. Moreover, the combination of HBM and CXL will become increasingly visible in the future hardware designs of AI servers. HBM will contribute to a further ramp-up of data processing speed by increasing the memory bandwidth of the CPU or the accelerator, while CXL will enable high-speed interconnections between the CPU and other devices. Working together, HBM and CXL will raise computing power and thereby expedite the development of AI applications.

The latest advances in memory pooling and sharing will help overcome the current hardware bottlenecks in the designs of different AI models and sustain the trend toward more sophisticated architectures. TrendForce anticipates that, by 2023, the adoption rate of CXL-supported Sapphire Rapids processors will have reached a certain level, and memory suppliers will have put their HBM3 products as well as their CXL-supported DRAM and SSD products into mass production. Hence, examples of HBM-CXL synergy will become increasingly visible across different applications from 2023 onward.

2021-10-13

Taiwanese Server ODMs Expected to Account for About 90% of Global Server Production in 2021 by Expanding Production Capacities Outside of Domestic China, Says TrendForce

Escalating trade tensions between the US and China, rising geopolitical issues, increased tariffs, and uncertainties stemming from the COVID-19 pandemic’s emergence last year have compelled server ODMs to actively shift their operations closer to clients as well as engage in risk mitigation strategies, according to TrendForce’s latest investigations. Taiwanese ODMs, in particular, are shifting their production bases away from domestic China and accelerating the installation of additional overseas production lines. TrendForce expects the share of servers manufactured in domestic China by global server ODMs to undergo a 7% YoY decrease this year as these ODMs shift their production bases mainly to Taiwan. Furthermore, Taiwanese ODMs are expected to account for about 90% of total server production this year.

On the other hand, server assembly operations, which are closely tied to motherboard manufacturing, are also dynamically reserving their L6 (motherboard-level assembly) capacities. Server assembly facilities located in New Mexico and the Czech Republic are gradually installing new production lines for server motherboards. Inventec, Wistron (including Wiwynn), and Foxconn all currently possess sufficient motherboard manufacturing capacity for allocation as needed.

While future changes in the overall server supply chain remain to be seen, the migration of production bases by US companies is of particular importance. For instance, North American CSPs have asked their server ODM partners to migrate L6 assembly lines to locations such as Taiwan and Southeast Asia in response to potential geopolitical risks. However, servers bound for non-US regions will still be manufactured in China in accordance with prior plans. Aside from Google and Facebook, both of which already have production lines in Taiwan, AWS and Microsoft have also transitioned their production lines to Taiwan.

Regarding major server ODMs’ current progress, most have installed new production lines in Taiwan, with Inventec, Wistron, Quanta, and Foxconn making the most headway. For instance, after installing three additional production lines in Guishan, Taoyuan at the end of 2020, Inventec currently operates a total of eight production lines, while Wistron has not only installed several spare production lines in the Southern Taiwan Science Park but also plans to expand its production bases in Southeast Asia at the end of 2021 for capacity allocation purposes. Quanta aims to capitalize on demand from 5G-related applications and data center build-outs by continually adjusting its motherboard production capacity in Taiwan and Thailand. Finally, by expanding the physical capacity of its Taoyuan facility, Foxconn is able to avoid incurring tariffs for its North American clients’ L6 assembly operations.

2021-05-12

Foxconn Dominates ODM Server Market by Taking Nearly 50% of AWS/Azure Server Business

The “new normal” of the post-pandemic era has seen the meteoric rise of high-speed, high-bandwidth 5G applications, which has in turn brought about a corresponding increase in demand for cloud services. As such, global server shipments for 2021 will likely reach 13.6 million units, a 5.4% YoY increase. As commercial opportunities in white-box servers begin to emerge, Taiwanese ODMs, including Quanta, Wiwynn, and Foxconn, are likely to benefit.

Under the prevailing business model of the server supply chain, the ODM is responsible for the design, hardware installation, and assembly processes, after which servers are delivered to server brands (such as HPE, Dell, Inspur, and Lenovo), which then sell them to end-clients. In contrast, a new business model has recently started to emerge, in which server ODMs manufacture specific, customized server hardware that end-clients such as cloud service providers purchase directly, thereby bypassing the brands as middlemen.

According to TrendForce’s investigations, ODMs including Quanta, Inventec, Foxconn, Wiwynn, and QCT have all received server orders from clients in the cloud services sector in 1H21. In particular, both Quanta and Inventec received orders from Microsoft Azure, AWS, Facebook, and Google Cloud. With regard to market share, Foxconn accounts for nearly half of the total server demand from Microsoft Azure and from AWS, while Quanta accounts for about 60-65% of Facebook’s server demand, in turn giving Foxconn and Quanta the lion’s share of the ODM market.

The aforementioned Taiwanese ODMs have been aggressive in growing their presence in the private industrial 5G network and edge computing markets, with Quanta subsidiary QCT being a good case in point as an ODM that supplies servers to both telecom operators and private industrial networks for these clients’ respective 5G infrastructure build-outs.

More specifically, QCT stated the following in a press release dated Jan. 4, 2021:

“Quanta Cloud Technology (QCT), a global data center solution provider, independently developed Taiwan’s first 5G standalone (SA) core network, which recently passed interoperability and performance verifications for 5G Open Network Lab operated by Taiwan’s Industrial Technology Research Institute (ITRI). The core network was successfully connected to partner radio access networks (RAN) and third-party user equipment, realizing end-to-end 5G signal transmission from edge to core and achieving significant acceleration in both uplink and downlink speeds.”

In response to the edge computing demand generated by global 5G commercialization efforts, Wiwynn recently released the EP100 server, which is a 5G edge computing solution compliant with the OCP openEDGE specification. Developed in collaboration with U.S.-based 5G software solutions provider Radisys, the EP100 can function as an O-DU or an O-CU depending on the various 5G RAN needs of telecom operators.

Furthermore, Wiwynn is continuing to develop the next generation of edge computing servers targeted at the enterprise networking and edge computing segments.

Foxconn, on the other hand, has been focusing on developing vertical solutions for private industrial 5G networks. Foxconn’s hardware infrastructure offerings include edge computing servers, TSN network switches, and gateways. The company also offers a slew of software solutions such as data management platforms and other apps, hosted by Asia Pacific Telecom. Last but not least, Foxconn recently announced an additional US$35.6 million investment in its Wisconsin project; this injection of capital will make the company well equipped to meet the demand for servers as well as 5G O-RAN and other telecom equipment.

(Cover image source: Pixabay)

2021-04-28

GCP, AWS Projected to Become Main Drivers of Global Server Demand with 25-30% YoY Increase in Server Procurement, Says TrendForce

Thanks to their flexible pricing schemes and diverse service offerings, CSPs have been a direct, major driver of enterprise demand for cloud services, according to TrendForce’s latest investigations. The rise of CSPs has in turn brought about a gradual shift in the prevailing business model of server supply chains, from sales of traditional branded servers (that is, sales by server OEMs) to ODM Direct sales.

The global public cloud market operates as an oligopoly dominated by North American companies, namely Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP), which collectively hold more than a 50% share of this market. More specifically, GCP and AWS are the most aggressive in their data center build-outs. Each of these two companies is expected to increase its server procurement by 25-30% YoY this year, followed closely by Azure.

TrendForce indicates that, in order to expand the presence of their respective ecosystems in the cloud services market, the aforementioned three CSPs have begun collaborating with various countries’ domestic CSPs and telecom operators in compliance with data residency and data sovereignty regulations. For instance, thanks to the accelerating digital transformation efforts taking place in the APAC region, Google is ramping up its supply chain strategies for 2021.

As part of Google’s efforts at building out and refreshing its data centers, not only is the company stocking up on more weeks’ worth of memory products, but it has also been increasing its server orders since 4Q20, in turn leading its ODM partners to expand their SMT capacities. As for AWS, the company has benefitted from activities driven by the post-pandemic new normal, including WFH and enterprise cloud migrations, both of which are major sources of data consumption for AWS’ public cloud.

Conversely, Microsoft Azure will adopt a more cautious and conservative approach to server procurement, likely because the Ice Lake-based server platforms used to power Azure services have yet to enter mass production. In other words, only after these Ice Lake servers enter mass production will Microsoft likely ramp up its server procurement in 2H21, during which TrendForce expects Microsoft’s server demand to peak, resulting in 10-15% YoY growth in server procurement for the entirety of 2021.

Finally, compared to its three competitors, Facebook will experience relatively more stable growth in server procurement owing to two factors. First, the implementation of the GDPR in the EU and the resultant data sovereignty implications mean that data gathered on EU residents are now subject to the legal regulations of their respective countries, and more servers are therefore required to keep up with the domestic data processing and storage needs arising from the GDPR. Second, most servers used by Facebook are custom-spec’ed to the company’s requirements, and Facebook’s server needs are accordingly higher than its competitors’. As such, TrendForce forecasts double-digit YoY growth in Facebook’s server procurement this year.

Chinese CSPs are limited in their pace of expansion, while Tencent stands out with a 10% YoY increase in server demand

On the other hand, Chinese CSPs are expected to show relatively weak server demand this year due to their relatively limited pace of expansion and service areas. Case in point: Alicloud currently plans to procure the same volume of servers as it did last year, and the company will ramp up its server procurement only after the Chinese government implements its new infrastructure policies. Tencent, the other dominant Chinese CSP, will benefit from increased commercial activity on domestic online service platforms, including JD, Meituan, and Kuaishou, and will therefore experience corresponding growth in its server colocation business.

Tencent’s demand for servers this year is expected to increase by about 10% YoY. Baidu will primarily focus on autonomous driving projects this year; there will be a slight YoY increase in Baidu’s server procurement for 2021, mostly thanks to its increased demand for roadside servers used in autonomous driving applications. Finally, with regard to Bytedance, its server procurement will undergo a 10-15% YoY decrease, since the company will look to adopt colocation services rather than run its own servers in overseas markets due to its shrinking presence in those markets.

Looking ahead, TrendForce believes that as enterprise clients become more familiar with various cloud services and related technologies, competition in the cloud market will no longer be confined to the traditional segments of computing, storage, and networking infrastructure. The major CSPs will pay greater attention to emerging fields such as edge computing, as well as to software-hardware integration for the related services.

With the commercialization of 5G services taking place worldwide, the concept of “cloud, edge, and device” will replace the current “cloud” framework. This means that cloud services will not be limited to software in the future, as cloud service providers may also want to offer their own branded hardware in order to make their solutions more comprehensive. Hence, TrendForce expects hardware to be the next battleground for CSPs.
