News
As the U.S. presidential election approaches, so do uncertainties. Compared with the stance taken during President Biden’s term, U.S. presidential candidate Trump shows a remarkably different attitude toward the “Taiwan issue,” while further emphasizing his “America First” agenda.
However, according to a report by Technews, Trump may overlook the fact that Taiwan’s semiconductor industry is closely tied to the “America First” stance he values. As a crucial ally in semiconductors, Taiwan could help the U.S. secure a foothold in the arms, AI, and technology races. Without Taiwan’s support, the U.S. may risk being overtaken by China, which is developing its semiconductor industry at full throttle. Read below for more analysis from Technews:
Intel’s “Five Nodes in Four Years” Roadmap: Details of Intel 20A Still Vague
Let’s look at Intel’s progress first. The tech giant has announced a plan to advance through five nodes in four years (5N4Y), with the latest update adding Intel 14A to its top-tier node strategy.
However, in the chart below, Intel 7, which has been categorized as a mature process, is already being caught up by SMIC’s 7nm and 5nm processes. This is happening despite the U.S.-China trade war, in which the U.S. has placed SMIC on the entity list and imposed equipment restrictions.
From the perspective of advanced nodes, Intel’s latest Lunar Lake platform will be manufactured with TSMC’s 3nm process this year. In addition, its next-generation Nova Lake processors will also adopt TSMC’s 2nm process, with a potential release date in 2026.
Intel CEO Pat Gelsinger has stated that the first-generation Gate-All-Around (GAA) RibbonFET process, Intel 20A, is expected to launch this year, with Intel 18A anticipated to go into production in the first half of 2025.
However, it is worth noting that Intel 20A was originally reported to be used for Arrow Lake and Lunar Lake processors, but Gelsinger confirmed at COMPUTEX that the latter will use TSMC’s 3nm process, with no mention of Arrow Lake’s progress. The market expects that some Arrow Lake processor orders may be outsourced to TSMC, which also suggests that the progress of Intel 20A may not meet expectations.
On the other hand, SMIC, limited by equipment constraints, has progressed to 7nm but faces delays with 5nm, so it will advance gradually with N+1, N+2, and N+3 processes.
Without Taiwan’s Semiconductor Manufacturing, the AI Computing Power of the U.S. May Eventually Be Caught Up by China
Industry experts believe that without Taiwan’s semiconductor manufacturing, it would be difficult for the industry to progress, especially for AI and HPC chips that require significant computing power and advanced processes.
Currently, AI chips primarily adopt TSMC’s 4nm and 3nm nodes and will continue with the 2nm process in the future. Without TSMC’s technology, the U.S., if it relied solely on Intel for foundry capacity, may progress relatively slowly in AI computing power, which could eventually cost the country the AI race with China and its advantages in future commercial and military equipment.
According to a report by the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) in December last year, the global semiconductor IC design industry was valued at USD 248 billion in 2022, while integrated device manufacturers (IDM) were valued at USD 412 billion, totaling USD 660 billion. The U.S. accounted for 53% of this value, while Taiwan only accounted for 6%.
On the other hand, the global foundry services in 2022 was valued at USD 139 billion, while the packaging and testing industry was valued at USD 50 billion, totaling USD 190 billion. Taiwan accounted for 63%, while the U.S. only accounted for 8%.
Despite this, the overall semiconductor industry value in the U.S. remains at USD 365 billion, making it the largest beneficiary in the sector. That of Taiwan, on the other hand, is only USD 159 billion, less than one-third of the U.S. total.
Sanctioning Taiwan Would Be “Shooting Oneself in the Foot,” Making It Harder for the U.S. to Win the Tech War with China
Regarding government subsidies, China is launching the third phase of its Big Fund, with a registered capital of 344 billion RMB (about USD 47.5 billion), which is significantly higher than the previous two phases. This represents a nationwide effort to invest in semiconductors, with a focus on enhancing semiconductor equipment and the overall supply chain.
The U.S. CHIPS Act, on the other hand, has a scale of USD 52.7 billion, which is comparable to China’s subsidies. However, as technology and arms races are long-term competitions, how related policies may evolve would also be subject to the results of the election.
On the other hand, China is currently working hard to better its semiconductor eco-industrial chain, expand its market share in mature processes, and continue advancing to more advanced process technologies, which may further shorten its gap with the U.S.
As the U.S. IC design sector is closely tied to Taiwan’s semiconductor manufacturing technology, Taiwan’s role in the game has become a key factor for the U.S. in maintaining its leading edge over China. Without Taiwan’s technological support, the technological dominance of the U.S. might be threatened, as China’s semiconductor industry is gradually catching up.
News
South Korean memory giant SK hynix, after announcing soaring financial results in Q2 and its massive investment in Yongin Semiconductor Cluster last week, is now reportedly considering another move: US IPO for its Solidigm subsidiary.
According to reports by Blocks & Files and Korean media outlet Hankyung, Solidigm has achieved its first profit after 12 consecutive quarters of losses. On July 25th, SK hynix announced second-quarter revenue of 16.42 trillion Korean won, a 125% year-on-year increase and a historical record. At the same time, profits reached their highest level since 2018, mainly due to strong demand for AI memory, including HBM, and overall price increases for DRAM and NAND products.
The reports stated that the rumor regarding the U.S. IPO seems to be plausible, as SK hynix had previously planned to spin off Solidigm, and the company’s recent rebound makes such a move more feasible. In addition, an IPO for Solidigm would allow SK hynix to obtain cash for part of its stake in the company and assist in covering the planned capital expenditures, according to the reports.
The company had just announced an ambitious plan to expand its memory manufacturing capacity with an approximately 9.4 trillion won (USD 6.8 billion) investment to build an HBM fabrication plant at the Yongin Semiconductor Cluster in Korea. Construction of the fab will begin in March 2025 and is expected to be completed by May 2027. Following this, SK hynix intends to add three more plants to the cluster.
However, the reports also pointed out that SK hynix’s success in this venture will likely depend on how the new organization is structured—such as which assets are included in Solidigm versus those retained by SK hynix—and how both entities address future technology plans. This is particularly important considering that the current roadmap for the memory giant’s NAND business at Dalian, China, including the QLC components that have contributed to Solidigm’s recent success in high-capacity enterprise SSDs, appears to conclude at 196 layers.
In 2020, SK hynix acquired Intel’s NAND and SSD division through a two-phase deal. The first phase involved the former purchasing Intel’s SSD business and NAND fabrication plant in Dalian, China, for USD 7 billion. The second phase will see SK hynix pay Intel an additional USD 2 billion in 2025 for intellectual property related to NAND flash wafer manufacturing and design, as well as for R&D employees and the Dalian fab workforce. SK hynix named the acquired business Solidigm in December 2021 and has since developed and launched a few successful products, including the D5-P5336 61.44 TB QLC (4 bits/cell) SSD, the reports noted.
Regarding the rumor, SK hynix clarified that Solidigm is exploring various growth strategies, but that no decision has been made at this time.
News
Amid the tide of artificial intelligence (AI), new types of DRAM, represented by HBM, are embracing a new round of development opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have emerged as newly sought-after products in the memory industry, stepping onto the “historical stage.”
According to a report from WeChat account DRAMeXchange, currently, the rapid development of AI and big data is boosting an increase in the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, it is necessary to significantly increase the bandwidth of memory systems. In this context, HBM modules for servers, MRDIMM/MCRDIMM, have emerged.
On July 22, JEDEC announced that it will soon release the DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMM) and the next-generation LPDDR6 Compression-Attached Memory Module (CAMM) advanced memory module standards, and introduced key details of these two types of memory, aiming to support the development of next-generation HPC and AI. These two new technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.
As a follow-up to JEDEC’s JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, with a target maximum speed of over 14.4GT/s. According to the plan, this module will also provide 24-bit wide subchannels, 48-bit wide channels, and support a “connector array” to meet the needs of future HPC and mobile devices.
DDR5 MRDIMM supports multiplexed ranks, which combine and transmit multiple data signals on a single channel, effectively increasing bandwidth without additional physical connections. JEDEC has reportedly planned multiple generations of DDR5 MRDIMM, with the ultimate goal of raising its bandwidth to 12.8Gbps, doubling the current 6.4Gbps of DDR5 RDIMM and improving pin speed.
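The multiplexing idea described above can be sketched in a few lines (a simplified illustration only, not JEDEC’s actual signaling protocol; the function name and burst labels are invented for this example):

```python
# Simplified sketch of rank multiplexing: two ranks each supply data
# bursts at the base rate, and a mux buffer interleaves their bursts
# onto one host channel running at twice that rate. Illustrative only.

def multiplex_ranks(rank0_bursts, rank1_bursts):
    """Interleave equal-length burst streams from two ranks onto one channel."""
    assert len(rank0_bursts) == len(rank1_bursts)
    channel = []
    for a, b in zip(rank0_bursts, rank1_bursts):
        channel.append(a)  # time slot carrying rank 0's burst
        channel.append(b)  # time slot carrying rank 1's burst
    return channel

# Two ranks' worth of data crosses the channel in the time one rank
# would need at the base rate, doubling effective bandwidth.
print(multiplex_ranks(["r0-b0", "r0-b1"], ["r1-b0", "r1-b1"]))
```

The host sees a single faster channel; no extra physical connections are added, which is the property the standard relies on.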
In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.
JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.
In March 2023, AMD announced at the Memcom 2023 event that it is collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom’s Hardware at that time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s, and the third generation to 17600 MT/s.
MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.
On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.
MRDIMM offers the highest bandwidth, largest capacity, and lowest latency of the line, along with better performance per watt. Micron said it outperforms current TSV RDIMMs in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.
Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.
MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall-form-factor (TFF) specifications, suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM delivers 35% higher performance than TSV RDIMMs of similar capacity.
This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.
As one of the world’s largest memory manufacturers, SK hynix already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.
MCRDIMM, short for “Multiplexer Combined Ranks Dual In-line Memory Module,” is a module product that combines multiple DRAM chips on a substrate and operates the module’s two basic information-processing units, its ranks, simultaneously.
In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.
Traditional DRAM modules can only transfer 64 bytes of data to the CPU at a time, while SK hynix’s MCRDIMM module can transfer 128 bytes by running two memory ranks simultaneously. This increase in the amount of data transferred to the CPU each time boosts the data transfer speed to over 8Gbps, doubling that of a single DRAM.
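The doubling described above is simple arithmetic: twice the bytes per transaction at the same transaction rate yields twice the effective data rate. A minimal sketch (the base rate used here is a hypothetical figure chosen only for illustration):

```python
# Back-of-the-envelope arithmetic for the bandwidth doubling described
# above: a conventional module moves 64 bytes per transaction, while
# MCRDIMM moves 128 bytes by activating two ranks at once.

def effective_rate_gbps(bytes_per_transfer, base_bytes=64, base_rate_gbps=4.0):
    # base_rate_gbps is a hypothetical per-rank rate for illustration;
    # effective rate scales linearly with bytes moved per transaction.
    return base_rate_gbps * (bytes_per_transfer / base_bytes)

conventional = effective_rate_gbps(64)   # one rank per access
mcrdimm = effective_rate_gbps(128)       # two ranks combined per access
print(mcrdimm / conventional)            # ratio of the two rates: 2.0
```

Whatever the underlying pin speed, combining two 64-byte rank accesses into one 128-byte transfer is what pushes the module past the single-rank rate.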
At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.
MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, increasing the capacity and bandwidth of the DIMM module by a large margin.
Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.
MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which serve different application scenarios from HBM and occupy their own independent market space. As an industry-standard packaged memory, HBM can achieve higher bandwidth and energy efficiency at a given capacity and in a smaller size. However, due to its high cost, small capacity, and lack of scalability, its application is limited to a few fields. Thus, from an industry perspective, memory modules remain the mainstream solution for large-capacity, cost-effective, and scalable memory.
Montage Technology believes that, based on its high bandwidth and large capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, the future new high-bandwidth memory modules for servers, MRDIMM, will support even higher memory bandwidth, further matching the bandwidth demands of HPC and AI application scenarios.
News
As TSMC and other major chip manufacturers compete for AI business opportunities, chip production capacity is unable to keep up with demand. Industry sources cited in a report from NIKKEI attributed the slow expansion of high-end chip production capacity to the different packaging and testing technologies used by various companies, and called for the industry to standardize as soon as possible.
Jim Hamajima, President of the Japan office of SEMI (Semiconductor Equipment and Materials International), recently stated in an interview with NIKKEI that leading chip manufacturers like Intel and TSMC should adopt international standards for back-end processes to effectively and quickly increase production capacity.
Hamajima further noted that each company is trying to apply unique solutions in back-end processes, with TSMC and Intel using different technical standards, which leads to inefficiencies.
Semiconductor manufacturing is divided into two major parts: front-end and back-end processes. While the photolithography technology used in front-end processes widely adopts international standards set by SEMI, packaging and testing in back-end processes vary among manufacturers. For example, TSMC uses CoWoS technology for advanced packaging, while Samsung Electronics uses I-Cube technology.
In recent years, chip manufacturers have actively invested in the development of advanced packaging technologies, primarily because front-end processes face technical bottlenecks, making back-end processes the key to gaining a competitive edge.
Hamajima believes that the current state of back-end processes in the semiconductor industry is “Balkanized,” with each company adhering to its own technologies, leading to a fragmented industry. He warns that this issue will start to impact profit margins as more powerful chips are produced in the future.
Hamajima stated that if semiconductor manufacturers adopt standardized automated production technologies and material specifications, it will be easier to acquire manufacturing equipment and upstream material supplies when expanding production capacity.
Hamajima is a director of a recently launched consortium, led by Intel and 14 Japanese companies, to jointly develop automated systems for back-end processes. The collaborating companies include Omron, Yamaha Motor, Resonac, and Shin-Etsu Polymer, a subsidiary of Shin-Etsu Chemical.
Hamajima noted that Japan, with its numerous automation equipment and semiconductor material suppliers, is an ideal location to test international standards for back-end processes.
He also acknowledged that currently, Intel is the only multinational chip manufacturer in the alliance, which might lead to the development of technical standards that favor Intel. However, he emphasized that the alliance welcomes other chip manufacturers to join, and the research outcomes will serve as a reference for future industry standard-setting.
News
Recently, after reporting a USD 7 billion loss in its manufacturing business for 2023, Intel stated that its multi-billion-euro investments in France and Italy, which could potentially create thousands of jobs, cannot be realized for the time being. The related chip plant investment plans may have been suspended.
Intel noted in a statement, “Investment in France has been paused,” citing “significant changes in economic and market conditions” since 2022.
The company had selected a location southwest of Paris as a new R&D center for artificial intelligence (AI) and high-performance computing (HPC). The center is planned to open by the end of 2024 and will employ 450 people.
Intel added that the “scope” of the project is undergoing adjustment, and France remains a choice for Intel’s future R&D center.
Two years ago, Intel began negotiations with Italy on plans to invest up to EUR 4.5 billion to build a manufacturing plant in the country. This plant would create 1,500 jobs for Intel and 3,500 jobs for suppliers.
As for the status of the Italian plant, Intel said it is currently focused on its active manufacturing projects in Ireland, Germany, and Poland. However, Italy’s Minister of Business, Adolfo Urso, stated in March of this year that Intel had delayed its investment in Italy.