2024-08-02

[News] Intel’s Earnings Fall Short, Cutting Over 15% of Workforce

Intel not only reported earnings and forecasts that fell short of Wall Street expectations but also announced plans to cut more than 15% of its workforce, halt dividend payments for Q4 2024 (October-December), and reduce its full-year capital expenditure forecast by more than 20%.

According to Intel’s official announcement of its Q2 (April-June) earnings: adjusted earnings per share were $0.02, far below the analyst estimate of $0.10; revenue decreased by 1% year-over-year to USD 12.83 billion, missing the market expectation of USD 12.94 billion; and adjusted gross margin was 35.4%.

During Q2, Intel’s Client Computing Group, responsible for producing PC processors, saw its revenue increase by 9% year-over-year to USD 7.41 billion, meeting the market expectation of USD 7.42 billion. However, the revenue from the Data Center and AI Group fell by 3% year-over-year to USD 3.05 billion, missing the market expectation of USD 3.14 billion.

Intel stated that sales of PC chips capable of handling AI tasks exceeded internal expectations, with shipments expected to surpass 40 million units in 2024.

Looking ahead to Q3, Intel forecasts revenue between USD 12.5 billion and USD 13.5 billion and an adjusted loss per share of $0.03. According to a report from Reuters citing an LSEG survey, analysts had originally predicted Q3 revenue to reach USD 14.35 billion with an adjusted earnings per share of $0.31. Intel’s adjusted gross margin for the quarter is expected to be 38%.

Intel CEO Pat Gelsinger stated that the latest layoff plan will affect about 15,000 employees. This is the largest single layoff action tracked by tech layoff monitoring site Layoffs.fyi since it began operations in March 2020. Intel currently employs around 110,000 people, meaning over 15% of its workforce will be impacted.

Gelsinger further pointed out that Intel must align its cost structure with the latest operational model and fundamentally change the way the company operates. He indicated that Intel’s revenue growth has not met expectations and has not yet benefited from powerful trends such as AI.

According to Intel’s statement, Intel will suspend dividend payments starting in Q4 until cash flow improves significantly. Since 1992, Intel has consistently paid dividends without interruption.

Intel has also decided to reduce its total capital expenditure budget for new plants and equipment in 2024 by over 20% to between USD 25 billion and USD 27 billion. The estimated total capital expenditure for 2025 will be between USD 20 billion and USD 23 billion.


(Photo credit: Intel)

Please note that this article cites information from Intel and Reuters.

2024-07-30

[News] Intel Hires Micron Executive to Lead Its Foundry Business

According to a report from Commercial Times, after suffering a multi-billion-dollar loss in its foundry business, Intel has recruited Naga Chandrasekaran, a veteran responsible for process technology development at Micron, as its Chief Operating Officer.

Intel is reportedly facing setbacks in developing chip manufacturing. After experiencing a staggering USD 7 billion loss in its foundry business in 2023, the company incurred an additional USD 2.5 billion loss in the first quarter of this year.

Thus, to drive the growth of its foundry business, Intel has recruited Naga Chandrasekaran from Micron, who will oversee all of Intel’s manufacturing operations and report directly to CEO Pat Gelsinger.

Chandrasekaran’s appointment will take effect on August 12. He will oversee Intel Foundry’s global manufacturing operations and strategic planning, including assembly and test manufacturing, wafer fabrication, and supply chain management. Essentially, Chandrasekaran will be responsible for all of Intel’s manufacturing activities.

In the announcement, Intel CEO Pat Gelsinger noted, “Naga is a highly accomplished executive whose deep semiconductor manufacturing and technology development expertise will be a tremendous addition to our team.”

“As we continue to build a globally resilient semiconductor supply chain and create the world’s first systems foundry for the AI era, Naga’s leadership will help us to accelerate our progress and capitalize on the significant long-term growth opportunities ahead,” Gelsinger said.

As per a report from tom’s hardware, Chandrasekaran has spent over 20 years at Micron, holding various management positions. Most recently, he led global technology development and engineering focused on scaling memory devices, advanced packaging, and emerging technology solutions. His extensive background encompasses process and equipment development, device technology, and mask technology.

He will replace Keyvan Esfarjani, who is set to retire at the end of the year. Esfarjani, who has served at Intel for nearly 30 years, will remain with the company to assist with the transition. He has made significant contributions to Intel’s global supply chain resilience and manufacturing operations.

Meanwhile, in an attempt to narrow the gap with TSMC, Intel is also said to be recruiting the foundry giant’s senior engineers for its foundry division, according to a report by Commercial Times.


(Photo credit: Intel)

Please note that this article cites information from Commercial Times, Intel and tom’s hardware.

2024-07-30

[News] Without Taiwan’s Semiconductor Manufacturing, U.S. AI Capabilities May Eventually Be Overtaken by China

As the U.S. presidential election approaches, uncertainty grows. Compared with the stance of President Biden’s administration, presidential candidate Trump has shown a markedly different attitude toward the “Taiwan issue,” while pushing his “America First” agenda even further.

However, according to a report by Technews, Trump may be overlooking the fact that Taiwan’s semiconductor industry is closely tied to the “America First” agenda he values. As a crucial ally in semiconductors, Taiwan could help the U.S. secure a foothold in the arms, AI, and technology races. Without Taiwan’s support, the U.S. may risk being overtaken by China, which is developing its semiconductor industry at full throttle. Read below for more analysis from Technews:

Intel’s “Five Nodes in Four Years” Roadmap: Details of Intel 20A Still Vague

Let’s look at Intel’s progress first. The tech giant has announced a plan to advance through five nodes in four years (5N4Y), as the latest update includes Intel 14A in its top-tier node strategy.

However, Intel 7, which has been categorized as a mature process, is already being approached by SMIC’s 7nm and 5nm processes. This is happening despite the U.S.-China trade war, with the U.S. placing SMIC on the Entity List and imposing equipment restrictions.

From the perspective of advanced nodes, Intel’s latest Lunar Lake platform will be manufactured with TSMC’s 3nm process this year. In addition, its next-generation Nova Lake processors will also adopt TSMC’s 2nm process, with a potential release date in 2026.

Intel CEO Pat Gelsinger has stated that the first-generation Gate-All-Around (GAA) RibbonFET process, Intel 20A, is expected to launch this year, with Intel 18A anticipated to go into production in the first half of 2025.

However, it is worth noting that Intel 20A was originally reported to be used for Arrow Lake and Lunar Lake processors, but Gelsinger confirmed at COMPUTEX that the latter will use TSMC’s 3nm process, with no mention of Arrow Lake’s progress. The market expects that some Arrow Lake processor orders may be outsourced to TSMC, which also suggests that the progress of Intel 20A may not meet expectations.

On the other hand, SMIC, limited by equipment constraints, has progressed to 7nm but faces delays with 5nm, so it will advance gradually with N+1, N+2, and N+3 processes.

Without Taiwan’s Semiconductor Manufacturing, U.S. AI Computing Power May Eventually Be Overtaken by China

Industry experts believe that without Taiwan’s semiconductor manufacturing, it would be difficult for the industry to progress, especially for AI and HPC chips that require significant computing power and advanced processes.

Currently, AI chips primarily adopt TSMC’s 4nm and 3nm nodes and will continue with the 2nm process in the future. Without TSMC’s technology, the U.S., if it relied solely on Intel for foundry capacity, might progress relatively slowly in AI computing power, potentially losing the AI race to China and falling behind in future commercial and military advantages.

According to a report by the U.S. Department of Commerce’s Bureau of Industry and Security (BIS) in December last year, the global semiconductor IC design industry was valued at USD 248 billion in 2022, while integrated device manufacturers (IDM) were valued at USD 412 billion, totaling USD 660 billion. The U.S. accounted for 53% of this value, while Taiwan only accounted for 6%.

Meanwhile, the global foundry industry in 2022 was valued at USD 139 billion, while the packaging and testing industry was valued at USD 50 billion, totaling USD 190 billion. Taiwan accounted for 63%, while the U.S. only accounted for 8%.

Despite this, the overall semiconductor industry value of the U.S. remains USD 365 billion, making it the largest beneficiary in the sector. Taiwan’s, by contrast, is only USD 159 billion, less than half of the U.S. total.

Sanctioning Taiwan Would Be “Shooting Oneself in the Foot,” Making It Harder for the U.S. to Win the Tech War with China

Regarding government subsidies, China is launching the third phase of its Big Fund, with a registered capital of 344 billion RMB (about USD 47.5 billion), which is significantly higher than the previous two phases. This represents a nationwide effort to invest in semiconductors, with a focus on enhancing semiconductor equipment and the overall supply chain.

The U.S. CHIPS Act, on the other hand, has a scale of USD 52.7 billion, which is comparable to China’s subsidies. However, as technology and arms races are long-term competitions, how related policies may evolve would also be subject to the results of the election.

Meanwhile, China is working hard to improve its semiconductor industry chain, expand its market share in mature processes, and continue advancing toward more advanced process technologies, which may further narrow its gap with the U.S.

As the U.S. IC design sector is closely tied to Taiwan’s semiconductor manufacturing technology, Taiwan’s role has become a key factor in the U.S. maintaining its edge over China. Without Taiwan’s technological support, U.S. technological dominance might be threatened, as China’s semiconductor industry gradually catches up.

 

Please note that this article cites information from Technews.

2024-07-29

[News] SK hynix is Reportedly Considering U.S. IPO for its NAND/SSD Subsidiary Solidigm

South Korean memory giant SK hynix, after announcing strong Q2 financial results and a massive investment in the Yongin Semiconductor Cluster last week, is now reportedly considering another move: a U.S. IPO for its subsidiary Solidigm.

According to reports by Blocks & Files and Korean media outlet Hankyung, Solidigm has achieved its first profit after 12 consecutive quarters of losses. On July 25, SK hynix announced second-quarter revenue of 16.42 trillion Korean won, a 125% year-on-year increase and a record high. At the same time, profits reached their highest level since 2018, mainly due to strong demand for AI memory, including HBM, and overall price increases for DRAM and NAND products.

The reports stated that the rumor regarding the U.S. IPO seems to be plausible, as SK hynix had previously planned to spin off Solidigm, and the company’s recent rebound makes such a move more feasible. In addition, an IPO for Solidigm would allow SK hynix to obtain cash for part of its stake in the company and assist in covering the planned capital expenditures, according to the reports.

The company had just announced an ambitious plan to expand its memory manufacturing capacity with an approximately 9.4 trillion won (USD 6.8 billion) investment to build an HBM fabrication plant at the Yongin Semiconductor Cluster, Korea. Construction of the fab will begin in March 2025 and is expected to be completed by May 2027. Following this, SK hynix intends to add three more plants to the cluster.

However, the reports also pointed out that SK hynix’s success in this venture will likely depend on how the new organization is structured—such as which assets are included in Solidigm versus those retained by SK hynix—and how both entities address future technology plans. This is particularly important considering that the current roadmap for the memory giant’s NAND business at Dalian, China, including the QLC components that have contributed to Solidigm’s recent success in high-capacity enterprise SSDs, appears to conclude at 196 layers.

In 2020, SK hynix acquired Intel’s NAND and SSD division through a two-phase deal. The first phase involved the former purchasing Intel’s SSD business and NAND fabrication plant in Dalian, China, for USD 7 billion. The second phase will see SK hynix pay Intel an additional USD 2 billion in 2025 for intellectual property related to NAND flash wafer manufacturing and design, as well as for R&D employees and the Dalian fab workforce. SK hynix named the acquired business Solidigm in December 2021, and has since developed and launched a few successful products, including the D5-P5336 61.44 TB QLC (4 bits/cell) SSD, the reports noted.

Regarding the rumor, SK hynix clarified that Solidigm is exploring various growth strategies, but no decision has been made at this time.


(Photo credit: Solidigm)

Please note that this article cites information from Blocks & Files and Hankyung.

2024-07-29

[News] MRDIMM/MCRDIMM to Be the New Sought-After Products in the Memory Field

Amidst the tide of artificial intelligence (AI), new types of DRAM, represented by HBM, are seeing a new round of development opportunities. Meanwhile, driven by server demand, MRDIMM/MCRDIMM have emerged as new sought-after products in the memory industry.

According to a report from WeChat account DRAMeXchange, the rapid development of AI and big data is driving an increase in the number of CPU cores in servers. To meet the data throughput requirements of each core in multi-core CPUs, the bandwidth of memory systems must increase significantly. In this context, new memory modules for servers, MRDIMM/MCRDIMM, have emerged.

  • JEDEC Announces Details of the DDR5 MRDIMM Standard

On July 22, JEDEC announced that it will soon release the DDR5 Multiplexed Rank Dual Inline Memory Modules (MRDIMM) and the next-generation LPDDR6 Compression-Attached Memory Module (CAMM) advanced memory module standards, and introduced key details of these two types of memory, aiming to support the development of next-generation HPC and AI. These two new technical specifications were developed by JEDEC’s JC-45 DRAM Module Committee.

As a follow-up to JEDEC’s JESD318 CAMM2 memory module standard, JC-45 is developing the next-generation CAMM module for LPDDR6, with a target maximum speed of over 14.4GT/s. In light of the plan, this module will also provide 24-bit wide subchannels, 48-bit wide channels, and support “connector array” to meet the needs of future HPC and mobile devices.

DDR5 MRDIMM supports multiplexed ranks, which combine and transmit multiple data signals on a single channel, effectively increasing bandwidth without additional physical connections. JEDEC has reportedly planned multiple generations of DDR5 MRDIMM, with the ultimate goal of raising its bandwidth to 12.8Gbps, double the current 6.4Gbps of DDR5 RDIMM, by improving pin speed.
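The bandwidth doubling works out as simple arithmetic: multiplexing two ranks onto one channel doubles the effective per-pin data rate (a minimal sketch using the rates quoted in the article):

```python
# Back-of-the-envelope check of the MRDIMM bandwidth figures quoted above.
# Rates are per-pin data rates in Gbps, as cited in the article.

DDR5_RDIMM_RATE_GBPS = 6.4   # current DDR5 RDIMM per-pin rate
MULTIPLEXED_RANKS = 2        # MRDIMM multiplexes two ranks onto one channel

# Serving data from two ranks per channel interval doubles the effective rate.
mrdimm_rate_gbps = DDR5_RDIMM_RATE_GBPS * MULTIPLEXED_RANKS
print(mrdimm_rate_gbps)      # 12.8, matching JEDEC's stated MRDIMM goal
```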

In JEDEC’s vision, DDR5 MRDIMM will utilize the same pins, SPD, PMIC, and other designs as existing DDR5 DIMMs, be compatible with the RDIMM platform, and leverage the existing LRDIMM ecosystem for design and testing.

JEDEC stated that these two new technical specifications are expected to bring a new round of technological innovation to the memory market.

  • Micron’s MRDIMM DDR5 to Start Mass Shipment in 2H24

In March 2023, AMD announced at the Memcom 2023 event that it is collaborating with JEDEC to develop a new DDR5 MRDIMM standard memory, targeting a transfer rate of up to 17600 MT/s. According to a report from Tom’s Hardware at that time, the first generation of DDR5 MRDIMM aims for a rate of 8800 MT/s, which will gradually increase, with the second generation set to reach 12800 MT/s, and the third generation to 17600 MT/s.

MRDIMM, short for “Multiplexed Rank DIMM,” integrates two DDR5 DIMMs into one, thereby providing double the data transfer rate while allowing access to two ranks.

On July 16, memory giant Micron announced the launch of the new MRDIMM DDR5, which is currently sampling and will provide ultra-large capacity, ultra-high bandwidth, and ultra-low latency for AI and HPC applications. Mass shipment is set to begin in the second half of 2024.

MRDIMM offers the highest bandwidth, largest capacity, lowest latency, and better performance per watt. Micron said it outperforms current TSV RDIMMs in accelerating memory-intensive multi-tenant virtualization, HPC, and AI data center workloads.

Compared to traditional RDIMM DDR5, MRDIMM DDR5 can achieve an effective memory bandwidth increase of up to 39%, a bus efficiency improvement of over 15%, and a latency reduction of up to 40%.

MRDIMM supports capacity options ranging from 32GB to 256GB, covering both standard and tall form factor (TFF) specifications, suitable for high-performance 1U and 2U servers. The 256GB TFF MRDIMM outperforms TSV RDIMMs of similar capacity by 35%.

This new memory product is the first generation of Micron’s MRDIMM series and will be compatible with Intel Xeon processors. Micron stated that subsequent generations of MRDIMM products will continue to offer 45% higher single-channel memory bandwidth compared to their RDIMM counterparts.

  • SK hynix to Launch MCRDIMM Products in 2H24

As one of the world’s largest memory manufacturers, SK hynix already introduced a product similar to MRDIMM, called MCRDIMM, even before AMD and JEDEC.

MCRDIMM, short for “Multiplexer Combined Ranks Dual In-line Memory Module,” is a module product that combines multiple DRAMs on a substrate and operates the module’s two basic information processing units, ranks, simultaneously.

Source: SK hynix

In late 2022, SK hynix partnered with Intel and Renesas to develop the DDR5 MCR DIMM, which became the fastest server DRAM product in the industry at the time. As per Chinese IC design company Montage Technology’s 2023 annual report, MCRDIMM can also be considered the first generation of MRDIMM.

Traditional DRAM modules can only transfer 64 bytes of data to the CPU at a time, while SK hynix’s MCRDIMM module can transfer 128 bytes by running two memory ranks simultaneously. This increase in the amount of data transferred to the CPU each time boosts the data transfer speed to over 8Gbps, doubling that of a single DRAM.
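The transfer-size arithmetic above can be sketched as follows (an illustration only; the 4 Gbps single-DRAM baseline is an assumption inferred from the article's "over 8Gbps, doubling" claim):

```python
# Illustrative arithmetic for the MCRDIMM claims quoted above.

BYTES_PER_TRANSFER = 64      # traditional module: 64 B to the CPU per access
SIMULTANEOUS_RANKS = 2       # MCRDIMM runs two memory ranks at once

# Running two ranks per access doubles the payload delivered to the CPU.
mcr_bytes = BYTES_PER_TRANSFER * SIMULTANEOUS_RANKS
print(mcr_bytes)             # 128 bytes, as stated for SK hynix's MCRDIMM

# Assumed single-DRAM baseline rate (Gbps); doubling it lands at 8 Gbps,
# consistent with the article's "over 8Gbps" figure.
single_dram_gbps = 4.0
print(single_dram_gbps * SIMULTANEOUS_RANKS)  # 8.0
```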

At that time, SK hynix anticipated that the market for MCR DIMM would gradually open up, driven by the demand for increased memory bandwidth in HPC. According to SK hynix’s FY2024 Q2 financial report, the company will launch 32Gb DDR5 DRAM for servers and MCRDIMM products for HPC in 2H24.

  • MRDIMM Boasts a Brilliant Future

MCRDIMM/MRDIMM adopts the DDR5 LRDIMM “1+10” architecture, requiring one MRCD chip and ten MDB chips. Conceptually, MCRDIMM/MRDIMM allows parallel access to two ranks within the same DIMM, increasing the capacity and bandwidth of the DIMM module by a large margin.

Compared to RDIMM, MCRDIMM/MRDIMM can offer higher bandwidth while maintaining good compatibility with the existing mature RDIMM ecosystem. Additionally, MCRDIMM/MRDIMM is expected to enable much higher overall server performance and lower total cost of ownership (TCO) for enterprises.

MRDIMM and MCRDIMM both fall under the category of DRAM memory modules, which serve different application scenarios from HBM and occupy their own independent market space. As an industry-standard packaged memory, HBM can achieve higher bandwidth and energy efficiency for a given capacity with a smaller size. However, due to its high cost, small capacity, and lack of scalability, its application is limited to a few fields. Thus, from an industry perspective, memory modules remain the mainstream solution for large-capacity, cost-effective, and scalable memory.

Montage Technology believes that, based on its high bandwidth and large capacity advantages, MRDIMM is likely to become the preferred main memory solution for future AI and HPC. As per JEDEC’s plan, the future new high-bandwidth memory modules for servers, MRDIMM, will support even higher memory bandwidth, further matching the bandwidth demands of HPC and AI application scenarios.


(Photo credit: SK hynix)

Please note that this article cites information from Tom’s Hardware, Micron and WeChat account DRAMeXchange.

