Semiconductors


2024-03-11

[News] US Government Considers Adding ChangXin Memory Technologies to Entity List, Imposing Further Sanctions on Chinese Firms

According to sources cited in Bloomberg’s report, the US government is considering imposing sanctions on Chinese technology firms, including ChangXin Memory Technologies (CXMT), in its latest move against China’s advanced semiconductor sector.

The same report has pointed out that the US Department of Commerce’s Bureau of Industry and Security (BIS) is currently considering including CXMT in the Entity List, which would restrict the listed companies’ access to US technology. Apart from CXMT, US officials are also contemplating restrictions on five other Chinese companies, though the final list has yet to be confirmed.

Regarding this matter, the BIS and White House National Security Council declined to comment.

CXMT is a major Chinese DRAM manufacturer whose products include chips used in computer servers, smart vehicles, and other devices. Its primary competitors include Micron, Samsung Electronics, and SK Hynix.

The recent actions by the US government reportedly stem from Huawei’s breakthrough last year, in which it circumvented US restrictions to obtain advanced chips, specifically chips made on the 7-nanometer process of SMIC (Semiconductor Manufacturing International Corporation). This allowed Huawei to make a comeback in the 5G smartphone market, prompting concerns and responses from the US government.

Gina Raimondo, the US Secretary of Commerce, has responded by stating that the US will take “as strong and effective action as possible” to uphold national security interests.

Currently, companies listed on the Entity List by the US Department of Commerce include Huawei, SMIC (Semiconductor Manufacturing International Corporation), and Shanghai Micro Electronics. Additionally, China’s other major memory manufacturer, Yangtze Memory Technologies Corp. (YMTC), was added to this restriction list in 2022.


(Photo credit: CXMT)

Please note that this article cites information from Bloomberg.

2024-03-11

[News] NVIDIA’s EULA Amendments Tighten Grip, Suppressing Third-Party CUDA Emulation

According to a report from Chinese media outlet mydrivers, NVIDIA has updated the EULA terms in the CUDA 11.6 installer, explicitly prohibiting third-party GPU companies from seamlessly integrating CUDA software. The move has reportedly stirred up discussion in the market.

NVIDIA updated the EULA agreement for CUDA 11.6, explicitly stating, “You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-NVIDIA platform.”

The move is speculated to target third-party compatibility projects such as ZLUDA, which has involved Intel and AMD, as well as compatibility solutions from Chinese firms like Denglin Technology and MetaX Technology.

In response, Moore Threads issued an official statement confirming that its MUSA and MUSIFY technologies remain unaffected. According to the statement, MUSA and MUSIFY are not subject to NVIDIA’s EULA terms, so developers can continue to use them with confidence.

In fact, NVIDIA has prohibited running CUDA software on other hardware platforms through translation layers since 2021, but had only stated this in the online EULA. For now, NVIDIA has not explicitly named any parties, limiting itself to warnings in the agreement without taking further action, although further measures cannot be ruled out in the future.

In the past, CUDA development and its ecosystem were widely regarded as NVIDIA’s moat. However, processor architect Jim Keller previously criticized CUDA on the X platform, stating, “CUDA is a swamp, not a moat,” adding, “CUDA is not beautiful. It was built by piling on one thing at a time.”

Meanwhile, according to the account rgznai100 posting on Chinese blog CSDN, NVIDIA’s actions will have a significant impact on AI chip/GPGPU companies that previously adopted CUDA-compatible solutions. NVIDIA may initially resort to legal measures, such as lawsuits, against GPGPU companies following similar paths.

Therefore, Chinese enterprises should endeavor to enhance collaboration with the open-source community to establish an open AI compilation ecosystem, reducing the risks posed by NVIDIA’s market dominance.


(Photo credit: NVIDIA)

Please note that this article cites information from mydrivers and MooreThreads.

2024-03-08

[News] Intel Closer to Securing USD 3.5 Billion Investment from US Government for Chip Manufacturing in Military and Intelligence Applications

As per the report from Bloomberg, the US government is set to invest USD 3.5 billion in Intel to enhance the production capacity of advanced chips for military and intelligence purposes. The move could reportedly position Intel as a leading semiconductor provider in the defense market.

Under the US government’s RAMP-C initiative, numerous companies, including IBM, Microsoft, and NVIDIA, are developing chips for the US military. Stu Pann, Intel’s head of foundry, recently stated in an interview with Tom’s Hardware that the company has signed a USD 1 billion contract with the US government and the Department of Defense.

According to the same report from Tom’s Hardware, this funding could be part of the total appropriation of USD 39 billion under the CHIPS and Science Act or may stem from the proposed Secure Enclave program specifically designed for military and intelligence chips. In any case, it will strengthen Intel’s position as a leading manufacturer in the defense market.

As the new funding announcement emerges, the US Department of Commerce is also poised to invest billions of dollars in leading chip manufacturers like Intel, Micron, Samsung, and others, aiming to enhance local semiconductor manufacturing capabilities.

The US government enacted the CHIPS Act in 2022. For now, only three American companies are benefiting from the subsidies: BAE Systems, GlobalFoundries, and Microchip Technology.


(Photo credit: Intel)

Please note that this article cites information from Bloomberg and Tom’s Hardware.

2024-03-08

[News] Speculations Arise on Samsung’s Partnership with Meta, Potentially Tapping into TSMC’s Client Base

As Samsung’s foundry business closely trails TSMC, Economic Daily News has reported that Samsung is poised to secure orders for Meta’s next-gen AI chips, to be manufactured on a 2-nanometer process. If so, Meta may become Samsung’s first 2nm customer, intensifying the rivalry with TSMC in the 2nm race.

According to The Korea Times, the potential collaboration between Meta and Samsung was discussed during Meta CEO Zuckerberg’s recent visit to South Korea.

The report cites anonymous officials stating that during Zuckerberg’s meeting with South Korean President Yoon Suk Yeol, he disclosed Meta’s reliance on TSMC. However, the current situation is described as “volatile” due to TSMC’s tight production capacity, which could potentially affect Meta’s supply in the long run.

Nevertheless, neither Meta nor South Korean officials have confirmed these rumors.

Currently, Meta has entrusted TSMC with the production of two AI chips. As per the Economic Daily News citing sources, the biggest challenge for Samsung’s foundry business lies in yield rates.

Previously, poor yield rates prompted Apple, Qualcomm, and Google to switch their orders to TSMC. If Meta indeed turns to Samsung for the next-generation AI chips, the key to the success of their collaboration still lies in yield rates.

As per Samsung’s previous roadmap, the 2-nanometer SF2 process is set to debut in 2025. Compared to the second-generation 3GAP process at 3 nanometers, it offers a 25% improvement in power efficiency at the same frequency and complexity, as well as a 12% performance boost at the same power consumption and complexity, while reducing chip area by 5%.

As stated in Samsung’s Foundry Forum (SFF) plan, Samsung will begin mass production of the 2nm process (SF2) in 2025 for mobile applications, expand to high-performance computing (HPC) applications in 2026, and further extend to the automotive sector and the expected 1.4nm process by 2027.


(Photo credit: Samsung)

Please note that this article cites information from Economic Daily News and The Korea Times.

2024-03-08

[News] SK Hynix Reportedly Invests USD 1 Billion in Advanced Packaging to Strengthen HBM Chip Leadership

South Korean memory giant SK Hynix is significantly investing in advanced chip packaging, aiming to capture more demand for High Bandwidth Memory (HBM), a vital component driving the burgeoning AI market.

According to Bloomberg’s report, Lee Kang-Wook, currently leading SK Hynix’s packaging research and development, stated that the company is investing over USD 1 billion in South Korea to expand and enhance the final steps of its chip manufacturing process.

“The first 50 years of the semiconductor industry has been about the front-end, or the design and fabrication of the chips themselves,” Lee Kang-Wook expressed in an interview with Bloomberg. “But the next 50 years is going to be all about the back-end, or packaging.”

The same report further indicates that the packaging upgrade will help reduce power consumption, enhance performance, and maintain SK Hynix’s leadership position in the HBM market.

Recent market trends also highlight the crucial role of advanced packaging in the manufacturing of HBM products. According to a recent report by South Korean media DealSite, the complex architecture of HBM has resulted in difficulties for manufacturers like Micron and SK Hynix to meet NVIDIA’s testing standards.

The yield of HBM is closely tied to the complexity of its stacking architecture, which combines multiple memory layers with Through-Silicon Via (TSV) technology for inter-layer connections. These intricate techniques increase the probability of process defects, so HBM yields tend to be lower than those of simpler, single-die memory designs. Moreover, in HBM’s multi-layer stacking, if any die in the stack is defective, the entire stack must be discarded.
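To illustrate the compounding effect, here is a minimal back-of-the-envelope sketch in Python. The per-die yield figure and stack heights are hypothetical assumptions for illustration, not numbers from the report; it simply shows that when one bad die scraps the whole stack, the stack yield is roughly the single-die yield raised to the number of layers.

```python
# Rough illustration: if any die in an HBM stack is defective, the whole
# stack is discarded, so stack yield ~= (per-die yield) ** (number of layers).
# The figures below are hypothetical and ignore TSV/bonding losses,
# which would push the real number lower.

def stack_yield(per_die_yield: float, layers: int) -> float:
    """Probability that every die in an N-high stack is good."""
    return per_die_yield ** layers

if __name__ == "__main__":
    per_die = 0.95  # hypothetical yield of a single DRAM die
    for layers in (4, 8, 12):  # common HBM stack heights
        print(f"{layers}-high stack: ~{stack_yield(per_die, layers):.1%}")
```

Even with a 95% per-die yield in this hypothetical case, an 8-high stack lands near 66% and a 12-high stack near 54%, consistent with the report’s point that HBM yields trail those of conventional memory.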

HBM, a type of DRAM primarily used in AI servers, is experiencing a surge in demand worldwide, led by NVIDIA. Moreover, according to a previous TrendForce press release, the three major HBM manufacturers held the following market shares in 2023: SK Hynix and Samsung each at around 46-49%, while Micron stood at roughly 4-6%.


(Photo credit: SK Hynix)

Please note that this article cites information from Bloomberg.

