News
According to a report from South Korean media outlet HankYung, Samsung plans to unveil its next-generation flagship Galaxy S25 series in January 2025, including the Galaxy S25, Galaxy S25+, and the top-tier Galaxy S25 Ultra.
Contrary to earlier rumors of a dual-processor strategy that would offer different versions with either the Exynos 2500 or the Snapdragon 8 Gen 4, Samsung is reportedly equipping the entire S25 series with Qualcomm’s new Snapdragon 8 Gen 4 processor.
The report highlights that this shift is driven by Apple’s upcoming iPhone 16, which is being promoted as the first AI-centric smartphone, placing Samsung at a pivotal moment in the competition for AI smartphone leadership.
Given that the Snapdragon 8 Gen 4 boasts a more than 30% improvement in AI performance over its predecessor and slightly outperforms the Exynos 2500, Samsung has opted to play it safe by adopting the latest Snapdragon chip, ensuring the S25 series can maximize its AI capabilities.
Previous rumors had also suggested that Samsung considered implementing a three-way strategy for its 2025 S25 series, which would have included MediaTek’s Dimensity chipset alongside Qualcomm’s Snapdragon.
As per a report from SamMobile, Samsung’s strategy of sourcing from multiple chipset suppliers was intended to prevent over-reliance on Qualcomm, which could limit its ability to negotiate lower prices.
However, a previous report by SamMobile points out that, since MediaTek’s Dimensity chips have traditionally only been used in Samsung’s mid-to-low-end devices, integrating them into the premium S series would have presented a significant challenge in terms of market acceptance.
(Photo credit: Samsung)
News
As Intel has reportedly been weighing options to steer the company through its crisis, its possible moves are said to include selling off Altera, halting its investment project in Germany, and, though less likely, selling its foundry business. However, if this restructuring does happen, according to South Korean media outlets The Korea Times and The Korea Herald, Samsung and TSMC are unlikely to be buyers for Intel’s foundry operations.
A Risky Move for Samsung to Make
Intel’s deliberations over its foundry business have been casting ripples across the global semiconductor industry, as the market speculates on who the buyers might be and whether the struggling giant will act on a potential divestiture of its foundry operations.
Nevertheless, a report by The Korea Times notes that, since Intel’s foundry market share is currently small, the impact on its competitors may be minimal. Therefore, it is unlikely that a sale would immediately boost Samsung’s chip market share.
According to TrendForce’s latest analysis, the top five rankings in the foundry sector remained unchanged in the second quarter, with TSMC (62.3%), Samsung (11.5%), SMIC (5.7%), UMC (5.3%), and GlobalFoundries (4.9%) all holding steady in their positions.
Moreover, industry officials cited by the report note that it could be a risky move for Samsung to make another large investment in Intel’s foundry. Samsung’s non-memory chip division, which encompasses its foundry and system LSI (large-scale integration) businesses, reportedly incurred an operating loss of 300 billion won (approximately USD 224 million) in the second quarter of this year, according to the report.
On the other hand, Washington’s attitude could also pose a challenge for current market players like TSMC and Samsung, the report indicates. Given that the U.S. regards semiconductor manufacturing as a matter of national security, GlobalFoundries might be the most likely buyer, as it is a U.S. company and aligns with the policy of protecting U.S. national security, according to a semiconductor industry official cited by the report.
An Emerging Foundry Opportunity for Samsung: AI Chips
A report by The Korea Herald observes that Samsung has, in a way, been facing difficulties similar to Intel’s, as the company finds it challenging to secure significant orders from big tech companies. While TSMC is known for its close ties with tech giants, Samsung, on the other hand, is seeing increased orders from startups and automotive firms.
However, a turning point may have arrived. IBM unveiled its new AI chips for servers, the IBM Telum II Processor and IBM Spyre Accelerator, at Hot Chips 2024 last week. The report notes that these upcoming chips will be manufactured by Samsung using its 5nm process technology.
The report further suggests that it would be more advantageous for Samsung to focus on identifying potential clients in the AI industry and securing their orders, rather than trying to compete with TSMC across all areas of the logic chip sector.
(Photo credit: Samsung)
News
As per its official release, SK hynix has announced that it has developed the industry’s first 16Gb DDR5 built on its 1c node, the sixth generation of the 10nm-class process. The company reportedly will be ready for mass production of the 1c DDR5 within the year, with volume shipments set to begin next year.
To reduce potential errors arising from the migration to a more advanced process and to carry over the strengths of the 1b node, the company says in the release that it extended the 1b DRAM platform for the development of the 1c node.
As per the press release, the operating speed of the 1c DDR5, expected to be adopted for high-performance data centers, is improved by 11% from the previous generation, to 8Gbps.
With power efficiency also improved by more than 9%, SK hynix expects adoption of 1c DRAM to help data centers cut electricity costs by as much as 30%, at a time when the advance of the AI era is driving up power consumption.
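As a rough illustration, the prior-generation speed implied by these figures can be back-calculated; the sketch below assumes the 11% uplift applies directly to the per-pin data rate, which is an inference rather than a figure from the release.

```python
# Minimal sketch: the previous-generation DDR5 speed implied by an 11% uplift to 8 Gbps.
# The 8 Gbps and 11% figures come from the release; the result is an inference.
new_speed_gbps = 8.0
uplift = 0.11
prev_speed_gbps = new_speed_gbps / (1 + uplift)
print(round(prev_speed_gbps, 1))  # -> 7.2 Gbps per pin for the prior (1b) generation
```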
Per a report from Businesskorea, the difficulty of shrinking the process for 10nm-range DRAM technology has increased with each generation.
However, with the official release this time, SK hynix has become the first in the industry to overcome these technological limitations by achieving a higher level of design completion.
Per another report from Korea JoongAng Daily, this marks a win for SK hynix, as its rival Samsung Electronics had previously outpaced it in the development of 1b DRAM, which corresponds to nodes in the 12-nanometer range.
(Photo credit: SK hynix)
News
As designing and manufacturing large monolithic ICs has become more complex, yield and cost challenges have emerged for semiconductor companies, boosting the popularity of chiplets. Now the wave is spreading to the memory sector. According to a report by TheElec, SK hynix intends to integrate chiplet technology into its memory controllers over the next three years to improve cost management.
In January, the company applied for a brand name called MOSAIC, which represents its chiplet technology, the report notes.
Citing SK hynix Executive Vice President Moon Ki-ill, the report notes that the company currently collaborates with TSMC as the foundry for manufacturing its controllers.
However, within the next two to three years, parts of the controller will be manufactured on advanced nodes, while other sections will use legacy nodes, the report states. Moon added that the company is currently developing technology to connect these different sections.
Moon further explained that while TSMC manufactures the controllers for SK hynix, the memory giant itself is responsible for the packaging work. In the future, under a chiplet structure, a chip can be divided into separate parts with different functions and then reconnected to achieve performance similar to that of a single integrated chip, according to TheElec.
In this scenario, function A might use TSMC’s 7nm node, while functions B and C could be produced on legacy nodes from TSMC or another foundry. This approach would enable SK hynix to better manage the costs of its DRAM and NAND products, the report notes.
According to the definition of EDA and Intelligent System Design provider Cadence, chiplet technology results in versatile and customizable modular chips, which leads to reduced development timelines and costs.
The creation and adoption of chiplet standards like UCIe enable seamless integration of chiplets into System-on-Chips (SoCs), unlocking new possibilities in computing and technology applications, according to Cadence.
In addition to SK hynix, Samsung has also floated plans to adopt chiplet technology. In July, according to a joint announcement with Japanese startup Preferred Networks, the two companies said they plan to showcase groundbreaking AI chiplet solutions for the next-generation data center and generative AI computing market.
According to an earlier report by Gizmochina, Samsung is also said to be mulling applying 3D chiplet technology to its Exynos mobile APs.
(Photo credit: SK hynix)
News
In mid-August, Samsung was said to be accelerating its progress on next-gen HBM, targeting a tape-out of HBM4 by the end of this year. Now it seems SK hynix has maintained its competitive edge, as the company aims to tape out HBM4 in October; the chips will be used to power NVIDIA’s Rubin AI chips, according to reports by Wccftech and ZDNet.
In addition, the reports note that SK hynix also plans to tape out HBM4 for AMD’s AI chips, which is expected to take place a few months later.
To further prepare for the strong demand from the AI chip giants’ upcoming product launches, SK hynix is assembling development teams dedicated to supplying HBM4 to NVIDIA and AMD, according to Wccftech and ZDNet.
Per SK hynix’s product roadmap, the company plans to launch 12-layer stacked HBM4 in the second half of 2025 and a 16-layer version in 2026. NVIDIA’s Rubin series, planned for 2026, is expected to adopt HBM4 12Hi with 8 clusters per GPU.
SK hynix is the major HBM3e supplier for NVIDIA’s AI chips, having taken the lead by starting to ship the product a few months ago, followed by Micron. Samsung’s HBM3, on the other hand, was cleared by NVIDIA in July, while its HBM3e is still striving to pass NVIDIA’s qualification.
According to the reports, the introduction of HBM4 represents another major milestone for SK hynix, as it offers the fastest DRAM with exceptional power efficiency and higher bandwidth.
HBM4 will feature double the channel width of HBM3e, offering 2048 bits versus 1024 bits. Moreover, it supports stacking 16 DRAM dies, up from 12 in HBM3e, with options for 24Gb and 32Gb layers. This advancement will enable a capacity of 64GB per stack, compared to 32GB with HBM3e, the reports suggest.
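For readers who want to check the arithmetic, the short sketch below recomputes the per-stack capacity from the cited die count and die density; the per-pin data rate used for the bandwidth line is an illustrative assumption, not a figure from the reports.

```python
# Minimal sketch of the arithmetic behind the cited HBM4 figures.
# Die count (16), die density (32 Gb), and bus width (2048 bits) come from
# the reports; the per-pin data rate is a placeholder assumption.

def stack_capacity_gb(dies: int, die_density_gbit: int) -> float:
    """Capacity of one HBM stack in gigabytes (8 Gb = 1 GB)."""
    return dies * die_density_gbit / 8

def stack_bandwidth_gbs(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one stack in GB/s for a given per-pin data rate."""
    return bus_width_bits * pin_rate_gbps / 8

print(stack_capacity_gb(16, 32))       # -> 64.0 GB per stack, matching the reports
print(stack_bandwidth_gbs(2048, 8.0))  # -> 2048.0 GB/s, assuming 8 Gbps per pin
```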
On August 19, SK hynix showcased its ambition to secure its leadership in HBM, claiming that it is developing a product with performance up to 30 times higher than that of current HBM.
(Photo credit: SK hynix)