News
According to a report from Nikkei citing sources, memory giant Micron Technology is building a pilot production line for advanced high-bandwidth memory (HBM) in the United States and is considering producing HBM in Malaysia for the first time to capture more demand from the AI boom.
According to the June 19 report, Micron is expanding its HBM-related R&D facilities at its headquarters in Boise, Idaho, which include production and verification lines. Additionally, Micron is considering establishing HBM production capacity in Malaysia, where it already operates chip testing and assembly plants.
Nikkei’s report further noted that Micron’s largest HBM production facility is located in Taichung, Taiwan, where expansion efforts are also underway. Micron is said to have set a goal to triple its HBM market share to 24-26% by the end of 2025, which would bring it close to its traditional DRAM market share of approximately 23-25%.
Earlier this month, a report from Japanese media outlet The Daily Industrial News also indicated that Micron planned to build a new DRAM plant in Hiroshima, with construction scheduled to begin in early 2026 and completion of the plant buildings and first tool-in targeted for the end of 2027.
Per industry sources cited by TechNews, Micron is expected to invest between JPY 600 billion and JPY 800 billion in the new facility, located adjacent to the existing Fab15. Initially, the new plant will focus on DRAM production, excluding backend packaging and testing, with capacity emphasis on HBM products.
Micron, along with SK Hynix, has reportedly received certification from NVIDIA to produce HBM3e for the AI chip “H200.” Samsung Electronics has not yet received approval from NVIDIA; its less advanced HBM3 and HBM2e are currently supplied primarily to AMD, Google, and Amazon.
Read more
(Photo credit: Micron)
News
As tech heavyweights eagerly pursue more market share in the AI sector, the battle for talent in the semiconductor industry has also heated up. According to the latest report by The Chosun Daily, citing LinkedIn data as of June 18, NVIDIA has become the hottest tech talent hub, not only drawing talent from semiconductor giants but also recruiting in full swing from memory companies.
According to the report, NVIDIA employs 89 former TSMC employees, while only 12 former NVIDIA employees have joined TSMC. Moreover, 515 current NVIDIA employees came from Samsung Electronics, compared with 278 former NVIDIA employees now at Samsung, indicating a significant talent migration to the GPU giant.
Regarding the talent war between NVIDIA and Intel, the former has attracted as many as 2,848 employees from Intel, whereas only 544 former NVIDIA employees have joined Intel.
NVIDIA’s pull in AI has also drawn talent from memory giants. The LinkedIn data cited by the report show that 38 NVIDIA employees previously worked at SK hynix, with none moving in the opposite direction. In addition, NVIDIA has attracted 159 employees from Micron, whereas only 38 former NVIDIA employees have joined Micron.
Interestingly, though Samsung lags behind TSMC in advanced nodes, it appears attractive to talent from the world’s largest foundry. The data show that 195 former TSMC employees have moved to Samsung, while only 24 former Samsung employees have joined TSMC.
An industry insider cited by the report observed that Korean semiconductor companies are vigorously recruiting for high-performance memory positions, such as those involving HBM. Moreover, a growing number of master’s and doctoral-level semiconductor experts in South Korea are joining the industry, showing a trend where talent moves from academia to local companies and then to international firms.
(Photo credit: NVIDIA)
News
Samsung Electronics’ management has made a significant decision to invest in graphics processing units (GPUs). According to a report from Business Korea on June 18, while the details of Samsung’s GPU investment have not been disclosed, the decision is noteworthy as it departs from the company’s usual focus on memory and foundry services.
Per Business Korea citing Samsung Electronics’ governance report, the management committee approved the “GPU Investment Proposal” in March. The committee includes senior executives such as Han Jong-Hee, head of the Device eXperience (DX) division, as well as the presidents of the Mobile Experience (MX) and Memory Business divisions. Reportedly, this marks the first time since the agenda items were made public in 2012 that Samsung has decided to invest in GPUs, sparking speculation that the company aims to enhance its competitiveness in the GPU sector.
Industry sources cited in the same report interpret this investment as an internal strategy for Samsung to leverage GPUs to innovate semiconductor processes, rather than to develop or manufacture GPUs. At the “GTC Conference” held in March 2024, Samsung announced its collaboration with NVIDIA to develop AI-based digital twins, aiming to achieve full automation of semiconductor plants by 2030.
Reportedly, Samsung’s newly constructed high-performance computing (HPC) center in Hwaseong was completed in April 2024. This center houses a vast array of servers and network equipment necessary for semiconductor design, indicating a significant investment in GPUs.
Per another report from Bloomberg on June 4, NVIDIA CEO Jensen Huang told reporters during a briefing at COMPUTEX that NVIDIA is evaluating HBM from both Samsung and Micron Technology. Huang mentioned that there is still some engineering work to be completed, adding that he wished it had been finished already.
As per Huang, though Samsung hasn’t failed any qualification tests, its HBM product required additional engineering work. When asked about Reuters’ previous report concerning overheating and power consumption issues with Samsung’s HBM, Huang simply remarked, “there’s no story there.”
Read more
(Photo credit: Samsung)
News
According to a report from Commercial Times, the construction at TSMC’s advanced packaging plant (P1) in the Chia-Yi Science Park has been halted due to the discovery of suspected historical artifacts. In response, TSMC has promptly initiated preparations for its second plant (P2). TSMC stated that it will comply with regulations from the relevant authorities regarding the suspected archaeological site found on the Chia-Yi facility grounds.
The total developed area of the Chia-Yi Science Park is approximately 88 hectares, with TSMC’s two advanced packaging plants occupying 20 hectares. This is nearly 40% larger than the 14.3 hectares of the Zhunan packaging and testing plant. The planned area for the P1 plant is about 12 hectares, initially slated for completion by the end of 2026 and mass production by 2028. The discovery of the archaeological site has led to the early initiation of the P2 plant, raising concerns about potential impacts on the advanced packaging capacity plans.
The Southern Taiwan Science Park Administration and the Cultural and Tourism Bureau of Chiayi County both stated on June 17 that, in accordance with the Cultural Heritage Preservation Act, they submitted the case to the Chiayi County Cultural and Tourism Bureau for cultural heritage review on June 7. The review committee has principally agreed to proceed with the rescue excavation, which will be carried out in accordance with relevant regulations.
The Chia-Yi Science Park is a key hub for developing the Great Southern Technology Corridor. Besides providing backend CoWoS (Chip-on-Wafer-on-Substrate) packaging for TSMC’s 2nm process from its Kaohsiung plant, the Chia-Yi facility will further integrate advanced 3D packaging technology, SoIC (System-on-Integrated-Chips), highlighting its strategic importance.
As AI chips continue to evolve, the competition in advanced packaging remains fierce, with many clients eagerly anticipating developments.
TSMC’s advanced packaging capacity is scarce, with primary customer NVIDIA having the highest demand, occupying about half of the capacity, followed closely by AMD. Broadcom, Amazon, and Marvell have also expressed strong interest in using advanced packaging processes.
Per a report from global media outlet Wccftech, NVIDIA’s Rubin GPU is expected to adopt a 4x reticle design and utilize TSMC’s CoWoS-L packaging technology, along with the N3 process. Moreover, NVIDIA will use next-generation HBM4 DRAM to power its Rubin GPU.
Industry sources cited in Commercial Times’ report have indicated that by the end of next year, TSMC’s monthly CoWoS capacity will increase to 60,000 wafers. With growing orders and a steep learning curve, annual capacity is expected to surpass 600,000 wafers next year. As the semiconductor industry advances into the Angstrom Era, the shortfall in TSMC’s advanced packaging capacity will gradually widen. Whether the Chia-Yi plant can be completed by 2026 remains a critical focal point.
Read more
(Photo credit: TSMC)
News
Following Foxconn’s substantial order for the assembly of NVIDIA’s GB200 AI servers, according to a report from Economic Daily News, Foxconn has now exclusively secured a major order for the NVLink Switch, a key component of the GB200 renowned for enhancing computing power. The volume of this order is estimated to be seven times that of the server cabinets. Not only is this a brand new order, but it also carries a significantly higher gross profit margin compared to server assembly, the report noted.
While Foxconn does not comment on orders and customers, industry sources cited by the same report highlight that NVLink is an exclusive NVIDIA technology consisting of two parts. The first is the bridge technology, which connects the central processing unit (CPU) with the AI chip (GPU). The second is the switch technology, which is crucial for interconnecting GPUs, enabling thousands of GPUs to combine in operation, thereby maximizing their collective computing power.
Industry sources cited by Economic Daily News have stated that the key feature of the GB200 is not just its significant computing power but also its high-speed transmission capabilities. NVLink is considered the magic ingredient for enhancing this computing power.
Reportedly, the primary reason Foxconn has secured the exclusive order for NVIDIA’s NVLink is their long-standing cooperation and mutual understanding. Foxconn has been a leading manufacturer of network communication equipment for years, making it a reasonable choice for NVIDIA to entrust with these orders.
Industry sources cited by the report further claim that as each server cabinet requires seven NVLinks, this new order means that for every GB200 server cabinet produced, Foxconn receives an order for seven NVLink switches. Given that the profit margin for switches is considerably higher than for server assembly, this order is expected to significantly boost Foxconn’s operations.
Per the report, the world’s top seven switch manufacturers, including Dell, HP, Cisco, Nokia, and Ericsson, are all clients of Foxconn. This has enabled Foxconn to secure over 75% of the global market share in switches, firmly establishing its leading position.
Regarding the AI server market, Foxconn’s Chairman Young Liu previously revealed that the GB200 is in high demand, and he anticipates that Foxconn’s market share in AI servers could reach 40% this year.
Read more
(Photo credit: NVIDIA)