Semiconductors


2024-03-25

[News] Texas Instruments Plans Large-Scale Transition of GaN Chip Production from 6-inch to 8-inch Wafers

According to a report from Korean media outlet THE ELEC, a senior executive at analog chip manufacturer Texas Instruments (TI) stated that the company is transitioning its production of gallium nitride (GaN) chips from several 6-inch fabs to 8-inch fabs.

The same report further noted that Jerome Shin, manager of Texas Instruments’ Korean subsidiary, stated at a press conference in Seoul that Texas Instruments is preparing to build 8-inch fabs in Dallas and Aizu, Japan. This move will enable the company to offer more competitively priced GaN chips.

Jerome Shin pointed out that there has been a shift in the perception of GaN chips compared to silicon carbide (SiC) chips since 2022. While GaN chips were previously considered more expensive, this perception is changing because Texas Instruments is transitioning its production from 6-inch fabs to 8-inch fabs. Producing larger wafers means more chips per wafer, leading to increased productivity and lower costs for GaN chips.
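
The productivity math behind the transition is straightforward. As a rough sketch (the 4 mm² die size below is a hypothetical example, not a TI figure), a standard gross-die-per-wafer estimate shows an 8-inch (200 mm) wafer yielding roughly 1.8x the dies of a 6-inch (150 mm) wafer:

```python
import math

def gross_dies(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Classic gross-die-per-wafer estimate: wafer area divided by die area,
    minus an edge-loss correction proportional to the wafer circumference."""
    d = wafer_diameter_mm
    return math.floor(
        math.pi * (d / 2) ** 2 / die_area_mm2
        - math.pi * d / math.sqrt(2 * die_area_mm2)
    )

# Hypothetical 4 mm^2 power die on 6-inch (150 mm) vs. 8-inch (200 mm) wafers
six_inch = gross_dies(150, 4)
eight_inch = gross_dies(200, 4)
print(six_inch, eight_inch, round(eight_inch / six_inch, 2))
```

Since per-wafer processing cost grows more slowly than wafer area, the larger ratio of dies per wafer is what translates into lower per-chip cost.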

Currently, GaN chips are already priced lower than SiC chips. Once the conversion of Texas Instruments’ fabs in Dallas and Aizu, Japan is complete, the company will be able to offer even more affordable solutions. The Dallas expansion is expected to be completed by 2025, though Jerome Shin did not disclose a timetable for the Aizu facility.

However, some industry sources cited in the report suggest that Texas Instruments’ plan may trigger an industry-wide decline in GaN chip prices.

Currently, Texas Instruments is also transitioning the production of power management ICs from 8-inch fabs to 12-inch fabs. This move has already driven down power management chip prices across the industry.

Still, the report notes that moving power management chip production from 8-inch to 12-inch fabs could reportedly save Texas Instruments over 10% in costs.


(Photo credit: Texas Instruments)

Please note that this article cites information from THE ELEC.

2024-03-22

[News] Micron’s Financial Report Reveals High Demand for HBM in 2025, Capacity Nears Full Allocation

Micron, the major memory manufacturer in the United States, has benefited from AI demand, turning losses into profits last quarter (ending in February) and issuing optimistic financial forecasts.

During its earnings call on March 20th, Micron CEO Sanjay Mehrotra stated that the company’s HBM (High Bandwidth Memory) capacity for this year has been fully allocated, with most of next year’s capacity already booked. HBM products are expected to generate hundreds of millions of dollars in revenue for Micron in the current fiscal year.

Per a report from the Washington Post, Micron expects revenue for the current quarter (ending in May) to be between USD 6.4 billion and USD 6.8 billion, with a midpoint of USD 6.6 billion, surpassing Wall Street’s expectation of USD 6 billion.

Last quarter, Micron’s revenue surged 58% year-on-year to USD 5.82 billion, exceeding Wall Street’s expectation of USD 5.35 billion. The company posted a net profit of USD 790 million last quarter, a turnaround from a loss of USD 2.3 billion in the same period last year. Excluding one-time charges, Micron’s EPS reached USD 0.42 last quarter. Mehrotra reportedly attributed Micron’s return to profitability last quarter to the company’s efforts in pricing, product, and operational costs.

Over the past year, memory manufacturers have cut production. Coupled with the explosive growth of the AI industry and the resulting surge in demand for NVIDIA AI processors, this has benefited upstream memory manufacturers.

Mehrotra stated, “We believe Micron is one of the biggest beneficiaries in the semiconductor industry of the multiyear opportunity enabled by AI.”

The projected growth rates for DRAM and NAND Flash bit demand in 2024 are close to 15% and in the mid-teens, respectively. However, the supply growth rates for DRAM and NAND Flash bits in 2024 are both lower than the demand growth rates.

Micron utilizes 176- and 232-layer processes for over 90% of its NAND Flash production. As for HBM3e, it is expected to begin contributing to revenue in the second quarter.

Per a previous TrendForce press release, the three major HBM manufacturers held the following market shares in 2023: SK Hynix and Samsung were both at around 46-49%, while Micron stood at roughly 4-6%.

In terms of capital expenditures, the company is maintaining a budget of USD 7.5 billion to USD 8 billion (taking into account U.S. government subsidies), primarily allocated to expanding HBM-related capacity.

Micron stated that due to its more complex packaging, HBM consumes three times the wafer capacity of DDR5 for the same bit output, indirectly constraining capacity for non-HBM products and thereby tightening overall DRAM market supply.
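
To see how this trade-off works, here is an illustrative sketch (the 3:1 wafer-consumption ratio comes from Micron's statement above; the 10% wafer share is a made-up example, not a Micron figure):

```python
def dram_bit_output(total_wafer_capacity: float, hbm_wafer_share: float,
                    hbm_wafer_penalty: float = 3.0) -> float:
    """Relative DRAM bit output when a share of wafers moves to HBM,
    which yields 1/penalty the bits per wafer of standard DDR5."""
    ddr_bits = total_wafer_capacity * (1 - hbm_wafer_share)
    hbm_bits = total_wafer_capacity * hbm_wafer_share / hbm_wafer_penalty
    return ddr_bits + hbm_bits

baseline = dram_bit_output(100, 0.0)   # all wafers on DDR5
shifted = dram_bit_output(100, 0.10)   # 10% of wafers moved to HBM
print(f"{(baseline - shifted) / baseline:.1%} fewer bits")
```

Even a modest wafer shift toward HBM thus removes a disproportionate share of industry bit supply, which is why HBM growth supports overall DRAM pricing.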

As per Micron’s report, regarding growth outlooks for various end markets in 2024, the annual growth rate for the data center industry has been revised upward from mid-single digits to mid-to-high single digits, while the PC industry’s annual growth rate remains at low to mid-single digits. AI PCs are expected to capture a certain market share in 2025. The annual growth rate for the mobile phone industry has been adjusted upward from modest growth to low to mid-single digits.


(Photo credit: Micron)

Please note that this article cites information from Micron and Washington Post.

2024-03-22

[News] Samsung Reportedly Commits to Advanced Packaging, Targets Over USD 100 Million in Related Revenue This Year

Amid the AI boom driving a surge in demand for advanced packaging, South Korean semiconductor giant Samsung Electronics is aggressively entering the advanced packaging arena. On March 20th, it announced its ambition to achieve record-high advanced packaging revenue this year, surpassing the USD 100 million mark.

According to reports from Reuters and The Korea Times, Samsung’s annual shareholders’ meeting took place on March 20th.

During the meeting, Han Jong-hee, the vice chairman of the company, stated as follows: “Although the macroeconomic environment is expected to be uncertain this year, we see an opportunity for increased growth through next-generation technology innovation.”

“Samsung plans to apply AI to all devices, including smartphones, foldable devices, accessories and extended reality (XR), to provide customers with a new experience where generative AI and on-device AI unfold,” Han added.

Samsung established the Advanced Package Business Team under the Device Solutions business group in December last year. Samsung Co-CEO Kye-Hyun Kyung stated that he expects the results of Samsung’s investment to come out in earnest from the second half of this year.

Kyung further noted that for HBM4, the next generation of HBM chips, likely to be released in 2025 with more customized designs, Samsung will take advantage of having memory chips, chip contract manufacturing, and chip design businesses under one roof to satisfy customer needs.

According to a previous report from TrendForce, Samsung led the pack with the highest revenue growth among the top manufacturers in Q4, jumping 50% QoQ to USD 7.95 billion, largely due to a surge in 1alpha nm DDR5 shipments that boosted server DRAM shipments by over 60%. In the fourth quarter of last year, Samsung secured a 45.5% market share in DRAM chips.


(Photo credit: Samsung)

Please note that this article cites information from Reuters and The Korea Times.

 

2024-03-21

[News] MediaTek Partners with Ranovus to Enter Niche Market, Expands into Heterogeneous Integration Co-Packaged Optics Industry

MediaTek has reportedly made its foray into the booming field of Heterogeneous Integration Co-Packaged Optics (CPO), announcing on March 20th a partnership with optical communications firm Ranovus to launch a customized Application-Specific Integrated Circuit (ASIC) design platform for CPO. This platform is reported to provide advantages such as low cost, high bandwidth density, and low power consumption, expanding MediaTek’s presence in the thriving markets of AI, Machine Learning (ML), and High-Performance Computing (HPC).

According to its press release, on the eve of the 2024 Optical Fiber Communication Conference (OFC 2024), MediaTek announced the launch of a new-generation customized chip design platform, offering heterogeneous integration solutions for high-speed electronic and optical signal transmission interfaces (I/O).

MediaTek stated that it will be demonstrating a serviceable socketed implementation that combines 8x800G electrical links and 8x800G optical links for a more flexible deployment. It integrates both MediaTek’s in-house SerDes for electrical I/O as well as co-packaged Odin® optical engines from Ranovus for optical I/O.

As per the same release, leveraging a heterogeneous solution that includes both 112G LR SerDes and optical modules, this CPO demonstration is said to reduce board space and device costs, boost bandwidth density, and lower system power by up to 50% compared to existing solutions.

MediaTek emphasizes that its ASIC design platform covers all aspects from design to production, offering a comprehensive solution with the latest industry technologies such as MLink, UCIe’s Die-to-Die Interface, InFO, CoWoS, Hybrid CoWoS advanced packaging technologies, PCIe high-speed transmission interfaces, and integrated thermals and mechanical design.

“The emergence of Generative AI has resulted in significant demand not only for higher memory bandwidth and capacity, but also for higher I/O density and speeds. Integration of electrical and optical I/O is the latest technology that allows MediaTek to deliver the most flexible leading-edge data center ASIC solutions,” said Jerry Yu, Senior Vice President at MediaTek.

Industry sources cited by Economy Daily News predict that as next-generation optical communications transition to 800G transmission speeds, the physical limitations of materials will necessitate optical rather than electronic signals to achieve high-speed data transmission. This is expected to drive demand for CPO with optical-to-electrical conversion capabilities, making it a new focal point for semiconductor manufacturers.


(Photo credit: MediaTek)

Please note that this article cites information from MediaTek and Economy Daily News.

2024-03-21

[News] NVIDIA CEO Jensen Huang Estimates Blackwell Chip Price Around USD 30,000 to USD 40,000

With the Blackwell series chips making a splash in the market, pricing becomes a focal point. According to Commercial Times citing sources, Jensen Huang, the founder and CEO of NVIDIA, revealed in a recent interview that the price range for the Blackwell GPU is approximately USD 30,000 to USD 40,000. However, this is just an approximate figure.

Jensen Huang emphasizes that NVIDIA customizes pricing based on individual customer needs and different system configurations. NVIDIA does not sell individual chips but provides comprehensive services for data centers, including networking and software-related equipment.

Reportedly, Jensen Huang stated that the global data center market is currently valued at USD 1 trillion, with total expenditures on data center hardware and software upgrades reaching USD 250 billion last year alone, a 20% increase from the previous year. He noted that NVIDIA stands to benefit significantly from this USD 250 billion investment in data centers.

According to documents recently released by NVIDIA, 19% of last year’s revenue came from a single major customer, and more than USD 9.2 billion in revenue came from a few large cloud service providers in the last quarter alone. Adjusting the pricing of the Blackwell chip series could attract businesses from more industries to become NVIDIA customers.

As per the report from Commercial Times, Jensen Huang is said to be optimistic about the rapid expansion of the AI application market, emphasizing that AI computing upgrades are just beginning. He reportedly believes that future demand will only accelerate, allowing NVIDIA to capture more market share.

According to a previous report from TechNews, the new Blackwell architecture is a massive GPU, crafted using TSMC’s 4-nanometer (4NP) process technology and integrating two independently manufactured dies totaling 208 billion transistors. The dies are then bound together like a zipper through the NVLink 5.0 interface.

NVIDIA connects the two dies with a 10 TB/sec die-to-die link, officially termed the NV-HBI interface. The Blackwell GPU’s NVLink 5.0 interface provides 1.8 TB/sec of bandwidth, double that of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.

Per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
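
As a quick sanity check on the cited figures (the NVLink 4.0 number here is inferred as half of NVLink 5.0's 1.8 TB/sec, per the doubling claim above):

```python
# Gen-over-gen ratios from the cited figures (B200 vs. H100)
b200_ai_pflops, h100_ai_pflops = 20, 4
nvlink5_tbps, nvlink4_tbps = 1.8, 0.9  # NVLink 4.0 inferred as half of 5.0

print(f"AI compute: {b200_ai_pflops / h100_ai_pflops:.0f}x")
print(f"NVLink bandwidth: {nvlink5_tbps / nvlink4_tbps:.0f}x")
```

That is a 5x jump in cited AI compute alongside a 2x jump in interconnect bandwidth between generations.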

NVIDIA invested heavily in developing the GB200. Jensen Huang revealed that its development was a monumental task, with expenditures exceeding USD 10 billion on the new GPU architecture and design alone.

Given the substantial investment, Huang reportedly confirmed that NVIDIA has priced the Blackwell GB200 GPU, tailored for AI and HPC workloads, at USD 30,000 to USD 40,000. Industry sources cited by Commercial Times point out that NVIDIA is particularly keen on selling supercomputers or DGX B200 SuperPODs, as the average selling price (ASP) is higher for large hardware and software deployments.


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times, Tom’s Hardware and TechNews.
