Semiconductors


2024-03-21

[News] MediaTek Partners with Ranovus to Enter Niche Market, Expands into Heterogeneous Integration Co-Packaged Optics Industry

MediaTek has reportedly made its foray into the booming field of Heterogeneous Integration Co-Packaged Optics (CPO), announcing on March 20th a partnership with optical communications firm Ranovus to launch a customized Application-Specific Integrated Circuit (ASIC) design platform for CPO. This platform is reported to provide advantages such as low cost, high bandwidth density, and low power consumption, expanding MediaTek’s presence in the thriving markets of AI, Machine Learning (ML), and High-Performance Computing (HPC).

According to its press release, on the eve of the 2024 Optical Fiber Communication Conference (OFC 2024), MediaTek announced the launch of a new-generation customized chip design platform, offering heterogeneous integration solutions for high-speed electronic and optical signal transmission interfaces (I/O).

MediaTek stated that it will be demonstrating a serviceable socketed implementation that combines 8x800G electrical links and 8x800G optical links for a more flexible deployment. It integrates both MediaTek’s in-house SerDes for electrical I/O as well as co-packaged Odin® optical engines from Ranovus for optical I/O.

As per the same release, the CPO demonstration, leveraging a heterogeneous solution that includes both 112G LR SerDes and optical modules, is said to reduce board space and device costs, boost bandwidth density, and lower system power by up to 50% compared to existing solutions.
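As a rough illustration of the figures above, the aggregate bandwidth of the demo's link groups follows from simple arithmetic (the lane count and per-lane rate come from MediaTek's description; the calculation itself is our own sketch, not a figure from the source):

```python
# Aggregate raw bandwidth of the CPO demo's link groups:
# 8 links at 800 Gb/s each, for both the electrical and optical sides.
LINKS = 8
RATE_GBPS = 800  # Gb/s per link, per MediaTek's description

def aggregate_tbps(links: int, rate_gbps: int) -> float:
    """Total raw bandwidth in Tb/s for a group of identical links."""
    return links * rate_gbps / 1000

electrical = aggregate_tbps(LINKS, RATE_GBPS)  # 6.4 Tb/s
optical = aggregate_tbps(LINKS, RATE_GBPS)     # 6.4 Tb/s
print(electrical, optical)
```

So the demonstrated configuration carries 6.4 Tb/s of raw electrical and 6.4 Tb/s of raw optical bandwidth per direction, before protocol overhead.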

MediaTek emphasizes that its ASIC design platform covers all aspects from design to production, offering a comprehensive solution with the latest industry technologies, such as MLink, the UCIe die-to-die interface, InFO, CoWoS, and Hybrid CoWoS advanced packaging, PCIe high-speed transmission interfaces, and integrated thermal and mechanical design.

“The emergence of Generative AI has resulted in significant demand not only for higher memory bandwidth and capacity, but also for higher I/O density and speeds. Integration of electrical and optical I/O is the latest technology that allows MediaTek to deliver the most flexible, leading-edge data center ASIC solutions,” said Jerry Yu, Senior Vice President at MediaTek.

As per Economy Daily News citing industry sources, it is predicted that as next-generation optical communication transitions to 800G transmission speeds, the physical limitations of materials will necessitate the use of optical signals instead of electrical signals to achieve high-speed data transmission. This is reportedly expected to drive rising demand for CPO solutions with optical-to-electrical conversion capabilities, making them a new focal point for semiconductor manufacturers.


(Photo credit: MediaTek)

Please note that this article cites information from MediaTek and Economy Daily News.

2024-03-21

[News] NVIDIA CEO Jensen Huang Estimates Blackwell Chip Price Around USD 30,000 to USD 40,000

With the Blackwell series chips making a splash in the market, pricing becomes a focal point. According to Commercial Times citing sources, Jensen Huang, the founder and CEO of NVIDIA, revealed in a recent interview that the price range for the Blackwell GPU is approximately USD 30,000 to USD 40,000. However, this is just an approximate figure.

Jensen Huang emphasizes that NVIDIA customizes pricing based on individual customer needs and different system configurations. NVIDIA does not sell individual chips but provides comprehensive services for data centers, including networking and software-related equipment.

Reportedly, Jensen Huang stated that the global data center market is currently valued at USD 1 trillion, with total expenditures on data center hardware and software upgrades reaching USD 250 billion last year alone, a 20% increase from the previous year. He noted that NVIDIA stands to benefit significantly from this USD 250 billion investment in data centers.

According to documents recently released by NVIDIA, 19% of last year’s revenue came from a single major customer, and more than USD 9.2 billion in revenue came from a few large cloud service providers in the last quarter alone. Adjusting the pricing of the Blackwell chip series could attract more businesses from various industries to become NVIDIA customers.

As per the report from Commercial Times, Jensen Huang is said to be optimistic about the rapid expansion of the AI application market, emphasizing that AI computing upgrades are just beginning. Reportedly, he believes that future demand will only accelerate, allowing NVIDIA to capture more market share.

According to a previous report from TechNews, this new architecture, Blackwell, boasts a massive GPU volume, crafted using TSMC’s 4-nanometer (4NP) process technology, integrating two independently manufactured dies, totaling 208 billion transistors. These dies are then bound together like a zipper through the NVLink 5.0 interface.

NVIDIA utilizes a 10 TB/sec link, officially termed the NV-HBI interface, to connect the two dies. Separately, the NVLink 5.0 interface of the Blackwell complex provides 1.8 TB/sec of bandwidth, doubling the speed of the NVLink 4.0 interface on the previous-generation Hopper architecture GPU.

Per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.
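The generational gains reported above can be sanity-checked with simple arithmetic (the absolute figures are those cited from Tom's Hardware and TechNews; the ratios are our own illustration):

```python
# Generational comparison of Blackwell (B200) vs. Hopper (H100),
# using the figures cited in this article.
B200_AI_PFLOPS = 20   # peak AI compute of a single B200, per Tom's Hardware
H100_AI_PFLOPS = 4    # peak AI compute of the previous-generation H100
NVLINK5_TBPS = 1.8    # NVLink 5.0 per-GPU bandwidth (Blackwell)
NVLINK4_TBPS = 0.9    # NVLink 4.0 per-GPU bandwidth (Hopper generation)

compute_gain = B200_AI_PFLOPS / H100_AI_PFLOPS  # 5.0x compute uplift
nvlink_gain = NVLINK5_TBPS / NVLINK4_TBPS       # ~2x interconnect uplift
print(compute_gain, nvlink_gain)
```

That is, a single B200 delivers a 5x AI compute uplift over the H100, while the NVLink interconnect bandwidth roughly doubles generation over generation.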

NVIDIA invested significantly in the development of the GB200. Jensen Huang revealed that developing the GB200 was a monumental task, with expenditures on the GPU architecture and design alone exceeding USD 10 billion.

Given the substantial investment, Huang reportedly confirmed that NVIDIA has priced the Blackwell GPU GB200, tailored for AI and HPC workloads, at USD 30,000 to USD 40,000. Industry sources cited by the report from Commercial Times point out that NVIDIA is particularly keen on selling supercomputers or DGX B200 SuperPODs, as the average selling price (ASP) is higher in large hardware and software deployments.


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times, Tom’s Hardware and TechNews.

2024-03-21

[News] US Considers Sanctioning Huawei-Linked Chinese Chip Network, Huawei-Related Enterprises Blacklisted

According to Bloomberg citing sources, the US government is considering adding Chinese semiconductor companies linked to Huawei to a blacklist. This move comes after Huawei made significant breakthroughs in technology last year, indicating a potential escalation in US efforts to curb China’s ambitions in AI and semiconductors.

As per a report from the Semiconductor Industry Association (SIA), most of the potentially affected Chinese entities are identified as chip manufacturing facilities either acquired by or under construction by Huawei. Among them, companies that may be blacklisted include Qingdao Si’En, SwaySure, and Shenzhen Pengsheng Technology Co., Ltd.

Furthermore, US officials are also considering action against companies like Changxin Memory Technologies (CXMT). A previous report from Bloomberg already indicated that the US Department of Commerce’s Bureau of Industry and Security (BIS) was contemplating placing CXMT on the entity list, which would restrict their access to US technology. Additionally, restrictions on five other Chinese companies are being considered, although the final list is yet to be confirmed.

Regarding this matter, the BIS and White House National Security Council declined to comment at that time.

In addition to companies involved in actual chip production, the United States may also consider sanctioning Shenzhen Pengjin High-Tech Co., Ltd. and SiCarrier. Per the report citing industry sources, there are concerns that these two semiconductor manufacturing equipment companies may act as agents to help Huawei obtain restricted equipment.

Currently, companies that have been listed on the entity list by the US Department of Commerce include Huawei, SMIC (Semiconductor Manufacturing International Corporation), and Shanghai Micro Electronics. Additionally, China’s other major memory manufacturer, Yangtze Memory Technologies Corp. (YMTC), was added to this restriction list in 2022.


(Photo credit: iStock)

Please note that this article cites information from Bloomberg.

2024-03-21

[News] Blackwell Enters the Scene – A Closer Look at TSMC’s CoWoS Branch

NVIDIA unveiled its Blackwell architecture and the touted powerhouse AI chip GB200 at GTC 2024 held in San Jose, California, on March 19th. Manufactured using TSMC’s 4-nanometer (4NP) process, it is expected to ship later this year.

According to a report from TechNews, TSMC’s CoWoS technology comes in various forms, including CoWoS-R, CoWoS-L, and CoWoS-S, each differing in cost due to variations in the interposer material. Customers can choose the appropriate technology based on their specific requirements.

CoWoS-R, for instance, integrates InFO technology, utilizing RDL wiring in the interposer to connect chips, making it suitable for high-bandwidth memory (HBM) and SoC integration.

On the other hand, CoWoS-L combines the advantages of CoWoS-S and InFO technologies, offering a cost-effective solution with the use of LSI (Local Silicon Interconnect) chips as the interposer for dense chip-to-chip connections. According to market reports, the Blackwell platform adopts CoWoS-L, as this technology is better suited for larger chiplets.

CoWoS-S, utilizing silicon as the interposer material, represents the highest cost variant and is currently the mainstream choice. Notably, NVIDIA’s H100, H200, and AMD’s MI300 chips all employ CoWoS-S.

NVIDIA’s latest Blackwell architecture features AI chips, including the B100, B200, and the GB200 with Grace CPU, all manufactured on TSMC’s 4-nanometer process. As per the industry sources cited by the report, insights suggest that production for the B100 is slated for the fourth quarter of this year, with mass production expected in the first half of next year.

Meanwhile, the B200 and GB200 are set to follow suit with mass production next year. As per a report from Tom’s Hardware, the AI computing performance of a single B200 GPU can reach 20 petaflops, whereas the previous generation H100 offered a maximum of only 4 petaflops of AI computing performance. The B200 will also be paired with 192GB of HBM3e memory, providing up to 8 TB/s of bandwidth.

TSMC’s advanced manufacturing processes and CoWoS packaging technology are expected to continue benefiting from this demand, particularly with the adoption of CoWoS-L packaging.


(Photo credit: TSMC)

Please note that this article cites information from TechNews.

2024-03-20

[News] Taiwanese Supply Chain Players Showcase Next-Gen AI Server Solutions at NVIDIA’s GTC

Following NVIDIA’s launch of the new computing platform GB200, as per a report from Commercial Times, Taiwanese supply chain players including Quanta, Pegatron, Wiwynn, Wistron, Gigabyte, and Foxconn’s subsidiary Ingrasys have showcased solutions and related cooling technologies based on the GB200 at the latest GTC conference, aiming to capture opportunities in the next-generation AI server market.

Quanta Cloud Technology (QCT), a subsidiary of Quanta Computer, demonstrated its systems and AI applications based on the NVIDIA MGX architecture, announcing support for the upcoming NVIDIA GB200 superchip and NVIDIA GB200 NVL72.

QCT showcased their NVIDIA MGX architecture systems, featuring the NVIDIA GH200 chip, employing a modular reference design. System manufacturers can utilize the NVIDIA MGX architecture to tailor models suitable for applications like generative AI, high-performance computing (HPC), and edge deployments.

Pegatron, on the other hand, has become one of NVIDIA’s global partners in advanced GPU computing technology, particularly with the latest NVIDIA GB200 chip. Reportedly, Pegatron is actively developing the GB200 NVL36, designed as a multi-node, liquid-cooled, rack-level platform dedicated to processing compute-intensive workloads. Equipped with the NVIDIA BlueField-3 data processing unit, it enables network acceleration in ultra-scale AI cloud environments and fulfills various GPU computing functionalities.

GIGABYTE, a key supplier of high-end AI GPU servers for NVIDIA last year, showcased its latest offerings at this year’s GTC exhibition. Its server subsidiary, Giga Computing, unveiled the GIGABYTE XH23-VG0, a 2U server featuring the NVIDIA H100 GPU and GH200 architecture, capable of transferring data at speeds of up to 900GB per second. Additionally, the company announced that its product line is ready for the next-generation Blackwell platform, including HGX boards, superchips, and PCIe expansion cards, which will be released gradually over the coming months.

Meanwhile, Wiwynn, included in the first wave of suppliers for the NVIDIA GB200 NVL72 system, showcased its latest AI server cabinet solution based on the NVIDIA GB200 NVL72 at the GTC exhibition. They also presented their newest comprehensive liquid-cooled management system, the UMS100.

Ingrasys also showcased a range of innovations at the exhibition, including NVIDIA MGX architecture servers and the GB200 NVL72 solution. They also demonstrated advanced liquid cooling technologies such as the liquid-to-gas Sidecar technology and liquid-to-liquid Cooling Distribution Unit (CDU).


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times.

