Articles


2024-09-11

[News] Venturing into Silicon Photonics, AMD Reportedly Seeks Partnerships in Taiwan

As silicon photonics has become a key technology in the AI era, semiconductor giants including Intel and TSMC have joined the battlefield. Now another tech giant has entered the fray: U.S. chip giant AMD is reportedly seeking silicon photonics partners in Taiwan, according to local media United Daily News.

According to the report, AMD has reached out to Taiwanese rising stars in the sector, including BE Epitaxy Semiconductor and Best Epitaxy Manufacturing Company. The former focuses on the design, research, and development of silicon photonics platforms, while the latter possesses MOCVD machines to produce 4-inch and 6-inch epitaxy wafers.

Regarding the rumor, AMD declined to comment. Recently, the AI chip giant announced a USD 4.9 billion acquisition of server manufacturer ZT Systems to strengthen its AI data center infrastructure, with the aim of further enhancing its system-level R&D capability. Now AMD also appears set on entering the silicon photonics market, as the technology is poised to be critical in the future.

Earlier in July, AMD was reportedly planning to establish a research and development (R&D) center in Taiwan, which will focus on several advanced technologies, including silicon photonics, artificial intelligence (AI), and heterogeneous integration.

Here’s why the technology matters: As chipmakers keep pushing the boundaries of Moore’s Law and transistor density per unit area increases, signal loss inevitably arises during transmission, since chips rely on electrical signals. Silicon photonics overcomes this challenge by replacing electrical signals with optical signals for high-speed data transmission, achieving higher bandwidth and faster data processing.

On September 3, a consortium of more than 30 companies, including TSMC, announced the establishment of the Silicon Photonics Industry Alliance (SiPhIA) at SEMICON.

According to a previous report by Nikkei, TSMC and its supply chain are accelerating the development of next-generation silicon photonic solutions, with plans to have the technology ready for production within the next three to five years.

AMD’s major rival, NVIDIA, is reportedly collaborating with TSMC to develop optical channel and IC interconnect technologies.

On the other hand, Intel has been developing silicon photonics technology for over 30 years. Since the launch of its silicon photonics platform in 2016, Intel has shipped over 8 million photonic integrated circuits (PICs) and more than 3.2 million integrated on-chip lasers, according to its press release. These products have been adopted by numerous large-scale cloud service providers.

Interestingly enough, Intel has also been actively collaborating with Taiwanese companies in the development of silicon photonics, United Daily News notes. One of its most notable partners is LandMark Optoelectronics, which supplies Intel with critical upstream silicon photonics materials, such as epitaxial layers and related components.


(Photo credit: AMD)

Please note that this article cites information from Nikkei and United Daily News.

2024-09-10

[News] AMD Unifies RDNA and CDNA into UDNA Architecture, Aiming to Compete with NVIDIA’s CUDA

According to a report from Tom’s Hardware, Jack Huynh, AMD’s senior vice president and general manager of its Computing and Graphics Business Group, announced at IFA 2024 in Berlin that AMD will unify its consumer microarchitecture “RDNA” and data center microarchitecture “CDNA” under a single name: “UDNA.” This move is expected to help AMD compete with NVIDIA’s CUDA ecosystem.

Previously, AMD used the same architecture, known as “GCN,” for both gaming and compute GPUs. In 2019, however, the company split the microarchitecture into two distinct designs: RDNA for consumer gaming GPUs and CDNA for data center computing.

Reportedly, Jack Huynh stated that consolidation into the unified “UDNA” architecture will make things easier for developers, eliminating a choice between two architectures that offered them no added value.

When asked if future desktop GPUs will have the same architecture as the MI300X, Huynh mentioned that this is part of a strategy to unify from cloud to client. With a single team working on it, the company is making efforts to standardize, acknowledging that while there may be minor conflicts, it is the right approach.

While high-end chips can establish a market presence, the report from Tom’s Hardware further noted that ultimate success depends on software support. NVIDIA built a strong moat 18 years ago with its CUDA architecture, and one of its fundamental advantages is the “U” in CUDA, which stands for Unified.

NVIDIA’s single CUDA platform serves all purposes, using the same underlying microarchitecture for AI, HPC, and gaming.

Jack Huynh revealed that CUDA has around 4 million developers, and his goal is to pave the way for AMD to achieve similar success.

However, AMD relies on the open-source ROCm software stack, which depends on support from users and the open-source community. If AMD can simplify this process, even if it means optimizing for specific applications or games, it will help accelerate the ecosystem.


(Photo credit: AMD)

Please note that this article cites information from tom’s Hardware.

2024-09-10

[News] Apple Officially Debuts iPhone 16 with AI Features, but Has Yet to Announce an AI Partner in China

Apple’s 2024 fall event took place on Monday, highlighting the launch of the new iPhone 16 series, along with other products like the Apple Watch Series 10 and AirPods 4. According to CEO Tim Cook, the next generation of iPhone has been designed for Apple Intelligence, marking the beginning of an exciting new era.

Apple and global tech companies are in a race to integrate AI into their products, with smartphones anticipated to be one of the key competitive arenas.

Apple’s AI software, Apple Intelligence, is set to enhance Siri and improve functionalities like object recognition and identification through the phone’s camera, per sources cited in a report from Reuters.

A test version of Apple Intelligence will launch next month for U.S. English users, with other localized English versions set to follow in December. Additional language versions, including Chinese, French, Japanese, and Spanish, are anticipated for next year. It is worth noting that Apple has not yet announced an AI partner in China for the iPhone 16 series.

Apple stated that improvements, including enhancements to Siri, will be phased in over time but did not provide a timeline for moving beyond the test phase.

Notably, Apple’s event occurred just hours before Huawei’s launch of a tri-fold phone, highlighting the competitive challenge Apple faces.

In contrast, Huawei’s website revealed on Monday that it had already received over 3 million pre-orders for its Z-shaped tri-fold phone before its official release.

This underscores Huawei’s ability to withstand U.S. sanctions and bolsters its competitive position against Apple in China, where consumers are enthusiastic about AI features and willing to pay a premium for them, according to a Reuters report.

  • iPhone 16

The iPhone 16 features a 6.1-inch display, while the iPhone 16 Plus has a 6.7-inch screen. Both models are equipped with the A18 chip built on TSMC’s advanced 3nm process, which reportedly offers a 30% boost in CPU performance compared to the A16 chip used in iPhone 15. Additionally, they come with increased battery capacity and enhanced cooling capabilities.

Moreover, the latest iPhone chips are based on the newest Arm architecture, which includes specialized features aimed at accelerating AI applications.

Notably, Apple has introduced an “Action Button” on the iPhone 16 and iPhone 16 Plus, which supports various functions like camera, flashlight, Focus mode, translation, magnifier, and voice memos.

The higher-end iPhone 16 Pro and 16 Pro Max are crafted from titanium and come with enhanced AI features, including suggestions for optimizing photo shoots and advanced audio-editing tools designed for professional video production.

Still, a previous report from Bloomberg has also addressed concerns over the slow rollout with its AI platform, which may put iPhone 16 “Supercycle” in doubt.

  • New Apple Watches and AirPods

Apple also unveiled new Watches and AirPods with health-focused capabilities, as well as hardware-design improvements.

Apple highlighted the Watch’s ability to detect longer-term health conditions such as sleep apnea, as well as to detect and respond to emergencies such as falls.

For the new AirPods, there are two versions: a standard model and a version with active noise cancellation. As part of the AirPods update, the hearing-aid features Apple introduced are under review by U.S. regulators.


(Photo credit: Apple)

Please note that this article cites information from Reuters and Apple.

2024-09-10

[News] NVIDIA’s Blackwell Overcomes Delays, as GB200 Reportedly Set for December Mass Production

According to a report from Commercial Times citing sources, NVIDIA has made changes to the Blackwell series’ 6-layer GPU mask, allowing production to proceed without a re-tape-out and minimizing delays.

The report noted that NVIDIA’s updated version of B200 is expected to be completed by late October, allowing the GB200 to enter mass production in December, with large-scale deliveries to ODMs expected in the first quarter of next year.

Previously, as per a report from The Information, NVIDIA’s GB200 was said to be experiencing a one-quarter delay in mass shipments. Another report from the Economic Daily News further suggested that the problem likely lies in the yield rates of advanced packaging, which mainly affected the non-reference-designed GB200 chips.

Industry sources cited by Commercial Times noted that NVIDIA’s Blackwell chip had been facing instability in the metal layers during the HV process, an issue that was resolved by July.

In addition, since the issue reportedly occurred in the back-end-of-line process, a new tape-out was deemed unnecessary. Still, as CoWoS-L capacity remains a bottleneck, the advanced packaging for the GB200 this year is expected to adopt CoWoS-S.


(Photo credit: NVIDIA)

Please note that this article cites information from Commercial Times, The Information and Economic Daily News.

2024-09-10

[News] Micron’s 12-Hi HBM3e Ready for Production, Targeting NVIDIA’s H200 and B100/B200 GPUs

After its 8-Hi HBM3e entered mass production in February, Micron officially introduced its 12-Hi HBM3e memory stacks on Monday, which feature a 36 GB capacity, according to a report by Tom’s Hardware. The new products are designed for cutting-edge processors used in AI and high-performance computing (HPC) workloads, including NVIDIA’s H200 and B100/B200 GPUs.

It is worth noting that the achievement has put the US memory chip giant nearly on par with the current HBM leader, SK hynix. Citing Justin Kim, president and head of the company’s AI Infra division, at SEMICON Taiwan last week, another report by Reuters notes that SK hynix is set to begin mass production of its 12-Hi HBM3e chips by the end of this month.

Samsung, on the other hand, is said to have completed NVIDIA’s quality test for the shipment of 8-Hi HBM3e memory, while the company is still working on the verification of its 12-Hi HBM3e.

Micron’s 12-Hi HBM3e memory stacks, according to Tom’s Hardware, feature a 36GB capacity, a 50% increase over the previous 8-Hi models, which had 24GB. This expanded capacity enables data centers to handle larger AI models, such as Meta AI’s Llama 2, with up to 70 billion parameters on a single processor. In addition, this capability reduces the need for frequent CPU offloading and minimizes communication delays between GPUs, resulting in faster data processing.
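The capacity arithmetic here can be sketched with a quick back-of-the-envelope calculation. The sketch below is illustrative only: it assumes FP16 weights (2 bytes per parameter) and decimal gigabytes, counts model weights alone (ignoring KV cache and activations), and is not a figure from Micron or Tom’s Hardware.

```python
import math

HBM3E_STACK_GB = 36  # capacity of one 12-Hi HBM3e stack, per the article

def stacks_needed(num_params: float, bytes_per_param: int = 2) -> int:
    """Minimum number of 36 GB stacks to hold raw model weights.

    Assumes FP16 storage (2 bytes/parameter) and decimal gigabytes.
    """
    total_gb = num_params * bytes_per_param / 1e9
    return math.ceil(total_gb / HBM3E_STACK_GB)

# A 70-billion-parameter model in FP16 needs ~140 GB for weights alone,
# so at least 4 such stacks on the package.
print(stacks_needed(70e9))
```

This is why higher per-stack capacity matters: fewer stacks (or fewer processors) are needed to keep an entire large model resident in memory, reducing CPU offloading and GPU-to-GPU traffic as the article describes.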

According to Tom’s Hardware, in terms of performance, Micron’s 12-Hi HBM3e stacks deliver over 1.2 TB/s of bandwidth. Despite offering 50% more memory capacity, Micron’s 12-Hi HBM3e reportedly consumes less power than competing 8-Hi HBM3e stacks.

Regarding the future roadmap of HBM, Micron is said to be working on its next-generation memory solutions, including HBM4 and HBM4e. These upcoming memory technologies are set to further enhance performance, solidifying Micron’s position as a leader in addressing the increasing demand for advanced memory in AI processors, such as NVIDIA’s GPUs built on the Blackwell and Rubin architectures, the report states.


(Photo credit: Micron)

Please note that this article cites information from Tom’s Hardware and Reuters.
