News
On August 8, 2024, Black Sesame Technologies, a company specializing in AI chips for smart vehicles, was listed on the Hong Kong Stock Exchange.
Founded in 2016, Black Sesame is a provider of automotive-grade intelligent vehicle computing chips and chip-based solutions. The company has established research and sales centers in Wuhan, Silicon Valley, Shanghai, Chengdu, Shenzhen, Chongqing, and Singapore.
Currently, Black Sesame has launched two major product lines: the Huashan series designed for autonomous driving and the Wudang series focused on cross-domain computing.
The SoCs of the Huashan A1000 family are designed for autonomous driving and support BEV fusion algorithms for L3 and below application scenarios.
The Huashan® A1000 automotive-grade high-performance autonomous driving chip applies to L2+ and L3 autonomous driving. It is currently the most widely used autonomous driving chip among Chinese mass-produced car makers and the only local chip platform capable of supporting integrated domain controllers with a single chip.
It’s reported that the Huashan A1000 chip has been in full mass production and has been adopted by several leading Chinese car manufacturers, including FAW Group, Dongfeng Group, Geely Group, and JAC Group. It has been used in mass-produced models including the Lynk & Co 08, Hycan V09, Dongfeng eπ007, and Dongfeng’s first pure electric SUV, the eπ008. The next-generation SoC, the Huashan A2000, is currently under development and is expected to launch in 2024.
The Wudang C1200 family of intelligent vehicle cross-domain computing chips was rolled out in April 2023. It has completed full testing after tape-out, with successful functional and performance verification, and sample chips are now available to customers.
As an “All in one” chip, the C1200 family targets multi-domain integration and cross-domain computing, covering the core scenarios of intelligent vehicles with a single chip and thus empowering smart vehicles as a whole. Black Sesame expects to generate revenue from the C1200 in 2024 and achieve mass production before 2025.
(Photo credit: Black Sesame Technologies)
News
According to a report from The Information, NVIDIA’s “world’s most powerful AI chip,” the GB200, is said to be experiencing yield issues, leading to a one-quarter delay in mass shipments.
As per sources cited by a report from the Economic Daily News, the problem likely lies in the yield rates of advanced packaging, mainly affecting the non-reference-designed GB200 chips.
The supply of the reference-designed GB200 chips remains stable, with Foxconn being the sole contract manufacturer receiving an adequate supply of these chips. Foxconn is set to ship according to the original schedule in the fourth quarter.
Furthermore, the sources cited by the same report point out that Foxconn is currently the only manufacturer able to meet the scheduled fourth-quarter shipment of the GB200, primarily because it has secured orders for NVIDIA’s reference-designed GB200 chips, which are prioritized for shipment amid the supply shortage.
The term “reference-designed” refers to the GB200 AI servers ordered by NVIDIA for production at Foxconn and other manufacturers. These products are made according to NVIDIA’s reference designs and are not customized. Once produced, they can be sold to cloud service providers (CSPs) and other clients.
In contrast, “non-reference-designed” refers to customized versions of the GB200, which are tailored to specific customer requirements. The current yield issues are affecting the production of these non-reference-designed items, with the priority given to shipping the reference-designed products first.
Following the reports addressing the tight supply of GB200, customers are said to be scrambling to secure their orders from Foxconn due to its ample chip supply. Foxconn, traditionally silent on customer and order details, will reveal the latest status of its product lines during the press conference on August 14th.
The GB200 was originally scheduled for mass shipments starting in the fourth quarter of this year. However, over the weekend, reports emerged about yield issues, pushing the mass shipment timeline to the first quarter of next year and causing a stir in the market.
(Photo credit: NVIDIA)
News
On July 30, AMD announced its second-quarter financial results (ending June 29), with profits exceeding Wall Street expectations. According to a report from TechNews, the most notable highlight is that nearly half of AMD’s sales now come from data center products, rather than from PC chips, gaming consoles, or industrial and automotive embedded chips.
AMD’s growth this quarter may be attributed to the MI300 accelerator. AMD CEO Lisa Su highlighted that sales of the chip for the quarter just surpassed USD 1 billion, with contributions also coming from EPYC CPUs.
As per a report from The Verge, AMD is following a similar path as NVIDIA, producing new AI chips annually and accelerating all R&D efforts to maintain a competitive edge. During the earnings call, AMD reaffirmed that the MI325X will launch in Q4 of this year, followed by the next-generation MI350 next year, and the MI400 in 2026.
Lisa Su emphasized that the MI350 should be very competitive compared to NVIDIA’s Blackwell. NVIDIA launched its most powerful AI chip, Blackwell, in March of this year and has recently started providing samples to buyers.
Regarding the MI300, Su noted that while AMD is striving to sell as many products as possible and the supply chain is improving, supply is still expected to be tight until 2025.
Per a report from TechNews, despite AMD’s data center business doubling in growth this year, it still constitutes only a small fraction of NVIDIA’s scale. NVIDIA’s latest quarterly revenue reached USD 22.6 billion, with its data center performance also hitting new highs.
A report from anue further indicates that AMD’s core business remains CPUs for laptops and servers. PC chip sales, categorized under the company’s Client segment, rose 49% year-over-year to USD 1.5 billion. Sales of AMD’s AI chips continue to grow, and with strong demand expected to persist, the company forecasts that third-quarter revenue will exceed market expectations.
Additionally, AMD produces chips for gaming consoles and GPUs for 3D graphics, which fall under the company’s Gaming segment. Although sales for PlayStation and Xbox have declined, leading to a 59% drop in revenue from this segment compared to last year, totaling USD 648 million, AMD notes that sales of its Radeon 6000 GPUs have actually been growing year over year.
(Photo credit: AMD)
News
According to Reuters, engineers at Amazon’s chip lab in Austin, Texas, recently tested highly confidential new servers. Per the Economic Times, Rami Sinno, director of engineering at Annapurna Labs, Amazon’s chip unit under AWS, revealed that these new servers feature Amazon’s AI chips, which can compete with NVIDIA’s.
It’s reported that Amazon is developing processors to reduce its reliance on costly NVIDIA chips; the new chips will power some of Amazon’s AWS AI cloud services.
Amazon expects to use its self-developed chips to enable customers to perform complex calculations and process large amounts of data at a lower cost. The company’s competitors, Microsoft and Alphabet, are also pursuing similar efforts.
However, while Amazon is a late starter in the AI chip field, it is an established leader in non-AI processing chips: its main non-AI processor, Graviton, has been in development for nearly a decade and is now in its fourth generation. Its two AI chips, Trainium and Inferentia, are newer designs.
David Brown, AWS’s Vice President of Compute and Networking, stated that in some cases these chips can deliver 40% to 50% higher performance than NVIDIA’s, at roughly half the cost of comparable NVIDIA chips.
AWS accounts for nearly 20% of Amazon’s total revenue. The company’s revenue from January to March surged by 17% from the same period last year, reaching USD 25 billion. AWS controls about one-third of the cloud computing market, with Microsoft’s Azure comprising about 25%.
Amazon stated that it deployed 250,000 Graviton chips and 80,000 custom AI chips to handle the surge in platform activity during the recent Prime Day.
(Photo credit: Amazon)
News
Due to challenges in exporting high-performance processors based on x86 and Arm architectures to China, the country is gradually adopting domestically designed operating systems.
According to industry sources cited by Tom’s Hardware, Tencent Cloud recently launched the TencentOS Server V3 operating system, which supports China’s three major processor lines: Huawei’s Arm-based Kunpeng CPUs, Sugon’s x86-based Hygon CPUs, and Phytium’s Arm-based FeiTeng CPUs.
The operating system optimizes CPU utilization, power consumption, and memory usage. To tune the operating system and domestic processors for data centers, Tencent has collaborated with Huawei and Sugon to develop a high-performance domestic database platform.
Reportedly, TencentOS Server V3 can run GPU clusters, aiding Tencent’s AI operations. The latest version of the operating system fully supports NVIDIA GPU virtualization, enhancing processor utilization for resource-intensive services such as Optical Character Recognition (OCR). This approach reportedly reduces the cost of purchasing NVIDIA products by nearly 60%.
TencentOS Server is already running on nearly 10 million machines, making it one of the most widely deployed Linux operating systems in China. Other companies, such as Huawei, have also developed their own operating systems, like OpenEuler.
(Photo credit: Tencent Cloud)