AWS


2024-09-18

[News] Intel’s Foundry Spin-off Could Improve Chances of Securing Orders from Tech Giants such as Apple and AMD

One of the most critical moves in Intel’s next step, which CEO Pat Gelsinger regards as “the most significant transformation in over four decades,” is turning its foundry business into an independent subsidiary. Citing remarks from foreign media and analysts, a report by Taiwanese media outlet Anue notes that this is a much-needed interim measure aimed at gaining the trust of potential customers, who may hesitate to entrust their chip designs to a competitor’s foundry division.

Following last week’s board meeting, Intel announced on September 16th that the company will transform its foundry business into a wholly-owned subsidiary with its own board of directors.

It is worth noting that in the meantime, Intel signed a multi-billion-dollar, multi-year agreement with Amazon to produce certain chips for Amazon Web Services’ (AWS) AI data centers.

The two tech giants will co-develop AWS’ next-gen AI fabric chips on Intel 18A, which signals a good start for Intel. Additionally, Intel is developing customized Xeon 6 server chips for AWS.

Regarding Intel’s plan to carve out its foundry business, the Anue report, citing comments from foreign analysts, states that the move could give Intel a better chance of attracting tech heavyweights such as Apple, Qualcomm, Broadcom, and even AMD.

Here is why: if the new company operates as an independent entity and has the right board members, the foundry business could progress more smoothly, the report suggests. The move should help alleviate potential customers’ concerns, but its effectiveness has yet to be proven through execution.

The report added that if Intel’s collaboration with Amazon goes well, Intel could potentially manufacture other Amazon chips in the future, such as the AWS Graviton processors and the Trainium AI training chips used for machine learning.

Intel has failed to attract a significant number of clients for its foundry business, with Microsoft being its largest customer to date, the report notes.

Two years ago, the struggling giant lost the contract to design and manufacture chips for Sony’s next-generation PlayStation 6, a loss that dealt a major blow to its efforts to establish its nascent foundry business.

In Intel’s own words, the new subsidiary structure will provide greater separation and independence for its external foundry customers and suppliers from Intel’s other divisions. Importantly, it also gives the company the flexibility to evaluate independent funding sources in the future and to optimize the capital structure of each business to maximize growth and shareholder value.


(Photo credit: Intel)

Please note that this article cites information from Anue and Intel.

2024-07-30

[News] Amazon Unveils Latest AI Chip, Performance up by 50%

According to Reuters, engineers at Amazon’s chip lab in Austin, Texas, recently tested highly confidential new servers. Per the Economic Times, Rami Sinno, director of engineering at Annapurna Labs, Amazon’s chip design unit under AWS, revealed that these new servers feature Amazon’s AI chips, which can compete with NVIDIA’s chips.

Amazon is reportedly developing these processors to reduce its reliance on costly NVIDIA chips; the in-house chips will power some of AWS’ AI cloud services.

Amazon expects to use its self-developed chips to enable customers to perform complex calculations and process large amounts of data at a lower cost. The company’s competitors, Microsoft and Alphabet, are also pursuing similar efforts.

Amazon is a relative latecomer to the AI chip field, but an industry leader in non-AI processors: its main non-AI chip, Graviton, has been in development for nearly a decade and is now in its fourth generation. Its two AI chips, Trainium and Inferentia, are newer designs.

David Brown, AWS’s Vice President of Compute and Networking, stated that in some cases these chips can deliver 40% to 50% higher performance than NVIDIA’s, at roughly half the cost of comparable NVIDIA chips.
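As a rough, illustrative calculation (not from the article, and assuming the quoted performance and cost ratios hold for a given workload), the minimal Python sketch below works out the price-performance advantage those figures would imply, roughly 2.8x to 3x the NVIDIA baseline:

    # Illustrative arithmetic only, based on the figures quoted above
    # ("40% to 50% higher" performance at "about half" the cost);
    # actual results depend on workload and pricing, which the article does not detail.

    nvidia_perf, nvidia_cost = 1.0, 1.0       # NVIDIA taken as the baseline
    aws_perf_low, aws_perf_high = 1.40, 1.50  # 40% to 50% higher performance
    aws_cost = 0.5 * nvidia_cost              # roughly half the cost

    # Performance per unit cost, relative to the NVIDIA baseline
    advantage_low = (aws_perf_low / aws_cost) / (nvidia_perf / nvidia_cost)    # 2.8x
    advantage_high = (aws_perf_high / aws_cost) / (nvidia_perf / nvidia_cost)  # 3.0x

    print(f"Implied price-performance advantage: {advantage_low:.1f}x to {advantage_high:.1f}x")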

AWS accounts for nearly 20% of Amazon’s total revenue. The unit’s revenue from January to March surged 17% from the same period last year, reaching USD 25 billion. AWS controls about one-third of the cloud computing market, with Microsoft’s Azure holding about 25%.

Amazon stated that it deployed 250,000 Graviton chips and 80,000 custom AI chips to handle the surge in platform activity during the recent Prime Day.


(Photo credit: Amazon)

Please note that this article cites information from Economic Daily and WeChat account DRAMeXchange.

2024-05-22

[News] Google to Add EUR 1 Billion for AI Business in Response to Strong CSP Demand

On May 20, a report by Reuters revealed that Google plans to invest an additional EUR 1 billion in its data center campus in Finland, a move aimed at expanding capacity and boosting its AI business growth in Europe.

The report notes that in recent years, many data centers have been established in Nordic countries thanks to the cool climate, tax incentives, and an ample supply of renewable energy. Finland’s wind power capacity has grown significantly over these years, up 75% to 5,677 megawatts by 2022, which can even push electricity prices down to negative values on particularly windy days.

Data center operators like Google have thus taken advantage of this renewable energy and have already signed long-term wind power purchase agreements in Finland.

Driven by the AI wave, cloud providers such as Microsoft, Google, Meta, and Amazon have an increasingly robust demand for AI servers and data centers.

According to an earlier TrendForce forecast of global CSP demand for high-end AI servers (those equipped with NVIDIA, AMD, or other high-end ASIC chips) in 2024, the four major U.S. CSPs, Microsoft, Google, AWS, and Meta, are expected to account for 20.2%, 16.6%, 16%, and 10.8% of global demand respectively. Together (20.2% + 16.6% + 16% + 10.8% = 63.6%), they are expected to make up more than 60% of the global market.


(Photo credit: Google)

Please note that this article cites information from WeChat account DRAMeXchange and Reuters.

2024-05-15

[News] Google Unveils 6th Generation TPU, Scheduled to Launch Later This Year

At the Google I/O 2024 developer conference on Tuesday, Google unveiled Trillium, its 6th-generation custom TPU, which is scheduled to hit the market later this year, according to a report by TechCrunch.

According to the information Google provided on its website, Trillium boasts a 4.7x increase in peak compute performance per chip compared to TPU v5e. Google has also doubled the High Bandwidth Memory (HBM) capacity and bandwidth, and doubled the Interchip Interconnect (ICI) bandwidth between chips.

Additionally, Trillium features the third-generation SparseCore, a dedicated accelerator for processing large embeddings, aimed at advanced ranking and recommendation workloads. Moreover, Trillium achieves 67% higher energy efficiency than TPU v5e.

Trillium can scale up to 256 TPUs within a single high-bandwidth, low-latency pod. It also incorporates multislice technology, allowing Google to interlink thousands of chips into a supercomputer backed by a data center network capable of processing petabits of data per second.

In addition to Google, major cloud players such as AWS, Meta, and Microsoft have also moved to develop their own AI chips.

In late 2023, Microsoft unveiled two custom-designed chips: the Microsoft Azure Maia AI Accelerator, optimized for AI tasks and generative AI, and the Microsoft Azure Cobalt CPU, an Arm-based processor tailored to run general-purpose compute workloads on the Microsoft Cloud. The former is reportedly manufactured using TSMC’s 5nm process.

In May 2023, Meta also unveiled the Meta Training and Inference Accelerator (MTIA) v1, its first-generation AI inference accelerator designed in-house with Meta’s AI workloads in mind.

AWS has also jumped into the AI chip market. In November 2023, it released Trainium2, a chip for training AI models.


(Photo credit: Google)

Please note that this article cites information from TechNews.

2023-11-30

[News] Amazon Unveils New AWS-Designed Chips, Boosting Orders for TSMC and Alchip

On November 28th, Amazon unveiled two AWS-designed chips: Graviton4, a CPU powering its AWS cloud services, and Trainium2, a second-generation AI chip tailored for large language models. Both boast substantial performance upgrades. With a positive market outlook, Amazon is intensifying its competition with Microsoft and Google for dominance in the AI cloud market. Demand for in-house chips is surging, leading to increased orders for key players such as the wafer foundry TSMC and the silicon design and production services company Alchip, as reported by UDN News.

According to reports, AWS CEO Adam Selipsky presented Graviton4, the fourth generation of AWS-designed custom CPUs, at AWS re:Invent 2023 in Las Vegas. It claims a 30% improvement in computing performance over the current Graviton3 and a 75% increase in memory bandwidth. Servers equipped with the processor are slated to go live in the coming months.

Trainium2, the second-generation chip for training AI systems, boasts computing speeds three times faster than its predecessor and double the energy efficiency. Selipsky announced that AWS will begin offering the new training chip next year.
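Taken together, and as a rough illustration rather than anything stated in the article, those two figures imply that a given training job would finish in about a third of the time and consume roughly half the energy, while the chip’s instantaneous power draw rises by about 1.5x. A minimal Python sketch of that arithmetic, assuming the quoted ratios apply uniformly to a workload:

    # Illustrative arithmetic only, based on the two figures quoted above
    # ("three times faster" and "doubled energy efficiency");
    # actual behavior depends on the workload and is not detailed in the article.

    speedup = 3.0          # compute throughput vs. first-gen Trainium
    efficiency_gain = 2.0  # performance per watt vs. first-gen Trainium

    relative_time = 1 / speedup                 # ~0.33x: the same job finishes in a third of the time
    relative_energy = 1 / efficiency_gain       # ~0.50x: the same work uses half the energy
    relative_power = speedup / efficiency_gain  # ~1.50x: higher instantaneous power draw

    print(f"Time per job:   {relative_time:.2f}x")
    print(f"Energy per job: {relative_energy:.2f}x")
    print(f"Power draw:     {relative_power:.2f}x")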

AWS is accelerating the development of chips, maintaining its lead over Microsoft Azure and Google Cloud platforms. Amazon reports that over 50,000 AWS customers are currently utilizing Graviton chips.

Notably, Amazon’s in-house chip development relies heavily on the Taiwanese supply chain, namely TSMC and Alchip. Alchip primarily provides application-specific integrated circuit (ASIC) design services for Amazon’s chips, while TSMC manufactures them using advanced processes.

TSMC consistently refrains from commenting on individual customers’ products. Analysts estimate that TSMC has recently, if indirectly, secured numerous orders from cloud service providers (CSPs), mainly through ASIC design service providers assisting CSP giants in launching new in-house AI chips. This is expected to contribute significantly to high utilization of TSMC’s 5nm process family.

In recent years, TSMC has introduced successive technologies such as N4, N4P, N4X, and N5A to strengthen its 5nm family. N4P, announced at the 2023 Technology Symposium, is projected to drive increased demand from 2024 onwards, mainly from AI, networking, and automotive products.

(Image: Amazon)
