2024-11-12

[News] Samsung Reportedly Starts Developing Custom HBM4 for Big CSPs, Eyeing Mass Production by 2025

As Samsung implied earlier that it plans to collaborate with TSMC on HBM4, the memory giant seems to have taken a key step forward. A report by South Korean media outlet Maeil Business Newspaper discloses that the company has already begun developing “Custom HBM4,” a next-gen high-bandwidth memory tailored specifically for Microsoft and Meta.

Samsung aims to begin mass production of HBM4 promptly upon completing development by the end of 2025, the report suggests.

Industry sources cited by the report state that Samsung is establishing a dedicated production line for HBM4, which is now in the “pilot production” phase, where small-scale trial manufacturing takes place ahead of mass production.

Citing sources familiar with the situation, the report further notes that Samsung is actively working on HBM4 designed for Microsoft and Meta. Both tech heavyweights have their own AI chips, namely, Microsoft’s Maia 100 and Meta’s Artemis.

As major tech companies make strides in scaling up their own AI data centers, there is strong demand to cut costs associated with chip purchases. Therefore, many design and utilize their own AI accelerators while also buying AI chips from NVIDIA and AMD, according to Maeil.

Samsung, with its memory division and an LSI division capable of designing computational chips, is ideally positioned as a top partner for these major tech companies, according to Maeil.

Though the specifics of the HBM4 product that Samsung will supply to these companies remain undisclosed, Samsung did reveal the general specifications in February, according to Maeil.

Its data transmission speed, or bandwidth, reportedly reaches 2 terabytes per second (TB/s), marking a 66% increase over HBM3E, while its capacity has risen 33% to 48 gigabytes (GB), up from 36GB.
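As a quick sanity check on the figures above, the stated percentages can be worked backward (a minimal sketch; the HBM3E bandwidth baseline is inferred from the reported 66% increase, not stated directly in the report):

```python
# Back out the implied HBM3E baselines from the HBM4 figures cited above.
hbm4_bandwidth_tb_s = 2.0   # stated HBM4 bandwidth, TB/s
hbm4_capacity_gb = 48       # stated HBM4 capacity, GB
hbm3e_capacity_gb = 36      # stated HBM3E capacity, GB

# A 66% bandwidth increase implies an HBM3E baseline of roughly 1.2 TB/s.
implied_hbm3e_bandwidth = hbm4_bandwidth_tb_s / 1.66
print(f"implied HBM3E bandwidth: {implied_hbm3e_bandwidth:.2f} TB/s")

# Capacity growth: 48 GB over 36 GB is a 33% increase, matching the report.
capacity_growth = (hbm4_capacity_gb / hbm3e_capacity_gb - 1) * 100
print(f"capacity growth: {capacity_growth:.0f}%")
```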

The report explains that unlike previous memory products, HBM4 is also referred to as “Compute-in-Memory (CIM)” due to its advanced capabilities, which go beyond standard memory functions to include specialized operations aligned with customer requirements.

Up to HBM3, the focus was said to be mainly on managing heat generation and maximizing speed, according to a semiconductor industry official cited by the report. With HBM4, however, the emphasis will shift toward integrating AI computation capabilities (NPU) and customer-specific data processing functions (IP) to meet evolving needs, the report says.

(Photo credit: Samsung)

Please note that this article cites information from Maeil Business Newspaper.

2024-11-04

[News] Big Four CSPs Capital Expenditures Surge, Driving Demand for Taiwanese AI Supply Chain

The latest quarterly reports from the big four cloud service providers (CSPs) have been released in succession. According to a report from Commercial Times, not only has there been significant revenue growth, but capital expenditures for these CSPs have also surged compared to the same period last year, underscoring the ongoing momentum in AI investments.

Industry sources cited by Commercial Times estimate that capital expenditures by CSPs will surpass USD 240 billion by 2025, reflecting an annual increase of over 10%.

The report indicated that the increase in capital expenditures by CSPs is expected to boost demand for Taiwanese companies in the supply chain during the fourth quarter of this year and into next year, benefiting companies such as Quanta, Wistron, Wiwynn, and Inventec.

According to the report, Microsoft’s capital expenditures for the first quarter of fiscal year 2025 (the third quarter of 2024) reached USD 20 billion, up from USD 19 billion in the previous quarter and reflecting a 78% increase year-on-year. Microsoft noted that demand for AI now exceeds available production capacity, and it plans to continue increasing investment, expanding data center construction, and promoting AI services.

The report indicated that the market estimates Microsoft’s total capital expenditures for fiscal year 2025 will reach USD 80 billion, an increase of over USD 30 billion compared to the previous year.

Google’s capital expenditures in the third quarter reached USD 13.1 billion, an annual increase of 62%. This puts total capital expenditures for 2024 on track to reach USD 51.4 billion, an annual increase of 59%, with further increases expected next year, according to the report.

Amazon’s capital expenditures for the third quarter reached USD 22.62 billion, reflecting an 81% year-on-year increase. Amazon’s total capital expenditures so far this year have reached USD 51.9 billion, and full-year investments are projected to be as high as USD 75 billion. Furthermore, capital expenditures for next year are expected to be even higher, the report indicated.

According to the report, Meta’s capital expenditures in the third quarter were USD 9.2 billion, an annual increase of 36%. Moreover, Meta raised its capital expenditure forecast for fiscal 2024 to around USD 40 billion. The report indicated that its capital expenditures will continue to grow in 2025.
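Taken together, the quarterly figures and year-on-year growth rates above imply the following year-ago baselines (a rough sketch for illustration; the year-ago quarterly figures are derived from the article’s numbers, not reported directly):

```python
# Derive the implied year-ago quarterly capex for each CSP from the
# Q3 2024 figure and year-on-year growth rate cited above.
capex_q3_usd_bn = {  # CSP: (Q3 2024 capex in USD billions, YoY growth)
    "Microsoft": (20.0, 0.78),
    "Google":    (13.1, 0.62),
    "Amazon":    (22.62, 0.81),
    "Meta":      (9.2, 0.36),
}

for csp, (q3, yoy) in capex_q3_usd_bn.items():
    year_ago = q3 / (1 + yoy)  # invert the YoY growth to get the baseline
    print(f"{csp}: USD {q3:.2f}B in Q3 2024, "
          f"implying ~USD {year_ago:.1f}B a year earlier")
```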

The report highlighted that AI business opportunities will continue to benefit Taiwan’s major server ODMs. Companies such as Quanta, Wistron, Wiwynn, Inventec, and Foxconn all reported strong results in the third quarter and are optimistic about the fourth quarter and the year ahead.

According to the report, Quanta’s third-quarter revenue reached a record high, driven by strong demand for AI server orders. Quanta Chairman Barry Lam also expressed an optimistic outlook on the future of AI, noting that as large-scale CSPs develop generative AI applications, the scale of AI data centers is continually expanding, leading to a substantial increase in orders.

After demonstrating strong growth momentum in the first half of the year, Wistron has benefited from urgent orders in the second half. Additionally, some B200 series products utilizing the next-generation Blackwell platform are scheduled to be shipped after the fourth quarter. The report indicated that Wistron is quite optimistic about its performance for this quarter and next year.

Inventec plans to ship servers to customers primarily from US-based CSPs in the second half of the year. The report highlighted that orders from Google have increased as the company expands its purchase of AI servers based on its own TPU architecture, in addition to acquiring general-purpose servers for new platforms.

(Photo credit: Microsoft)

Please note that this article cites information from Commercial Times.

2024-09-11

[News] NVIDIA Sued for Patent Infringement, Asked to Halt Sales of Blackwell Architecture GPUs

According to a report from The Register, DPU developer Xockets recently filed a lawsuit, accusing AI chip giant NVIDIA, Microsoft, and intellectual property risk management company RPX of colluding to avoid paying Xockets the fees it is owed, violating federal antitrust laws, and intentionally infringing on its patents.

The report states that in addition to seeking monetary compensation, Xockets is also requesting an injunction. If granted, this injunction would prevent NVIDIA from selling its upcoming Blackwell architecture GPUs.

Per Reuters’ report, Xockets, founded in 2012, claims that its invention, the Data Processing Unit (DPU), plays a critical role in some of NVIDIA’s and Microsoft’s systems. The company states that its technology helps offload and accelerate tasks that would otherwise place a heavy burden on server processors.

Reportedly, Xockets founder Parin Dalal began filing a series of DPU technology patents in 2012. These patents describe architectures for offloading, accelerating, and isolating data-intensive computational operations from server processors.

Xockets claims that its DPU-related patents cover various applications, including cloud computing architectures, machine learning, security, network overlays, and stream data processing. Xockets alleges that Microsoft and Mellanox, which was acquired by NVIDIA in 2020, have infringed on these patents.

In a recent statement, Xockets claimed that NVIDIA has utilized DPU technology patented by Xockets, allowing NVIDIA to monopolize the AI server market using its GPUs. Meanwhile, Microsoft has allegedly monopolized the AI platform market using NVIDIA GPUs.

Xockets further claimed that it has made efforts to engage in sincere negotiations with NVIDIA and Microsoft, but these attempts have been rejected.

Xockets’ lawsuit reveals that it demonstrated the relevant technology to Microsoft in 2016, and the technology was subsequently adopted by Mellanox within the same year for cloud computing offloads used by Redmond and other clients.

Additionally, NVIDIA’s ConnectX smartNIC, BlueField DPU, and NVLink switch, which are crucial for extending AI training and inference deployments across large GPU clusters, are said to infringe on Xockets’ patents.

Regarding this matter, NVIDIA has declined to comment, while Xockets’ spokesperson has also not provided any additional explanation.

The report highlights that Microsoft and NVIDIA may not be Xockets’ only targets but are at least the most profitable ones. Other companies, such as Broadcom, Intel, AMD, Marvell, Napatech, and Amazon, are also actively developing products similar to NVIDIA’s ConnectX, BlueField, and NVLink.

Regarding the lawsuit, the judge overseeing the case has approved a preliminary injunction hearing to be held on September 19.

(Photo credit: Xockets)

Please note that this article cites information from The Register.

2024-09-02

[News] NVIDIA and Apple May Follow Microsoft’s Footsteps in the New Round of OpenAI Investment

The Wall Street Journal reported that OpenAI is in talks for a new round of funding, with tech giants Apple and NVIDIA both interested in investing in the AI research company.

It’s reported that this investment will be part of OpenAI’s new round of financing, which would bring its valuation to over USD 100 billion.

Sources indicated that OpenAI plans to raise billions of dollars, and venture capital firm Thrive Capital will lead this round of funding with a USD 1 billion investment. Microsoft, OpenAI’s largest shareholder, will also be a part of this round.

Reportedly, sources have revealed that Apple is currently in talks with OpenAI over a potential investment, while NVIDIA has also discussed joining the latest round of funding, reportedly considering an investment of USD 100 million.

Although it is not yet clear how much Apple and Microsoft plan to invest, the point is that the three most valuable tech giants in the world would all become shareholders of OpenAI if these negotiations succeed.

In a memo on Wednesday, OpenAI’s CFO Sarah Friar stated that the company is seeking new financing but did not disclose specific details. Friar mentioned that OpenAI would leverage this funding to strengthen computing power and cover other operational expenses.

With the rise of the AI industry, Microsoft, Apple, and NVIDIA have also accelerated their pace in developing AI technologies.

Microsoft has invested USD 13 billion in OpenAI since 2019, holding a stake of 49% in this company. Apple, at its Worldwide Developers Conference (WWDC) in June this year, launched the Apple Intelligence system and announced a partnership with OpenAI.

As for NVIDIA, it has long collaborated closely with OpenAI and has been highly active in making investments in this field. Its investment arm, NVentures, has invested in several AI companies since 2023.

(Photo credit: OpenAI)

Please note that this article cites information from the Wall Street Journal and WeChat account DRAMeXchange.

2024-08-08

[News] Intel Reportedly Rejected OpenAI Investment, Missing Out on AI Opportunity

According to a report from Reuters citing sources on August 7th, American chip giant Intel had an opportunity to invest in OpenAI several years ago, but the deal was ultimately rejected by company executives, resulting in a missed opportunity.

Reportedly, Intel and OpenAI discussed collaboration several times between 2017 and 2018. At that time, OpenAI was still a nascent nonprofit research organization focused on developing relatively unknown generative AI technologies.

The discussions included Intel potentially purchasing a 15% stake in OpenAI for USD 1 billion in cash and possibly producing hardware for OpenAI at cost in exchange for an additional 15% stake.

Sources cited by the report further reveal that OpenAI was very interested in Intel’s investment, primarily because it would reduce the company’s reliance on NVIDIA chips and enable OpenAI to build its own infrastructure.

However, Intel ultimately rejected the deal. One reason cited by the report was that then-CEO Bob Swan did not believe generative AI could be commercialized in the short term and was concerned that Intel’s investment would not yield returns. Another reason was that Intel’s data center division was unwilling to produce hardware for OpenAI at cost.

After Intel’s refusal, Microsoft began investing in OpenAI in 2019. In 2022, OpenAI launched the chatbot ChatGPT, reportedly sparking a global AI boom; the company has since achieved a valuation of USD 80 billion. Per data from CB Insights, this makes OpenAI the third most valuable tech startup worldwide, behind only ByteDance and SpaceX.

Neither Intel nor OpenAI has commented on these reports.

As per a previous report from The Atlantic, Intel had earlier declined to produce processors for Apple’s iPhone, a misstep that caused it to miss the opportunity to transition into the mobile market.

The news from Reuters this time further suggests that Intel has made a similar mistake in the AI domain.

(Photo credit: Intel)

Please note that this article cites information from Reuters and The Atlantic.