ChatGPT


2023-08-22

[News] Taiwan Server Supply Chain: Wistron and GIGABYTE to Benefit from the UK’s AI Chip Purchase

Following Saudi Arabia’s $13 billion investment, the UK government is dedicating £100 million (about $130 million) to acquire thousands of NVIDIA AI chips, aiming to establish a strong global AI foothold. Potential beneficiaries include Wistron, GIGABYTE, Asia Vital Components, and Supermicro.

Projections foresee a $150 billion AI application opportunity within 3-5 years, propelling the semiconductor market to $1 trillion by 2030. Taiwan covers the full industry value chain. Players like TSMC, Alchip, GUC, Auras, Asia Vital Components, SUNON, EMC, Unimicron, Delta, and Lite-On are poised to gain.

Reports suggest the UK is in advanced talks with NVIDIA for up to 5,000 GPU chips, but models remain undisclosed. The UK government recently engaged with chip giants NVIDIA, Supermicro, Intel, and others through the UK Research and Innovation (UKRI) to swiftly acquire necessary resources for Prime Minister Sunak’s AI development initiative. Critics question the adequacy of the £100 million investment in NVIDIA chips, urging Chancellor Jeremy Hunt to allocate more funds to support the AI project.

NVIDIA’s high-performance GPU chips have gained widespread use in AI fields. Notably, the AI chatbot ChatGPT relies heavily on NVIDIA chips to meet its substantial computational demands. The latest iteration of the AI language model, GPT-4, required a whopping 25,000 NVIDIA chips for training. Consequently, experts contend that the quantity of chips procured by the UK government is notably insufficient.

Of the UK’s £1 billion investment in supercomputing and AI, £900 million is for traditional supercomputers, leaving £50 million for AI chip procurement. The budget recently increased from £70 million to £100 million due to global chip demand.

Saudi Arabia and the UAE also ordered thousands of NVIDIA AI chips, and Saudi Arabia’s order includes at least 3,000 of the latest H100 chips. Prime Minister Sunak’s AI initiative begins next summer, aiming for a UK AI chatbot like ChatGPT and AI tools for healthcare and public services.

As emerging AI applications proliferate, countries are actively competing to bolster AI data centers, turning the acquisition of AI-related chips into an alternative arms race. Compal said it anticipates significant growth in the AI server sector in 2024, primarily within hyperscale data centers, with a focus on European expansion in the first half of the year and a shift toward the US market in the latter half.

2023-06-07

TSMC Lowers Full-Year Semiconductor Growth Forecast, But Advanced Packaging Demand Outstrips Supply

Semiconductor manufacturing leader TSMC held its annual shareholder meeting on June 6, addressing issues including advanced process development, revenue, and capital expenditure. TSMC’s Chairman Mark Liu and President C.C. Wei answered a series of questions. The key points from the industry are summarized as follows:

2023 Capital Expenditure Leaning towards $32 Billion

For TSMC’s Q2 and full-year outlook for this year, the consolidated revenue forecast is between $15.2 and $16 billion, a decrease of 5%-10% from the first quarter. Gross profit margin is expected to range between 52%-54%, and operating profit margin between 39.5%-41.5%. Chairman Mark Liu revealed that this year’s capital expenditure is expected to lean more towards $32 billion.

TSMC’s President C.C. Wei lowered the 2023 growth forecast for the overall semiconductor market (excluding memory), now expecting a mid-single-digit percentage decrease. Revenue in the wafer manufacturing industry is expected to fall by a high single-digit percentage. At this stage, TSMC’s overall revenue for 2023 is expected to decline by a low-to-mid single-digit percentage, sliding approximately 1%–6%.

Advanced Process N4P to be Mass Produced this Year

TSMC’s total R&D expenditure for 2022 reached $5.47 billion, which expanded its technical lead and differentiation. The 5-nanometer technology family entered its third year of mass production, contributing 26% to the revenue. The N4 process began mass production in 2022, with plans to introduce the N4P and N4X processes. The N4P process technology R&D is progressing smoothly and is expected to be mass-produced this year. The company’s first high-performance computing (HPC) technology, N4X, will finalize product design for customers this year.

Advanced Packaging Demand Far Exceeds Capacity

Due to the generative AI wave sparked by ChatGPT, demand for TSMC’s advanced packaging has surged and now far exceeds existing capacity, compelling the company to expand production as quickly as possible. Chairman Mark Liu stated that current R&D investment stands on two legs: 3D IC (chip stacking) and advanced packaging.

At present, three-quarters of TSMC’s R&D expenditure is used for advanced processes, and one quarter for mature and special processes, with advanced packaging falling under mature and special processes.

(Photo credit: TSMC)

2023-05-22

Beyond the SoC Paradigm: Where Are Next-Gen Mobile AI Chips Going to Land?

The excitement surrounding ChatGPT has sparked a new era in generative AI. This fresh technological whirlwind is revolutionizing everything, from cloud-based AI servers all the way down to edge-computing in smartphones.

Given that generative AI has enormous potential to foster new applications and boost user productivity, smartphones have unsurprisingly become a crucial vehicle for AI tech. Even though the computational power of an end device isn’t on par with the cloud, it has the double benefit of reducing the overall cost of computation and protecting user privacy. This is primarily why smartphone OEMs started using AI chips to explore and implement new features a few years ago.

However, Oppo’s recent decision to shut down its chip design subsidiary, Zheku, has cast doubt on the future of smartphone OEMs’ self-developed chips, bringing the smartphone AI chip market into focus.

Pressing Need to Speed Up AI Chip Iterations

The industry’s current approach to running generative AI models on end devices is two-pronged: software efforts focus on shrinking model sizes to lessen the burden and energy consumption of chips, while the hardware side is all about increasing computational power and optimizing energy use through process shrinkage and architectural upgrades.

IC design houses, like Qualcomm with its Snapdragon 8 Gen 2, are now hurrying to develop SoC products capable of running these generative AI base models.
Here’s the tricky part, though: models are evolving at a pace far exceeding the SoC development cycle, with major updates like new GPT versions arriving roughly every six months. This gap between hardware iterations and new AI model advancements may only get wider, making the rapid expansion of computational requirements the major pain point that hardware solution providers need to address.

Top-Tier OEMs Pioneering Add-on AI Accelerators

It’s clear that in this race for AI computational power, the past reliance on SoCs is being challenged. Top-tier smartphone OEMs are no longer merely depending on standard products from SoC suppliers. Instead, they’re aggressively adopting AI accelerator chips to fill the computational gap.

Two approaches, integrated and add-on AI accelerators, first appeared in 2017:

  • Integrated: This strategy is represented by Huawei’s Kirin 970 and Apple’s A11 Bionic, which incorporated an AI engine within the SoC.
  • Add-on: First implemented by Google’s Pixel 2, which paired a custom Pixel Visual Core chip with the Snapdragon 835. It wasn’t until the 2021 Pixel 6 series, which introduced Google’s self-developed Tensor SoC, that the acceleration unit was integrated directly into the Tensor.

Clearly, OEMs with self-developed SoC capabilities usually embed their models into AI accelerators at the design stage. This hardware-software synergy supplies the required computing power for specific AI scenarios.

New Strategic Models on the Rise

For OEMs without self-development capabilities, the hefty cost of SoC development keeps them reliant on chip manufacturers’ SoC iterations. Yet, they’re also applying new strategies within the supply chain to keep pace with swift changes.

Here’s the interesting part: brands are leveraging simpler specialized chips to boost AI-enabled applications, making standalone ICs such as ISPs (image signal processors) pivotal for new photography and display features. Meanwhile, we’re also seeing potential advancements in productivity tools, from voice assistants to photo editing, where small-scale ASICs are being seriously considered to fulfill computational demands.

Judging from Xiaomi’s collaboration with Altek and Vivo’s joint effort with Novatek to develop ISPs, the future looks bright for ASIC development, opening up opportunities for small-scale IC design and IP service providers.

Responding to the trend, SoC leader MediaTek is embracing an open 5G architecture strategy for market expansion through licensing and custom services. However, there’s speculation about OEMs possibly replacing MediaTek’s standard IP with self-developed ones for deeper product differentiation.

Looking at this, it’s clear that the battle of AI chips continues, with no single winning strategy yet for speeding up smartphone AI chip iteration.

Considering the substantial resources required for chip development and the saturation of the smartphone market, maintaining chip-related strategies adds a layer of uncertainty for OEMs. With Oppo’s move to discontinue its chip R&D, other brands like Vivo and Xiaomi are likely reconsidering their game plans. The future, therefore, warrants close watch.

Read more:

AI Sparks a Revolution Up In the Cloud

2023-04-25

AI Sparks a Revolution Up In the Cloud

OpenAI’s ChatGPT, Microsoft’s Copilot, Google’s Bard, and Elon Musk’s latest, TruthGPT – what will be the next buzzword in AI? In just under six months, the AI competition has heated up, stirring ripples in the once-calm AI server market as AI-generated content (AIGC) models take center stage.

The unprecedented convenience brought by AIGC has attracted a massive number of users, with OpenAI’s mainstream model, GPT-3, receiving up to 25 million daily visits, often resulting in server overload and disconnection issues.

As these models evolve, their training parameters and data volumes keep growing, making computational power even scarcer. OpenAI has reluctantly adopted measures such as paid access and traffic restrictions to stabilize server load.

High-end Cloud Computing is gaining momentum

According to TrendForce, AI servers currently have a penetration rate of merely 1% in global data centers, far from sufficient to cope with the surge in data demand from the usage side. Therefore, besides optimizing software to reduce computational load, increasing the number of high-end AI servers will be another crucial solution.

Take GPT-3 for instance: the model requires at least 4,750 AI servers with 8 GPUs each, and every similarly large language model like ChatGPT will need 3,125 to 5,000 units. Taking ChatGPT and Microsoft’s other applications as a whole, the demand for AI servers is estimated to reach some 25,000 units to meet basic computing power needs.
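The arithmetic behind these estimates is straightforward. As a rough sketch, using only the server counts and the 8-GPU-per-server figure quoted above (the article’s estimates, not vendor specifications):

```python
# Back-of-the-envelope conversion between server counts and GPU counts,
# based on the figures quoted in the article.

GPUS_PER_SERVER = 8  # per the article: 8 GPUs in each AI server

def servers_needed(total_gpus: int, gpus_per_server: int = GPUS_PER_SERVER) -> int:
    """Round up to whole servers (ceiling division)."""
    return -(-total_gpus // gpus_per_server)

# GPT-3 baseline: at least 4,750 servers
gpt3_gpus = 4_750 * GPUS_PER_SERVER
print(f"GPT-3 baseline: 4,750 servers = {gpt3_gpus:,} GPUs")

# A comparable large language model: 3,125 to 5,000 servers
low_gpus, high_gpus = 3_125 * GPUS_PER_SERVER, 5_000 * GPUS_PER_SERVER
print(f"Comparable LLM range: {low_gpus:,} to {high_gpus:,} GPUs")
```

The ceiling division simply ensures any partial server rounds up to a whole unit when converting a GPU requirement back into servers.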

The emerging applications of AIGC and their vast commercial potential have revealed the technical roadmap moving forward, while also shedding light on bottlenecks in the supply chain.

The down-to-earth problem: cost

Compared to general-purpose servers that use CPUs as their main computational power, AI servers rely heavily on GPUs; NVIDIA’s DGX A100 and H100, with computational performance of up to 5 PetaFLOPS, serve as the primary AI server computing platforms. Given that GPUs account for over 70% of server costs, the growing adoption of high-end GPUs has made the architecture far more expensive.
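To make the 70% cost share concrete, here is an illustrative calculation; note that the per-GPU price below is a hypothetical placeholder for the sake of the arithmetic, not a figure from the article or from NVIDIA.

```python
# Illustrative only: the implied total server cost if GPUs make up 70%
# of the bill of materials. The $25,000 per-GPU price is a hypothetical
# placeholder, chosen purely to demonstrate the calculation.

gpu_unit_price = 25_000   # hypothetical high-end GPU price (USD)
gpus_per_server = 8
gpu_cost_share = 0.70     # per the article: GPUs exceed 70% of server cost

gpu_cost = gpu_unit_price * gpus_per_server    # GPU subtotal
total_server_cost = gpu_cost / gpu_cost_share  # implied full server cost

print(f"GPU subtotal: ${gpu_cost:,}")
print(f"Implied total server cost: ${total_server_cost:,.0f}")
```

The point of the sketch is the leverage: every dollar added to GPU pricing moves the whole server cost by roughly 1/0.7, which is why high-end GPU adoption dominates AI server economics.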

Moreover, a significant amount of data transmission occurs during operation, driving up demand for DDR5 and High Bandwidth Memory (HBM). The high power consumption during operation also drives upgrades to components such as PCBs and cooling systems, further raising overall cost.

Not to mention the technical hurdles posed by the complex design architecture – for example, a new approach for heterogeneous computing architecture is urgently required to enhance the overall computing efficiency.

The high cost and complexity of AI servers have inevitably limited their development to large manufacturers. Two leading companies, HPE and Dell, have taken different strategies to enter the market:

  • HPE has continuously strengthened its cooperation with Google and moved to offer all of its products as a service by 2022. It also acquired the startup Pachyderm in January 2023 to launch cloud-based supercomputing services, making it easier to train and develop large models.
  • In March 2023, Dell launched its latest PowerEdge series servers, which offer options equipped with NVIDIA H100 or A100 Tensor Core GPUs and NVIDIA AI Enterprise. They use 4th-generation Intel Xeon Scalable processors and introduce Dell’s Smart Flow software, catering to demands such as data centers, large public clouds, AI, and edge computing.

With the booming market for AIGC applications, we seem to be one step closer to a future metaverse centered around fully virtualized content. However, it remains unclear whether the hardware infrastructure can keep up with the surge in demand. This persistent challenge will continue to test the capabilities of cloud server manufacturers to balance cost and performance.

(Photo credit: Google)

