News
As AI applications become more widespread, there is an urgent need to improve energy efficiency. Traditional AI processing is notoriously power-hungry because of the constant transfer of data between logic and memory. However, according to reports by Tom’s Hardware and Innovation News Network, researchers in the U.S. may have come up with a solution: computational random-access memory (CRAM), which is said to reduce AI energy consumption by a factor of 1,000 or more.
According to the reports, researchers at the University of Minnesota, building on more than 20 years of research, have developed a new memory technology that can significantly reduce energy consumption in AI applications.
Citing the research, Tom’s Hardware explains that in current AI computing, data is frequently transferred between processing components (logic) and storage (memory). This constant back-and-forth movement of information can consume up to 200 times more energy than the actual computation.
With CRAM, however, data can be processed entirely within the memory array without ever leaving the grid where it is stored. Computations are performed directly within the memory cells, eliminating the slow and energy-intensive data transfers common in traditional architectures.
According to Innovation News Network, machine learning inference accelerators based on CRAM could achieve energy savings of up to 1,000 times, with some applications realizing reductions of 2,500 and 1,700 times compared to conventional methods.
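To put these figures in rough perspective, here is a minimal back-of-envelope sketch in Python. It assumes only the roughly 200-to-1 data-movement-to-computation energy ratio quoted above; the unit energy values are illustrative placeholders, not numbers from the cited research.

```python
# Back-of-envelope sketch (not from the cited papers): if shuttling an operand
# between memory and logic costs ~200x the energy of the computation itself,
# then keeping the computation inside the memory array removes most of the
# per-operation energy. The absolute values below are arbitrary units chosen
# only to show the arithmetic.

E_COMPUTE = 1.0                       # energy of one arithmetic operation (arbitrary units)
E_TRANSFER = 200.0 * E_COMPUTE        # energy to move the data to/from memory (per the reports)

conventional = E_COMPUTE + E_TRANSFER  # logic operation plus data movement
in_memory = E_COMPUTE                  # CRAM: the computation stays in the array

print(f"Energy per operation, conventional: {conventional:.0f}")
print(f"Energy per operation, in-memory:    {in_memory:.0f}")
print(f"Reduction factor: ~{conventional / in_memory:.0f}x")

# The larger factors quoted above (1,000x to 2,500x) come from application-level
# benchmarks reported for CRAM-based inference accelerators, not from this
# simple per-operation accounting.
```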
The reports further note that the patented technology builds on Magnetic Tunnel Junctions (MTJs), nanostructured devices used in hard drives, sensors, and various other microelectronic systems, including Magnetic Random Access Memory (MRAM).
It is worth noting that among Taiwanese companies, NOR flash memory maker Macronix may be the one that has made the most progress in this field. According to a report by the Economic Daily, Macronix has been collaborating with IBM on phase-change memory technology for over a decade, with AI applications as the main focus. Currently, Macronix is IBM’s sole partner for phase-change memory.
The report notes that the joint development program between Macronix and IBM is organized into three-year phases; at the end of each phase, the two companies decide whether to sign a new agreement depending on the circumstances at the time.
(Photo credit: npj Unconventional Computing)
News
TSMC has achieved a breakthrough in next-generation MRAM technology, collaborating with the Industrial Technology Research Institute (ITRI) to develop a spin-orbit-torque magnetic random-access memory (SOT-MRAM) array chip.
This SOT-MRAM array chip features an innovative computing-in-memory architecture and consumes merely one percent of the power of a spin-transfer-torque magnetic random-access memory (STT-MRAM) product.
According to a report by the Economic Daily News, industry sources suggest that with the advent of the AI and 5G era, applications such as autonomous driving, precise medical diagnostics, and satellite image recognition require a new generation of memory that is faster, more stable, and consumes less power. MRAM, which uses magnetic materials of the kind commonly found in hard drives, meets these demands, attracting major players like Samsung, Intel, and TSMC to invest in its research and development.
In the past, MRAM was mainly used in automotive and base-station applications. Owing to the characteristics of its architecture, however, it was difficult to balance data retention, write endurance, and write speed. A few years ago, a new architecture, STT-MRAM, emerged, addressing these challenges and entering commercialization.
TSMC has successfully developed MRAM product lines on its 22-nanometer and 16/12-nanometer processes and has secured orders in markets such as memory and automotive, seizing the MRAM business opportunity.
Building on this success, TSMC has now collaborated with ITRI to create an SOT-MRAM array chip complemented by an innovative computing architecture.
Their collaborative efforts have resulted in a research paper on this microelectronic component, which was jointly presented at the 2023 IEEE International Electron Devices Meeting (IEDM 2023), underscoring the cutting-edge nature of their findings and their pivotal role in advancing next-generation memory technologies.
Dr. Shih-Chieh Chang, General Director of Electronic and Optoelectronic System Research Laboratories at ITRI, highlighted the collaborative achievements of both organizations.
“Following the co-authored papers presented at the Symposium on VLSI Technology and Circuits last year, we have further co-developed an SOT-MRAM unit cell,” said Chang. “This unit cell achieves low power consumption and high-speed operation simultaneously, reaching speeds as fast as 10 nanoseconds, and its overall computing performance can be further enhanced when integrated with computing-in-memory circuit design. Looking ahead, this technology holds potential for applications in high-performance computing (HPC), artificial intelligence (AI), automotive chips, and more.”
(Image: ITRI)