Dec 22, 2023

Novel Transistor-Free Architecture

A research team from the University of Pennsylvania, Sandia National Laboratories, and Brookhaven National Laboratory has unveiled a new computing architecture, based on the principle of compute-in-memory (CIM), that is entirely transistor-free and could prove considerably more efficient for artificial intelligence (AI) workloads, including AI at the edge.

"Even when used in a compute-in-memory architecture, transistors compromise the access time of data," project co-lead Deep Jariwala, assistant processor at the University of Pennsylvania's Electrical and Systems Engineering (ESE) department, explains of the team's decision to move away from the current standard building blocks of modern computers. "They require a lot of wiring in the overall circuitry of a chip and thus use time, space and energy in excess of what we would want for AI applications. The beauty of our transistor-free design is that it is simple, small, and quick, and it requires very little energy."

The team's architecture builds on the established principle of compute-in-memory, in which selected tasks are carried out directly where the data is held, without the usual shuffling required to transfer it to the CPU, GPU, or accelerator, process it, and return it to system memory. CIM removes a major bottleneck and dramatically boosts the efficiency of the system, at least for selected workloads.
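To make that contrast concrete, here is a minimal Python sketch of the general CIM idea, not the team's actual design: a toy "memory array" holds a weight matrix and performs a multiply-accumulate in place, while a conventional flow copies the data out to a separate processing step and writes the result back, crossing the memory bus each time. The names `CIMArray` and `von_neumann_mac` are invented for illustration.

```python
import numpy as np

class CIMArray:
    """Toy compute-in-memory array: the weights stay put and compute
    happens where they are stored. In a real CIM crossbar the
    multiply-accumulate is analog (currents summing along columns);
    here it is just a matrix-vector product."""

    def __init__(self, weights: np.ndarray):
        self.weights = weights  # stored once, never moved

    def mac(self, inputs: np.ndarray) -> np.ndarray:
        # Multiply-accumulate performed where the data is held.
        return self.weights @ inputs

def von_neumann_mac(memory: dict, inputs: np.ndarray) -> np.ndarray:
    # Conventional flow: fetch the weights from memory, compute in a
    # separate unit, write the result back -- every step crosses the bus.
    weights = memory["weights"].copy()  # memory -> processor transfer
    result = weights @ inputs           # compute in the processor
    memory["result"] = result.copy()    # processor -> memory transfer
    return result

rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8))
x = rng.standard_normal(8)

cim = CIMArray(w)
assert np.allclose(cim.mac(x), von_neumann_mac({"weights": w}, x))
```

Both paths compute the same answer; the efficiency argument is entirely about where the computation happens and how much data has to move to get there.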

What the team has created goes a stage further, however: it not only does away with transistors but also switches to a novel semiconductor material, scandium-alloyed aluminum nitride (AlScN), which exhibits ferroelectric switching behavior, physically switching considerably faster than the materials traditionally used for non-volatile memory devices.
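As a rough illustration of what ferroelectric switching buys a memory cell, here is a hedged toy model in Python rather than a physical simulation: the cell's polarization flips only when the applied write voltage exceeds a coercive threshold, and the stored state persists with no power applied. The class name and threshold value are invented for illustration and do not reflect measured AlScN parameters.

```python
class FerroelectricCell:
    """Toy model of a ferroelectric memory element. Polarization is
    bistable: it flips only when the write voltage exceeds the coercive
    voltage, and it is retained with zero applied power (non-volatile)."""

    COERCIVE_VOLTAGE = 1.0  # illustrative threshold, not a measured value

    def __init__(self) -> None:
        self.polarization = -1  # one of two stable states: -1 or +1

    def write(self, voltage: float) -> None:
        # Switching only occurs past the coercive threshold;
        # smaller pulses leave the stored state untouched.
        if voltage > self.COERCIVE_VOLTAGE:
            self.polarization = +1
        elif voltage < -self.COERCIVE_VOLTAGE:
            self.polarization = -1

    def read(self) -> int:
        # Read-out here is non-destructive: the state is sensed
        # without being rewritten.
        return self.polarization

cell = FerroelectricCell()
cell.write(1.5)           # strong pulse: switches to +1
cell.write(0.3)           # sub-coercive pulse: state unchanged
assert cell.read() == +1  # state retained with no power applied
```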

"One of this material's key attributes is that it can be deposited at temperatures low enough to be compatible with silicon foundries," explains Troy Olsson, co-lead and ESE assistant professor. "Most ferroelectric materials require much higher temperatures. AlScN's special properties mean our demonstrated memory devices can go on top of the silicon layer in a vertical hetero-integrated stack."

"Think about the difference between a multistory parking lot with a hundred-car capacity and a hundred individual parking spaces spread out over a single lot," Olsson continues. "Which is more efficient in terms of space? The same is the case for information and devices in a highly miniaturized chip like ours. This efficiency is as important for applications that require resource constraints, such as mobile or wearable devices, as it is for applications that are extremely energy intensive, such as data centers."

The transistor-free architecture has the potential, the team claims, to perform 100 times faster than a conventional processor while offering superior accuracy. "Let's say," Jariwala explains, "that you have an AI application that requires a large memory for storage as well as the capability to do pattern recognition and search. Think self-driving cars or autonomous robots, which need to respond quickly and accurately to dynamic, unpredictable environments. Using conventional architectures, you would need a different area of the chip for each function and you would quickly burn through the availability and space. Our ferrodiode design allows you to do it all in one place by simply changing the way you apply voltages to program it."
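The following hedged Python sketch illustrates that reconfigurability at a purely conceptual level: one stored array serving as plain storage, a pattern matcher, or a parallel search structure depending on how it is driven, standing in for the different voltage-programming schemes Jariwala describes. The mode names and interface are invented for illustration and are not the team's API.

```python
import numpy as np

class FerrodiodeArray:
    """Conceptual stand-in for a reconfigurable ferrodiode memory array.
    The same stored data is used three ways; on the real hardware the
    role would be selected by how voltages are applied to the array,
    not by a Python flag."""

    def __init__(self, data: np.ndarray):
        self.data = data  # rows = stored patterns

    def operate(self, mode, query=None):
        if mode == "storage":
            # Plain readout of the stored contents.
            return self.data
        if mode == "pattern":
            # Pattern recognition: similarity of the query to each
            # stored row, computed as one in-place multiply-accumulate.
            return self.data @ query
        if mode == "search":
            # Parallel search: index of the stored row closest to the query.
            return int(np.argmin(np.linalg.norm(self.data - query, axis=1)))
        raise ValueError(f"unknown mode: {mode}")

patterns = np.eye(3)
array = FerrodiodeArray(patterns)
q = np.array([0.9, 0.1, 0.0])
print(array.operate("pattern", q))  # similarity score per stored pattern
print(array.operate("search", q))   # -> 0, the best-matching row
```

The point of the sketch is that no function needs its own dedicated block: the same stored data supports all three operations, which is the efficiency argument behind doing "it all in one place."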

"This research is highly significant," claims first author Xiwen Liu, Ph.D candidate in the ESE, "because it proves that we can rely on memory technology to develop chips that integrate multiple AI data applications in a way that truly challenges conventional computing technologies. We design hardware that makes software work better, and with this new architecture we make sure that the technology is not only fast, but also accurate."

The team's work has been published in the journal Nano Letters under closed-access terms, with an open-access preprint available on Cornell's arXiv server.
