Wednesday, September 27, 2023

Nanomaterials pave the way to the next generation of computing


Carolyn Hansen

Around 2012, developers of deep learning, a technique in which systems improve their performance based on prior experience, realized that conventional computers’ general-purpose central processing units (CPUs) couldn’t meet their needs. As a result, hints of a major shift in computing emerged.

The development of nanotechnology might provide the answer to moving beyond the solid-state era.

A neuromorphic chip is crucial to the predicted performance revolution in computing.

Solid-state electronics have been in use since the 1950s, when transistors were substituted for vacuum tubes in electronic circuits. Many generations of devices for processing and storing data have come and gone as germanium transistors were replaced by silicon transistors, followed by integrated circuits, then by increasingly complex chips laden with ever-larger numbers of tiny transistors.

Since 1965 the industry has been governed by Moore's law, the prediction by Gordon Moore, co-founder of microprocessor giant Intel, that ever-shrinking devices will enhance computing efficiency and performance. But nanotechnology has already shrunk the tiny features of today's most advanced integrated circuits close to the atomic scale, leaving little room to miniaturize further with current technology. A new nanomaterials architecture is necessary for the next big step in computing.

CMOS (complementary metal-oxide-semiconductor) transistors have been the norm in integrated circuits since the 1980s. Like most digital computers, CMOS circuits follow the architecture that John von Neumann laid out in the mid-twentieth century. His design keeps the electronics that store data separate from those that process it: computers store data in one location, then send it to other circuits for processing. Keeping the signals from interfering with each other preserves the accuracy required for digital computing. However, transferring data from memory to the processor has become a bottleneck. To avoid wasting time moving data from place to place, developers are seeking alternative, non-von Neumann architectures that perform calculations 'in memory'.
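The in-memory idea can be illustrated with a toy model of a resistive crossbar array, in which a matrix of conductances stored in memory cells performs a matrix–vector multiplication where the data already sits. This is only a sketch; the conductances, voltages, and array size below are illustrative, not taken from any real device:

```python
# Toy sketch of in-memory computing on a resistive crossbar array.
# Each memory cell stores a conductance G[i][j]; applying input
# voltages V[j] to the columns yields output currents
# I[i] = sum_j G[i][j] * V[j] on the rows (Ohm's and Kirchhoff's
# laws). The stored matrix multiplies the input vector in place,
# with no separate fetch of the matrix into a processor.

def crossbar_mvm(G, V):
    """Matrix-vector multiply as performed by the array itself."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

# Illustrative conductances (siemens) and input voltages (volts).
G = [[0.5, 0.1],
     [0.2, 0.4]]
V = [1.0, 2.0]

I = crossbar_mvm(G, V)   # row currents, in amperes
print(I)  # [0.7, 1.0]
```

Because the multiplication happens inside the memory array, the von Neumann data-transfer step the article describes simply disappears for this operation.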

Chemical and materials science researcher Mark Hersam says that the aim is to develop artificial neurons and synapses that are compatible with electronic processing but outperform CMOS circuits. Another option is neuromorphic systems, which use algorithms and network designs inspired by the human brain's high connectivity and parallel processing. Developing these new elements would be well worth the effort, he says. According to Hersam, neuromorphic computing is a bigger paradigm shift than in-memory processing, and the one he believes has the greater potential benefits.
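A minimal sketch of the kind of artificial neuron such systems emulate is the leaky integrate-and-fire model: the neuron accumulates input, leaks charge over time, and emits a spike when a threshold is crossed. The leak factor, threshold, and input values below are illustrative, not drawn from any particular chip:

```python
# Minimal leaky integrate-and-fire neuron: the membrane potential
# integrates incoming current and leaks toward rest; crossing a
# threshold emits a spike and resets the potential. All parameters
# are illustrative.

def simulate_lif(currents, leak=0.9, threshold=1.0):
    """Return spike times (step indices) for a sequence of input currents."""
    v = 0.0
    spikes = []
    for t, i_in in enumerate(currents):
        v = leak * v + i_in          # leaky integration
        if v >= threshold:           # fire and reset
            spikes.append(t)
            v = 0.0
    return spikes

# A constant drive eventually pushes the neuron over threshold,
# producing a regular spike train.
print(simulate_lif([0.3] * 12))  # [3, 7, 11]
```

Unlike a CMOS logic gate, such a neuron communicates only when it spikes, which is one reason brain-inspired designs promise large energy savings.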

At Northwestern University in Evanston, Illinois, Hersam is working to identify the best technologies for the job. In the Nature Index, which tracks articles in 82 selected natural-science journals, Northwestern University places second in the United States in nano-related output, after the Massachusetts Institute of Technology in Cambridge.


The need for faster processing

Wilfried Haensch, who was in charge of designing computer memories at the IBM Watson Research Center in Yorktown Heights, New York, until his retirement in 2020, says that CPUs are renowned for their versatility. Whether an application can run efficiently on a CPU, however, is another matter.

To run deep-learning algorithms, developers turned to graphical processing units (GPUs), chips designed for high-speed, three-dimensional imaging. GPUs ran these algorithms much more efficiently than CPUs, and the next step was to wire chips to run specific processes.

Certain instructions are hardwired into these data-flow processors, so instructions do not have to be loaded, says Haensch. In a departure from the traditional von Neumann design, data flows through the hard-wired processor as if operations were being performed in memory. Deep-learning algorithms benefited particularly from this approach, since about 80% of their operations use the same sophisticated mathematics as image processing.
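The mathematics shared by image processing and deep learning is the multiply–accumulate operation: a two-dimensional image filter and a neural-network layer both reduce to the same sums of products. A small sketch, with illustrative sizes and values:

```python
# Both image filtering and deep-learning layers boil down to
# multiply-accumulate (MAC) operations, which is why hardware built
# for one serves the other. This computes a 'valid' (no-padding)
# 2D convolution as deep-learning frameworks define it
# (i.e. cross-correlation, without flipping the kernel).

def conv2d_valid(image, kernel):
    """2D convolution via plain multiply-accumulates."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for r in range(out_h):
        row = []
        for c in range(out_w):
            acc = 0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
edge = [[1, -1]]  # horizontal difference filter
print(conv2d_valid(image, edge))  # [[-1, -1], [-1, -1], [-1, -1]]
```

Every entry of the output is one multiply-accumulate chain, which is exactly the operation that GPU and data-flow hardware optimizes.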

According to Haensch, further fine-tuning of current materials is only a short-term solution. There are many new devices, new nanostructures, and new ideas, he says, but CMOS is not ready to be replaced. There is also no guarantee, he says, that these technologies will be ready to deliver the industry transformation we need anytime soon.

A memristor is a device that combines memory and electrical resistance in one unit. Memristors behave much like standard electrical resistors, but they can also store and read data. Their structure comprises three layers: two terminals that connect to other devices, with a storage layer in between, allowing them to both store and process data. After nearly forty years of theoretical work, Hewlett-Packard Labs scientist R. Stanley Williams created the first thin-film solid-state memristor in 2007.
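The behaviour described above can be sketched with a simple linear ion-drift model of the kind often used to describe such devices: the resistance depends on an internal state that shifts with the charge that has flowed through it, so the device remembers its history. All parameter values below are illustrative, not measured:

```python
# Toy memristor based on a linear ion-drift model. The resistance
# interpolates between R_ON (fully doped) and R_OFF (undoped)
# according to an internal state w in [0, 1], and w moves with the
# charge that flows through the device. All values are illustrative.

R_ON, R_OFF = 100.0, 16_000.0   # limiting resistances (ohms)
MOBILITY = 1e3                   # state change per coulomb (illustrative)

def step(w, voltage, dt):
    """Advance the state w one time step under an applied voltage."""
    r = R_ON * w + R_OFF * (1.0 - w)     # current resistance
    current = voltage / r
    w = w + MOBILITY * current * dt       # state drifts with charge flow
    return min(1.0, max(0.0, w)), r       # clamp state to [0, 1]

# A positive voltage drives the state up, lowering the resistance;
# remove the voltage and the device retains its state: memory.
w = 0.1
for _ in range(5):
    w, r = step(w, voltage=5.0, dt=1.0)
```

Because the same physical quantity (resistance) both stores the value and shapes the current, a memristor performs the store-and-process combination the paragraph describes in a single element.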

In a 2018 literature review, a University of Michigan, Ann Arbor, team led by Wei Lu described memristors as having 'great potential for developing future computing systems past the von Neumann and Moore eras' (M. A. Zidan et al. Nature Electron. 1, 22–29; 2018). However, creating a single system with all of the desired properties will be difficult.

Materials of the future

Researchers are searching for novel materials to support advanced computation. One- and two-dimensional materials such as graphene, along with van der Waals heterostructures (stacks of two-dimensional layers bonded together), are among the neuromorphic electronic materials that Hersam and his Northwestern colleague Vinod K. Sangwan have catalogued (V. K. Sangwan and M. C. Hersam Nature Nanotechnol. 15, 517–528; 2020).

Researchers in neuromorphic computing have been interested in one-dimensional carbon nanotubes because they resemble the tubular axons along which nerve cells transmit electrical signals in biological systems.

Abu Sebastian, a Zurich-based technical leader of the IBM Research AI Hardware Center in Albany, New York, is focused on near-term gains and sees opportunities to push further in both digital and neuromorphic computing. Opinions on how these materials will factor into future computing are divided, but Sebastian is also thinking long-term.

There is still a lot of work to be done on the research side, says Lu, although Mythic (an artificial-intelligence company based in Austin, Texas) and other companies are close to commercialization. Many complexities must still be addressed in neuromorphic computing, according to Lu, and, Haensch adds, no material is currently suitable for mass production.

Plenty of corporations with substantial nanoscience and nanotechnology-related output in the Nature Index are working on non-von Neumann computing. Hewlett-Packard and the Paris-based artificial-intelligence firm Lights-On are just two of the many firms focusing on near-term applications.
