
For AI to Actualize its Potential, Energy Demands Must be Addressed

by ccadm


Artificial Intelligence (AI) continues to power the fourth industrial revolution, and its energy demands are growing alongside it. Today, anyone can access advanced AI tools and integrate them into their systems to improve efficiency and reduce workload. The energy required to run these algorithms rises as demand for AI applications increases, and environmentalists are already raising sustainability concerns about the technology. Thankfully, a team of researchers has created a highly efficient alternative. Here’s what you need to know.

Growing AI Energy Demands Creating an Energy Crisis

New AI systems continue to launch at an increasing pace. The most recent global energy use forecast predicts that this consumption will roughly double, from 460 terawatt-hours (TWh) in 2022 to 1,000 TWh by 2026. These systems include recommenders, large language models (LLMs), image and video processing and generation, Web3 services, and more.

According to the researchers’ study, AI systems require data transfers that equate to “200 times the energy used for computation when reading three 64-bit source operands from and writing one 64-bit destination operand to an off-chip main memory.” As such, reducing energy consumption for artificial intelligence (AI) computing applications is a prime concern for developers, who must overcome this roadblock to achieve large-scale adoption and mature the technology.
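The arithmetic behind that quoted ratio can be sketched in a few lines. The per-operation energy figures below are assumed round numbers chosen purely for illustration, not the measured values from the study; the point is that moving four 64-bit words off-chip can dwarf the cost of the computation itself.

```python
# Back-of-the-envelope model of why data movement dominates energy use.
# COMPUTE_ENERGY_PJ and OFFCHIP_ACCESS_PJ are ASSUMED illustrative values.

COMPUTE_ENERGY_PJ = 1.0    # assumed energy for one 64-bit arithmetic operation
OFFCHIP_ACCESS_PJ = 50.0   # assumed energy per 64-bit off-chip memory access


def movement_to_compute_ratio(reads: int, writes: int) -> float:
    """Energy spent moving operands off-chip relative to computing on them."""
    movement = (reads + writes) * OFFCHIP_ACCESS_PJ
    return movement / COMPUTE_ENERGY_PJ


# Three 64-bit source reads plus one 64-bit destination write, as in the
# scenario quoted above.
ratio = movement_to_compute_ratio(reads=3, writes=1)
print(f"data movement costs {ratio:.0f}x the compute energy")  # 200x here
```

With these assumed numbers the ratio lands at exactly 200:1; the real figure depends on the process node and memory technology, which is precisely why in-memory designs target the movement term rather than the compute term.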

Thankfully, a group of innovative engineers from the University of Minnesota has stepped up with a possible solution that could reduce the power consumption of AI workloads by orders of magnitude. To accomplish this, the researchers introduced a new chip design that improves on the von Neumann architecture found in most chips today.

Von Neumann Architecture

John von Neumann revolutionized the computer sector in 1945 when he separated logic and memory units, enabling more efficient computing at the time. In this arrangement, the logic and data are stored in different physical locations. His invention improved performance because it allowed both to be accessed simultaneously.

Source – University of Minnesota Twin Cities

RAM

Today, most computers still use the von Neumann structure, with your hard drive storing your programs and the random access memory (RAM) housing program instructions and temporary data. Modern RAM accomplishes this task using various technologies, including DRAM, which stores bits in capacitors, and SRAM, which stores them in multi-transistor flip-flop circuits.

Notably, this structure worked well for decades. However, the constant transfer of data between the logic and memory units requires a lot of energy, and that cost grows as data volumes and computational loads increase. As such, it creates a performance bottleneck that limits efficiency as computing power scales.

Attempted Improvements on Energy Demands

Over the years, many attempts have been made to improve on von Neumann’s architecture. These attempts have produced different variations of the memory process, all with the goal of bringing the two functions physically closer together. Currently, the three main variations are as follows.

Near-memory Processing

This upgrade moves the logic physically closer to the memory, typically using a 3D-stacked design. Shortening the distance reduces the energy needed to transfer the data that powers computations, providing improved efficiency.

In-memory Computing

Another current method of improving computational architecture is in-memory computing. Notably, there are two variations of this style of chip. The original integrates clusters of logic next to the memory on a single chip, eliminating some of the transistors used in its predecessors. However, many consider this method not “true” in-memory computing because it still keeps logic and memory in separate locations, meaning the performance issues caused by data transfer persist, albeit on a smaller scale.

True In-memory

The final type of chip architecture is “true in-memory.” To qualify, the memory itself must perform computations directly, so the data for logic operations never leaves its location. The researchers’ latest version of true in-memory architecture is CRAM.

Computational Random-Access Memory (CRAM)

Computational random-access memory (CRAM) enables true in-memory computation, as data is processed within the same array that stores it. The researchers modified a standard 1T1M STT-MRAM architecture to make CRAM possible. The CRAM layout integrates additional transistors into each cell and builds on magnetic tunnel junction (MTJ) devices.

This approach provides better control and performance. The team then stacked an additional transistor, logic line (LL), and logic bit line (LBL) in each cell, enabling real-time computation within the same memory bank.

History of CRAM

Today’s AI systems require a new structure that can meet their computational demands without deepening sustainability concerns. Recognizing this demand, the engineers decided to explore CRAM’s capabilities in depth for the first time. Their results were published in the journal npj Unconventional Computing under the title “Experimental demonstration of magnetic tunnel junction-based computational random-access memory.”

The first CRAM leveraged an MTJ device structure. These spintronic devices improved on previous storage methods by using electron spin rather than electrical charge to store data. An MTJ consists of a thin tunneling barrier sandwiched between two ferromagnetic (FM) layers; applying a small voltage causes electrons to tunnel through the barrier from one layer to the other.

This tunneling produces a measurable current, providing an efficient way to read and write memory. The latest CRAM work takes the concept further, delivering high performance while reducing energy demands.

CRAM Study

The CRAM concept has been under development for years. However, there had been little in-depth testing of its capabilities until now. Notably, the study leveraged a variety of patented concepts developed by the team and its predecessors along the way. For example, magnetic random access memory (MRAM), a vital component of today’s smartwatches, sensors, and microcontrollers, was used alongside modified MTJ devices to improve performance.

CRAM Tests

The testing stage required the researchers to measure chip performance during logic execution. They employed a variety of strategies to gain the deepest insight possible, including scalar addition, multiplication, and matrix multiplication.

The first step was to measure activity under basic memory operations. From there, the team stepped up to 2-, 3-, and 5-input logic operations. Following this stage, a 1-bit full adder with two different designs was introduced and tested. The final round used a 1 × 7 array, which yielded interesting results.
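The 1-bit full adder mentioned above can be expressed with exactly the gate primitives this class of MTJ logic favors: majority (MAJ) and NOT. The sketch below is a behavioral Python model of that construction, not the device-level implementation from the paper; the specific 3-input/5-input majority decomposition is a standard identity assumed here for illustration.

```python
# Behavioral sketch: a 1-bit full adder built only from majority (MAJ)
# and NOT gates, the gate family that in-memory MTJ logic can evaluate.

def maj(*bits: int) -> int:
    """Majority vote over an odd number of bits."""
    return int(sum(bits) > len(bits) // 2)


def full_adder(a: int, b: int, cin: int) -> tuple[int, int]:
    """Return (sum, carry) using only MAJ and NOT operations."""
    carry = maj(a, b, cin)          # 3-input majority yields the carry bit
    nc = 1 - carry                  # NOT carry
    s = maj(a, b, cin, nc, nc)      # 5-input majority yields the sum bit
    return s, carry


# Exhaustive check against ordinary binary addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, c = full_adder(a, b, cin)
            assert 2 * c + s == a + b + cin
```

Because every step is a majority vote or an inversion, each gate maps onto an operation the memory array can perform in place, which is why adders are a natural early benchmark for this architecture.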

CRAM Testing Results Show Lower Energy Demands

The testing results demonstrated how efficient the new proof-of-concept process is compared to today’s models, showing roughly 1,000 times less energy consumption during computation. When combined with other power-saving methods, the approach demonstrated energy savings of about 2,500 and 1,700 times relative to traditional methods in two benchmark scenarios.

Additional Features

Another interesting discovery was that CRAM enabled the simultaneous random access of data and operands. This benefit greatly improves parallel computational capabilities, which can result in more secure and stable protocols in the future.

Benefits CRAM Brings to the Market

Examining the benefits CRAM brings to the table helps explain why this is a game-changing breakthrough that could soon affect the average person’s daily activities. CRAM enables developers and manufacturers to create hardware configured precisely for its primary task, reducing energy demands and improving performance.

AI Focus

Artificial Intelligence is already changing so much. These systems are in high demand but require specific hardware and software to operate correctly. CRAM offers manufacturers the ability to create hardware from day one designed to support data-intensive, memory-centric, or power-sensitive applications.

In the future, CRAM could power demanding AI applications such as bioinformatics, signal processing, neural networks, and edge computing, as well as advanced military hardware. The CRAM array will enable developers to create better-performing machine-learning applications that are flexible enough to meet the needs of the community.

Uses a Proven Structure

Another major benefit that makes CRAM a smart option is its use of proven hardware systems. Notably, CRAM uses common, mature technology, which adds to consumer confidence and ensures that hardware issues are a minimal concern for users.

Flexible

CRAM provides true flexibility to developers. Programmers can compute data anywhere within the memory array using a variety of popular logic operations. Specifically, CRAM supports AND, OR, NAND, NOR, and MAJ, adding to its versatility.

Faster

Speed is another benefit that can’t be overlooked. It takes time for data to transfer between logic and memory storage locations. While each transfer may take only a fraction of a second, the delays add up and degrade the user experience. CRAM eliminates the need for slow, energy-intensive data transfers by making the memory itself responsible for computation.

Parallelism

Parallelism is the ability to run the same logic on multiple pieces of data at the same time. It’s a vital component of many manufacturing, safety, and industrial operations. Due to its structure, CRAM can run the same logic operation across an entire memory array simultaneously.
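That row-parallel behavior can be modeled in a few lines. The sketch below uses plain Python lists and bitwise operators to stand in for what the hardware does electrically: one "command" applies the same logic operation (NAND here) to every row of a simulated 4-bit memory array at once, with no data leaving the array. The array contents and width are arbitrary assumptions for the example.

```python
# Sketch of CRAM-style parallelism: one logic command applied to every
# row of a simulated memory array in a single step.

WIDTH_MASK = 0b1111                 # assumed 4-bit words for the example

memory_a = [0b1010, 0b1100, 0b1111, 0b0000]   # first operand column
memory_b = [0b0110, 0b1010, 0b0001, 0b1011]   # second operand column

# A single broadcast NAND across all rows; in hardware each row computes
# simultaneously, so the step count does not grow with the row count.
result = [~(a & b) & WIDTH_MASK for a, b in zip(memory_a, memory_b)]

print([bin(r) for r in result])
```

The design point this illustrates is that the cost of the operation is per command, not per row, which is where the parallel throughput of an in-memory array comes from.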

Manufacturing Costs

CRAM will also lower manufacturing costs for high-end devices by reducing the number of components needed to build a product. Because CRAM uses the same memory for logic and data, manufacturers can create chips with fewer parts, reducing costs while improving reliability and performance.

Researchers

This research was spearheaded by a team from the University of Minnesota. The lead researcher was Jian-Ping Wang, Distinguished McKnight Professor and Robert F. Hartmann Chair. Additionally, Ulya Karpuzcu, Husrev Cilasun, Sachin Sapatnekar (Robert and Marjorie Henle Chair), Brandon Zink, Zamshed Chowdhury, and Salonik Resch played vital roles in the study. A team from the University of Arizona also assisted, including Pravin Khanal, Ali Habiboglu, and Professor Weigang Wang.

The research was made possible by grants from the U.S. Defense Advanced Research Projects Agency (DARPA), the National Institute of Standards and Technology (NIST), the National Science Foundation (NSF), and Cisco Inc. The project also conducted tests and studies at the Minnesota Nano Center and the Minnesota Supercomputing Institute at the University of Minnesota.

Two Companies That Could Benefit from Reduced AI Energy Demands

There are many manufacturers that could secure additional revenue or improve their product line simply by integrating CRAM options. These firms hold strong positions in the market and have the ability to integrate new tech in a way that will improve their offerings greatly.

Microsoft (MSFT)

Microsoft is a major player in the AI and computing markets. The company remains a pioneer in the sector and has been a major contributor to tech for over a decade. Today, Microsoft has a major stake in the AI sector and seeks to remain a viable contender moving forward.

Microsoft’s AI services hold a major advantage over competitors in that Windows remains the most widely used desktop operating system globally. As such, Microsoft’s AI systems have a massive audience of users and accessible data. This data has helped Microsoft create powerful new AI algorithms that may one day act as the core of future Windows operating systems.

Arm Holdings (ARM)

Arm Holdings entered the market in 1990, founded by a team including Sophie Wilson and Steve Furber. Originally, the company was named Advanced RISC Machines (ARM) Ltd. before rebranding to Arm Holdings in 1998. In the early 2000s, Arm secured a reputation as a quality chip designer, licensing CPU and GPU designs to the market.

Today, it remains a leading semiconductor IP provider. The firm was purchased in 2016 by SoftBank for $32B and has seen considerable growth since. The company could greatly reduce its environmental footprint while improving performance and revenue by integrating CRAM into its offerings.

AI’s Future will Depend on its Energy Demands

The main factor limiting AI adoption is its energy demands. These demands require people to think outside the box to create solutions to bottlenecks that limit performance. This latest study opens the door for a brighter future where AI services will access low-power solutions that empower the entire community.

Learn about other cool artificial intelligence projects now.



