Engineers at Northwestern University in the US have devised an innovative transistor design that both enables miniaturization and makes artificial intelligence tasks 100 times more energy-efficient. This breakthrough may render cloud computing unnecessary for certain applications.
The ubiquity of AI technology has led to a widespread integration of AI-powered features by companies, both large and small. Silicon-based chips have been the driving force behind the AI revolution, with companies like Microsoft investing heavily in cloud-based infrastructure to accommodate the surge in demand.
However, Mark Hersam, a materials science professor at Northwestern University, highlights the energy-intensive nature of the cloud-based approach, in which data is collected, transmitted to the cloud for analysis, and then returned to users as results. In contrast, processing data locally is far more energy-efficient.
The transition from silicon
To initiate the machine learning process, collected data must be categorized. Since each silicon transistor can handle only a single data processing task, larger datasets necessitate a growing number of transistors.
In response to this challenge, Hersam’s team opted to shift away from silicon and utilize two-dimensional molybdenum disulfide and one-dimensional carbon nanotubes to create miniature transistors. These novel transistors were engineered to be reconfigurable, capable of adapting to various stages of the analysis.
Hersam explained, “The integration of two disparate materials into one device allows us to strongly modulate the current flow with applied voltages, enabling dynamic reconfigurability.” The new design increases AI energy efficiency by 100 times.
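The idea behind reconfigurability can be sketched abstractly: one element changes its function depending on an applied control setting, standing in for several fixed-function circuits. The toy Python model below illustrates only the concept; it is not a model of the actual molybdenum disulfide/carbon nanotube device physics.

```python
# Toy model of a reconfigurable element: the same "device" computes a
# different function depending on a control setting, loosely analogous to
# how an applied voltage reconfigures the Northwestern team's transistor.
# Illustration of the concept only, not the real device.

def reconfigurable_gate(a: int, b: int, mode: str) -> int:
    """One element standing in for several fixed-function circuits."""
    if mode == "and":
        return a & b
    elif mode == "or":
        return a | b
    elif mode == "xor":
        return a ^ b
    raise ValueError(f"unknown mode: {mode}")

# The same element is reused at different stages of an analysis:
print(reconfigurable_gate(1, 0, "and"))  # 0
print(reconfigurable_gate(1, 0, "or"))   # 1
```

A fixed-function design would need a separate element for each operation; reconfiguring one element between stages is what lets the transistor count shrink.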
This approach significantly reduced both the number of transistors required and the energy consumption, facilitating the miniaturization of the analysis to a level where it could be seamlessly incorporated into everyday wearable devices.
Advanced analysis without the cloud
To showcase the capabilities of their device, the researchers employed publicly available medical datasets. They trained the AI to interpret electrocardiogram (ECG) data, a task that typically demands extensive training even for medical professionals.
The device was tasked with classifying data into six common heartbeat categories, including normal, atrial premature beat, premature ventricular contraction, paced beat, left bundle branch block beat, and right bundle branch block beat. It successfully achieved this with an impressive 95 percent accuracy.
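To make the classification task concrete, here is a minimal software sketch: a nearest-centroid classifier over the six heartbeat categories named above, run on synthetic feature vectors. This is purely illustrative; the actual study trained reconfigurable transistors on public ECG datasets, not this software model.

```python
# Illustrative only: nearest-centroid classification over the six
# heartbeat categories from the article, using synthetic 2-D features.
import random

CLASSES = [
    "normal",
    "atrial premature beat",
    "premature ventricular contraction",
    "paced beat",
    "left bundle branch block beat",
    "right bundle branch block beat",
]

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest(x, centroids):
    """Label whose centroid is closest (squared Euclidean) to x."""
    def dist2(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda c: dist2(x, centroids[c]))

# Synthetic training data: class i clusters around the point (i, i).
random.seed(0)
train = {
    c: [[i + random.gauss(0, 0.1), i + random.gauss(0, 0.1)]
        for _ in range(20)]
    for i, c in enumerate(CLASSES)
}
centroids = {c: centroid(v) for c, v in train.items()}

# A new "beat" near cluster 3 gets the paced-beat label.
print(nearest([3.0, 3.0], centroids))  # paced beat
```

In the hardware version, the classification is carried out by the reconfigurable transistors themselves rather than by software running on a general-purpose processor, which is where the energy savings come from.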
Completing such a complex task using conventional silicon transistors would require at least 100 transistors, but the Northwestern researchers accomplished it with just two of their novel designs.
Hersam also underscored the significance of local data processing in safeguarding patient privacy, stating, “Every time data is passed around, it increases the likelihood of the data being stolen. If personal health data is processed locally, such as on your wrist in your watch, that presents a much lower security risk.”
Looking ahead, the team envisions their devices becoming integral components of everyday wearables, powering real-time applications without overburdening the power grid.