The rapid advancement of artificial intelligence (AI) and machine learning systems has increased demand for new hardware that can speed up data analysis while consuming less power. As machine learning algorithms draw inspiration from biological neural networks, some engineers have been working on hardware that also mimics the...
For decades, computing followed a simple rule: smaller transistors made chips faster, cheaper, and more capable. As Moore's law slows, a different limit has come into focus. The challenge is no longer only computation; modern processors and accelerators are throttled by their interconnects...
Cornell University researchers have developed a low-power microchip they call a "microwave brain," the first processor to compute on both ultrafast data signals and wireless communication signals by harnessing the physics of microwaves...
Researchers from MIT and elsewhere have designed a novel transmitter chip that significantly improves the energy efficiency of wireless communications, which could boost the range and battery life of connected devices...
The startup behind Chicago's more than $1 billion quantum computing deal said operations are expected to start in three years, a win for Illinois Governor JB Pritzker, who backed the investment and is widely seen as a potential presidential candidate...
Large language models (LLMs) like BERT and GPT are driving major advances in artificial intelligence, but their size and complexity typically require powerful servers and cloud infrastructure. Running these models directly on devices, without relying on external computation, has remained a difficult technical challenge...
JUPITER became the world's fourth-fastest supercomputer when it debuted last month. Though housed in Germany at the Jülich Supercomputing Center (JSC), Georgia Tech played a supporting role in helping the system land on the latest TOP500 list...
The latest generative AI models, such as OpenAI's GPT-4 and Google's Gemini 2.5, require not only high memory bandwidth but also large memory capacity. This is why companies operating generative AI cloud services, such as Microsoft and Google, purchase hundreds of thousands of NVIDIA GPUs...
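The scale of the problem is easy to see with rough arithmetic. The sketch below uses assumed example figures (a 175-billion-parameter model, 16-bit weights, roughly 80 GB of HBM and about 3.35 TB/s of bandwidth per accelerator), not numbers from the article, to show why serving such models quickly exhausts both the capacity and the bandwidth of a single GPU.

```python
# Back-of-the-envelope illustration only; all figures below are assumed examples,
# not specifications of GPT-4, Gemini 2.5, or any particular NVIDIA GPU.
params = 175e9              # assumed parameter count
bytes_per_param = 2         # 16-bit (FP16/BF16) weights
weight_bytes = params * bytes_per_param          # 350 GB of weights alone

hbm_capacity = 80e9         # assumed HBM capacity per accelerator (~80 GB)
hbm_bandwidth = 3.35e12     # assumed HBM bandwidth per accelerator (~3.35 TB/s)

gpus_to_hold_weights = weight_bytes / hbm_capacity
print(f"Weights alone: {weight_bytes / 1e9:.0f} GB")
print(f"GPUs needed just to hold the weights: {gpus_to_hold_weights:.1f}")

# Decoding one token touches every weight once, so a model-parallel replica is
# roughly bandwidth-bound at aggregate_bandwidth / weight_bytes tokens per second.
aggregate_bandwidth = gpus_to_hold_weights * hbm_bandwidth
print(f"Rough bandwidth-bound decode rate: {aggregate_bandwidth / weight_bytes:.0f} tokens/s")
```

Under these assumptions a single replica needs several GPUs just to hold the weights and, at batch size 1, delivers only tens of tokens per second, which is why serving fleets grow so large.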
Seoul National University College of Engineering announced that a research team has developed a new hardware security technology based on commercially available 3D NAND flash memory (V-NAND flash memory)...
Researchers at NYU Tandon School of Engineering have created VeriGen, the first specialized artificial intelligence model successfully trained to generate Verilog code, the hardware description language that specifies how a chip's circuitry functions...
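The summary does not show the model in use, but a Verilog-generation model of this kind is typically driven like any other causal code LLM: give it a comment plus a module header and let it complete the body. Below is a minimal sketch using the Hugging Face transformers API; the checkpoint name is a hypothetical placeholder, not the actual VeriGen release, and the prompt style is illustrative only.

```python
# Minimal sketch of prompting a code LLM to complete a Verilog module.
# "example-org/verilog-code-model" is a placeholder checkpoint name.
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "example-org/verilog-code-model"  # hypothetical identifier
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Partial module header; the model is asked to fill in the body.
prompt = (
    "// 8-bit synchronous counter with active-high reset\n"
    "module counter8(input clk, input rst, output reg [7:0] q);\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In practice the generated module would still be run through simulation and synthesis checks, since language models can emit syntactically valid but functionally wrong RTL.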
Artificial intelligence is computationally and energy-intensive, which is a challenge for the Internet of Things (IoT), where small embedded sensors have to make do with limited computing power, little memory and small batteries...
When it comes to storing images, DNA strands could be a sustainable, stable alternative to hard drives. Researchers at EPFL are developing a new image compression standard designed specifically for this emerging technology...
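The blurb does not detail the codec itself, but the underlying storage idea is to map binary data onto the four nucleotides A, C, G and T at two bits per base; the compression standard then determines how few of those bases an image needs. The toy sketch below shows only the generic bit-to-base mapping, not EPFL's format.

```python
# Toy illustration of mapping bytes onto DNA bases (2 bits per nucleotide).
# Generic storage idea only; not EPFL's compression standard, which also
# compresses the image before encoding.
BASE_FOR_BITS = {0b00: "A", 0b01: "C", 0b10: "G", 0b11: "T"}
BITS_FOR_BASE = {b: v for v, b in BASE_FOR_BITS.items()}

def bytes_to_dna(data: bytes) -> str:
    bases = []
    for byte in data:
        for shift in (6, 4, 2, 0):            # high bits first
            bases.append(BASE_FOR_BITS[(byte >> shift) & 0b11])
    return "".join(bases)

def dna_to_bytes(strand: str) -> bytes:
    out = bytearray()
    for i in range(0, len(strand), 4):
        byte = 0
        for base in strand[i:i + 4]:
            byte = (byte << 2) | BITS_FOR_BASE[base]
        out.append(byte)
    return bytes(out)

payload = b"JPEG"                              # stand-in for compressed image bytes
strand = bytes_to_dna(payload)
assert dna_to_bytes(strand) == payload
print(strand)                                  # 16 bases for the 4 payload bytes
```

Real DNA codecs also add error correction and avoid long runs of the same base, constraints this sketch omits.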
BingoCGN, a scalable and efficient graph neural network accelerator that enables real-time inference on large-scale graphs through graph partitioning, has been developed by researchers at the Institute of Science Tokyo, Japan. The framework combines a cross-partition message quantization technique with a novel training algorithm to significantly reduce memory...
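The truncated summary names the two key ingredients, graph partitioning and cross-partition message quantization, without showing how they fit together. The NumPy sketch below is a conceptual illustration under assumed details (two partitions, symmetric int8 quantization, mean aggregation); it is not BingoCGN's actual microarchitecture or training algorithm.

```python
# Conceptual sketch: keep intra-partition messages at full precision and
# quantize messages that cross partition boundaries to shrink off-chip traffic.
import numpy as np

rng = np.random.default_rng(0)
num_nodes, feat_dim = 8, 4
features = rng.normal(size=(num_nodes, feat_dim)).astype(np.float32)
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (3, 7)]
partition = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # nodes 0-3 vs 4-7

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization, returning values and scale."""
    scale = np.abs(x).max() / 127.0 + 1e-12
    return np.round(x / scale).astype(np.int8), scale

def message(src, dst):
    """Feature vector sent along an edge; cross-partition messages are quantized."""
    if partition[src] == partition[dst]:
        return features[src]                      # on-chip: full precision
    q, scale = quantize_int8(features[src])       # off-chip: 8-bit payload
    return q.astype(np.float32) * scale           # dequantize at the receiver

# Mean aggregation over incoming messages (edges treated as undirected).
aggregated = np.zeros_like(features)
degree = np.zeros(num_nodes)
for u, v in edges:
    aggregated[v] += message(u, v); degree[v] += 1
    aggregated[u] += message(v, u); degree[u] += 1
aggregated /= np.maximum(degree, 1)[:, None]

print("Aggregated features for node 3 (receives two cross-partition messages):")
print(aggregated[3])
```

The point of such a scheme is that only edges crossing a partition boundary pay the quantization error, while the traffic for those edges shrinks to roughly a quarter of its full-precision size.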
A novel power supply technology for 3D-integrated chips has been developed, targeting a three-dimensionally stacked computing architecture in which processing units sit directly above stacks of dynamic random access memory...
Researchers at the University of Massachusetts Amherst have pushed forward the development of computer vision with new silicon-based hardware that can both capture and process visual data in the analog domain. Their work, described in the journal Nature Communications, could ultimately benefit large-scale, data-intensive and latency-sensitive computer vision tasks...