Search Publications by: Matthew Daniels (Fed)

Displaying 1 - 20 of 20

Measurement-driven Langevin modeling of superparamagnetic tunnel junctions

July 23, 2024
Author(s)
Liam Pocher, Temitayo Adeyeye, Sidra Gibeault, Philippe Talatchian, Ursula Ebels, Daniel Lathrop, Jabez J. McClelland, Mark Stiles, Advait Madhavan, Matthew Daniels
Superparamagnetic tunnel junctions are important devices for a range of emerging technologies, but most existing compact models capture only their mean switching rates. Capturing qualitatively accurate analog dynamics of these devices will be important as…
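
For intuition, a minimal Langevin sketch in the spirit of the title (not the paper's fitted model): one free-layer coordinate m in an assumed double-well potential V(m) = Δ(m² − 1)²/4, integrated with Euler–Maruyama. The barrier Δ, time step, and noise level are illustrative assumptions.

```python
import numpy as np

# Overdamped Langevin dynamics dm = -V'(m) dt + sqrt(2 dt) * N(0, 1),
# with V(m) = barrier * (m^2 - 1)^2 / 4 and kT = 1 (all values assumed).
rng = np.random.default_rng(0)
dt, steps = 1e-3, 200_000
barrier = 4.0                        # assumed energy barrier in units of kT

m = np.empty(steps)
m[0] = 1.0
for t in range(1, steps):
    force = -barrier * m[t - 1] * (m[t - 1] ** 2 - 1.0)   # -dV/dm
    m[t] = m[t - 1] + force * dt + np.sqrt(2.0 * dt) * rng.standard_normal()

# Thresholding recovers the random telegraph signal that mean-rate compact
# models describe; the raw trace m keeps the analog dynamics between wells.
state = np.sign(m)
```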

Measurement-driven neural-network training for integrated magnetic tunnel junction arrays

May 14, 2024
Author(s)
William Borders, Advait Madhavan, Matthew Daniels, Vasileia Georgiou, Martin Lueker-Boden, Tiffany Santos, Patrick Braganca, Mark Stiles, Jabez J. McClelland, Brian Hoskins
The increasing scale of neural networks needed to support more complex applications has led to an increasing requirement for area- and energy-efficient hardware. One route to meeting the budget for these applications is to circumvent the von Neumann bottleneck…

Programmable electrical coupling between stochastic magnetic tunnel junctions

March 29, 2024
Author(s)
Sidra Gibeault, Temitayo Adeyeye, Liam Pocher, Daniel Lathrop, Matthew Daniels, Mark Stiles, Jabez J. McClelland, William Borders, Jason Ryan, Philippe Talatchian, Ursula Ebels, Advait Madhavan
Superparamagnetic tunnel junctions (SMTJs) are promising sources of randomness for compact and energy efficient implementations of various probabilistic computing techniques. When augmented with electronic circuits, the random telegraph fluctuations of the…

Experimental demonstration of a robust training method for strongly defective neuromorphic hardware

December 11, 2023
Author(s)
William Borders, Advait Madhavan, Matthew Daniels, Vasileia Georgiou, Martin Lueker-Boden, Tiffany Santos, Patrick Braganca, Mark Stiles, Jabez J. McClelland, Brian Hoskins
Neural networks are increasing in scale and sophistication, catalyzing the need for efficient hardware. An inevitability when transferring neural networks to hardware is that non-idealities impact performance. Hardware-aware training, where non-idealities…
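
As an illustration of the general idea behind hardware-aware training (a sketch, not the method demonstrated in the paper): inject an assumed model of the non-idealities, here multiplicative weight noise, into the forward pass so the learned weights tolerate them.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_forward(x, w, rel_sigma=0.05):
    """Linear layer with per-call multiplicative weight noise, standing in
    for device-to-device and cycle-to-cycle conductance variation."""
    w_eff = w * (1.0 + rel_sigma * rng.standard_normal(w.shape))
    return x @ w_eff

x = rng.standard_normal((8, 16))
w = 0.1 * rng.standard_normal((16, 4))
y = noisy_forward(x, w)   # training through this layer builds in robustness
```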

Neural networks three ways: unlocking novel computing schemes using magnetic tunnel junction stochasticity

September 28, 2023
Author(s)
Matthew Daniels, William Borders, Nitin Prasad, Advait Madhavan, Sidra Gibeault, Temitayo Adeyeye, Liam Pocher, Lei Wan, Michael Tran, Jordan Katine, Daniel Lathrop, Brian Hoskins, Tiffany Santos, Patrick Braganca, Mark Stiles, Jabez J. McClelland
Due to their interesting physical properties, myriad operational regimes, small size, and industrial fabrication maturity, magnetic tunnel junctions are uniquely suited for unlocking novel computing schemes for in-hardware neuromorphic computing. In this…

Magnetic tunnel junction-based crossbars: improving neural network performance by reducing the impact of non-idealities

July 13, 2023
Author(s)
William Borders, Nitin Prasad, Brian Hoskins, Advait Madhavan, Matthew Daniels, Vasileia Georgiou, Tiffany Santos, Patrick Braganca, Mark Stiles, Jabez J. McClelland
Increasingly higher demand in chip area and power consumption for more sophisticated artificial neural networks has catalyzed efforts to develop architectures, circuits, and devices that perform like the human brain. However, many novel device technologies…

Low-Rank Gradient Descent for Memory-Efficient Training of Deep In-Memory Arrays

May 18, 2023
Author(s)
Siyuan Huang, Brian Hoskins, Matthew Daniels, Mark Stiles, Gina C. Adam
The movement of large quantities of data during the training of a Deep Neural Network presents immense challenges for machine learning workloads. To minimize this overhead, especially on the movement and calculation of gradient information, we introduce…
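
The core idea can be sketched in a few lines (shapes and rank are illustrative assumptions, not the paper's configuration): the batch gradient of a linear layer is a sum of outer products, so a truncated SVD lets far fewer values move to the in-memory array.

```python
import numpy as np

rng = np.random.default_rng(2)

X = rng.standard_normal((64, 128))   # batch of layer inputs
E = rng.standard_normal((64, 32))    # batch of backpropagated errors
G = X.T @ E                          # dense gradient, 128 x 32

r = 4                                # assumed rank budget
U, s, Vt = np.linalg.svd(G, full_matrices=False)
G_low = (U[:, :r] * s[:r]) @ Vt[:r]  # rank-r approximation of G

# Only U[:, :r] * s[:r] and Vt[:r] need to move: O(r * (m + n)) values
# instead of O(m * n) for the dense gradient.
```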

Device Modeling Bias in ReRAM-Based Neural Network Simulations

January 20, 2023
Author(s)
Imtiaz Hossen, Matthew Daniels, Martin Lueker-Boden, Andrew Dienstfrey, Gina Adam, Osama Yousuf
The study of resistive-RAM (ReRAM) devices for energy efficient machine learning accelerators requires fast and robust simulation frameworks that incorporate realistic models of the device population. Jump table modeling has emerged as a phenomenological…
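
A jump table model can be sketched as a sampled conditional update G_next ~ P(· | G, pulse); the Gaussian stand-in below replaces a measured table purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
G_MIN, G_MAX = 0.0, 1.0              # normalized conductance range (assumed)

def set_pulse(g, mean_step=0.02, sigma=0.01):
    """One SET pulse: draw the next conductance from an assumed Gaussian
    stand-in for the measured jump table P(G_next | G_current)."""
    return float(np.clip(g + rng.normal(mean_step, sigma), G_MIN, G_MAX))

g, trace = 0.2, [0.2]
for _ in range(50):
    g = set_pulse(g)
    trace.append(g)                  # one simulated potentiation trace
```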

Implementation of a Binary Neural Network on a Passive Array of Magnetic Tunnel Junctions

July 18, 2022
Author(s)
Jonathan Goodwill, Nitin Prasad, Brian Hoskins, Matthew Daniels, Advait Madhavan, Lei Wan, Tiffany Santos, Michael Tran, Jordan Katine, Patrick Braganca, Mark Stiles, Jabez J. McClelland
The increasing scale of neural networks and their growing application space have produced a demand for more energy and memory efficient artificial-intelligence-specific hardware. Avenues to mitigate the main issue, the von Neumann bottleneck, include in…
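
For reference, the mapping such an implementation relies on can be sketched as follows (the conductance values and differential encoding are assumptions for illustration): binary weights become pairs of conductances, and Kirchhoff's current law performs the vector-matrix multiply.

```python
import numpy as np

rng = np.random.default_rng(4)

w = rng.choice([-1, 1], size=(16, 8))      # trained binary weights
g_on, g_off = 1e-4, 1e-6                   # siemens (assumed levels)

# Differential encoding: +1 -> (g_on, g_off), -1 -> (g_off, g_on).
g_plus = np.where(w > 0, g_on, g_off)
g_minus = np.where(w > 0, g_off, g_on)

v = 0.1 * rng.standard_normal(16)          # input voltages on the rows
i_out = v @ g_plus - v @ g_minus           # column currents realize W^T v
```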

Easy-plane spin Hall nano-oscillators as spiking neurons for neuromorphic computing

January 10, 2022
Author(s)
Danijela Markovic, Matthew Daniels, Pankaj Sethi, Andrew Kent, Mark Stiles, Julie Grollier
We show analytically using a macrospin approximation that easy-plane spin Hall nano-oscillators excited by a spin-current polarized perpendicularly to the easy-plane have phase dynamics analogous to that of Josephson junctions. This allows them to…
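
The analogy can be made concrete with the phase equation dφ/dt = ω_d − ω_c sin φ, the same form as the resistively shunted Josephson junction; the coefficients below are illustrative, not device values. Below threshold (ω_d < ω_c) the phase locks; above it, each 2π phase slip plays the role of a spike.

```python
import numpy as np

dt, steps = 1e-3, 100_000
omega_c = 1.0    # critical drive set by the easy-plane anisotropy (assumed)
omega_d = 1.2    # spin-current drive just above threshold (assumed)

phi = np.empty(steps)
phi[0] = 0.0
for t in range(1, steps):
    phi[t] = phi[t - 1] + dt * (omega_d - omega_c * np.sin(phi[t - 1]))

# Indices where the phase completes a 2*pi slip, i.e. the "spike" times.
spikes = np.nonzero(np.diff(np.floor(phi / (2 * np.pi))))[0]
```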

Mutual control of stochastic switching for two electrically coupled superparamagnetic tunnel junctions

August 19, 2021
Author(s)
Philippe Talatchian, Matthew Daniels, Advait Madhavan, Matthew Pufall, Emilie Jue, William Rippard, Jabez J. McClelland, Mark Stiles
Superparamagnetic tunnel junctions (SMTJs) are promising sources for the randomness required by some compact and energy-efficient computing schemes. Coupling them gives rise to collective behavior that could be useful for cognitive computing. We use a…
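
A toy picture of such coupling (an assumed Arrhenius-rate sketch, not the experiment's circuit): each junction switches as a telegraph process, and the partner's state biases its effective barrier, which correlates the two fluctuators.

```python
import numpy as np

rng = np.random.default_rng(5)
dt, steps = 1e-3, 200_000
rate0 = 1.0          # base switching rate (assumed)
coupling = 0.5       # barrier bias from the partner's state (assumed)

s = np.ones((steps, 2))              # states of junctions A and B, each +/-1
for t in range(1, steps):
    prev = s[t - 1]
    for j in range(2):
        # Leaving a state aligned with the partner is suppressed.
        rate = rate0 * np.exp(-coupling * prev[j] * prev[1 - j])
        s[t, j] = -prev[j] if rng.random() < rate * dt else prev[j]

correlation = np.mean(s[:, 0] * s[:, 1])   # driven above zero by the coupling
```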

A System for Validating Resistive Neural Network Prototypes

July 27, 2021
Author(s)
Brian Hoskins, Mitchell Fream, Matthew Daniels, Jonathan Goodwill, Advait Madhavan, Jabez J. McClelland, Osama Yousuf, Gina C. Adam, Wen Ma, Muqing Liu, Rasmus Madsen, Martin Lueker-Boden
Building prototypes of heterogeneous hardware systems based on emerging electronic, magnetic, and photonic devices is an increasingly important area of research. On the face of it, the novel implementation of these systems, especially for online learning…

Temporal Memory with Magnetic Racetracks

December 1, 2020
Author(s)
Hamed Vakili, Mohammed N. Sakib, Samiran Ganguly, Mircea Stan, Matthew Daniels, Advait Madhavan, Mark D. Stiles, Avik W. Ghosh
Race logic is a relative timing code that represents information in a wavefront of digital edges on a set of wires in order to accelerate dynamic programming and machine learning algorithms. Skyrmions, bubbles, and domain walls are mobile magnetic…
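
The encoding is easy to state in code (plain integers stand in for the wavefront delays a racetrack would realize): with values carried as edge arrival times, an OR gate computes MIN, an AND gate computes MAX, and a delay element adds a constant, which is exactly the inner step of many dynamic programming recurrences.

```python
def race_min(*arrivals):      # OR gate: the first edge to arrive wins
    return min(arrivals)

def race_max(*arrivals):      # AND gate: the last edge to arrive wins
    return max(arrivals)

def delay(arrival, d):        # delay element: add a constant offset
    return arrival + d

# One relaxation step of a shortest-path / edit-distance style recurrence:
# cost = min over predecessors of (arrival time + edge delay).
a, b, c = 3, 5, 2             # predecessor arrival times, in clock ticks
cost = race_min(delay(a, 1), delay(b, 1), delay(c, 2))   # -> 4
```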

Streaming Batch Gradient Tracking for Neural Network Training

April 3, 2020
Author(s)
Siyuan Huang, Brian D. Hoskins, Matthew W. Daniels, Mark D. Stiles, Gina C. Adam
Faster and more energy efficient hardware accelerators are critical for machine learning on very large datasets. The energy cost of performing vector-matrix multiplication and repeatedly moving neural network models in and out of memory motivates a search…

Energy-efficient stochastic computing with superparamagnetic tunnel junctions

March 5, 2020
Author(s)
Matthew W. Daniels, Advait Madhavan, Philippe Talatchian, Alice Mizrahi, Mark D. Stiles
Stochastic computing has been limited by the inaccuracies introduced by correlations between the pseudorandom bitstreams used in the calculation. We hybridize a stochastic version of magnetic tunnel junctions with basic CMOS logic gates to create a…
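
The correlation problem is simple to demonstrate (a self-contained sketch, not the paper's circuit): an AND gate multiplies the values carried by two bitstreams only when the streams are independent; sharing one random source turns the product p·q into min(p, q).

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, q = 100_000, 0.6, 0.5

u1, u2 = rng.random(n), rng.random(n)
a = u1 < p                    # bitstream encoding p
b_indep = u2 < q              # independent stream encoding q
b_corr = u1 < q               # same random source: fully correlated

print(np.mean(a & b_indep))   # ~0.30 = p * q      (correct product)
print(np.mean(a & b_corr))    # ~0.50 = min(p, q)  (correlation error)
```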

Streaming Batch Eigenupdates for Hardware Neural Networks

August 6, 2019
Author(s)
Brian D. Hoskins, Matthew W. Daniels, Siyuan Huang, Advait Madhavan, Gina C. Adam, Nikolai B. Zhitenev, Jabez J. McClelland, Mark D. Stiles
Neuromorphic networks based on nanodevices, such as metal oxide memristors, phase change memories, and flash memory cells, have generated considerable interest for their increased energy efficiency and density in comparison to graphics processing units…