April 03, 2014

IBM Fellow

I was named an "IBM Fellow". See the official IBM press release here. Through the 51-year history of the program, only 257 Fellows have been appointed. My favorite Fellows are John Backus (FORTRAN), Nathaniel Rochester (first cortical simulations), John Cocke (RISC), and Paul Coteus (Blue Gene). Here is an article by Steve Hamm on "IBM Fellows: Transforming Computing, Society and IBM".

I will also assume the title of "Chief Scientist - Brain-inspired Computing" in addition to my existing role as "Senior Manager - Cognitive Computing".

I am grateful to my colleagues and collaborators; my managers; my mentors; DARPA; my alma maters, the University of California at San Diego, and the Indian Institute of Technology, Bombay; IBM; IBM Research; the lab here in Almaden; and, of course, my friends and family.

February 14, 2014

Summer Internships

Internship

January 04, 2014

Brainlike Computers, Learning From Experience

Last Sunday, The New York Times carried the following article by John Markoff on its front page.

December 16, 2013

Thinking In Silicon

Today, MIT Technology Review featured an article by Tom Simonite on the SyNAPSE project.

More Cognitive Computing / SyNAPSE Jobs

Software Build and Release Engineer

December 06, 2013

Cognitive Computing / SyNAPSE Jobs

Software

Hardware

October 17, 2013

When Debate Stalls, Try Your Paintbrush

Over the past weekend, the New York Times published a guest essay by me in its Preoccupations column highlighting some of the management lessons in running IBM's SyNAPSE project.

September 10, 2013

Best Paper at IDEMi'13

Our paper "Cognitive Computing Commercialization: Boundary Objects for Communication" was selected as the Best Paper at the 3rd International Conference on Integration of Design, Engineering and Management for Innovation (IDEMi'13).

August 08, 2013

DARPA SyNAPSE Phase 3 & A New Foundation to Program SyNAPSE Chips

Latest Results:

Today, IBM is announcing DARPA SyNAPSE Phase 3 that builds on Phase 0, Phase 1, and Phase 2. See here for my perspective on the significance of the results being announced. See here for a video explanation.

The latest results are described in four papers. The first three papers were presented this week at the International Joint Conference on Neural Networks (IJCNN), which is sponsored jointly by the International Neural Network Society and the IEEE Computational Intelligence Society. The fourth paper will be presented at the 3rd International Conference on Integration of Design, Engineering, and Management for Innovation in early September. Brief paper summaries are below:

2013: Developed a simple, digital, reconfigurable, versatile spiking neuron model that supports one-to-one equivalence between hardware and simulation and is implementable using only 1272 ASIC gates.

2013: Developed a new programming paradigm that permits construction of complex cognitive algorithms and applications while being efficient for our cognitive computing chip architecture and effective for programmer productivity.

2013: Developed a set of abstractions, algorithms, and applications that are natively efficient for our cognitive computing architecture. These applications span in scale from 10² to 10⁷ synapses.

2013: Envisioned a set of industrial design models to communicate SyNAPSE's value for science, technology, government, business and society.

Vertically-Integrated SyNAPSE Technology Stack
(A Clickable Map of Papers)


Key Past Milestones (in chronological order):

2007: Developed a massively parallel cortical simulator and demonstrated simulation at unprecedented scale of ~4×10¹¹ synapses on Blue Gene/L.

2009: Developed a massively parallel cortical simulator and demonstrated simulation at unprecedented scale of 10¹³ synapses on Blue Gene/P.

2010: Derived, analyzed, and visualized the largest long-distance wiring diagram in the Macaque brain.

2011: Presented the vision of bringing together neuroscience, supercomputing, and nanotechnology to discover, demonstrate, and deliver the brain's core algorithms.

2011: Demonstrated a neurosynaptic core comprising 256 neurons and 256×256 binary synapses with on-chip learning based on spike-timing dependent plasticity.

2011: Demonstrated a key building block of a novel architecture, namely, a neurosynaptic core, with 256 digital integrate-and-fire neurons and a 1024×256 bit SRAM crossbar memory for synapses using IBM 45nm SOI process. For more design details, see here.

2012: Demonstrated several applications of the neurosynaptic core: (i) a robot driving in a virtual environment, (ii) the classic game of Pong, (iii) visual digit recognition and (iv) an autoassociative memory.

2012: Demonstrated a biomimetic system that captures essential functional properties of the glomerular layer of the mammalian olfactory bulb.

2012: Developed a new non-von Neumann, modular, parallel, distributed, fault-tolerant, event-driven, scalable architecture inspired by the function, low power, and compact volume of the organic brain. The architecture comprises a scalable network of configurable neurosynaptic cores.

2012: Developed a new multi-threaded, massively parallel functional simulator for the new architecture and a parallel compiler.

2012: Demonstrated the simulation of the new architecture at unprecedented scale of 10¹⁴ synapses on Blue Gene/Q.

2013: Visualized Connectivity of a Cognitive Computer Based on the Macaque Brain.


SyNAPSE Synopsis (Restated in a logical order, and derived from underlying technical papers):

We now live in an instrumented, interconnected, and, increasingly, intelligent planet.

A Smarter Planet

As a result, we are surrounded by real-time, noisy, analog, low-precision, high-dimensional, parallel, spatio-temporal, multimodal Big Data. We need cognitive systems that understand their environment, can deal with ambiguity, and act in real-time within context. When confronted with these challenges, traditional computers become constrained by power, size, and speed. (See here.)

Our goal is to develop a novel cognitive computing architecture -- inspired by the function, low power, and compact volume of the organic brain -- that captures the essence of neuroscience within the limits of current CMOS technology. Needless to say, in the long term, by using the new architecture as a beacon, we seek to explore new technology frontiers. It is important to note that we seek to complement today's computers.

We demonstrated a neurosynaptic core in IBM's 45nm SOI CMOS technology. Each core integrates computation (neurons), memory (synapses), and intra-core communication (axons), breaking the von Neumann bottleneck. Each core is event-driven (as opposed to clock-driven), reconfigurable, compact, and consumes ultra-low power. (See here and here.)

Neurosynaptic Core

We have envisioned a new cognitive computing architecture that consists of an interconnected and communicating network of extremely large numbers of neurosynaptic cores. Cores are distributed modules that operate in parallel and send unidirectional messages to one another; that is, neurons on a source core send spikes to axons on a target core. One can think of cores as "gray matter" canonical cortical microcircuits, and inter-core connectivity as long-distance "white matter". Like the cerebral cortex, the architecture is highly scalable in terms of number of cores. Modularity of the architecture means that while capability increases with scale, design complexity does not. (See here.)

Cognitive Computing Chip Architecture
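To make the message-passing concrete, here is a minimal sketch in Python of cores exchanging spike events over a routing table. The dimensions, thresholds, and wiring below are invented for illustration; the real core pairs a 1024×256 SRAM crossbar with 256 digital neurons.

```python
from collections import deque

class Core:
    """A toy neurosynaptic core: a binary axon-by-neuron crossbar plus
    integrate-and-fire neurons (illustrative dimensions only)."""
    def __init__(self, n_axons=4, n_neurons=4, threshold=2):
        # synapses[a][n] == 1 wires axon a to neuron n
        self.synapses = [[0] * n_neurons for _ in range(n_axons)]
        self.potential = [0] * n_neurons
        self.threshold = threshold

    def receive(self, axon):
        """Integrate one incoming spike event arriving on an axon."""
        for n, w in enumerate(self.synapses[axon]):
            self.potential[n] += w

    def tick(self):
        """Return the neurons that crossed threshold, resetting each."""
        fired = [n for n, v in enumerate(self.potential) if v >= self.threshold]
        for n in fired:
            self.potential[n] = 0
        return fired

# Two cores plus "white matter": each (core, neuron) targets a (core, axon).
cores = {"A": Core(), "B": Core()}
routes = {("A", 0): ("B", 1)}      # neuron 0 on core A spikes into axon 1 on B
cores["A"].synapses[0][0] = 1      # axon 0 -> neuron 0 within core A
cores["B"].synapses[1][2] = 1      # axon 1 -> neuron 2 within core B

# Event-driven loop: deliver pending spikes, advance every core one tick,
# then route newly fired neurons to their target axons on other cores.
events = deque([("A", 0), ("A", 0)])   # inject two spikes on axon 0 of core A
for _ in range(3):
    while events:
        core, axon = events.popleft()
        cores[core].receive(axon)
    for name, core in cores.items():
        for neuron in core.tick():
            if (name, neuron) in routes:
                events.append(routes[(name, neuron)])
```

Note that nothing global is clocked against a shared memory: each core touches only its own state, and the only coupling between cores is the unidirectional spike events, which is what lets the network grow without growing design complexity.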

By exploiting the largest long-distance wiring diagram in the Macaque brain, we have visualized the new architecture and used this data to inform the physical design of the architecture. (See here and here.)

Cover of Science

We have developed a simple, digital, reconfigurable, versatile spiking neuron model suitable for hardware implementation. The neuron model supports a wide variety of computational functions and neural codes and can qualitatively replicate the 20 biologically-relevant behaviors of a dynamical neuron model. (See here.)

Neuron Behaviors
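For illustration, a deterministic digital neuron of this general flavor can be sketched in a few lines of Python. The parameters and update rule here are invented for the example, not those of the published model; the point is that with integer state and a fixed rule, hardware and software runs agree exactly.

```python
def digital_lif(spikes, weight=10, leak=-1, threshold=100):
    """Deterministic integer leaky integrate-and-fire neuron (toy model).

    spikes: per-tick input spike counts. All state is integer, so a
    software run reproduces a hardware run bit-for-bit.
    Returns the list of ticks on which the neuron fired.
    """
    v = 0
    fired = []
    for t, s in enumerate(spikes):
        v += weight * s + leak        # integrate weighted input, apply leak
        if v >= threshold:
            fired.append(t)           # spike...
            v = 0                     # ...and reset
        v = max(v, 0)                 # clamp membrane potential at zero
    return fired

# A steady input of one spike per tick produces regular firing:
print(digital_lif([1] * 30))          # → [11, 23]
```

Varying the weight, leak, and threshold (and, in the real model, reset and stochastic options) is what lets one reconfigurable circuit cover many neural codes and behaviors.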

Unlike analog computation, our architecture is completely deterministic by design, and hence, functionally equivalent to software models. (See here and here.)

By exploiting hardware-software one-to-one equivalence, we have developed a multi-threaded, massively parallel functional simulator for the new architecture. We have demonstrated the simulation of the new architecture at unprecedented scale of 10¹⁴ synapses. (See here and here.)

We have developed a non-von Neumann architecture. Adapting programming languages developed for the von Neumann architecture to it is like putting a square peg in a round hole. So, we have developed a new programming paradigm with compositional semantics, namely, Corelet Programming, that permits construction of complex cognitive algorithms and applications while being efficient for our cognitive computing chip architecture and effective for programmer productivity. Corelet Programming is an entirely new way of thinking.

Corelet Programming

We have developed not just a new language, but a new software ecosystem -- a new foundation to program cognitive computing chips. We have developed a library that acts as an ever-growing repository of reusable corelets from which programmers compose new corelets. As of this writing, the library contains over 150 corelets. We have developed an end-to-end environment that supports all aspects of the SyNAPSE programming cycle spanning design, development, debugging, and deployment. (See here.)

End-to-End
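The Corelet language itself is not shown here, but its compositional idea can be caricatured in a short Python sketch: a corelet hides its internal cores and exposes only named connectors, and composing two corelets yields another corelet. All names and behaviors below are invented for the example.

```python
class Corelet:
    """Hypothetical analogue of a corelet: internal structure is hidden,
    and only named input and output connectors are exposed."""
    def __init__(self, name, inputs, outputs, process):
        self.name, self.inputs, self.outputs = name, inputs, outputs
        self.process = process        # internal behavior, abstracted away

    def __call__(self, **signals):
        return self.process(**signals)

def compose(first, second):
    """Wire first's outputs into second's inputs. The composite is itself
    a corelet, so a library of corelets can be built up recursively."""
    def chained(**signals):
        return second(**first(**signals))
    return Corelet(f"{first.name}->{second.name}",
                   first.inputs, second.outputs, chained)

# Toy corelets: a feature extractor feeding a classifier (names invented).
extract = Corelet("extract", ["pixels"], ["features"],
                  lambda pixels: {"features": sum(pixels)})
classify = Corelet("classify", ["features"], ["label"],
                   lambda features: {"label": int(features > 10)})

pipeline = compose(extract, classify)
print(pipeline(pixels=[3, 4, 5]))     # → {'label': 1}
```

The payoff of this style is that a programmer reasons only about connectors, never about the thousands of cores inside a composed corelet.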

By using the environment, we have developed a set of abstractions, algorithms, and applications that are natively efficient for our architecture. We have implemented a vast array of cognitive algorithms and applications with rich feedforward, feedback, lateral, and recurrent connectivity. Specifically, we implemented algorithms that include convolution networks, spectral content estimators, liquid state machines, restricted Boltzmann machines, hidden Markov models, looming detection, temporal pattern matching, and various classifiers. Composing this suite of algorithms, we have demonstrated several applications that include speaker recognition, music composer recognition, digit recognition, sequence prediction, collision avoidance, optical flow, and eye detection. (See here.)

Speaker Recognition System

We have envisioned a set of industrial design models to communicate SyNAPSE's value for science, technology, government, business and society. (See here for the paper and here for a video.) The legend for the figure below is: (a) BrainCube, (b) Sensor Leaves, (c) Conversation Flower, (d) Jellyfish, (e) Tumbleweed, (f) Vision Cubes, (g) Composable Cubes, (h) Vision Assistive, (i) Home Health Wand and Pulmonary Monitor, (j) Build-a-Brain.

Industrial Design Models

To promulgate the technology, we have developed a novel teaching curriculum that spans the architecture, neuron specification, chip simulator, programming language, application library and prototype design models. This will enable a range of researchers, academics, developers, partners, and customers to learn and build on the technology.

In summary, SyNAPSE is a new synthesis of silicon and software inspired by the brain.


IBM SyNAPSE Phase 3 Team:

IBM Research - Almaden

Almaden Team

Seated, Left-to-right: Arnon Amir, David Berg, Andrew Cassidy, Dharmendra S. Modha, Rodrigo Alvarez-Icaza, Sue Gilson, Alexander Andreopoulos
Standing, Left-to-right: Jeffrey Kusnitz, Norm Pass, John Arthur, Rathinakumar Appuswamy, Ben Shaw, Brian Taba, Bryan Jackson, David Peyton, Emmett McQuinn, Pallab Datta, Bill Risk, Wendy Belluomini, Myron Flickner, Nitin Parekh, Steven Esser, Paul Merolla, Davis Barch
Photo Credit: Hita Bambhania-Modha, photo free to use without restrictions!
Link to high-resolution image is here.


Filipp Akopyan (pictured at IBM T. J. Watson Research Center)


Cornell University


Rajit Manohar, Nabil Imam


iniLabs


Luca Longinotti, Vicente Villanueva, Sim Bamford, Tobi Delbruck, Shih-Chii Liu, Rodney Douglas


IBM Research - Austin

Austin Team

Gi-Joon Nam, JB Kuang, Jun Sawada, Ivan Vo, Tuyet Y Nguyen, Zhuo Li, Don Nguyen


Chuck Alpert, Fadi H. Gebara


IBM Research - India


Tapan Nayak, Raghavendra Singh


IBM Research - Tokyo


Yutaka Nakamura


IBM T. J. Watson Research Center


Jae-sun Seo, Bernard Brezzo, Mohit Kapur, Daniel Friedman, Charles Haymes, Sameh Asaad, Roger Moussalli


Christian Baks, Ralph Bellofatto


IBM Systems and Technology Group - East Fishkill


John Barth, Herbert Ho, Subu Iyer, Jonathan Faltermeier


IBM Systems and Technology Group - Hardware Experience Design


Aaron Cox, Paula Besterman, Jason Minyard, Camillo Sassano

The postings on this site are my own and don’t necessarily represent IBM’s positions, strategies or opinions.