November 19, 2009

ACM Gordon Bell Prize for "The Cat is Out of the Bag"

Today, at the Supercomputing 2009 conference in Portland, Oregon, our paper "The Cat is Out of the Bag" was awarded the ACM Gordon Bell Prize.

The ACM Gordon Bell Prize has been awarded since 1987 to recognize outstanding achievement in high-performance computing. It is now administered by the Association for Computing Machinery (ACM), with financial support for the stipend provided by Gordon Bell, a pioneer in high-performance and parallel computing.

The purpose of the prize is to track the progress of leading-edge technical computing, namely simulation, modeling and large-scale data analysis as applied to science, engineering or other fields. In addition to the main ACM Gordon Bell Prize, the Bell Prize Committee may, at its discretion, grant a special award to recognize an achievement in a related area such as price/performance, usage of innovative techniques or non-traditional types of computation.

Gordon Bell Prize

From left to right: Steven K. Esser, Horst D. Simon, Dharmendra S. Modha,
Mateo Valero (Chair, Gordon Bell Prize Committee), Rajagopal Ananthanarayanan

November 18, 2009

The Cat is Out of the Bag and BlueMatter

Today at SC 09, the supercomputing conference in Portland, Oregon, IBM is announcing progress toward creating a computer system that simulates the way the brain works.  Two major milestones indicate the feasibility of building a cognitive computing chip: unprecedented advances in large-scale cortical simulation and a new algorithm that synthesizes neurological data.

Links 

The IBM announcement is here.  A story from the Associated Press is here.

The Cat is Out of the Bag is here.
Authors: Rajagopal Ananthanarayanan, Steven K. Esser, Horst D. Simon, and Dharmendra S. Modha

BlueMatter is here.
Authors: Anthony J. Sherbondy, Robert F. Dougherty (Stanford University), Rajagopal Ananthanarayanan, Dharmendra S. Modha, Brian A. Wandell (Stanford University) 

DARPA SyNAPSE BAA is here.

For information on the DARPA SyNAPSE Phase 0 award to IBM, please see here. IBM's announcement from last year is here.

Overview

Can you please summarize (in a picture)?

Announcement Overview

Can you please summarize (in words)?

The brain is fundamentally different from and complementary to today’s computers. It can exhibit awe-inspiring functions of sensation, perception, action, interaction, and cognition. It can deal with ambiguity and interact with real-world, complex environments in a context-dependent fashion. And yet, it consumes less power than a light bulb and occupies less space than a 2-liter bottle of soda.

Our long-term mission is to discover and demonstrate the algorithms of the brain and deliver cool, compact cognitive computers that complement today’s von Neumann computers and approach mammalian-scale intelligence. We are pursuing a combination of computational neuroscience, supercomputing, and nanotechnology to achieve this vision.

Towards this end, we are announcing two major milestones.

First, using the Dawn Blue Gene/P supercomputer at Lawrence Livermore National Laboratory, with 147,456 processors and 144 TB of main memory, we achieved a simulation with 1 billion spiking neurons and 10 trillion individual learning synapses. This is equivalent to 1,000 cognitive computing chips, each with 1 million neurons and 10 billion synapses, and exceeds the scale of the cat cerebral cortex. The simulation ran 100 to 1,000 times slower than real-time.
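
As a quick sanity check on these figures, here is a small back-of-the-envelope sketch in Python. The derived quantities (chip equivalence, memory available per synapse, wall-clock time per simulated second) follow only from the numbers quoted above; they are rough estimates, not values reported in the paper.

    # Back-of-the-envelope arithmetic relating the figures above (a sketch, not
    # numbers reported in the paper; it assumes synaptic state dominates memory).
    neurons = 1_000_000_000           # 1 billion spiking neurons
    synapses = 10_000_000_000_000     # 10 trillion individual learning synapses
    main_memory = 144 * 2**40         # 144 TB of Blue Gene/P main memory, in bytes

    chip_neurons = 1_000_000          # 1 million neurons per notional chip
    print("equivalent chips:", neurons // chip_neurons)           # -> 1000
    print("synapses per neuron:", synapses // neurons)            # -> 10000
    print("bytes available per synapse:",
          round(main_memory / synapses, 1))                       # ~16 bytes

    # "100 to 1,000 times slower than real-time" means one second of simulated
    # activity takes roughly 2 to 17 minutes of wall-clock time.
    print("minutes per simulated second:", 100 / 60, "to", 1000 / 60)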

Second, we have developed a new algorithm, BlueMatter, that exploits the Blue Gene supercomputing architecture to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.

These milestones will provide a unique workbench for exploring a vast number of hypotheses of the structure and computational dynamics of the brain, and further our quest of building a cool, compact cognitive computing chip.

Why do we need cognitive computing? How could cognitive computing help build a smarter planet?

As the amount of digital data that we create continues to grow massively and the world becomes more instrumented and interconnected, there is a need for new kinds of computing systems – imbued with a new intelligence that can spot hard-to-find patterns in vastly varied kinds of data, both digital and sensory; analyze and integrate information in real time in a context-dependent way; and deal with the ambiguity found in complex, real-world environments. Cognitive computing offers the promise of entirely new computing architectures, system designs and programming paradigms that will meet the needs of the instrumented and interconnected world of tomorrow.

Smarter Planet

What is the goal of the DARPA SyNAPSE project?

The goal of the DARPA SyNAPSE program is to create new electronics hardware and architecture that can understand, adapt and respond to an informative environment in ways that extend traditional computation to include fundamentally different capabilities found in biological brains.

Who is on your SyNAPSE team?

Stanford University: Brian A. Wandell, H.-S. Philip Wong

Cornell University: Rajit Manohar

Columbia University Medical Center: Stefano Fusi

University of Wisconsin-Madison: Giulio Tononi

University of California-Merced: Christopher Kello

IBM Research: Rajagopal Ananthanarayanan, Leland Chang, Daniel Friedman, Christoph Hagleitner, Bulent Kurdi, Chung Lam, Paul Maglio, Stuart Parkin, Bipin Rajendran, Raghavendra Singh 

The Cat is Out of the Bag


What advantages does Blue Gene provide to enable these simulations?

Mammalian-scale simulations place tremendous demands on the memory, processor and communication capabilities of any computer system. The Blue Gene architecture provides the best match for these resource requirements by supporting hundreds of terabytes of memory and hundreds of thousands of processors. This is augmented with outstanding communication capabilities in terms of bisection and point-to-point bandwidth, low communication latency, and very efficient broadcast and reduce networks, some of which have dedicated hardware resources, thus allowing truly parallel exploitation of the processors and their memory.
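
To see why both per-node memory and interconnect performance matter, here is a rough per-processor view in Python, assuming an even partition of the cat-scale model across the machine. The derived figures and the assumed firing rate are illustrative estimates, not numbers taken from the paper.

    # Per-processor resource sketch for the cat-scale run, assuming an even
    # partition of neurons and synapses (illustrative; not from the paper).
    processors = 147_456
    memory = 144 * 2**40                      # bytes of main memory
    neurons = 1_000_000_000
    synapses = 10_000_000_000_000

    print("memory per processor (GB):", round(memory / processors / 2**30, 2))
    print("neurons per processor:", neurons // processors)
    print("synapses per processor (millions):", synapses // processors // 10**6)

    # Communication: every spike must be delivered to the processors holding its
    # target synapses, so interconnect bandwidth and latency are critical.  With
    # an assumed average firing rate of a few Hz (an assumption for illustration),
    # the machine must route billions of spike messages per simulated second.
    assumed_rate_hz = 5.0
    print("spike events per simulated second (billions):",
          neurons * assumed_rate_hz / 1e9)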

What role do large-scale cortical simulations play in the SyNAPSE project?

Please note that the cat-scale cortical simulation is equivalent to 1,000 cool, compact cognitive computing chips, each with 1 million neurons and 10 billion synapses, and compares very favorably to DARPA’s published metrics.

The simulations in C2 will help guide the design of features in the SyNAPSE chip and the overall architecture of the hardware. C2 supports customizable components, in which hardware neurons and synapses can be used instead of the default biologically inspired phenomenological neurons and synapses. Thus, C2 enables a functional simulation of the hardware and helps choose between alternate hardware implementations.
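
As a minimal sketch of the "customizable components" idea (illustrative Python only; this is not C2's actual interface), the simulation loop can be written to be agnostic to the neuron model plugged in, so a default phenomenological model can be swapped for one that mimics a hardware neuron:

    # Illustrative sketch of swappable neuron models (not C2's actual API).
    class PhenomenologicalNeuron:
        """Default biologically inspired model: leaky integration of input."""
        def __init__(self):
            self.v = 0.0
        def step(self, input_current, dt=0.1):
            self.v += dt * (-0.1 * self.v + input_current)   # leak + drive
            if self.v >= 1.0:                                # threshold crossing
                self.v = 0.0
                return True                                  # spike
            return False

    class HardwareNeuron:
        """Stand-in for a model of a hardware neuron circuit (hypothetical)."""
        def __init__(self):
            self.v = 0                                       # e.g. integer state
        def step(self, input_current, dt=0.1):
            self.v += int(16 * input_current)                # quantized update
            if self.v >= 16:
                self.v = 0
                return True
            return False

    def simulate(neurons, inputs, steps=100):
        """The surrounding simulation loop is agnostic to the neuron model used."""
        spikes = 0
        for _ in range(steps):
            for n, i in zip(neurons, inputs):
                spikes += n.step(i)
        return spikes

    print(simulate([PhenomenologicalNeuron(), HardwareNeuron()], [0.2, 0.2]))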

Can you place the cat-scale simulation in the context of your past work?

For past work on rat-scale simulations, please see here and for mouse-scale simulations, please see here.

Mouse (40%)

December 2006:
Blue Gene/L at IBM Research - Almaden with 4,096 CPUs and 1 TB memory
40% mouse-scale with 8 million neurons, 50 billion synapses
10 times slower than real-time at 1 ms simulation resolution

Rat

April 2007:
Blue Gene/L at IBM Research - Watson with 32,768 CPUs and 8 TB memory
Rat-scale with 56 million neurons, 448 billion synapses
10 times slower than real-time at 1 ms simulation resolution

One Percent

March 2009:
Blue Gene/P on the KAUST-IBM WatsonShaheen machine with 32,768 CPUs and 32 TB memory
1% of human-scale with 200 million neurons, 2 trillion synapses
100-1,000 times slower than real-time at 0.1 ms simulation resolution

Cat

November 2009 (SC09, this announcement):
Blue Gene/P Dawn at LLNL with 147,456 CPUs and 144 TB memory
Cat-scale with 1 billion neurons, 10 trillion synapses
100-1,000 times slower than real-time at 0.1 ms simulation resolution
Neuroscience details: neuron dynamics, synapse dynamics, individual learning synapses, biologically realistic thalamocortical connectivity, axonal delays
Prediction: In 2019, using a supercomputer with 1 Exaflop/s and 4 PB of main memory, a near real-time human-scale simulation may become possible.

Summary: Progress in large-scale cortical simulations.  Each of the four charts above details recent achievements in the simulation of networks of single-compartment, phenomenological neurons with connectivity based on statistics derived from mammalian cortex.  Simulations were run on Blue Gene supercomputers with progressively larger amounts of main memory.  The number of synapses in the models varied from 5,485 to 10,000 synapses per neuron, reflecting construction from different sets of biological measurements.  First: Simulations on a Blue Gene/L supercomputer of a 40% mouse-scale cortical model with 8 million neurons and 52 billion synapses, employing 4,096 processors and 1 TB of main memory.  Second: Simulations on a Blue Gene/L supercomputer culminating in a rat-scale cortical model with 58 million neurons and 461 billion synapses, using 32,768 processors and 8 TB of main memory.  Third: Simulations on a Blue Gene/P supercomputer culminating in a one-percent human-scale cortical model with 200 million neurons and 1.97 trillion synapses, employing 32,768 processors and 32 TB of main memory.  Fourth: Simulations on a Blue Gene/P supercomputer culminating in a cat-scale cortical model with 1.62 billion neurons and 8.61 trillion synapses, using 147,456 processors and 144 TB of main memory.  The largest simulations performed on this machine correspond to approximately 4.5% of human cerebral cortex.

When will human-scale simulations become possible?

2019

The figure shows the progress that has been made in supercomputing since the early 1990s.  At each time point, the green line shows the 500th fastest supercomputer, the dark blue line the fastest supercomputer, and the light blue line the summed performance of the top 500 machines.  These lines show a clear trend, which we have extrapolated out 10 years.

The IBM team’s latest simulation results represent a model at about 4.5% of the scale of the human cerebral cortex, which was run at 1/83 of real time. The machine used provided 144 TB of memory and 0.5 PFlop/s.

Turning to the future, you can see that running human-scale cortical simulations will probably require 4 PB of memory, and running these simulations in real time will require over 1 EFlop/s.  If the current trends in supercomputing continue, it seems that human-scale simulations will be possible in the not-too-distant future.
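
This projection can be sanity-checked with simple arithmetic. In the Python sketch below, the annual growth factor is an assumption based on the roughly exponential Top500 trend, not a value read off the figure.

    # Rough check of the 2019 projection (the growth factor is an assumption
    # based on the roughly exponential Top500 trend, not read off the figure).
    import math

    flops_2009 = 0.5e15          # ~0.5 PFlop/s used for the cat-scale run
    flops_needed = 1.0e18        # ~1 EFlop/s for real-time human scale
    mem_2009 = 144e12            # 144 TB
    mem_needed = 4e15            # 4 PB

    annual_growth = 2.0          # assumed ~2x/year growth in aggregate performance

    years_flops = math.log(flops_needed / flops_2009) / math.log(annual_growth)
    years_mem = math.log(mem_needed / mem_2009) / math.log(annual_growth)

    print("years to 1 EFlop/s:", round(years_flops, 1))   # ~11 years -> ~2020
    print("years to 4 PB:", round(years_mem, 1))          # ~5 years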

What aspects of the brain does the model include?

The model reproduces a number of physiological and anatomical features of the mammalian brain.  The brain's key functional elements, neurons and the connections between them, called synapses, are simulated using biologically derived models.  The neuron models include such key functional features as input integration, spike generation and firing-rate adaptation, while the simulated synapses reproduce the time- and voltage-dependent dynamics of four major synaptic channel types found in cortex.  Furthermore, the synapses are plastic, meaning that the strength of connections between neurons can change according to certain rules, which many neuroscientists believe is crucial to learning and memory formation.
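
To make these ingredients concrete, here is a minimal Python sketch of a single-compartment spiking neuron with firing-rate adaptation, driving a second neuron through a plastic synapse updated by a simple spike-timing-dependent rule. It illustrates this class of phenomenological models only; it is not the actual neuron, synapse, or learning equations used in C2.

    # Minimal single-compartment spiking neuron with firing-rate adaptation, plus
    # a crude spike-timing-dependent plasticity rule.  Illustrative only; not the
    # exact neuron, synapse, or learning equations used in the C2 simulator.
    import random

    class Neuron:
        def __init__(self):
            self.v = 0.0            # membrane potential (arbitrary units)
            self.adapt = 0.0        # adaptation raises the effective threshold
            self.last_spike = None
        def step(self, current, t, dt=0.1):
            self.v += dt * (-0.05 * self.v + current)   # input integration + leak
            self.adapt *= 0.99                          # adaptation decays slowly
            if self.v >= 1.0 + self.adapt:              # spike generation
                self.v = 0.0
                self.adapt += 0.2                       # firing-rate adaptation
                self.last_spike = t
                return True
            return False

    def stdp(weight, t_pre, t_post, lr=0.01):
        """Strengthen if pre last fired before post, weaken otherwise."""
        if t_pre is None or t_post is None:
            return weight
        return min(1.0, weight + lr) if t_pre < t_post else max(0.0, weight - lr)

    pre, post, w = Neuron(), Neuron(), 0.5
    for t in range(2000):
        pre_spiked = pre.step(0.08 + 0.04 * random.random(), t)
        drive = 0.06 + 0.04 * random.random() + (w if pre_spiked else 0.0)
        post_spiked = post.step(drive, t)
        if pre_spiked or post_spiked:                   # plasticity on spike events
            w = stdp(w, pre.last_spike, post.last_spike)
    print("final synaptic weight:", round(w, 3))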

At an anatomical level, the model includes sections of cortex, a dense body of connected neurons where much of the brain's high level processing occurs, as well as the thalamus, an important relay center that mediates communication to and from cortex.  Much of the connectivity within the model follows a statistical map derived from the most detailed study to date of the circuitry within the cat cerebral cortex.
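
The statistical-map idea can be illustrated with a tiny Python sketch. The population sizes and connection probabilities below are made up for illustration; the model itself uses statistics derived from detailed measurements of cat thalamocortical circuitry.

    # Tiny illustration of building connectivity from a statistical map
    # (hypothetical populations and probabilities; not the map used in the model).
    import random

    populations = {"cortex_L4": 100, "cortex_L6": 80, "thalamus": 40}
    connection_prob = {              # P(connection) from source pop -> target pop
        ("thalamus", "cortex_L4"): 0.10,
        ("cortex_L4", "cortex_L6"): 0.05,
        ("cortex_L6", "thalamus"): 0.03,
    }

    synapses = []
    for (src, dst), p in connection_prob.items():
        for i in range(populations[src]):
            for j in range(populations[dst]):
                if random.random() < p:
                    synapses.append((src, i, dst, j))

    print("synapses drawn:", len(synapses))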

What do the simulations demonstrate?

We are able to observe activity in our model at many scales, ranging from global electrical activity levels, to activity levels in specific populations, to topographic activity dynamics, to individual neuronal membrane potentials. In these measurements, we have observed the model reproduce activity in cortex measured by neuroscientists using corresponding techniques: electroencephalography, local field potential recordings, optical imaging with voltage-sensitive dyes, and intracellular recordings.  Specifically, we were able to deliver a stimulus to the model and then watch as it propagated within and between different populations of neurons.  We found that this propagation showed a spatiotemporal pattern remarkably similar to what has been observed in experiments with real brains.  In other simulations, we also observed oscillations between active and quiet periods, as is often observed in the brain during sleep or quiet waking.  In all our simulations, we are able to simultaneously record from billions of individual model components, compared to cutting-edge neuroscience techniques that might allow simultaneous recording from a few hundred brain regions, thus providing us with an unprecedented picture of circuit dynamics.

Can I see the simulator in action?

Yes, if you can download a 150 MB movie. :-)

The following is a frame from the movie.  An earlier frame showing the input is here and a later frame is here. To understand the figure and the movie, it is helpful if you study Figure 1 in the paper.

IBM Logo Frame 

Caption: Like the surface of a still lake reacting to the impact of a pebble, the neurons in IBM's cortical simulator C2 respond to stimuli. Resembling a travelling wave, the activity propagates through different cortical layers and cortical regions. The simulator is an indispensable tool that enables researchers to bring static structural brain networks to life, to probe the mystery of cognition, and to pave the path to cool, compact cognitive computing systems.

Please note that the simulator is demonstrating how information percolates and propagates. It is NOT learning the IBM logo.

How close is the model to producing high level cognitive function?

Please note that the rat(-scale simulation) does not sniff cheese, and the cat(-scale simulation) does not chase the rat. :-) Up to this point, our efforts have primarily focused on developing the simulator as a tool of scientific discovery that incorporates many neuroscientific details to produce large-scale thalamocortical simulations as a means of studying behavior and dynamics within the brain.  While diligent researchers have made tremendous strides in improving our understanding of the brain over the past 100 years, neuroscience has not yet reached the point where it can provide us with a recipe for how to wire up a cognitive system.  Our hope is that by incorporating many of the ingredients that neuroscientists think may be important to cognition in the brain, such as a general statistical connectivity pattern and plastic synapses, we may be able to use the model as a tool to help understand how the brain produces cognition.

What do you see on the horizon for this work in thalamocortical simulations?

We are interested in expanding our model in both scale and in the details that it incorporates.  In terms of scale, as the amount of memory available in cutting edge supercomputers continues to increase, we foresee that simulations at the scale of monkey cerebral cortex and eventually the human cerebral cortex will soon be within reach.  As supercomputing speed increases, we also see the speed of our simulations increasing to approach real-time.

In terms of details in our simulations, we are currently working on differentiating our cortical region into specific areas (such as primary visual cortex or motor cortex) and providing the long-range connections that form the circuitry between these areas in the mammalian brain.  For this work, we are drawing from many studies describing the structure and input/output patterns of these areas as well as a study recently performed within IBM that collates a very large number of individual measurements of white matter, the substrate of long-range connectivity within the brain.

How will this affect neuroscience?

Within neuroscience, there is a rich history of using brain simulations as a means of developing models based on experimental observations, testing those models, and then using them to form predictions that can be tested through further experiments.  A major limitation of such efforts is computational power, forcing models to make major sacrifices in terms of detail or scale.  Through our work, we have developed and demonstrated a tool that enables simulations at very large scales on cutting-edge supercomputers.  We believe that as this tool continues to grow, it will serve as a crucial test bed for hypotheses about brain function, through simulations at a scale and level of detail never before possible.

BlueMatter

What does BlueMatter mean?

BlueMatter is a highly parallelized algorithm for identifying white matter projectomes, written to take advantage of the Blue Gene supercomputing architecture; hence the name BlueMatter.

Can you please provide more details on BlueMatter?

Our software, BlueMatter, is able to provide unique visualization and measurement of the long-range circuitry (interior white matter) that allows geographically separated regions of the brain to communicate.  The labels or colors of the fibers represent divisions of these fibrous networks that we are measuring.  The colors and names are as follows:

Red - Interhemispheric fibers projecting between the corpus callosum and frontal cortex.
Green - Interhemispheric fibers projecting between primary visual cortex and the corpus callosum.
Yellow - Interhemispheric fibers projecting from the corpus callosum, other than those labeled red or green.
Brown - Fibers of the superior longitudinal fasciculus, connecting regions critical for language processing.
Orange - Fibers of the inferior longitudinal fasciculus and uncinate fasciculus, connecting regions to cortex responsible for memory.
Purple - Projections between the parietal lobe and lateral cortex.
Blue - Fibers connecting local regions of the frontal cortex.

Blue Matter

 

Blue Matter 2

High-resolution version (2MB)

The figure displays results from BlueMatter, a parallel algorithm for white matter projection measurement.  Recent advances in diffusion-weighted magnetic resonance imaging (DW-MRI) have made it possible, for the first time, to non-invasively measure the human white matter network across the entire brain. DW-MRI acquires an aggregate description of the diffusion of water molecules, which act as microscopic probes of the dense packing of axon bundles within the white matter.  Understanding the architecture of all white matter projections (the projectome) may be crucial for understanding brain function, and has already led to fundamental discoveries in normal and pathological brains.  The figure displays a view from the top of the brain (top) and a view from the left hemisphere (bottom).  The cortical surface is shown (gray) as well as the brain stem (pink), in context with a subset of BlueMatter’s projectome estimate coursing through the core of the white matter in the left hemisphere.  Leveraging the Blue Gene/L supercomputing architecture, BlueMatter creates a massive database of 180 billion candidate pathways using multiple DW-MRI tracing algorithms, and then employs a global optimization algorithm to select a subset of these candidates as the projectome. The estimated projectome accounts for 72 million projections per square centimeter of cortex and is the highest-resolution projectome of the human brain to date.

What role will BlueMatter play in the SyNAPSE project?

Long term, we hope that our work will lead to insights on how to wire together a system of cognitive computing chips. Short term, we are incorporating data from BlueMatter into our cortical simulations.

What makes all the computational power necessary?

Because of the relatively low resolution of the data compared with the white matter tissue, there are many possible sets of curves one may draw in order to estimate the projectome and compare it with a global error metric, as we have done.  Searching this space leads to a combinatorial explosion of possibilities.  This has led many researchers to focus on individual tract estimation at the cost of ignoring global constraints, such as the volume consumption of the tracts.  Rather than simplify our model, we have addressed the computational challenge with an algorithm designed specifically to leverage the Blue Gene supercomputing architecture.
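
As a minimal sketch of what selecting a projectome under a global constraint might look like, the Python below scores hypothetical candidate pathways and greedily keeps the best ones within a volume budget. The data and the greedy heuristic are purely illustrative; BlueMatter's actual global optimization and error metric are described in the paper.

    # Illustrative greedy selection of candidate pathways under a global volume
    # budget (hypothetical data and scoring; not BlueMatter's actual algorithm).
    import random

    random.seed(0)
    candidates = [
        {"id": i,
         "support": random.random(),        # how well the pathway explains the data
         "volume": random.uniform(0.5, 2)}  # white matter volume it would consume
        for i in range(10_000)
    ]
    volume_budget = 2_000.0                 # total white matter volume available

    # Greedy heuristic: take pathways with the best support per unit volume until
    # the global volume constraint is exhausted.
    projectome, used = [], 0.0
    for c in sorted(candidates, key=lambda c: c["support"] / c["volume"], reverse=True):
        if used + c["volume"] <= volume_budget:
            projectome.append(c["id"])
            used += c["volume"]

    print("selected pathways:", len(projectome), "volume used:", round(used, 1))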

What are the next steps?

We are also interested in using our technique to make measurements of the projectome and of communication between brain areas, generating hypotheses about brain function that can be validated with behavioral results or perhaps functional imaging, and integrated with large-scale simulations.

Future

How will your current project to design a computer similar to the human brain change the everyday computing experience?

While we have algorithms and computers to deal with structured data (for example, age, salary, etc.) and semi-structured data (for example, text and web pages), no mechanisms exist that parallel the brain’s uncanny ability to act in a context-dependent fashion while integrating ambiguous information across different senses (for example, sight, hearing, touch, taste, and smell) and coordinating multiple motor modalities. Success of cognitive computing will allow us to mine the boundary between digital and physical worlds where raw sensory information abounds. Imagine, for example, instrumenting the world’s oceans with temperature, pressure, wave height, humidity and turbidity sensors, and imagine streaming this information in real-time to a cognitive computer that may be able to detect spatiotemporal correlations, much like we can pick out a face in a crowd. We think that cognitive computing has the ability to profoundly transform the world and bring about entirely new computing architectures and, possibly even, industries.

What is the ultimate goal?

Cognitive computing seeks to engineer the mind by reverse engineering the brain.  The mind arises from the brain, which is made up of billions of neurons linked by an internet-like network. An emerging discipline, cognitive computing is about building the mind by understanding the brain. It synthesizes neuroscience, computer science, psychology, philosophy, and mathematics to understand and mechanize mental processes.  Cognitive computing will lead to a universal computing platform that can handle a wide variety of spatio-temporally varying sensor streams.

Brain Chip

Can I help?

We have a number of job openings; please see here.

Credits

This blog entry was written in collaboration with my co-authors Rajagopal Ananthanarayanan, Robert F. Dougherty, Steven K. Esser, Anthony J. Sherbondy, Horst Simon, and Brian A. Wandell.

November 04, 2009

Lord Sainsbury of Turville

Today, Lord David Sainsbury of Turville visited IBM Research - Almaden.  From left: Gregory S. Corrado, Steven K. Esser, Dharmendra S. Modha, Lord Sainsbury, Anthony J. Sherbondy, Sarah Caddick, and Rajagopal Ananthanarayanan.

Lord Sainsbury at Almaden