
October 26, 2006

Neural Theory of Language

Today, we had a wonderful time with Dr. Srini Narayanan, who leads the AI Group at ICSI and is an Adjunct Associate Professor at UC Berkeley.

Here is the abstract of the talk:

The UCB/ICSI NTL project has been developing an explicitly neural theory of language. The core premise is that language is largely determined by the computational character of neural networks, the structure of our brains, and our interactions with the physical and social environment. Work within the NTL project coupled with a variety of converging evidence suggests that understanding involves embodied enactment or "simulation semantics".

Simulation semantics hypothesizes that the mind "simulates" the external world while functioning in it. The "simulation" takes sensory input about the state of the world (whether linguistic or perceptual) together with general knowledge and makes new inferences. Monitoring the state of the external world, drawing inferences, and acting jointly constitute a dynamic, ongoing, interactive process.

We report on a neurally plausible, computational realization of the simulation semantics hypothesis, and on preliminary results from behavioral and fMRI imaging experiments testing its biological predictions.

The core ideas of NTL are captured in a recent book, From Molecule to Metaphor: A Neural Theory of Language, by Professor Jerome Feldman.

October 21, 2006

Action Potential

At SFN 2006, I learned about an interesting blog, Action Potential, run by the editors of Nature Neuroscience.

October 16, 2006

Society for Neuroscience 2006: Oct 13-16

I am currently at the SFN 2006 Conference in Atlanta, where more than 25,000 other scientists are participating. The conference has featured talks (held in a ballroom that can hold 5,000 people), regular presentations (in a number of parallel sessions), posters (in a room perhaps the size of 3-4 football stadiums), symposia, mini-symposia, exhibit booths, satellite events, and a variety of scientific socials. The breadth of the topics pursued and the scale of the conference are, to say the least, mind-boggling.

I have met a number of old friends and colleagues, made new connections, and obtained a number of valuable insights for future work.

The conference is definitely dominated by scientists with a bottom-up approach and is lacking in theorists who put the whole picture together.

In the highlight event of the conference, noted architect Frank Gehry advised us to "search for your own character" but "within the grounded constraints of reality." He explained how he started his career by going back 300 million years to the structure of primitive fish!

In his beautifully articulated acceptance speech for the Peter Gruber Prize, Professor Masao Ito (who is famous for explaining cerebellar functioning and is credited with our understanding of long-term synaptic depression) said that his dream is to eventually explain the "unconscious domain of mind."

In a featured lecture, Professor J. A. Movshon described how he and his colleagues uncovered the properties of area MT (V5).

In a satellite event, namely, Advances in Computational Motor Control V, Professor Chris Atkeson from CMU showed a number of very impressive robotic demos. His work is based entirely on instance-based learning, one-shot learning, or motor-tape theory, and he openly admitted to having diverged from neuroscience. A question was raised as to whether the approach is "just a bag of tricks". At the same event, Professor Emo Todorov explained his theory of compositionality, whereby simple control systems such as linear quadratic regulators and eigen-controllers can be composed while maintaining analytical tractability as long as they share the underlying dynamics (see the sketch below). Dr. Jeff McKinstry showed a demo of a brain-based device that learns to avoid obstacles using a model of the cerebellum that learns to replace reflexes with a predictive controller, namely, preflexes.
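
On the compositionality point above: one way this kind of result is often formalized (my own gloss, not necessarily the formulation presented in the talk) is via "linearly solvable" control problems. Writing the cost-to-go v through the desirability z = e^{-v}, the Bellman equation becomes linear in z, so optimal solutions of component problems that share the passive dynamics p and running cost q, and differ only in their terminal costs g_i, superpose:

$$
z_t(x) = e^{-q(x)}\,\mathbb{E}_{x' \sim p(\cdot\mid x)}\big[z_{t+1}(x')\big],
\qquad
e^{-g(x)} = \sum_i w_i\, e^{-g_i(x)} \;\Longrightarrow\; z_t(x) = \sum_i w_i\, z_{i,t}(x),
$$

with the boundary condition z_T(x) = e^{-g(x)}. Linear quadratic problems are a special case in which the z_i are Gaussian-shaped, so a weighted mixture of LQR solutions again yields an analytically tractable composite controller.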

In a symposium on Internal Models for Sensorimotor Integration, Professor Reza Shadmehr described how to model "living with a changing body": for example, fatigue tires us but fades relatively quickly, whereas disease changes us over a much longer period. He explained several models that use two learning rules: one that learns and forgets quickly and one that learns and forgets slowly (a sketch of such a two-rate model appears below). In a fascinating talk, Professor Andrea M. Green described how a monkey (whose body is made rigid) can distinguish between tilts and translations.
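
To make the two-rate idea concrete, here is a minimal Python sketch of such a model (my own gloss, with illustrative parameter values rather than ones fitted to data): a fast process with a high learning rate but poor retention, and a slow process with a low learning rate but good retention, both driven by the same error.

```python
import numpy as np

def two_state_adaptation(perturbation, A_fast=0.80, B_fast=0.20,
                         A_slow=0.99, B_slow=0.02):
    """Simulate a fast/slow two-state error-driven adaptation model.

    On each trial, each state decays by its retention factor A and then
    learns from the residual error with its learning rate B. The fast
    state learns quickly but forgets quickly; the slow state is the
    opposite. (Parameter values here are illustrative only.)
    """
    x_fast, x_slow = 0.0, 0.0
    net_output = []
    for p in perturbation:
        error = p - (x_fast + x_slow)            # what adaptation has not yet cancelled
        x_fast = A_fast * x_fast + B_fast * error
        x_slow = A_slow * x_slow + B_slow * error
        net_output.append(x_fast + x_slow)
    return np.array(net_output)

# Example: 200 trials of a constant perturbation. Early learning is carried
# mostly by the fast state; late retention is carried mostly by the slow state.
adaptation = two_state_adaptation(np.ones(200))
```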

It is impossible to succinctly summarize the entire technical experience, but I met people who study saccades, grasping and reaching, neocortical circuits, the cerebellar microcomplex, spiking neurons, firing-rate neurons, neuro-anatomy, neuro-informatics, robotics, rats in various mazes, cognitive maps of memory, reinforcement learning, the hippocampus, the striatum, neuro-prosthetics, place cells in the hippocampus and related grid cells in the entorhinal cortex, the thalamus, and many other topics.

A tasty tidbit: umami has now been recognized as the fifth basic taste.

My prediction is that SFN 2007 and beyond will start attracting a lot more (cognitive) computer scientists...at least they should!

My brain and body are tired (the distances are too large, there are too many presentations/posters, the food is terrible, the temperature is too cold, ventilation is not very good, it is hard to navigate through crowds, there is no California sunshine), but my mind is all fired up and intensely happy :) So far, the meeting has been exceptionally energizing and productive.

October 09, 2006

Jeff Hawkins: "An Enterprising Approach to Brain Science"

This week's issue of Science (6 October 2006, Vol. 314, No. 5796, pp. 76-77) provides a wonderful account of Jeff Hawkins: "Mobile computing pioneer Jeff Hawkins has had a lifelong fascination with brains. Now he’s trying to model the human cerebral cortex—and he’s created a software company based on his ideas".

You can read about Jeff Hawkins' start-up Numenta here.  

October 05, 2006

Robot whiskers sense shapes and textures

Joseph H. Solomon and Mitra J. Hartmann reported the development of robotic whiskers in Nature (vol. 443, p. 525, October 2006):

"Several species of terrestrial and marine mammals with whiskers (vibrissae) use them to sense and navigate in their environment — for example, rats use their whiskers to discern the features of objects, and seals rely on theirs to track the hydrodynamic trails of their prey. Here we show that the bending moment — sometimes referred to as torque — at the whisker base can be used to generate three-dimensional spatial representations of the environment, and we use this principle to construct robotic whisker arrays that extract precise information about object shape and fluid flow."

October 02, 2006

The Swartz Foundation for Computational Neuroscience

From my perspective, cognitive computing is "neuroscientifically-inspired computing"; the flip side is "computationally-enabled (inspired) neuroscience".

Today, a major force shaping the field of computational neuroscience is Dr. Jerome (Jerry) Swartz. Dr. Swartz is an incredible human being: a scientist (an inventor holding over 200 patents), a technological innovator (winner of the National Medal of Technology), a successful entrepreneur (founder of Symbol Technologies -- recently acquired by Motorola), and a philanthropist.

In 1994, Dr. Swartz established the Swartz Foundation. The Foundation has established three centers, at Cold Spring Harbor Laboratory, Columbia University, and UC San Diego, and has partnered with the Sloan Foundation to establish five centers at the Salk Institute, Cal Tech, NYU/Courant, Brandeis, and UC San Francisco. In effect, the Foundation has created a "Virtual Neuroscience Institute" that brings together the very best minds in the field.

Recently, I had an opportunity to visit the Swartz Center for Computational Neuroscience at UC San Diego, and to spend time with its director, Dr. Scott Makeig. The center is equipped with state-of-the-art EEG labs and a high-performance compute cluster. Dr. Makeig's personal interest is in applying Independent Component Analysis (a la Bell and Sejnowski) to EEG data. EEG data analysis allows us to understand how multiple brain areas interact dynamically during a variety of cognitive phenomena. He has led the development of the widely used EEGLAB software, which "is an interactive Matlab toolbox for processing continuous and event-related EEG, MEG and other electrophysiological data using independent component analysis (ICA), time/frequency analysis, artifact rejection, and several modes of data visualization".

Dr. Makeig has formed an impressive array of partnerships and projects, and has attracted top-notch collaborators. Very recently, EEGLAB was used by Professor Robert Knight and colleagues at UC Berkeley and UC San Francisco in their paper "High Gamma Power Is Phase-Locked to Theta Oscillations in Human Neocortex", which appeared in Science, 15 September 2006, 313: 1626-1628. Due to its noninvasive nature, EEG is likely to have a number of mainstream applications in brain-machine interfaces.
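
EEGLAB itself is a Matlab toolbox, but the core idea of ICA-based source separation is easy to illustrate in a few lines. Below is a minimal Python sketch (using scikit-learn's FastICA rather than EEGLAB's Infomax implementation, and synthetic signals rather than real EEG) of unmixing multi-channel recordings into statistically independent components, from which artifact components could then be identified and removed:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic "sources": a slow rhythm, a fast rhythm, and a sparse
# blink-like artifact train (stand-ins for cortical and artifactual sources).
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 5000)
sources = np.column_stack([
    np.sin(2 * np.pi * 6 * t),                           # theta-band-like rhythm
    np.sin(2 * np.pi * 40 * t),                          # gamma-band-like rhythm
    (np.sin(2 * np.pi * 0.5 * t) > 0.95).astype(float),  # blink-like pulses
])
sources += 0.05 * rng.standard_normal(sources.shape)

# Mix the sources into simulated electrode channels via a random mixing
# matrix, loosely analogous to volume conduction from brain to scalp.
mixing = rng.standard_normal((8, 3))
channels = sources @ mixing.T                   # shape: (samples, channels)

# Unmix with ICA. Each recovered component can be inspected, and artifact
# components can be zeroed out before projecting back to channel space.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(channels)        # shape: (samples, components)
```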