
July 25, 2008

Intel: Human and computer intelligence will merge in 40 years

"At Intel Corp., just passing its 40th anniversary and with myriad chips in its historical roster, a top company exec looks 40 years into the future to a time when human intelligence and machine intelligence have begun to merge."

See full story here.

July 18, 2008

iCub

"The iCub is an artificial toddler [robot] with senses, 53 degrees of freedom, and a modular software structure designed to allow the work of different research teams to be combined."

"This open-source robot is designed to allow academics to concentrate on implementing their theories about learning and interaction without having to focus on designing and building hardware, and is part of the general trend towards open source in the field."

Sunny Bains has a wonderful article on the iCub in EE Times.

July 02, 2008

Vivienne Ming

Today, we had quite an interesting talk from Dr. Vivienne Ming.

Title: Sparse codes for natural sounds

Abstract: The auditory neural code must serve a wide range of tasks that require great sensitivity in time and frequency and be effective over the diverse array of sounds present in natural acoustic environments. It has been suggested (Barlow, 1961; Atick, 1992; Simoncelli & Olshausen, 2001; Laughlin & Sejnowski, 2003) that sensory systems might have evolved highly efficient coding strategies to maximize the information conveyed to the brain while minimizing the required energy and neural resources. In this talk, I will show that, for natural sounds, the complete acoustic waveform can be represented efficiently with a nonlinear model based on a population spike code. In this model, idealized spikes encode the precise temporal positions and magnitudes of underlying acoustic features. We find that when the features are optimized for coding either natural sounds or speech, they show striking similarities to time-domain cochlear filter estimates, have a frequency-bandwidth dependence similar to that of auditory nerve fibers, and yield significantly greater coding efficiency than conventional signal representations. These results indicate that the auditory code might approach an information theoretic optimum and that the acoustic structure of speech might be adapted to the coding capacity of the mammalian auditory system.
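The core idea in the abstract — a waveform represented as a sum of acoustic features placed at precise times with real-valued amplitudes ("spikes") — can be sketched with a greedy matching-pursuit encoder. This is only an illustrative toy, not the model from the talk: the kernel shapes below (decaying sinusoids at a few frequencies) and all function names are my own assumptions standing in for the learned, cochlear-filter-like features described in the abstract.

```python
import numpy as np

def make_kernels(n_kernels=4, length=64, sr=8000.0):
    """Unit-norm decaying sinusoids as stand-in acoustic features (assumed shapes)."""
    t = np.arange(length) / sr
    kernels = []
    for k in range(n_kernels):
        f = 200.0 * (2 ** k)  # 200, 400, 800, 1600 Hz
        w = np.sin(2 * np.pi * f * t) * np.exp(-t * 200.0)
        kernels.append(w / np.linalg.norm(w))
    return kernels

def matching_pursuit(x, kernels, n_spikes=10):
    """Greedily encode x as spikes: (kernel index, time, amplitude) triples."""
    residual = x.astype(float).copy()
    spikes = []
    for _ in range(n_spikes):
        best = None
        for m, k in enumerate(kernels):
            # Inner product of the residual with kernel m at every time shift.
            corr = np.correlate(residual, k, mode="valid")
            tau = int(np.argmax(np.abs(corr)))
            if best is None or abs(corr[tau]) > abs(best[2]):
                best = (m, tau, corr[tau])
        m, tau, amp = best
        # Subtract the best-matching kernel instance and record the spike.
        residual[tau:tau + len(kernels[m])] -= amp * kernels[m]
        spikes.append((m, tau, amp))
    return spikes, residual

# Build a toy signal from two known spikes and recover them.
kernels = make_kernels()
x = np.zeros(512)
x[100:164] += 1.5 * kernels[1]
x[300:364] += 0.8 * kernels[2]
spikes, residual = matching_pursuit(x, kernels, n_spikes=2)
```

On this toy signal the encoder recovers both spikes exactly, because the kernel instances do not overlap; the interesting regime in the abstract is natural sounds, where overlapping features and learned kernels make the code's efficiency nontrivial.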

Bio: Vivienne Ming received her B.S. (2000) in Cognitive Neuroscience from UC San Diego, developing face and expression recognition systems in the Machine Perception Lab. She earned her M.A. (2003) and Ph.D. (2006) in Psychology from Carnegie Mellon University along with a doctoral training degree in computational neuroscience from the Center for the Neural Basis of Cognition. Her dissertation, Efficient auditory coding, combined computational and behavioral approaches to study the perception of natural sounds, including speech. Since 2006, she has worked jointly as a junior fellow and post-doctoral researcher at the Redwood Center for Theoretical Neuroscience at UC Berkeley and MBC/Mind, Brain & Cognition at Stanford University developing statistical models for auditory scene analysis.