Last Saturday, I spoke at the Singularity Summit.
The highlight of the day was the opening conversation between Vernor Vinge (Sci-Fi Author) and Bob Pisani (CNBC). Other speakers included Intel CTO Justin Rattner, MIT Professors Cynthia Breazeal and Neil Gershenfeld, Chairman and CEO of X-Prize Foundation Peter Diamandis, and, of course, Ray Kurzweil. I also enjoyed the discussion moderated by Glenn Zorpette.
The meeting was beautifully organized and run by Tyler Emerson, Susan Fonseca-Klein, Bruce Klein, Jonas Lamis, Gyale Young, and other volunteers.
An ongoing debate in neuroscience has been whether neurons encode information in the rate of firing or in the timing of individual spikes. This is an extremely important question with respect to cognitive computing.
A study published by Yang Yang, Michael R. DeWeese, Gonzalo Otazu, and Anthony M. Zador in Nature Neuroscience provides support for the "spike-timing" hypothesis:
Millisecond-scale differences in neural activity in auditory cortex can drive decisions
Neurons in the auditory cortex can lock to the fine timing of acoustic stimuli with millisecond precision, but it is not known whether this precise spike timing can be used to guide decisions. We used chronically implanted microelectrode pairs to stimulate neurons in the rat auditory cortex directly and found that rats can exploit differences in the timing of cortical activity that are as short as 3 ms to guide decisions.
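To see what is at stake in the rate-versus-timing debate, here is a minimal, hypothetical Python sketch (not taken from the study): two stimuli evoke the same number of spikes, but one is delayed by 3 ms. A decoder that only counts spikes is at chance, while a decoder that reads first-spike latency separates the stimuli reliably.

    # Hypothetical sketch, not from the paper: same spike count, 3 ms timing shift.
    import numpy as np

    rng = np.random.default_rng(0)
    WINDOW_MS = 50.0   # decoding window
    SHIFT_MS = 3.0     # timing difference between stimuli A and B
    N_TRIALS = 200

    def spike_train(onset_ms, n_spikes=5, jitter_ms=0.5):
        """Evoked spikes: fixed count, millisecond-scale jitter around the onset."""
        times = onset_ms + np.arange(n_spikes) * 5.0 + rng.normal(0, jitter_ms, n_spikes)
        return np.sort(times[(times >= 0) & (times < WINDOW_MS)])

    def rate_decoder(train):
        # Rate code: only the spike count in the window matters; A and B have the
        # same count, so this decoder cannot do better than guessing.
        return 'A' if len(train) > 5 else 'B'

    def timing_decoder(train):
        # Timing code: classify by first-spike latency (boundary halfway between
        # the two onsets, 10 ms and 13 ms).
        return 'A' if train[0] < 11.5 else 'B'

    correct_rate = correct_timing = 0
    for _ in range(N_TRIALS):
        label = rng.choice(['A', 'B'])
        onset = 10.0 if label == 'A' else 10.0 + SHIFT_MS
        train = spike_train(onset)
        correct_rate += (rate_decoder(train) == label)
        correct_timing += (timing_decoder(train) == label)

    print(f"rate decoder accuracy:   {correct_rate / N_TRIALS:.2f}")    # ~0.5 (chance)
    print(f"timing decoder accuracy: {correct_timing / N_TRIALS:.2f}")  # ~1.0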
Today, my group's work was featured on KQED Public TV's QUEST program, which explores Bay Area science, nature, and the environment.
You can see the video here. The people you see are Rajagopal Ananthnarayanan, Anthony Sherbondy, and Anthony Ndirango.
Also, please see the blog entry by the producer Sheraz Sadiq here.
Today, I had the joy, privilege, and, indeed, honor of hosting Prof. Dr.-Ing. habil. Edgar Koerner, who is President of the Honda Research Institute Europe GmbH and Director of the Research Institute for Cognition and Robotics at Bielefeld University. I think that Dr. Koerner is a genuine pioneer in the field of Cognitive Computing, and that his depth of vision, his ability to communicate, and his genius for creating a unified program with far-reaching impact are inspirational.
His public talk is described below.
Title: The Brain-like Vision
Abstract: Intelligence is a technology and a strategy for robust and flexible problem solving in complex environments (both natural and artificial) under the constraints of limited resources (e.g., time, energy). Still, the brain is the only intelligent system of that kind we know of. Understanding the essential principles of how the brain controls behavior may enable us to provide our technical artifacts with at least some aspects of the performance we admire. Our approach is based on the assumption that the essence of brain computing lies not in the local processing or learning algorithm but in the way the brain organizes processing. The challenge of that approach is, first, that the brain is an inhomogeneous network of a huge number of local processors at several interacting levels of complexity. There are highly specialized types of elementary processing elements, the neurons, which are in turn organized into static task-specific clusters, which are themselves organized within a macroscopic function-specific architecture; and all of these are subject to a behavior control that creates a dynamic clustering of processing resources across all levels of systems organization. Second, any meaningful simulation of a large-scale hypothesis on brain function is hampered by the limitations imposed by the available technology, which usually results in the elimination of several levels of system complexity via mean-field concepts or other abstractions, thus collapsing the dimensionality of the nested algorithmic structures of the models. As a consequence, we target brain processing control architectures at several levels of complexity in parallel: (1) we investigate the control of growth processes by gene-regulatory networks; (2) at the level of detailed cortical columnar architectures, we look into self-referential control for storing experience; (3) behavior-based dynamic allocation of system resources is targeted for visual scene analysis; and (4) the global behavior control architecture is explored in the context of step-by-step increasing capabilities for autonomous interaction of our humanoid robot ASIMO.
After substantiating the philosophy of our approach, recent progress towards such flexible cognitive control is shown for the autonomous acquisition of visual information. For visual scene analysis, we investigate a dynamical system of active processes in a brain-inspired architecture; these processes can rapidly configure themselves under the control of static information, both sensory and previously acquired, and then hand over control to changing sensory signals. That cluster of visual routines, which resembles the function of the parietal visual pathway in primates, provides, through attention, fixation, and tracking, the necessary prerequisite for the subsystem modeled after the ventral pathway to deal with a sequence of still frames in the focus of attention and to acquire visual experience by on-line learning. This active vision system is being implemented in the behavior control system for autonomous interaction of our humanoid robot ASIMO. Step by step, we implemented the nested control loops for reflexes, attention-modulated behavior, on-line learning from sensory experience, and prediction/expectation-driven behavior. ASIMO's capability is demonstrated to learn to recognize objects from interaction with human partners, to learn associations between acoustic and visual objects, as well as associations of sounds to behavioral concepts, and to elicit simple prediction-driven behavior. Finally, the same vision system, implemented within a vision-based safety system of a car, shows reliable "braking to a halt" in a critical situation at a realistic road construction site.
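To make the two-pathway idea concrete, here is a minimal, hypothetical Python sketch of the kind of loop the abstract describes; the class names, the brightness-based saliency heuristic, and the nearest-prototype learner are my own simplifying assumptions, not the HRI/ASIMO implementation. A parietal-like attention stage fixates on a salient region of each frame and hands the cropped region of interest to a ventral-like stage that learns object prototypes on-line from a human partner's labels.

    # Illustrative sketch only: names and interfaces are assumptions, not HRI code.
    import numpy as np

    PATCH = 16  # side length of the fixation window (pixels)

    def box_sums(img, k):
        """Sum of every k-by-k window of img (used as a crude saliency map)."""
        c = np.pad(np.cumsum(np.cumsum(img, 0), 1), ((1, 0), (1, 0)))
        return c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]

    class ParietalAttention:
        """Parietal-like stage: find the most salient window and fixate on it."""
        def fixate(self, frame):
            saliency = box_sums(frame, PATCH)       # brightness stands in for saliency
            y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
            return frame[y:y + PATCH, x:x + PATCH]  # region of interest (still frame)

    class VentralRecognizer:
        """Ventral-like stage: on-line nearest-prototype learning of objects."""
        def __init__(self):
            self.prototypes = {}   # label -> running-mean feature vector
            self.counts = {}

        def learn(self, roi, label):
            f = roi.ravel()
            n = self.counts.get(label, 0)
            old = self.prototypes.get(label, np.zeros_like(f))
            self.prototypes[label] = (old * n + f) / (n + 1)   # incremental mean
            self.counts[label] = n + 1

        def recognize(self, roi):
            f = roi.ravel()
            if not self.prototypes:
                return None
            return min(self.prototypes,
                       key=lambda k: np.linalg.norm(self.prototypes[k] - f))

    # Toy interaction loop: a "human partner" presents labeled frames, then the
    # system names the object in its focus of attention on an unlabeled frame.
    rng = np.random.default_rng(1)
    attention, recognizer = ParietalAttention(), VentralRecognizer()

    def show(label):
        frame = rng.random((64, 64))
        frame[20:36, 20:36] += 2.0 if label == "cup" else 1.0  # fake object signature
        return frame

    for label in ["cup", "ball", "cup", "ball"]:
        recognizer.learn(attention.fixate(show(label)), label)

    print(recognizer.recognize(attention.fixate(show("cup"))))  # expected: "cup"

The point of the sketch is only the division of labor: the parietal-like stage decides where to look, and the ventral-like stage decides what is there while updating its stored experience on-line.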
Biography: Edgar Koerner studied electrical engineering, control engineering, and biomedical cybernetics at the Ilmenau Institute of Technology, Germany. From 1976 to 1984, he served as an assistant professor/senior staff researcher and established the bionics research laboratory at the same university. His research activities included experimental work in neurophysiology and neural systems modeling as well as in psychophysics and medical expert systems. He received his Dr.-Ing. in the field of biomedical cybernetics in 1977 and the Dr. sc. techn. (habilitation) in biocybernetics in 1984, both from the Ilmenau Institute of Technology. From 1984 to 1987, he was a research fellow in the Bioholonics Project of JRDC (Tokyo), working on brain-like vision systems. In 1988, Dr. Koerner was appointed full professor for biocybernetics and head of the Department of Neurocomputing and Cognitive Systems at the Ilmenau Institute of Technology, focusing on research in neural architectures for vision and on neuro-fuzzy control systems. In 1992, he moved to Japan to join Honda R&D's Wako Research Center near Tokyo, focusing as a chief scientist on brain-like intelligence research. In 1997, he started research in computational neuroscience, evolutionary technology, and cognitive robotics at Honda R&D Europe, where he served as an executive vice president and head of the Future Technology Research Division. Since 2003, Dr. Koerner has served as the president of the Honda Research Institute Europe GmbH. Since October 2008, he has additionally served as a co-director of the Research Institute for Cognition and Robotics, including the attached Graduate School, at Bielefeld University. His research interests cover brain-like intelligence, with a special focus on self-referential control architectures, self-organization of knowledge representation, and autonomous robots.
Today, the Financial Times carried a story on AI. They nicely covered IBM's pioneering role in the field. They also covered my group's work (but referred to much older numbers -- we are now able to carry out rat-scale simulations with 55 million neurons and 440 billion synapses in near real-time on a 32,768-processor BlueGene/L machine):
IBM was a pioneer in the field and today continues to invest heavily in AI research. Dharmendra Modha, a scientist in the company’s California research laboratory is working on cognitive computing, which he defines as a computer model that simultaneously exhibits characteristics seated in the human brain, including perception and emotion.
His aim is to discover how the brain works, not how the mind works, he is quick to emphasise. Last year, his group achieved a milestone by managing to simulate the operation of a mouse brain on an IBM Blue Gene supercomputer. He notes: “We deployed the simulator on a 4096 processor Blue Gene/L supercomputer with 256 megabytes of memory per processor. We were able to represent 8m neurons and 6,300 synapses (connections) per neuron in the one terabyte main memory of the system.” There will be, of course, a considerable time lag before the benefits of this research are seen in actual products.
Mr Modha thinks it could be 10 years before cognitive computing of the kind he is working on makes its debut in productivity and security systems. It is, however, a giant leap from 1956 when an IBM supercomputer of the day simulated the firing of a mere 512 neurons.
As Mr Modha of IBM says of his work in cognitive computing, the technology will manifest itself in ways which today we cannot even begin to imagine.
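As a rough sanity check on the figures quoted above, the memory budget works out to roughly 20 bytes per synapse; the bytes-per-synapse number below is derived from the quoted figures, not a reported value.

    # Back-of-envelope arithmetic from the quoted figures; derived, not reported.
    mouse_neurons = 8e6           # "8m neurons"
    synapses_per_neuron = 6_300   # "6,300 synapses (connections) per neuron"
    memory_bytes = 1e12           # "one terabyte main memory" (decimal TB)

    total_synapses = mouse_neurons * synapses_per_neuron
    print(f"total synapses: {total_synapses:.2e}")                           # ~5.0e10
    print(f"memory per synapse: {memory_bytes / total_synapses:.1f} bytes")  # ~19.8

    # The rat-scale figures mentioned earlier imply the per-neuron connectivity:
    rat_neurons, rat_synapses = 55e6, 440e9
    print(f"synapses per neuron (rat-scale): {rat_synapses / rat_neurons:.0f}")  # 8000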