
Google and AI

The programming language of humans, if you will, would include the workings of your brain, said Google co-founder Larry Page, who offered this hypothesis Friday night during a plenary lecture at the annual American Association for the Advancement of Science conference. His guess, he said, was that the brain's algorithms weren't all that complicated and could eventually be approximated with a lot of computational power. Specifically, Page said, "When AI happens, it's going to be a lot of computation, not so much ... clever algorithms." Given the size of DNA (~600 MB compressed), the algorithms of the brain are "probably not that complicated."
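Page's ~600 MB figure comes from the information content of the human genome. A rough back-of-the-envelope check, assuming the standard approximations of ~3.1 billion base pairs at 2 bits per base (these figures are not from Page's talk):

```python
# Back-of-the-envelope estimate of the human genome's information content.
# Base-pair count and bits-per-base are standard textbook approximations.

BASE_PAIRS = 3.1e9      # approximate length of the human genome
BITS_PER_BASE = 2       # 4 nucleotides (A, C, G, T) -> 2 bits each

raw_bytes = BASE_PAIRS * BITS_PER_BASE / 8
raw_mb = raw_bytes / 1e6

print(f"Raw (uncompressed) size: ~{raw_mb:.0f} MB")
# Real genomes are highly repetitive, so general-purpose compression
# shrinks this further -- roughly the ballpark of Page's 600 MB figure.
```

This prints a raw size of roughly 775 MB, which is consistent with the compressed figure Page cites.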

"...artificial intelligence...I don't think it's that far off as people think."


  1. Press article
  2. Video




What Page is talking about is inaccurate both biologically and technologically. [ http://www.searchme.co.in/2007/02/larry-and-me-not-on-same-page.html ]

That in itself is not surprising; what is shocking is that the majority of the media and blogosphere seem to be accepting what he says uncritically. Frankly speaking, his argument is a joke.

Larry Page seems to be confusing the programming with the hardware, and then the hardware with the plans for the hardware. A single communication at a single glutamate synapse causes spikes in the dendrite, a possible action potential at the soma, a backpropagating action potential from the soma, timing-related NMDA receptor action causing plasticity, hundreds of compounds being transcribed (DNA getting expressed) in the surrounding glia, calcium signaling and changes in blood flow to the region, and more signaling in a highly recurrent network, not to mention stimulating spine creation and new synapse and ion channel formation.

All of that happens as a consequence of the DNA Page was talking about, and it doesn't even address the network and dynamical-systems aspects, the equilibrium states or attractors, or the shaping of the firings in the dendritic and axonal trees by scores of possible neuromodulators. Only after all of that, which is hardware and generality, do you get to talk about the program and the algorithm.

If you were writing a functional requirements document, wrote as much of it as possible in compact mathematical shorthand, avoided any formatting frills, and compressed it with algorithms that allowed it to be unpacked over months as long as it was packed well, how much could you write in 600 MB?
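The thought experiment can be made concrete. General-purpose compressors typically shrink plain English or mathematical text by a factor of three to four (the ratio below is an illustrative assumption, not a measured value):

```python
# Illustrative sketch: how much plain text fits in 600 MB compressed?
# ASSUMED_RATIO is a hypothetical figure; real ratios vary widely
# with the content being compressed.

COMPRESSED_MB = 600
ASSUMED_RATIO = 3.5            # assumed typical text-compression ratio

plain_text_mb = COMPRESSED_MB * ASSUMED_RATIO
chars = plain_text_mb * 1e6    # ~1 byte per ASCII character
pages = chars / 3000           # ~3,000 characters per printed page

print(f"~{plain_text_mb:.0f} MB of plain text, ~{pages:,.0f} printed pages")
```

Under those assumptions, 600 MB compressed corresponds to on the order of 2 GB of plain text, or several hundred thousand printed pages of specification.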

And if that document implemented a piece of hardware on which you could develop complex, nonlinear dynamical systems as a means of carrying out memory and action, how many consequent algorithms could be embedded in its output?

I hope he's right that we get computing that thinks soon, and I'm as committed as the next guy to seeing that happen. But every time in the last 100 years that we've assumed some part of brain action was negligible and the system was simple, we've been wrong, and we still haven't got the algorithms.
