
Learning in Networks: from Spiking Neural Nets to Graphs

Yesterday, I attended an interesting talk by Victor Miagkikh as part of ACM's SF Bay Area Data Mining Special Interest Group at the beautiful campus of SAP Labs.

Abstract:

Hebbian learning is a well-known principle of unsupervised learning in networks: if two events happen "close in time," then the strength of the connection between the network nodes producing those events increases. Is this a complete set of learning axioms? Given a reinforcement signal (reward) for a sequence of actions, we can add another axiom: "reward controls plasticity." Thus, we get a reinforcement learning algorithm that can be used for training spiking neural networks (SNNs). The author will demonstrate the utility of this algorithm on a maze learning problem. Can these learning principles be applied not only to neural networks, but also to other kinds of networks? Yes; in fact, we will see their application to economic influence networks for portfolio optimization. Then, if time allows, we will consider another application, social networks for a movie recommendation engine, as well as other causality-inducing principles besides "close in time." By the end of the talk, the author hopes the audience will agree that the "reward controls plasticity" principle is a vital learning axiom.
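The two axioms in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the speaker's actual algorithm: Hebbian coincidence accumulates into a decaying eligibility trace, and a later reward signal gates whether that trace is consolidated into the connection weights. All function names and parameters here are made up for the example.

```python
# Illustrative sketch of "reward controls plasticity" (not from the talk):
# co-activity builds a decaying eligibility trace; weights only change
# when a reward signal arrives to gate the accumulated trace.

def hebbian_trace(pre, post, trace, decay=0.9):
    """Accumulate a decaying eligibility trace for co-active node pairs."""
    for i, a in enumerate(pre):
        for j, b in enumerate(post):
            # "close in time": coincident activity raises the trace
            trace[i][j] = decay * trace[i][j] + a * b
    return trace

def apply_reward(weights, trace, reward, lr=0.1):
    """Reward gates plasticity: traces become weight changes only when rewarded."""
    for i in range(len(weights)):
        for j in range(len(weights[i])):
            weights[i][j] += lr * reward * trace[i][j]
    return weights

# Two nodes fire together three times, then a reward arrives.
pre, post = [1.0], [1.0]
trace = [[0.0]]
weights = [[0.0]]
for _ in range(3):
    trace = hebbian_trace(pre, post, trace)
weights = apply_reward(weights, trace, reward=1.0)
print(weights[0][0])  # positive: the rewarded connection strengthened
```

With a reward of zero, the trace decays away and no learning occurs, which is the essential difference between this rule and plain Hebbian learning.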
