How to utilize Hebbian learning?

天涯浪人 2021-01-30 17:43

I want to upgrade my evolution simulator to use Hebbian learning, like this one. I basically want small creatures to be able to learn how to find food. I achieved that with the bas

4 Answers
  •  粉色の甜心
    2021-01-30 18:24

    Hebb's law is a brilliant insight for associative learning, but it's only part of the picture. And you are right: implemented as you have done, and left unchecked, a weight will just keep on increasing. The key is to add some form of normalisation or limiting process. This is illustrated quite well on the wiki page for Oja's rule. What I suggest you do is add a post-synaptic divisive normalisation step: you divide each weight by the sum of all the weights converging on the same post-synaptic neuron (i.e. the sum of all weights converging on a neuron is fixed at 1).
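    As a rough illustration of that idea (not from the original answer): a minimal numpy sketch of one plain Hebbian step followed by post-synaptic divisive normalisation. The function and variable names are made up for this example, and it assumes non-negative weights so that the row sums stay positive.

    ```python
    import numpy as np

    def hebbian_step(W, pre, post, lr=0.01):
        # W: weights, shape (n_post, n_pre); row i holds the weights
        # converging on post-synaptic neuron i. Names are illustrative.
        # Plain Hebb's rule: strengthen a weight when its pre- and
        # post-synaptic neurons are active together.
        W = W + lr * np.outer(post, pre)
        # Post-synaptic divisive normalisation: divide each weight by the
        # sum of all weights converging on the same post-synaptic neuron,
        # so each row sums to 1 and no weight can grow without bound.
        # (Assumes non-negative weights, so the row sums are positive.)
        return W / W.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(0)
    W = rng.random((3, 5))            # 3 post-synaptic, 5 pre-synaptic neurons
    W = hebbian_step(W, pre=rng.random(5), post=rng.random(3))
    print(W.sum(axis=1))              # each row sums to 1 after normalisation
    ```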

    What you want to do can be done by building a network that utilises Hebbian learning. I'm not quite sure what you are passing in as input to your system, or how you've set things up. But you could look at LISSOM, which is a Hebbian extension to SOM (self-organising map).

    In a layer of this kind, typically all the neurons may be interconnected. You pass in the input vector and allow the activity in the network to settle over some number of settling steps; then you update the weights. You do this during the training phase, at the end of which associated items in the input space will tend to form grouped activity patches in the output map.
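    To make the settle-then-update loop concrete, here is a hedged sketch of one LISSOM-style training step. The names `W_in`, `W_lat`, and `settle_steps`, and the rectified activation, are assumptions made for illustration, not the actual LISSOM equations.

    ```python
    import numpy as np

    def lissom_like_step(W_in, W_lat, x, lr=0.01, settle_steps=10):
        # W_in:  afferent weights, shape (n_units, n_inputs)
        # W_lat: lateral weights between map units, shape (n_units, n_units)
        # x:     one input vector, shape (n_inputs,)
        # Initial response to the input.
        y = np.maximum(0.0, W_in @ x)
        # Let lateral interactions settle for a fixed number of steps.
        for _ in range(settle_steps):
            y = np.maximum(0.0, W_in @ x + W_lat @ y)
        # Hebbian update on the settled activity, then the same divisive
        # normalisation as above so the afferent weights stay bounded.
        W_in = W_in + lr * np.outer(y, x)
        W_in = W_in / W_in.sum(axis=1, keepdims=True)
        return W_in, y
    ```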

    It's also worth noting that the brain is massively interconnected and highly recurrent (i.e. there is feedforward, feedback, and lateral interconnectivity, microcircuits, and a lot of other stuff too).
