Parameter Tuning for Perceptron Learning Algorithm

醉话见心 2021-02-01 09:35

I'm having trouble figuring out how to tune the parameters for my perceptron algorithm so that it performs reasonably well on unseen data.

I've imple

3 Answers
  •  情话喂你
    2021-02-01 10:13

    About the learning rate not affecting whether or not the perceptron converges: that's not true. If you choose a learning rate that is too high, you will probably get a divergent network. And if you decrease the learning rate during training and it drops too fast (i.e., faster than 1/t), you can also get a network that never converges. That's because the sum of η(t) over t from 1 to infinity is then finite, which means the weight vector can only change by a finite total amount, so it may never reach the solution.
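
    To make the finite-sum point concrete, here is a minimal sketch (my own illustration, not part of the original answer) comparing the partial sums of two decay schedules: η(t) = 1/t, whose sum grows without bound, and η(t) = 1/t², which decays faster than 1/t and whose sum converges:

    ```python
    # Partial sums of a decaying learning rate eta(t).
    # If the sum diverges (e.g. eta(t) = 1/t), the weights can keep moving
    # as far as they need to; if it converges (e.g. eta(t) = 1/t**2),
    # the weight vector can only travel a finite total distance.
    def partial_sum(decay, steps=100_000):
        return sum(decay(t) for t in range(1, steps + 1))

    print(partial_sum(lambda t: 1.0 / t))       # ~12.1, keeps growing like log(steps)
    print(partial_sum(lambda t: 1.0 / t ** 2))  # ~1.645, approaching pi**2 / 6
    ```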

    Theoretically, it can be shown for simple cases that decaying η (the learning rate) as 1/t (where t is the number of presented examples) should work well. In practice, though, I've found that the best approach is to find a good high η value (the highest value that doesn't make your learning diverge) and a good low η value (this one is trickier to pin down; it really depends on the data and the problem), and then let η decrease linearly over time from the high value to the low value, as in the sketch below.
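
    Here is a minimal sketch of that linear annealing schedule inside a standard perceptron update loop. The function and parameter names (train_perceptron, eta_high, eta_low) are my own, and the default values are placeholders; as described above, eta_high should be found empirically as the largest rate that doesn't diverge:

    ```python
    import numpy as np

    def train_perceptron(X, y, epochs=50, eta_high=1.0, eta_low=0.01):
        """Perceptron with a learning rate annealed linearly from eta_high to eta_low.

        X: (n_samples, n_features) array; y: labels in {-1, +1}.
        """
        w = np.zeros(X.shape[1])
        b = 0.0
        for epoch in range(epochs):
            # Linear interpolation: eta_high at epoch 0, eta_low at the last epoch.
            frac = epoch / max(epochs - 1, 1)
            eta = eta_high + frac * (eta_low - eta_high)
            for xi, yi in zip(X, y):
                if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the boundary)
                    w += eta * yi * xi
                    b += eta * yi
        return w, b
    ```

    With this schedule, the early high-η epochs move the weights quickly toward a rough solution, while the later low-η epochs make only small corrections, which is the behaviour recommended above.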
