Intuition for perceptron weight update rule

Asked by 难免孤独 on 2021-02-13 14:33

I am having trouble understanding the weight update rule for perceptrons:

w(t + 1) = w(t) + y(t)x(t),

applied whenever the example x(t), with label y(t) ∈ {−1, +1}, is misclassified by the current weights w(t).

Assume we have a linearly separable data set.
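To make the rule concrete, here is a minimal sketch of the perceptron training loop in NumPy. The function name, the toy data, and the epoch cap are illustrative choices, not from the question; the update line itself is exactly the rule above.

```python
import numpy as np

def perceptron_train(X, y, max_epochs=100):
    """Train a perceptron with the update w <- w + y_i * x_i on misclassified points.

    X: (n, d) array of inputs; y: (n,) array of labels in {-1, +1}.
    Returns the learned weight vector (no bias term; fold one in via an
    extra constant feature if needed).
    """
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            # A point is misclassified when y * (w . x) <= 0.
            if y_i * np.dot(w, x_i) <= 0:
                w = w + y_i * x_i  # the update rule from the question
                mistakes += 1
        if mistakes == 0:  # converged: every point classified correctly
            break
    return w

# Hypothetical linearly separable toy data: +1 above the line x2 = x1, -1 below.
X = np.array([[0.0, 1.0], [1.0, 2.0], [1.0, 0.0], [2.0, 1.0]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
assert all(np.sign(X @ w) == y)
```

Because the data are linearly separable, the perceptron convergence theorem guarantees the loop above makes only finitely many updates before every point is classified correctly.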

2 Answers
  •  时光说笑, 2021-02-13 14:49

    A better derivation of the perceptron update rule is documented here and here; the derivation uses gradient descent.

    • The basic premise of the gradient descent algorithm is to measure the classification error and adjust your parameters so that this error is minimized.

    PS: I was trying very hard to build intuition for why one would multiply x and y to derive the update for w, because in one dimension w is a slope (y = wx + c), and a slope is w = y/x, not y * x.
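    The gradient-descent view can be checked directly. A common choice (an assumption here, since the answer's links are not shown) is the per-point perceptron criterion, which is 0 for a correctly classified point and −y·(w·x) for a misclassified one. On the misclassified side its gradient with respect to w is −y·x, so a descent step with learning rate 1 is w − (−y·x) = w + y·x, recovering the update rule. The snippet below verifies the gradient numerically; the specific values of w, x, and y are hypothetical.

    ```python
    import numpy as np

    def point_loss(w, x, y):
        # Per-point perceptron criterion: zero when correct, -y*(w.x) otherwise.
        return max(0.0, -y * np.dot(w, x))

    # A point misclassified by the current weights (hypothetical values).
    w = np.array([0.5, -0.5])
    x = np.array([1.0, 2.0])
    y = 1.0  # here y*(w.x) = -0.5 < 0, so the point is misclassified

    # On the misclassified side the analytic gradient is -y*x, so a
    # descent step with learning rate 1 is w - (-y*x) = w + y*x.
    analytic_grad = -y * x

    # Check against a central-difference numerical gradient.
    eps = 1e-6
    num_grad = np.array([
        (point_loss(w + eps * e, x, y) - point_loss(w - eps * e, x, y)) / (2 * eps)
        for e in np.eye(2)
    ])
    assert np.allclose(num_grad, analytic_grad, atol=1e-4)
    ```

    This also resolves the PS: the product y·x is not a slope. It is the (negative) gradient of the error with respect to the weight vector, i.e. the direction in weight space that reduces the error on the misclassified point.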
