Multi-layer neural network back-propagation formula (using stochastic gradient descent)
Question: Using the notations from Backpropagation calculus | Deep learning, chapter 4, I have this back-propagation code for a 4-layer (i.e. 2 hidden layers) neural network:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def sigmoid_prime(z):
    # takes the *activation* z = σ(x), because σ'(x) = σ(x) * (1 - σ(x))
    return z * (1 - z)

def train(self, input_vector, target_vector):
    a = np.array(input_vector, ndmin=2).T
    y = np.array(target_vector, ndmin=2).T

    # forward pass
    A = [a]
    for k in range(3):
        a = sigmoid(np.dot(self.weights[k], a))  # zero bias here just for simplicity
        A.append(a)
    # Now A
```
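For reference, here is a minimal sketch of how the matching backward pass could continue from this point. It assumes a squared-error cost C = ½‖A[3] − y‖² and a hypothetical `self.learning_rate` attribute, neither of which appears in the snippet above; it follows the chapter's convention that the output-layer error is δ = (a − y) ⊙ σ'(z), with `sigmoid_prime` applied to the stored activations:

```python
# Hypothetical continuation of train() above -- a sketch, not the original code.
# Backward pass for cost C = 0.5 * ||A[3] - y||^2 with zero biases.
delta = (A[3] - y) * sigmoid_prime(A[3])      # error δ at the output layer
for k in range(2, -1, -1):
    grad = np.dot(delta, A[k].T)              # dC/dW_k, since A[k] feeds weights[k]
    if k > 0:
        # propagate the error one layer back *before* updating weights[k]
        delta = np.dot(self.weights[k].T, delta) * sigmoid_prime(A[k])
    self.weights[k] -= self.learning_rate * grad  # SGD step; learning_rate is assumed
```

Note that δ is propagated through `self.weights[k]` before that matrix is overwritten, so every gradient is computed against the pre-update weights.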