Here is an example of how you can implement a feedforward neural network using numpy. First, import numpy and specify the dimensions of your inputs and your targets.
import numpy as np
input_dim = 1000
target_dim = 10
We will build the network structure now. As suggested in Bishop's great "Pattern Recognition and Machine Learning", you can simply treat the last row of each numpy weight matrix as the bias weights and the last column of each activation matrix as the bias neurons. The input dimension of the first weight matrix and the output dimension of the last one then need to be 1 greater.
dimensions = [input_dim+1, 500, 500, target_dim+1]
weight_matrices = []
for i in range(len(dimensions)-1):
    # small random values: initializing all weights to the same constant
    # would make the hidden neurons identical and untrainable
    weight_matrix = np.random.uniform(-0.1, 0.1, (dimensions[i], dimensions[i+1]))
    weight_matrices.append(weight_matrix)
If your inputs are stored in a 2d numpy matrix, where each row corresponds to one sample and the columns correspond to the attributes of your samples, you can propagate through the network like this (assuming the logistic sigmoid as activation function):
def activate_network(inputs):
    activations = [] # we store the activations for each layer here
    a = np.ones((inputs.shape[0], inputs.shape[1]+1)) # add the bias column to the inputs
    a[:,:-1] = inputs
    for w in weight_matrices:
        x = a.dot(w) # sum of weighted inputs
        a = 1. / (1. + np.exp(-x)) # apply logistic sigmoid activation
        a[:,-1] = 1. # bias neuron for the next layer
        activations.append(a)
    return activations
The last element in activations will be the output of your network, but be careful: you need to omit the additional column for the biases, so your output will be activations[-1][:,:-1].
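Putting it together, a forward pass could look like this (the 5 random samples here are made up purely for illustration):

samples = np.random.random((5, input_dim)) # 5 hypothetical samples
activations = activate_network(samples)
outputs = activations[-1][:,:-1] # shape (5, target_dim), bias column dropped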
To train the network, you need to implement backpropagation, which takes a few additional lines of code. Basically, you loop from the last element of activations to the first. Make sure to set the bias column of the error signal to zero for each layer before each backpropagation step.
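As a rough illustration, here is a minimal sketch of such a backpropagation loop. It assumes a squared-error loss and plain batch gradient descent; the names train_step and learning_rate are made up for this example, and it relies on the weight/bias convention used above:

def train_step(inputs, targets, learning_rate=0.1):
    # forward pass; rebuild the bias-extended input so we also have
    # the activation of the input layer available
    activations = activate_network(inputs)
    a0 = np.ones((inputs.shape[0], inputs.shape[1]+1))
    a0[:,:-1] = inputs
    layer_inputs = [a0] + activations[:-1] # input to each weight matrix

    # error signal at the output layer; out*(1-out) is the derivative
    # of the logistic sigmoid (assumed squared-error loss)
    out = activations[-1]
    t = np.ones(out.shape) # targets extended with a dummy bias column
    t[:,:-1] = targets
    delta = (out - t) * out * (1. - out)

    for i in reversed(range(len(weight_matrices))):
        delta[:,-1] = 0. # zero the bias column of the error signal
        grad = layer_inputs[i].T.dot(delta)
        # propagate the error signal before updating the weights;
        # the delta computed for the input layer is never used
        a_prev = layer_inputs[i]
        delta = delta.dot(weight_matrices[i].T) * a_prev * (1. - a_prev)
        weight_matrices[i] -= learning_rate * grad

Calling train_step(samples, targets) repeatedly then performs gradient descent on the squared error; other losses or update rules would change the delta computation and the weight update accordingly.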