Training a neural network to add


I need to train a network to multiply or add 2 inputs, but it doesn't seem to approximate well for all points after 20000 iterations. More specifically, I train it on the whole

5 Answers
  • 2021-02-02 11:34

    I was trying to do the same. I trained 2-, 3- and 4-digit addition and was able to achieve 97% accuracy. You can achieve this with the following type of neural network:

    Sequence to Sequence Learning with Neural Networks

    A sample program from the Keras examples is available at the following link:

    https://github.com/keras-team/keras/blob/master/examples/addition_rnn.py

    Hope it helps.

    Attaching the code here for reference.

    from __future__ import print_function
    from keras.models import Sequential
    from keras import layers
    import numpy as np
    
    
    class CharacterTable(object):
        """Given a set of characters:
        + Encode them to a one hot integer representation
        + Decode the one hot integer representation to their character output
        + Decode a vector of probabilities to their character output
        """
        def __init__(self, chars):
            """Initialize character table.
            # Arguments
                chars: Characters that can appear in the input.
            """
            self.chars = sorted(set(chars))
            self.char_indices = dict((c, i) for i, c in enumerate(self.chars))
            self.indices_char = dict((i, c) for i, c in enumerate(self.chars))
    
        def encode(self, C, num_rows):
            """One hot encode given string C.
            # Arguments
                num_rows: Number of rows in the returned one hot encoding. This is
                    used to keep the # of rows for each data the same.
            """
            x = np.zeros((num_rows, len(self.chars)))
            for i, c in enumerate(C):
                x[i, self.char_indices[c]] = 1
            return x
    
        def decode(self, x, calc_argmax=True):
            if calc_argmax:
                x = x.argmax(axis=-1)
            return ''.join(self.indices_char[i] for i in x)
    
    
    class colors:
        ok = '\033[92m'
        fail = '\033[91m'
        close = '\033[0m'
    
    # Parameters for the model and dataset.
    TRAINING_SIZE = 50000
    DIGITS = 3
    INVERT = True
    
    # Maximum length of input is 'int + int' (e.g., '345+678'). Maximum length of
    # int is DIGITS.
    MAXLEN = DIGITS + 1 + DIGITS
    
    # All the numbers, plus sign and space for padding.
    chars = '0123456789+ '
    ctable = CharacterTable(chars)
    
    questions = []
    expected = []
    seen = set()
    print('Generating data...')
    while len(questions) < TRAINING_SIZE:
        f = lambda: int(''.join(np.random.choice(list('0123456789'))
                        for i in range(np.random.randint(1, DIGITS + 1))))
        a, b = f(), f()
        # Skip any addition questions we've already seen
        # Also skip any such that a+b == b+a (hence the sorting).
        key = tuple(sorted((a, b)))
        if key in seen:
            continue
        seen.add(key)
        # Pad the data with spaces such that it is always MAXLEN.
        q = '{}+{}'.format(a, b)
        query = q + ' ' * (MAXLEN - len(q))
        ans = str(a + b)
        # Answers can be of maximum size DIGITS + 1.
        ans += ' ' * (DIGITS + 1 - len(ans))
        if INVERT:
            # Reverse the query, e.g., '12+345  ' becomes '  543+21'. (Note the
            # space used for padding.)
            query = query[::-1]
        questions.append(query)
        expected.append(ans)
    print('Total addition questions:', len(questions))
    
    print('Vectorization...')
    x = np.zeros((len(questions), MAXLEN, len(chars)), dtype=bool)
    y = np.zeros((len(questions), DIGITS + 1, len(chars)), dtype=bool)
    for i, sentence in enumerate(questions):
        x[i] = ctable.encode(sentence, MAXLEN)
    for i, sentence in enumerate(expected):
        y[i] = ctable.encode(sentence, DIGITS + 1)
    
    # Shuffle (x, y) in unison as the later parts of x will almost all be larger
    # digits.
    indices = np.arange(len(y))
    np.random.shuffle(indices)
    x = x[indices]
    y = y[indices]
    
    # Explicitly set apart 10% for validation data that we never train over.
    split_at = len(x) - len(x) // 10
    (x_train, x_val) = x[:split_at], x[split_at:]
    (y_train, y_val) = y[:split_at], y[split_at:]
    
    print('Training Data:')
    print(x_train.shape)
    print(y_train.shape)
    
    print('Validation Data:')
    print(x_val.shape)
    print(y_val.shape)
    
    # Try replacing LSTM with GRU or SimpleRNN.
    RNN = layers.LSTM
    HIDDEN_SIZE = 128
    BATCH_SIZE = 128
    LAYERS = 1
    
    print('Build model...')
    model = Sequential()
    # "Encode" the input sequence using an RNN, producing an output of HIDDEN_SIZE.
    # Note: In a situation where your input sequences have a variable length,
    # use input_shape=(None, num_feature).
    model.add(RNN(HIDDEN_SIZE, input_shape=(MAXLEN, len(chars))))
    # As the decoder RNN's input, repeatedly provide the last output of the
    # encoder RNN for each time step. Repeat 'DIGITS + 1' times, as that's the
    # maximum length of the output, e.g., when DIGITS=3 the longest sum is
    # 999+999=1998 (four characters).
    model.add(layers.RepeatVector(DIGITS + 1))
    # The decoder RNN could be multiple layers stacked or a single layer.
    for _ in range(LAYERS):
        # By setting return_sequences to True, return not only the last output but
        # all the outputs so far in the form of (num_samples, timesteps,
        # output_dim). This is necessary as TimeDistributed in the below expects
        # the first dimension to be the timesteps.
        model.add(RNN(HIDDEN_SIZE, return_sequences=True))
    
    # Apply a dense layer to every temporal slice of the input. For each step
    # of the output sequence, decide which character should be chosen.
    model.add(layers.TimeDistributed(layers.Dense(len(chars))))
    model.add(layers.Activation('softmax'))
    model.compile(loss='categorical_crossentropy',
                  optimizer='adam',
                  metrics=['accuracy'])
    model.summary()
    
    # Train the model each generation and show predictions against the validation
    # dataset.
    for iteration in range(1, 200):
        print()
        print('-' * 50)
        print('Iteration', iteration)
        model.fit(x_train, y_train,
                  batch_size=BATCH_SIZE,
                  epochs=1,
                  validation_data=(x_val, y_val))
        # Select 10 samples from the validation set at random so we can visualize
        # errors.
        for i in range(10):
            ind = np.random.randint(0, len(x_val))
            rowx, rowy = x_val[np.array([ind])], y_val[np.array([ind])]
            preds = np.argmax(model.predict(rowx, verbose=0), axis=-1)
            q = ctable.decode(rowx[0])
            correct = ctable.decode(rowy[0])
            guess = ctable.decode(preds[0], calc_argmax=False)
            print('Q', q[::-1] if INVERT else q, end=' ')
            print('T', correct, end=' ')
            if correct == guess:
                print(colors.ok + '☑' + colors.close, end=' ')
            else:
                print(colors.fail + '☒' + colors.close, end=' ')
            print(guess)
    
  • 2021-02-02 11:35

    If you want to keep things neural (links have weights, each neuron computes the weighted sum of its inputs and answers 0 or 1 depending on the sigmoid of that sum, and you train with backpropagation of the gradient), then you should think of the hidden-layer neurons as classifiers. Each one defines a line that separates the input space into two classes: one class corresponds to the part where the neuron responds 1, the other to the part where it responds 0. A second hidden neuron defines another separation, and so forth. The output neuron combines the outputs of the hidden layer by adapting its weights so that its output matches the targets you presented during learning.
    Hence, a single neuron classifies the input space into 2 classes (maybe corresponding to an addition, depending on the training set). Two neurons can define 4 classes, three neurons 8 classes, and so on. Think of the outputs of the hidden neurons as binary digits: h1*2^0 + h2*2^1 + ... + hn*2^(n-1), where hi is the output of hidden neuron i. NB: you will need n output neurons. This answers the question about the number of hidden neurons to use.
    But the NN doesn't compute the addition. It sees it as a classification problem based on what it learned, and it will never be able to generate a correct answer for values outside its training set. During the learning phase it adjusts the weights so as to place the separators (lines in 2D) where they produce the correct answers. If your inputs are in [0,10], it will learn to produce the correct answers for additions of values in [0,10]^2, but it will never give a good answer for 12 + 11.
    If the last values are learned well and the first forgotten, try lowering the learning rate: the weight updates (which depend on the gradient) for the last examples may override those for the first ones (if you're using stochastic backprop). Make sure your training set is balanced. You can also present the badly learned examples more often, and try several learning rates until you find a good one.
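
    To make the extrapolation point concrete, here is a minimal sketch (my own, not part of the original answer; the architecture, ranges and epoch count are arbitrary choices): a small Keras MLP fit on sums of values drawn from [0, 10] usually answers well inside that square, but typically drifts badly on inputs such as 12 + 11.

    import numpy as np
    from keras.models import Sequential
    from keras import layers

    rng = np.random.default_rng(0)
    x_train = rng.uniform(0, 10, size=(5000, 2))  # pairs (a, b) with a, b in [0, 10]
    y_train = x_train.sum(axis=1)                 # target: a + b

    model = Sequential([
        layers.Dense(16, activation='tanh', input_shape=(2,)),
        layers.Dense(1)                           # linear output for regression
    ])
    model.compile(loss='mse', optimizer='adam')
    model.fit(x_train, y_train, epochs=100, batch_size=64, verbose=0)

    print(model.predict(np.array([[3.0, 4.0]])))    # inside the training range: close to 7
    print(model.predict(np.array([[12.0, 11.0]])))  # outside the range: usually far from 23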

  • 2021-02-02 11:37

    It may be too late, but a simple solution is to use an RNN (Recurrent Neural Network).

    After converting your numbers to sequences of digits, your NN will take a pair of digits at a time (one digit from each number) as it moves along the sequences.

    The RNN has to loop one of its outputs back into itself so that it can automatically handle the carry digit (if the sum is 2, write 0 and carry 1).

    To train it, you'll need to give it inputs consisting of two digits (one from the first number, one from the second) along with the desired output, and the RNN will end up discovering how to do the sum.

    Notice that this RNN only needs to learn the following 8 cases to know how to sum two (binary) numbers; a minimal sketch of this idea appears after the list:

    • 1 + 1, 0 + 0, 1 + 0, 0 + 1 with carry
    • 1 + 1, 0 + 0, 1 + 0, 0 + 1 without carry
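
    As a sketch of this idea (my own, not from the answer above, and simplified: a plain dense network learns the 8 full-adder cases and the carry is looped back by hand rather than through a recurrent layer), the following trains on the table above and then adds two multi-digit binary numbers digit by digit:

    import numpy as np
    from keras.models import Sequential
    from keras import layers
    from keras.optimizers import Adam

    # Inputs: (digit of a, digit of b, carry in); targets: (sum digit, carry out).
    cases = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)], dtype=float)
    targets = np.array([[(a + b + c) % 2, (a + b + c) // 2] for a, b, c in cases.astype(int)])

    model = Sequential([
        layers.Dense(16, activation='tanh', input_shape=(3,)),
        layers.Dense(2, activation='sigmoid')
    ])
    model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.01))
    model.fit(cases, targets, epochs=2000, verbose=0)  # usually enough for the 8 samples

    def add_binary(a_bits, b_bits):
        """Add two equal-length little-endian bit lists, looping the predicted carry back in."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = np.round(model.predict(np.array([[a, b, carry]], dtype=float), verbose=0)[0]).astype(int)
            out.append(int(s))
        out.append(int(carry))
        return out

    print(add_binary([1, 0, 1], [1, 1, 0]))  # 5 + 3 -> [0, 0, 0, 1], i.e. 8 in little-endian binary
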
  • 2021-02-02 11:44

    Think about what would happen if you replaced your tanh(x) threshold function with a linear function of x, call it a*x, and treated a as the sole learning parameter in each neuron. That's effectively what your network will be optimising towards: an approximation of the zero-crossing of the tanh function.

    Now, what happens when you layer neurons of this linear type? The output of each neuron is multiplied by a weight as the signal travels from input to output. You're trying to approximate addition with a set of multiplications. That, as they say, does not compute.

  • 2021-02-02 11:49

    A network consisting of a single neuron with weights={1,1}, bias=0 and linear activation function performs the addition of the two input numbers.
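
    As a quick illustration (a minimal sketch, not part of the original answer), a single linear Dense(1) unit trained on random sums does recover weights close to {1, 1} and a bias close to 0, and, being exactly linear, it also generalizes outside the training range:

    import numpy as np
    from keras.models import Sequential
    from keras import layers

    rng = np.random.default_rng(0)
    x = rng.uniform(-10, 10, size=(2000, 2))
    y = x.sum(axis=1)

    model = Sequential([layers.Dense(1, input_shape=(2,))])  # linear activation by default
    model.compile(loss='mse', optimizer='adam')
    model.fit(x, y, epochs=200, batch_size=32, verbose=0)

    w, b = model.layers[0].get_weights()
    print(w.ravel(), b)                              # weights approach [1, 1], bias approaches 0
    print(model.predict(np.array([[12.0, 11.0]])))   # close to 23, even outside the training range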

    Multiplication may be harder. Here are two approaches that a net can use:

    1. Convert one of the numbers to digits (for example, binary) and perform multiplication as you did in elementary school: a*b = a*(b0*2^0 + b1*2^1 + ... + bk*2^k) = a*b0*2^0 + a*b1*2^1 + ... + a*bk*2^k. This approach is simple, but requires a number of neurons that grows with the length (i.e., the logarithm) of the input b.
    2. Take logarithms of the inputs, add them, and exponentiate the result: a*b = exp(ln(a) + ln(b)). This network can work on numbers of any length, as long as it can approximate the logarithm and exponential well enough (a minimal sketch of this approach follows).
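
    For approach 2, here is a minimal sketch (my own, not from the answer): if the logarithm is taken as a preprocessing step, the network itself only has to add in log space, so a single linear neuron is enough and the exponential is applied to its output:

    import numpy as np
    from keras.models import Sequential
    from keras import layers

    rng = np.random.default_rng(0)
    a = rng.uniform(1, 100, size=5000)
    b = rng.uniform(1, 100, size=5000)
    x = np.log(np.stack([a, b], axis=1))   # inputs: ln(a), ln(b)
    y = np.log(a * b)                      # target: ln(a*b) = ln(a) + ln(b)

    model = Sequential([layers.Dense(1, input_shape=(2,))])
    model.compile(loss='mse', optimizer='adam')
    model.fit(x, y, epochs=200, batch_size=64, verbose=0)

    print(np.exp(model.predict(np.log(np.array([[7.0, 6.0]])))))  # close to 42 once the weights approach [1, 1]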