Keras: test, cross validation and accuracy while processing batched data with train_on_batch

Submitted by 假装没事ソ on 2021-02-19 05:40:07

Question


Can someone point me to a complete example that does all of the following?

  • Fits batched (and pickled) data in a loop using train_on_batch()
  • Sets aside data from each batch for validation purposes
  • Sets aside test data for accuracy evaluation after all batches have been processed (see last line of my example below).

I'm finding lots of 1-to-5-line code snippets on the internet illustrating how to call train_on_batch() or fit_generator(), but so far nothing that clearly illustrates how to separate out and handle both validation and test data while using train_on_batch().

F. Chollet's great example Cifar10_cnn (https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py) does not illustrate all of the points I listed above.

You can say, "Hey, handling test data is your problem. Do it manually." Fine! But I don't understand what these routines do well enough to even know whether that is necessary. They are mostly black boxes, and for all I know they handle validation and test data automagically under the hood. My hope is that a more complete example would clear up the confusion.

For instance, in the example below where I read batches iteratively from pickle files, how would I modify the call to train_on_batch to handle validation_data? And how do I set aside test data (test_x & test_y) for purposes of evaluating accuracy at the end of the algorithm?

import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from keras.preprocessing.sequence import pad_sequences

# fvecs and fpols are pickle files (opened earlier) holding document vectors
# and their sentiment polarities, read back one batch at a time.
while 1:
    try:
        batch = np.array(pickle.load(fvecs))
        polarities = np.array(pickle.load(fpols)) 

        # Divide a batch of 1000 documents (movie reviews) into:
        # 800 rows of training data, and
        # 200 rows of test (validation?) data
        train_x, val_x, train_y, val_y = train_test_split(batch, polarities, test_size=0.2)

        doc_size = 30
        x_batch = pad_sequences(train_x, maxlen=doc_size)
        y_batch = train_y

        # Fit the model 
        model.train_on_batch(x_batch, y_batch)
        # model.fit(train_x, train_y, validation_data=(val_x, val_y), epochs=2, batch_size=800, verbose=2)

    except EOFError:
        print("EOF detected.")
        break

# Final evaluation of the model
scores = model.evaluate(test_x, test_y, verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))

Answer 1:


I can't supply you with a complete example, but as you can see in the Keras source, there is both a train_on_batch and a test_on_batch method, which suggests that train_on_batch only trains the model and does not test it.

Just to be extra sure, you can see in the code itself that the function uses the entire batch for training; nothing is held out for testing or validation.

For your convenience, I'm quoting the relevant code below:

def train_on_batch(self, x, y,
                   sample_weight=None,
                   class_weight=None):
    """Runs a single gradient update on a single batch of data.
    # Arguments
        x: Numpy array of training data,
            or list of Numpy arrays if the model has multiple inputs.
            If all inputs in the model are named,
            you can also pass a dictionary
            mapping input names to Numpy arrays.
        y: Numpy array of target data,
            or list of Numpy arrays if the model has multiple outputs.
            If all outputs in the model are named,
            you can also pass a dictionary
            mapping output names to Numpy arrays.
        sample_weight: Optional array of the same length as x, containing
            weights to apply to the model's loss for each sample.
            In the case of temporal data, you can pass a 2D array
            with shape (samples, sequence_length),
            to apply a different weight to every timestep of every sample.
            In this case you should make sure to specify
            sample_weight_mode="temporal" in compile().
        class_weight: Optional dictionary mapping
            class indices (integers) to
            a weight (float) to apply to the model's loss for the samples
            from this class during training.
            This can be useful to tell the model to "pay more attention" to
            samples from an under-represented class.
    # Returns
        Scalar training loss
        (if the model has a single output and no metrics)
        or list of scalars (if the model has multiple outputs
        and/or metrics). The attribute `model.metrics_names` will give you
        the display labels for the scalar outputs.
    """
    x, y, sample_weights = self._standardize_user_data(
        x, y,
        sample_weight=sample_weight,
        class_weight=class_weight,
        check_batch_axis=True)
    if self.uses_learning_phase and not isinstance(K.learning_phase(), int):
        ins = x + y + sample_weights + [1.]
    else:
        ins = x + y + sample_weights
    self._make_train_function()
    outputs = self.train_function(ins)
    if len(outputs) == 1:
        return outputs[0]
    return outputs
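
Putting those two methods together with the loop from the question, below is a rough sketch (not a complete, tested example) of one way to handle all three splits while still calling train_on_batch(). It assumes model is an already-compiled Keras model and fvecs/fpols are the opened pickle files from the question; the second train_test_split call and the test_x_parts/test_y_parts accumulators are illustrative additions of mine, not part of any Keras API.

import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from keras.preprocessing.sequence import pad_sequences

doc_size = 30
test_x_parts, test_y_parts = [], []  # test data held out from every batch

while 1:
    try:
        batch = np.array(pickle.load(fvecs))
        polarities = np.array(pickle.load(fpols))

        # Split each pickled batch three ways: 80% train, 10% validation, 10% test.
        train_x, rest_x, train_y, rest_y = train_test_split(batch, polarities, test_size=0.2)
        val_x, test_x, val_y, test_y = train_test_split(rest_x, rest_y, test_size=0.5)

        # train_on_batch() only trains; test_on_batch() evaluates without updating weights.
        train_metrics = model.train_on_batch(pad_sequences(train_x, maxlen=doc_size), train_y)
        val_metrics = model.test_on_batch(pad_sequences(val_x, maxlen=doc_size), val_y)

        # Accumulate the test split; the model never sees it during training.
        test_x_parts.append(pad_sequences(test_x, maxlen=doc_size))
        test_y_parts.append(test_y)

    except EOFError:
        print("EOF detected.")
        break

# Final evaluation on the accumulated, never-trained-on test data.
scores = model.evaluate(np.concatenate(test_x_parts), np.concatenate(test_y_parts), verbose=0)
print("Accuracy: %.2f%%" % (scores[1] * 100))

test_on_batch() returns the same scalar loss (or list of metrics) as train_on_batch(), so you can log val_metrics per batch to watch for overfitting; model.metrics_names tells you what each returned value means.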


Source: https://stackoverflow.com/questions/46993179/keras-test-cross-validation-and-accuracy-while-processing-batched-data-with-tr
