Keras, How to get the output of each layer?


I have trained a binary classification model with a CNN, and here is my code:

model = Sequential()
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1], ...))  # rest of the model definition truncated in the original post

10 Answers
  • 2020-11-22 07:53

    Well, the other answers are very complete, but there is a very basic way to "see" the shapes, rather than "get" them.

    Just call model.summary(). It will print all layers and their output shapes. "None" values indicate variable dimensions, and the first dimension is the batch size.
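
    For example, a minimal sketch (the architecture here is purely illustrative):

    from keras.models import Sequential
    from keras.layers import Dense
    
    model = Sequential()
    model.add(Dense(32, activation='relu', input_shape=(16,)))
    model.add(Dense(1, activation='sigmoid'))
    
    # Prints every layer with its output shape; the leading None is the batch dimension
    model.summary()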

  • 2020-11-22 07:54

    Based on all the good answers in this thread, I wrote a library to fetch the output of each layer. It abstracts away all the complexity and has been designed to be as user-friendly as possible:

    https://github.com/philipperemy/keract

    It handles almost all the edge cases.

    Hope it helps!
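
    A minimal usage sketch (assuming keract's get_activations(model, x) entry point, with model being the trained model from the question):

    import numpy as np
    from keract import get_activations   # pip install keract
    
    # The input must match the model's expected input shape
    x = np.random.random((1,) + model.input_shape[1:])
    activations = get_activations(model, x)   # dict: layer name -> np.ndarray of outputs
    for name, out in activations.items():
        print(name, out.shape)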

  • 2020-11-22 07:54

    The following looks very simple to me:

    model.layers[idx].output
    

    The above is a tensor object, so you can modify it using any operation that can be applied to a tensor.

    For example, to get the shape: model.layers[idx].output.get_shape()

    Here idx is the index of the layer, and you can find it from model.summary().
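
    Putting it together, a minimal sketch (the layer sizes are illustrative):

    from keras.models import Sequential
    from keras.layers import Dense
    
    model = Sequential()
    model.add(Dense(8, input_shape=(4,)))
    model.add(Dense(2))
    
    idx = 1                                       # index of the layer of interest, per model.summary()
    print(model.layers[idx].output.get_shape())   # e.g. (None, 2); None is the batch dimension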

  • 2020-11-22 08:02

    You can easily get the outputs of any layer by using: model.layers[index].output

    For all layers use this:

    from keras import backend as K
    import numpy as np
    
    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functors = [K.function([inp, K.learning_phase()], [out]) for out in outputs]    # evaluation functions
    
    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]       # input_shape is the model's input shape
    layer_outs = [func([test, 1.]) for func in functors]
    print(layer_outs)
    

    Note: to simulate training-time behavior (e.g. active Dropout), pass 1. as the learning_phase value when computing layer_outs; otherwise pass 0.
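
    For instance, reusing the functors and test input above to compare the two phases:

    # Training-phase outputs (Dropout active)
    train_outs = [func([test, 1.]) for func in functors]
    # Test-phase outputs (Dropout disabled)
    test_outs = [func([test, 0.]) for func in functors]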

    Edit: (based on comments)

    K.function creates Theano/TensorFlow tensor functions, which are later used to get the output from the symbolic graph given the input.

    Now K.learning_phase() is required as an input because many Keras layers, like Dropout/BatchNormalization, depend on it to change behavior between training and test time.

    So if you remove the Dropout layer from your code, you can simply use:

    from keras import backend as K
    import numpy as np
    
    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functors = [K.function([inp], [out]) for out in outputs]    # evaluation functions
    
    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = [func([test]) for func in functors]
    print(layer_outs)
    

    Edit 2: More optimized

    I just realized that the previous answer is not that optimized: for each function evaluation the data is transferred CPU->GPU, and the tensor calculations for the lower layers are repeated over and over.

    Instead, here is a much better way: you don't need multiple functions, just a single function that gives you the list of all outputs:

    from keras import backend as K
    import numpy as np
    
    inp = model.input                                           # input placeholder
    outputs = [layer.output for layer in model.layers]          # all layer outputs
    functor = K.function([inp, K.learning_phase()], outputs)    # single evaluation function
    
    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = functor([test, 1.])
    print(layer_outs)
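
    For reference, the same single-pass idea can also be sketched with the functional API by building one multi-output model (an alternative sketch, not from the original answer, assuming model and test from above; note that the first input layer may need to be skipped, as a later answer discusses):

    from keras.models import Model
    
    # One model whose outputs are all intermediate layer outputs; a single forward pass
    activation_model = Model(inputs=model.input, outputs=[layer.output for layer in model.layers])
    layer_outs = activation_model.predict(test)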
    
  • 2020-11-22 08:03

    Assuming you have:

    1- A pre-trained Keras model.

    2- An input x, an image or a set of images. The image resolution should be compatible with the dimensions of the input layer, e.g. 80*80*3 for a 3-channel (RGB) image.

    3- The name of the layer whose activation you want, e.g. the "flatten_2" layer. This should be included in the layer_names variable, which holds the names of the layers of the given model.

    4- batch_size, which is an optional argument.

    Then you can easily use the get_activations function to get the activation of the chosen layer for a given input x and the pre-trained model:

    import six
    import numpy as np
    import keras.backend as k
    from numpy import float32
    
    def get_activations(x, model, layer, batch_size=128):
        """
        Return the output of the specified layer for input `x`. `layer` is specified by layer index (between 0 and
        `nb_layers - 1`) or by name. The number of layers can be determined by counting the results returned by
        calling `layer_names`.

        :param x: Input for computing the activations.
        :type x: `np.ndarray`. Example: x.shape = (80, 80, 3)
        :param model: Pre-trained Keras model, including weights.
        :type model: `keras.engine.sequential.Sequential`. Example: model.input_shape = (None, 80, 80, 3)
        :param layer: Layer for computing the activations.
        :type layer: `int` or `str`. Example: layer = 'flatten_2'
        :param batch_size: Size of batches.
        :type batch_size: `int`
        :return: The output of `layer`, where the first dimension is the batch size corresponding to `x`.
        :rtype: `np.ndarray`. Example: activations.shape = (1, 2000)
        """
        # Use a separate loop variable so the `layer` argument is not shadowed
        layer_names = [l.name for l in model.layers]
        if isinstance(layer, six.string_types):
            if layer not in layer_names:
                raise ValueError('Layer name %s is not part of the graph.' % layer)
            layer_name = layer
        elif isinstance(layer, int):
            if layer < 0 or layer >= len(layer_names):
                raise ValueError('Layer index %d is outside of range (0 to %d included).'
                                 % (layer, len(layer_names) - 1))
            layer_name = layer_names[layer]
        else:
            raise TypeError('Layer must be of type `str` or `int`.')
    
        layer_output = model.get_layer(layer_name).output
        layer_input = model.input
        output_func = k.function([layer_input], [layer_output])
    
        # Add a batch dimension if a single sample was passed
        if x.shape == k.int_shape(model.input)[1:]:
            x_preproc = np.expand_dims(x, 0)
        else:
            x_preproc = x
        assert len(x_preproc.shape) == 4
    
        # Determine shape of expected output and prepare array
        output_shape = output_func([x_preproc[0][None, ...]])[0].shape
        activations = np.zeros((x_preproc.shape[0],) + output_shape[1:], dtype=float32)
    
        # Get activations with batching
        for batch_index in range(int(np.ceil(x_preproc.shape[0] / float(batch_size)))):
            begin, end = batch_index * batch_size, min((batch_index + 1) * batch_size, x_preproc.shape[0])
            activations[begin:end] = output_func([x_preproc[begin:end]])[0]
    
        return activations
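
    A usage sketch for the function above (the layer name and shapes follow the docstring's examples and are illustrative):

    # A single 80x80 RGB image, as in the docstring example
    x = np.random.random((80, 80, 3)).astype(np.float32)
    acts = get_activations(x, model, layer='flatten_2', batch_size=128)
    print(acts.shape)   # e.g. (1, 2000), per the docstring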
    
  • 2020-11-22 08:04

    Wanted to add this as a comment (but don't have high enough rep.) to @indraforyou's answer, to correct the issue mentioned in @mathtick's comment. To avoid the InvalidArgumentError: input_X:Y is both fed and fetched. exception, simply replace the line outputs = [layer.output for layer in model.layers] with outputs = [layer.output for layer in model.layers][1:], i.e.

    adapting indraforyou's minimal working example:

    from keras import backend as K
    import numpy as np
    
    inp = model.input                                             # input placeholder
    outputs = [layer.output for layer in model.layers][1:]        # all layer outputs except first (input) layer
    functor = K.function([inp, K.learning_phase()], outputs)      # evaluation function
    
    # Testing
    test = np.random.random(input_shape)[np.newaxis, ...]
    layer_outs = functor([test, 1.])
    print(layer_outs)
    

    P.S. My attempts at things such as outputs = [layer.output for layer in model.layers[1:]] did not work.
