Keras: retrieve the value of a node before the activation function

Asked by 感动是毒 on 2021-02-07 00:48

Imagine a fully-connected neural network with its last two layers of the following structure:

[Dense]
    units = 612
    activation = softplus

[Dense]
    unit         
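
For concreteness, a minimal sketch of such a tail in Keras (the second Dense layer's definition is truncated above, so its size and activation below are purely illustrative):

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(612, activation='softplus', input_dim=100))  # hypothetical input size
    model.add(Dense(1, activation='sigmoid'))                    # illustrative: the layer is cut off above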


        
5 Answers
  •  终归单人心
    2021-02-07 01:32

    (TF backend) A solution for Conv layers.

    I had the same question, and rewriting the model's configuration was not an option. The simple hack is to perform the layer's call manually, which gives you control over the activation.

    The following is copy-pasted from the Keras source, with self changed to layer. You can do the same with any other layer type.

    from keras import backend as K

    def conv_no_activation(layer, inputs, activation=False):
        """Apply layer's convolution (and optionally its activation) to inputs."""

        # Dispatch on the layer's rank, exactly as Keras' Conv.call does.
        if layer.rank == 1:
            outputs = K.conv1d(
                inputs,
                layer.kernel,
                strides=layer.strides[0],
                padding=layer.padding,
                data_format=layer.data_format,
                dilation_rate=layer.dilation_rate[0])
        elif layer.rank == 2:
            outputs = K.conv2d(
                inputs,
                layer.kernel,
                strides=layer.strides,
                padding=layer.padding,
                data_format=layer.data_format,
                dilation_rate=layer.dilation_rate)
        elif layer.rank == 3:
            outputs = K.conv3d(
                inputs,
                layer.kernel,
                strides=layer.strides,
                padding=layer.padding,
                data_format=layer.data_format,
                dilation_rate=layer.dilation_rate)

        if layer.use_bias:
            outputs = K.bias_add(
                outputs,
                layer.bias,
                data_format=layer.data_format)

        # Apply the activation only on request -- this is the whole point.
        if activation and layer.activation is not None:
            outputs = layer.activation(outputs)

        return outputs
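
    The same hack works for the Dense layers from the question. A minimal sketch, mirroring Dense.call from the Keras source (layer is any built Dense instance; dense_no_activation is my name, not a Keras function):

    def dense_no_activation(layer, inputs, activation=False):
        # Affine transform, as in keras.layers.Dense.call.
        outputs = K.dot(inputs, layer.kernel)
        if layer.use_bias:
            outputs = K.bias_add(outputs, layer.bias)
        # Again, the activation is applied only on request.
        if activation and layer.activation is not None:
            outputs = layer.activation(outputs)
        return outputs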
    

    Now we need to modify the main function a little. First, identify the layer by its name. Then retrieve the activations from the previous layer. Finally, compute the output of the target layer.

    def get_output_activation_control(model, images, layername, activation=False):
        """Get activations for the input from the specified layer."""

        inp = model.input

        # Locate the target layer and the layer that feeds into it.
        layer_id, layer = [(n, l) for n, l in enumerate(model.layers) if l.name == layername][0]
        prev_layer = model.layers[layer_id - 1]
        conv_out = conv_no_activation(layer, prev_layer.output, activation=activation)
        functor = K.function([inp] + [K.learning_phase()], [conv_out])

        # 0. feeds the learning-phase placeholder, i.e. test mode.
        return functor([images, 0.])
    

    Here is a tiny test; I'm using the VGG16 model.

    a_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=True)[0]
    a_no_relu = get_output_activation_control(vgg_model, img, 'block4_conv1', activation=False)[0]
    
    print(np.sum(a_no_relu < 0))
    > 245293
    

    Set all negative values to zero and compare with the result retrieved after the ReLU operation embedded in VGG16.

    a_no_relu[a_no_relu < 0] = 0
    print(np.allclose(a_relu, a_no_relu))
    > True
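
    For newer tf.keras models (TF 2.x), where manual backend calls are awkward, an alternative sketch is to clone the target layer without its activation and reuse its trained weights. This is not from the original answer; the pattern below is illustrative:

    import tensorflow as tf
    from tensorflow.keras import Model

    vgg_model = tf.keras.applications.VGG16(weights='imagenet')
    target = vgg_model.get_layer('block4_conv1')

    # Rebuild the layer from its config, minus the activation.
    config = target.get_config()
    config['activation'] = None   # linear, i.e. no activation
    config['name'] += '_no_act'   # avoid a duplicate-name clash
    no_act = type(target).from_config(config)

    # Wire it to the same input tensor and copy the trained weights.
    pre_act = no_act(target.input)
    no_act.set_weights(target.get_weights())

    # submodel(images) now returns the pre-activation values.
    submodel = Model(vgg_model.input, pre_act)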
    
