Autocorrelation of the input in TensorFlow/Keras

Asked by 花落未央 on 2021-01-22 12:43

I have a 1D input signal. I want to compute the autocorrelation as part of the neural net, for further use inside the network. I need to perform a convolution of the input with itself.

3 Answers
  • 2021-01-22 12:50

    Here is a possible solution.

    By self convolution, I understood a regular convolution where the filter is exactly the same as the input (if it's not that, sorry for my misunderstanding).

    We need a custom function for that, and a Lambda layer.

    At first I used padding = 'same', which produces outputs with the same length as the inputs. I'm not sure what output length you want exactly, but if you want more, you should add padding yourself before doing the convolution. (In the example with length 7, for a complete convolution from one end to the other, this manual padding would add 6 zeros before and 6 zeros after the input, and use padding = 'valid'. Find the backend functions here.)
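One way to see why six zeros on each side gives the complete convolution: with plain NumPy (used here only as an illustration, outside the model), padding a length-7 signal with 6 zeros per side and convolving in 'valid' mode reproduces the 'full' convolution exactly:

```python
import numpy as np

x = np.arange(7.0)   # length-7 input signal
f = np.arange(7.0)   # filter of the same length, as in self-convolution

# complete convolution, sliding from one end to the other: length 13
full = np.convolve(x, f, mode='full')

# same result via manual padding (6 zeros each side) + 'valid' convolution
valid = np.convolve(np.pad(x, (6, 6)), f, mode='valid')

print(np.allclose(full, valid))  # True
```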

    Working example - Input (5,7,2)

    from keras.models import Model
    from keras.layers import *
    import keras.backend as K
    import numpy as np
    
    batch_size = 5
    length = 7
    channels = 2
    channels_batch = batch_size*channels
    
    def selfConv1D(x):
        #this function unfortunately needs to know the shapes in advance,
        #mainly because of the for loop; the other lines have workarounds,
        #but those are unnecessary given this limitation anyway
    
        #original x: (batch_size, length, channels)
    
        #bring channels to the batch position:
        x = K.permute_dimensions(x,[2,0,1]) #(channels, batch_size, length)
    
        #suppose channels are just individual samples (since we don't mix channels)
        x = K.reshape(x,(channels_batch,length,1))
    
        #here, we get a copy of x reshaped to match filter shapes:
        filters = K.permute_dimensions(x,[1,2,0])  #(length, 1, channels_batch)
    
        #now, in the lack of a suitable available conv function, we make a loop
        allChannels = []
        for i in range(channels_batch):
    
            f = filters[:,:,i:i+1]
            allChannels.append(
                K.conv1d(
                    x[i:i+1], 
                    f, 
                    padding='same', 
                    data_format='channels_last'))
                        #although channels_last is my default config, I found this bug: 
                        #https://github.com/fchollet/keras/issues/8183
    
            #convolution output: (1, length, 1)
    
        #concatenate all results as samples
        x = K.concatenate(allChannels, axis=0) #(channels_batch,length,1)
    
        #restore the original form (passing channels to the end)
        x = K.reshape(x,(channels,batch_size,length))
        return K.permute_dimensions(x,[1,2,0]) #(batch_size, length, channels)
    
    
    #input data for the test:
    x = np.array(range(70)).reshape((5,7,2))
    
    #little model that just performs the convolution
    inp = Input((7,2))
    out = Lambda(selfConv1D)(inp)
    
    model = Model(inp,out)
    
    #checking results
    p = model.predict(x)
    for i in range(5):
        print("x",x[i])
        print("p",p[i])
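Since this answer interprets self-convolution as correlating the signal with itself, the expected result can be sanity-checked independently with plain NumPy (the array below is illustrative, not the model's data):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 3.0, 2.0, 1.0])

# cross-correlating a real signal with itself gives its autocorrelation
ac = np.correlate(x, x, mode='full')  # lags -(n-1) .. n-1, length 13

# two classic properties to check:
print(np.allclose(ac, ac[::-1]))  # symmetric around zero lag -> True
print(ac.argmax() == len(x) - 1)  # maximum at zero lag (index 6) -> True
```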
    
  • 2021-01-22 12:51

    You can just use tf.nn.conv3d by treating the "batch size" as "depth":

    # treat the batch size as depth
    data = tf.reshape(input_data, [1, batch, in_height, in_width, in_channels])
    # the kernel must be an actual tensor (not just a list of dimension names)
    # with shape [filter_depth, filter_height, filter_width, in_channels, out_channels]
    kernel = ...
    out = tf.nn.conv3d(data, kernel, strides=[1, 1, 1, 1, 1], padding='SAME')
    
  • 2021-01-22 12:55

    TensorFlow now has an auto_correlation function. It should be in release 1.6; if you build from source, you can use it right now (see, e.g., the GitHub code).
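Until that release, a plain NumPy fallback computes the same quantity (a sketch; the TF function's exact normalization may differ, so treat this only as a reference implementation):

```python
import numpy as np

def autocorr(x):
    """Autocorrelation for lags 0..n-1, normalized to 1 at zero lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # remove the mean (a common convention)
    full = np.correlate(x, x, mode='full')  # lags -(n-1) .. n-1
    ac = full[len(x) - 1:]                  # keep the non-negative lags
    return ac / ac[0]

print(autocorr([1.0, 2.0, 3.0, 4.0]))  # first entry is 1.0 by construction
```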
