Convolution

What is wrong with my multi-channel 1d convolution implemented in numpy (compared with tensorflow)

牧云@^-^@ Submitted on 2020-01-24 09:27:19
Question: To ensure my understanding of TensorFlow's convolution operations, I implemented conv1d with multiple channels in numpy. However, I get different results, and I cannot see the problem. It seems my implementation is doubling the overlapped values compared with conv1d. Code:

import tensorflow as tf
import numpy as np
# hand-written multi-channel 1D convolution operator
# "Data", dimensions:
# [0]: sample (2 samples)
# [1]: time index (4 indexes)
# [2]: channels (2 channels)
x = np.array([[1,2,3
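A frequent source of mismatch here is the kernel flip: np.convolve implements true convolution (the kernel is reversed), while tf.nn.conv1d computes cross-correlation (no flip), and each input channel's contribution must be summed exactly once. A minimal sketch of multi-channel conv1d as cross-correlation, assuming VALID padding and stride 1, with shapes following TensorFlow's [batch, time, channels] / [width, in, out] convention (the function name is illustrative, not the question's code):

import numpy as np

def conv1d_valid(x, w):
    # Cross-correlation as used by tf.nn.conv1d (no kernel flip).
    # x: [batch, time, in_channels], w: [width, in_channels, out_channels]
    batch, time, cin = x.shape
    width, _, cout = w.shape
    out = np.zeros((batch, time - width + 1, cout))
    for t in range(time - width + 1):
        # window x[:, t:t+width, :] has shape [batch, width, cin];
        # contract over kernel width and input channels exactly once
        out[:, t, :] = np.einsum('bwc,wco->bo', x[:, t:t+width, :], w)
    return out

If a hand-written version doubles values where windows overlap, a common cause is accumulating each window's contribution into several output positions instead of writing one output value per window.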

Activation function after pooling layer or convolutional layer?

做~自己de王妃 Submitted on 2020-01-22 13:15:27
Question: The theory from these links shows that the order in a convolutional network is: Convolutional Layer - Non-linear Activation - Pooling Layer.
Neural networks and deep learning (equation (125))
Deep learning book (page 304, 1st paragraph)
Lenet (the equation)
The source in this headline
But in the last implementation from those sites, the order is: Convolutional Layer - Pooling Layer - Non-linear Activation (network3.py, the source code's LeNetConvPoolLayer class). I've also tried to
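One practical note that resolves the apparent contradiction: for max pooling combined with a monotonically non-decreasing activation such as ReLU, both orders produce identical outputs, because relu(max(a, b)) == max(relu(a), relu(b)); pooling first is simply cheaper, since the activation then runs on fewer values. A quick numerical check (a sketch; shapes are illustrative):

import numpy as np

relu = lambda v: np.maximum(v, 0.0)
# 2x2 non-overlapping max pool on a [batch, height, width, channels] tensor
pool = lambda v: v.reshape(1, 4, 2, 4, 2, 3).max(axis=(2, 4))

x = np.random.randn(1, 8, 8, 3)
# ReLU is monotonic, so it commutes with max pooling
assert np.allclose(relu(pool(x)), pool(relu(x)))

This equivalence does not hold for average pooling or for non-monotonic activations, where the conventional conv - activation - pool order matters.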

Fully Convolutional Network Receptive Field

﹥>﹥吖頭↗ Submitted on 2020-01-21 23:16:25
Question: There are many questions regarding the calculation of the receptive field; it is explained very well here on StackOverflow. However, there are no blogs or tutorials on how to calculate it in fully convolutional networks, i.e. with residual blocks, feature map concatenation and upsampling layers (like a feature pyramid network). To my understanding, residual blocks and skip connections do not contribute to the receptive field and can be skipped (answer from here). How are upsampling layers handled?
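For plain chains of convolutions and poolings, the receptive field can be tracked with two numbers per layer: the receptive field r and the cumulative stride (jump) j, updated as r += (k - 1) * j and j *= s. A sketch of that recurrence, with the assumption (not from the question) that an upsampling layer by factor f is modelled as kernel 1 with stride 1/f, which shrinks the jump and leaves r unchanged:

def receptive_field(layers):
    # layers: sequence of (kernel_size, stride) tuples, input to output
    r, j = 1, 1
    for k, s in layers:
        r = r + (k - 1) * j   # new input pixels seen at the current jump
        j = j * s             # cumulative stride in input pixels
    return r

# three 3x3 convs (second with stride 2), then a 2x upsampling layer
print(receptive_field([(3, 1), (3, 2), (3, 1), (1, 0.5)]))  # -> 9

Residual additions and concatenations take the maximum receptive field over their branches, which is why skip connections are usually said not to contribute beyond the main path.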

Would Richardson–Lucy deconvolution work for recovering the latent kernel?

半城伤御伤魂 Submitted on 2020-01-21 19:46:27
Question: I am aware that Richardson–Lucy deconvolution is for recovering the latent image, but suppose we have a noisy image and the original image. Can we find the kernel that caused the transformation? Below is MATLAB code for Richardson–Lucy deconvolution, and I am wondering if it is easy to modify so that it recovers the kernel instead of the latent image. My thought is that we change the convolution option to valid so the output would represent the kernel; what do you think? function latent
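Since convolution commutes, one way to pursue this idea is to run Richardson–Lucy with the roles of image and PSF swapped: keep the known sharp image fixed and apply the multiplicative update to the kernel, as in the PSF step of blind RL deconvolution. A rough numpy sketch under the usual RL assumptions (non-negative data; all names and the crop convention here are illustrative, not the question's MATLAB code):

import numpy as np
from scipy.signal import fftconvolve

def rl_estimate_kernel(blurred, sharp, ksize, iters=50):
    # Swapped-role Richardson-Lucy: iterate on the kernel, not the image.
    k = np.full((ksize, ksize), 1.0 / ksize**2)   # flat initial guess
    sharp_flip = sharp[::-1, ::-1]                # correlation via flipped conv
    for _ in range(iters):
        est = fftconvolve(sharp, k, mode='same')  # predicted blurred image
        ratio = blurred / (est + 1e-12)
        corr = fftconvolve(ratio, sharp_flip, mode='same')
        # multiplicative update, cropped back to the kernel support
        cy, cx = np.array(corr.shape) // 2
        h = ksize // 2
        k *= corr[cy - h:cy - h + ksize, cx - h:cx - h + ksize]
        k /= k.sum()                              # keep the kernel normalized
    return k

The questioner's "valid" idea is closely related: choosing valid output sizes makes the correlation of the ratio image with the sharp image directly kernel-shaped, instead of cropping as above.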

Understanding NumPy's Convolve

心已入冬 Submitted on 2020-01-19 18:20:53
Question: When calculating a simple moving average, numpy.convolve appears to do the job. Question: How is the calculation done when you use np.convolve(values, weights, 'valid')? When the docs mention that the convolution product is only given for points where the signals overlap completely, what are the 2 signals referring to? If any explanation can include examples and illustrations, it will be extremely useful.

window = 10
weights = np.repeat(1.0, window)/window
smas = np.convolve(values, weights,
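The "two signals" are simply the two arrays passed in: np.convolve slides the (reversed) weights across values, and 'valid' keeps only the len(values) - len(weights) + 1 positions where the window fits entirely inside the data. A small worked example with illustrative values (the kernel flip is invisible here because the weights are uniform):

import numpy as np

values = np.array([1., 2., 3., 4., 5.])
weights = np.repeat(1.0, 3) / 3          # 3-point moving-average window

smas = np.convolve(values, weights, 'valid')
print(smas)  # [2. 3. 4.] -- one average per fully-overlapping window

# equivalent to computing each window mean by hand:
print([values[i:i+3].mean() for i in range(len(values) - 2)])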

Implementing conv1d with numpy operations

瘦欲@ Submitted on 2020-01-15 09:52:30
Question: I am trying to implement tensorflow's conv1d using numpy operations, ignoring strides and padding for now. I thought I understood it after my previous question, but realized today that I was still not getting the right answer when dealing with kernels wider than 1. So now I am trying to use tflearn as a template, because it computes the kernel shape for me. Now that I understand that the convolution can be computed as a matrix multiplication, I am attempting to use the kernel matrix accordingly,
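The matrix-multiplication view is the im2col trick: unfold the input into all sliding windows, flatten each window, and multiply once by the flattened kernel. A sketch assuming VALID padding and stride 1 (names are illustrative):

import numpy as np

def conv1d_as_matmul(x, w):
    # x: [batch, time, cin], w: [width, cin, cout] (TensorFlow's layout)
    batch, time, cin = x.shape
    width, _, cout = w.shape
    steps = time - width + 1
    # gather all sliding windows: [batch, steps, width * cin]
    cols = np.stack([x[:, t:t+width, :].reshape(batch, -1)
                     for t in range(steps)], axis=1)
    # flatten the kernel to [width * cin, cout] and multiply once
    return cols @ w.reshape(-1, cout)

This matches tf.nn.conv1d with 'VALID' padding because the kernel is stored as [filter_width, in_channels, out_channels], so flattening both the window and the kernel row-major keeps the element pairing consistent; kernels wider than 1 are where a wrong flattening order first shows up.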

Convolution matrix for diagonal motion blur

試著忘記壹切 Submitted on 2020-01-15 05:34:18
Question: I know my question is not really a programming question, but it came out of a programming need. Does anyone happen to know the convolution matrix for diagonal motion blur? 3x3, 4x4 or 5x5 are all good. Thanks.

Answer 1: This is 5x5:

0.22222 0.27778 0.22222 0.05556 0.00000
0.27778 0.44444 0.44444 0.22222 0.05556
0.22222 0.44444 0.55556 0.44444 0.22222
0.05556 0.22222 0.44444 0.44444 0.27778
0.00000 0.05556 0.22222 0.27778 0.22222

I basically drew a diagonal line, and then blurred it a little.

Source:
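To use the answer's kernel as a blur, it should first be normalized to sum to 1, otherwise the output brightens (the raw values sum to roughly 6.3). A sketch applying it with scipy (the random image is a placeholder for real grayscale data):

import numpy as np
from scipy.ndimage import convolve

kernel = np.array([
    [0.22222, 0.27778, 0.22222, 0.05556, 0.00000],
    [0.27778, 0.44444, 0.44444, 0.22222, 0.05556],
    [0.22222, 0.44444, 0.55556, 0.44444, 0.22222],
    [0.05556, 0.22222, 0.44444, 0.44444, 0.27778],
    [0.00000, 0.05556, 0.22222, 0.27778, 0.22222],
])
kernel /= kernel.sum()   # normalize so overall brightness is preserved

image = np.random.rand(64, 64)            # stand-in for a real image
blurred = convolve(image, kernel, mode='reflect')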

Increasing Label Error Rate (Edit Distance) and Fluctuating Loss?

烂漫一生 Submitted on 2020-01-14 03:04:53
Question: I am training a handwriting recognition model with this architecture:

{ "network": [
  { "layer_type": "l2_normalize" },
  { "layer_type": "conv2d", "num_filters": 16, "kernel_size": 5, "stride": 1, "padding": "same" },
  { "layer_type": "max_pool2d", "pool_size": 2, "stride": 2, "padding": "same" },
  { "layer_type": "l2_normalize" },
  { "layer_type": "dropout", "keep_prob": 0.5 },
  { "layer_type": "conv2d", "num_filters": 32, "kernel_size": 5, "stride": 1, "padding": "same" },
  { "layer_type": "max
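The "label error rate" in the title is normally the Levenshtein edit distance between the decoded string and the ground truth, divided by the target length. A minimal sketch of that metric (not the question's training code):

def edit_distance(ref, hyp):
    # Levenshtein distance via a single-row dynamic program;
    # label error rate is typically edit_distance(ref, hyp) / len(ref)
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1,         # deletion
                                   d[j - 1] + 1,     # insertion
                                   prev + (r != h))  # substitution
    return d[-1]

print(edit_distance("kitten", "sitting"))  # 3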