theano

Neural Network Image Classification, The Most Efficient Solution / Suggestion [closed]

六月ゝ 毕业季﹏ Submitted on 2020-01-17 21:15:10
Question [Closed]. This question is opinion-based and is not currently accepting answers. Closed 4 years ago. I have already built a deep neural network image classifier program in Matlab (it gives one output for each example, such as whether the image is a car or not), using gradient descent and backpropagation. It is a simple feed-forward network with 1 or 2 hidden layers. I'm using the
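The setup described above (a feed-forward network with one hidden layer, trained by gradient descent and backpropagation) can be sketched in a few lines of NumPy. This is not the asker's Matlab program, just a minimal illustration of the same ingredients; the hidden-layer size, learning rate, and XOR toy data below are all arbitrary choices:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, hidden=8, lr=2.0, epochs=10000, seed=0):
    """One-hidden-layer binary classifier trained with plain batch
    gradient descent and backpropagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        # forward pass
        h = sigmoid(X @ W1 + b1)           # hidden activations
        p = sigmoid(h @ W2 + b2)           # predicted probability per example
        # backward pass: with binary cross-entropy loss,
        # the gradient at the output logits simplifies to (p - y)
        d2 = (p - y) / len(X)
        d1 = (d2 @ W2.T) * h * (1.0 - h)   # chain rule through the sigmoid
        # gradient descent updates
        W2 -= lr * (h.T @ d2); b2 -= lr * d2.sum(axis=0)
        W1 -= lr * (X.T @ d1); b1 -= lr * d1.sum(axis=0)
    return lambda Z: sigmoid(sigmoid(Z @ W1 + b1) @ W2 + b2)

# toy stand-in for "car / not car": XOR, which needs a hidden layer
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
predict = train(X, y)
print((predict(X) > 0.5).astype(int).ravel())
```

The same loop structure carries over to any framework; libraries like Theano only automate the backward-pass algebra.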

theano gradient with respect to matrix row

蹲街弑〆低调 Submitted on 2020-01-17 07:15:26
Question: As the title suggests, I would like to compute the gradient with respect to a matrix row. In code:

    import numpy.random as rng
    import theano.tensor as T
    from theano import function

    t_x = T.matrix('X')
    t_w = T.matrix('W')
    t_y = T.dot(t_x, t_w.T)

    t_g = T.grad(t_y[0, 0], t_x[0])  # my wish, but DisconnectedInputError
    t_g = T.grad(t_y[0, 0], t_x)     # no problems, but a lot of unnecessary zeros

    f = function([t_x, t_w], [t_y, t_g])
    y, g = f(rng.randn(2, 5), rng.randn(7, 5))

As the comments indicate, the
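Since y[0,0] = dot(X[0], W[0]) depends only on row 0 of X, the "unnecessary zeros" in the full gradient are structural: every row except row 0 is exactly zero, and row 0 equals W[0]. The NumPy finite-difference check below (my own sketch, not from the question) makes that concrete; in Theano terms it suggests one can compute the full gradient and simply slice out g[0], or build the graph from a separate T.vector for the row of interest:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 5))
W = rng.standard_normal((7, 5))

# y[0, 0] = dot(X[0], W[0]), so the gradient of y[0, 0] with respect
# to X is zero everywhere except row 0, where it equals W[0].
g_full = np.zeros_like(X)
g_full[0] = W[0]

# finite-difference check of every entry
eps = 1e-6
num = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[i, j] += eps
        num[i, j] = ((Xp @ W.T)[0, 0] - (X @ W.T)[0, 0]) / eps

print(np.allclose(num, g_full, atol=1e-4))  # True
```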

equivalence of categorical_crossentropy function of theano in tensorflow

柔情痞子 Submitted on 2020-01-17 06:05:23
Question: What might be the equivalent in TensorFlow of the following Theano function?

    theano.tensor.nnet.categorical_crossentropy(o, y)

Answer 1: For 2D tensors with probability distributions along the 2nd dimension:

    def crossentropy(p_approx, p_true):
        return -tf.reduce_sum(tf.multiply(p_true, tf.log(p_approx)), 1)

Answer 2: I think you would want to use the softmax cross-entropy loss from TensorFlow. Remember that the input to that op is unscaled logits, i.e. you cannot feed it softmax output.
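For intuition, on 2D probability inputs both the Theano function and Answer 1's TensorFlow snippet compute, per row, -sum_k p_true[k] * log(p_approx[k]). A plain NumPy version of the same formula (my sketch, for reference only):

```python
import numpy as np

def categorical_crossentropy(p_approx, p_true):
    # per-row cross-entropy: -sum_k p_true[k] * log(p_approx[k])
    return -np.sum(p_true * np.log(p_approx), axis=1)

p_true = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
p_approx = np.array([[0.2, 0.7, 0.1],
                     [0.5, 0.3, 0.2]])
print(categorical_crossentropy(p_approx, p_true))  # -log(0.7) and -log(0.5)
```

With one-hot targets, each row reduces to the negative log of the probability assigned to the true class, which is why the logits-based softmax cross-entropy op in Answer 2 computes the same loss more stably.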

Theano max_pool_3d

断了今生、忘了曾经 Submitted on 2020-01-17 05:04:08
Question: How do I extend Theano's downsample.max_pool_2d_same_size in order to pool not only within a feature map, but also between feature maps, in an efficient manner? Let's say I have 3 feature maps, each of size 10x10; that would be a 4D tensor of shape (1, 3, 10, 10). First, let's max-pool ((2, 2), no overlapping) each of the (10, 10) feature maps. The results are 3 sparse feature maps, still (10, 10), but with most values equal to zero: within each (2, 2) window, at most one value is greater than zero. This is what downsample.max
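The two steps described above can be prototyped in NumPy before writing a Theano version. The sketch below is my own (loop-based pooling for clarity rather than speed): it first zeroes every non-maximal entry inside each non-overlapping (2, 2) window, mirroring what max_pool_2d_same_size does, then takes the maximum across the 3 feature maps:

```python
import numpy as np

def max_pool_2d_same_size(x, pool=(2, 2)):
    """Keep only the max in each non-overlapping window; zero elsewhere.
    x has shape (batch, channels, height, width)."""
    b, c, h, w = x.shape
    ph, pw = pool
    out = np.zeros_like(x)
    for i in range(0, h, ph):
        for j in range(0, w, pw):
            win = x[:, :, i:i + ph, j:j + pw]
            m = win.max(axis=(2, 3), keepdims=True)
            out[:, :, i:i + ph, j:j + pw] = np.where(win == m, win, 0)
    return out

def pool_across_maps(x):
    """Max over the channel axis, fusing the sparse maps into one."""
    return x.max(axis=1, keepdims=True)

x = np.random.default_rng(0).standard_normal((1, 3, 10, 10))
same = max_pool_2d_same_size(x)   # (1, 3, 10, 10), mostly zeros
fused = pool_across_maps(same)    # (1, 1, 10, 10)
print(same.shape, fused.shape)
```

In Theano the second step is just a T.max over axis 1, so the cross-map part adds little cost on top of the within-map pooling.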

“Optimization failure due to: constant_folding” error in Theano installation on Windows, python 3.7

我是研究僧i Submitted on 2020-01-16 09:35:10
Question: This same issue has been brought up a lot in the past, but I have not seen any recent thread with the newest versions of the packages. I have Python 3.7 on a Windows machine, and I install Theano with:

    pip install theano

I then do import theano, which does not give any errors, but when I run theano.test() all hell breaks loose. The error is:

    theano.gof.opt: ERROR: Optimization failure due to: constant_folding
    theano.gof.opt: ERROR: node: InplaceDimShuffle{x}(TensorConstant{1.0})
    theano.gof.opt: ERROR:

Keras - Fusion of a Dense Layer with a Convolution2D Layer

浪尽此生 Submitted on 2020-01-12 08:26:35
Question: I want to make a custom layer that fuses the output of a Dense layer with the output of a Convolution2D layer. The idea came from this paper, and here's the network: the fusion layer tries to fuse the Convolution2D tensor (256x28x28) with the Dense tensor (256). Here's the equation for it: y_global => Dense layer output with shape 256; y_mid => Convolution2D layer output with shape 256x28x28. Here's the paper's description of the fusion process: I ended up making a new custom layer
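A common reading of that fusion equation is: replicate y_global at every spatial position of y_mid, concatenate along the channel axis, and apply a shared affine map per pixel, which is equivalent to a 1x1 convolution over the stacked 512-channel tensor. The NumPy sketch below follows that interpretation; the weight shapes and the tanh nonlinearity are my assumptions for illustration, not taken from the paper:

```python
import numpy as np

def fuse(y_global, y_mid, W, b):
    """At every pixel (u, v), concatenate the global vector with the
    local column y_mid[:, u, v] and apply a shared affine map
    (equivalent to a 1x1 convolution over 512 channels).
    Shapes: y_global (256,), y_mid (256, 28, 28), W (256, 512), b (256,)."""
    c, h, w = y_mid.shape
    g = np.broadcast_to(y_global[:, None, None], (y_global.size, h, w))
    stacked = np.concatenate([g, y_mid], axis=0)   # (512, 28, 28)
    # apply W along the channel axis at every spatial position
    fused = np.einsum('oc,chw->ohw', W, stacked) + b[:, None, None]
    return np.tanh(fused)  # nonlinearity chosen arbitrarily here

rng = np.random.default_rng(0)
y_global = rng.standard_normal(256)
y_mid = rng.standard_normal((256, 28, 28))
W = rng.standard_normal((256, 512)) * 0.01
b = np.zeros(256)
out = fuse(y_global, y_mid, W, b)
print(out.shape)  # (256, 28, 28)
```

In Keras terms this would be a RepeatVector/broadcast of the Dense output, a channel-axis Concatenate, and a 1x1 Convolution2D, which may be simpler than a fully hand-written custom layer.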
