theano

sklearn: How to reset a Regressor or classifier object in sknn

Submitted by 别说谁变了你拦得住时间么 on 2019-12-21 09:19:47

Question: I have defined a regressor as follows:

```python
nn1 = Regressor(
    layers=[
        Layer("Rectifier", units=150),
        Layer("Rectifier", units=100),
        Layer("Linear")],
    regularize="L2",
    # dropout_rate=0.25,
    learning_rate=0.01,
    valid_size=0.1,
    learning_rule="adagrad",
    verbose=False,
    weight_decay=0.00030,
    n_stable=10,
    f_stable=0.00010,
    n_iter=200)
```

I am using this regressor in k-fold cross-validation. In order for cross-validation to work properly and not learn from the previous folds, it's necessary that the
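The standard way to get a fresh, unfitted copy of an estimator for each fold is `sklearn.base.clone`, which rebuilds the estimator from its constructor parameters. A minimal sketch of the pattern — using a plain scikit-learn `Ridge` as a stand-in, since the same approach should apply to sknn's `Regressor` (whose availability here is an assumption):

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

X = np.random.RandomState(0).rand(20, 3)
y = X.sum(axis=1)

nn1 = Ridge(alpha=0.5)  # stand-in estimator; same pattern for sknn's Regressor
for train_idx, test_idx in KFold(n_splits=4).split(X):
    fold_model = clone(nn1)  # unfitted copy with identical hyperparameters
    fold_model.fit(X[train_idx], y[train_idx])
    print(fold_model.score(X[test_idx], y[test_idx]))
```

Because each fold fits a clone, no state leaks between folds and `nn1` itself stays unfitted.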

Theano broadcasting different to numpy's

Submitted by a 夏天 on 2019-12-21 09:09:02

Question: Consider the following example of numpy broadcasting:

```python
import numpy as np
import theano
from theano import tensor as T

xval = np.array([[1, 2, 3], [4, 5, 6]])
bval = np.array([[10, 20, 30]])
print xval + bval
```

As expected, the vector bval is added to each row of the matrix xval, and the output is:

```
[[11 22 33]
 [14 25 36]]
```

Trying to replicate the same behaviour in the git version of theano:

```python
x = T.dmatrix('x')
b = theano.shared(bval)
z = x + b
f = theano.function([x], z)
print f(xval)
```

I get the
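The difference is that Theano fixes each variable's broadcastable pattern at graph-construction time: a shared variable built from a `(1, 3)` array is, by default, not broadcastable along any axis, so the addition fails with a dimension mismatch at runtime. The commonly suggested fix is to declare the pattern explicitly, e.g. `theano.shared(bval, broadcastable=(True, False))` (assuming Theano is installed). NumPy, by contrast, decides broadcastability from the runtime shapes, which is the behaviour the question expects:

```python
import numpy as np

xval = np.array([[1, 2, 3], [4, 5, 6]])
bval = np.array([[10, 20, 30]])   # shape (1, 3)

# NumPy stretches the length-1 leading axis of bval across xval's 2 rows.
result = xval + bval
print(result)
```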

How does Adagrad work in Keras? What does self.weights mean in a Keras Optimizer?

Submitted by 狂风中的少年 on 2019-12-21 05:16:28

Question: For example, the implementation of Keras' Adagrad has been:

```python
class Adagrad(Optimizer):
    """Adagrad optimizer.

    It is recommended to leave the parameters of this optimizer
    at their default values.

    # Arguments
        lr: float >= 0. Learning rate.
        epsilon: float >= 0.
        decay: float >= 0. Learning rate decay over each update.

    # References
        - [Adaptive Subgradient Methods for Online Learning and
          Stochastic Optimization](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf)
    """

    def __init__(self, lr=0.01
```
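The core of Adagrad is a per-parameter accumulator of squared gradients, and `self.weights` in a Keras optimizer holds exactly these state tensors (for Adagrad, the accumulators) so they can be saved and restored along with the model. A minimal numpy sketch of the update rule itself — variable names here are illustrative, not Keras internals:

```python
import numpy as np

def adagrad_step(params, grads, accumulators, lr=0.01, epsilon=1e-7):
    """One Adagrad update: accumulate squared gradients, scale the step."""
    new_params, new_accs = [], []
    for p, g, a in zip(params, grads, accumulators):
        a = a + g ** 2  # running sum of squared gradients (the optimizer state)
        new_params.append(p - lr * g / (np.sqrt(a) + epsilon))
        new_accs.append(a)
    return new_params, new_accs

# Minimize f(w) = w^2 starting from w = 1.0.
w, acc = [np.array(1.0)], [np.array(0.0)]
for _ in range(200):
    grads = [2 * w[0]]  # df/dw = 2w
    w, acc = adagrad_step(w, grads, acc, lr=0.1)
print(w[0])  # has moved toward the minimum at 0
```

Because the effective step size shrinks as the accumulator grows, parameters that receive frequent large gradients are updated ever more cautiously, which is why Keras recommends leaving the defaults alone.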

Keras IndexError: indices are out-of-bounds

Submitted by 流过昼夜 on 2019-12-21 03:51:18

Question: I'm new to Keras and I'm trying to do a binary MLP on a dataset, and I keep getting "indices are out-of-bounds" with no idea why.

```python
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD

model = Sequential()
model.add(Dense(64, input_dim=20, init='uniform', activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary
```
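A frequent cause of this particular error is passing a pandas DataFrame or Series to `model.fit`: older Keras versions indexed inputs positionally, which trips over pandas' label-based index (especially after a filter or a train/test split leaves non-contiguous labels). Converting to plain numpy arrays usually fixes it. A sketch with a hypothetical `df`:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.RandomState(0).rand(10, 3),
                  columns=["a", "b", "y"])
subset = df[df["a"] > 0.3]  # filtering leaves a non-contiguous index: 0, 2, 5, ...

X = np.asarray(subset[["a", "b"]])  # plain ndarray: positional indexing is safe
y = np.asarray(subset["y"])
print(X.shape, y.shape)             # these can go straight into model.fit(X, y)
```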

Training only one output of a network in Keras

Submitted by 扶醉桌前 on 2019-12-21 02:30:17

Question: I have a network in Keras with many outputs; however, my training data only provides information for a single output at a time. At the moment, my method for training has been to run a prediction on the input in question, change the value of the particular output that I am training, and then do a single batch update. If I'm right, this is the same as setting the loss for all outputs to zero except the one that I'm trying to train. Is there a better way? I've tried class weights, where I set a
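In Keras, this is usually handled not by editing predictions by hand but by masking the unlabeled outputs out of the loss, e.g. via per-output `sample_weight` arrays (zero where no label exists). A numpy sketch of the masked-loss idea — function and variable names here are illustrative:

```python
import numpy as np

def masked_mse(y_true, y_pred, mask):
    """Mean squared error that only counts outputs where mask == 1."""
    sq = (y_true - y_pred) ** 2 * mask
    return sq.sum() / np.maximum(mask.sum(), 1)  # average over labeled outputs

y_true = np.array([[1.0, 0.0], [0.5, 2.0]])
y_pred = np.array([[0.8, 9.9], [0.5, 1.0]])
mask   = np.array([[1.0, 0.0], [1.0, 1.0]])  # output 2 of sample 1 is unlabeled

loss = masked_mse(y_true, y_pred, mask)
print(loss)  # the wild 9.9 prediction contributes nothing
```

Because masked outputs contribute zero to the loss, they also contribute zero gradient, which is exactly the "train only one output" behaviour the question describes, without the extra prediction pass.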

Mac OS X 10.11.4 python3 import theano error

Submitted by 有些话、适合烂在心里 on 2019-12-20 14:20:23

Question: I upgraded my Mac to OS X 10.11.4, and sadly I found that my theano cannot be imported anymore. Here is information about my machine:

```
➜ ~ gcc --version
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.11.sdk/usr/include/c++/4.2.1
Apple LLVM version 7.3.0 (clang-703.0.29)
Target: x86_64-apple-darwin15.4.0
Thread model: posix
InstalledDir: /Applications/Xcode.app
```

Theano config directly in script

Submitted by 一笑奈何 on 2019-12-20 10:35:09

Question: I'm new to Theano and I wonder how to configure the default settings directly from the script (without setting environment variables). E.g. this is a working solution (source):

```shell
$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32 python check1.py
```

I intend to come up with an identical solution that is executed by only:

```shell
$ python check1.py
```

with the additional parameters set directly in the script itself, e.g. somehow like this:

```python
import theano
theano.set('mode', 'FAST_RUN')
theano.set('device', 'gpu')
```
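For the record, there is no `theano.set(...)`. The two real options are assigning to `theano.config` attributes after import (works for settings like `floatX`, but not for `device`, which Theano reads only once at import time), or putting `THEANO_FLAGS` into `os.environ` before the first `import theano` anywhere in the process. Since Theano may not be installed here, this sketch only exercises the environment-variable pattern:

```python
import os

# Must run before `import theano` is executed anywhere in the process,
# because Theano reads THEANO_FLAGS exactly once, at import time.
os.environ["THEANO_FLAGS"] = "mode=FAST_RUN,device=gpu,floatX=float32"

# import theano   # theano would now pick these flags up on import
print(os.environ["THEANO_FLAGS"])
```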

Python: rewrite a looping numpy math function to run on GPU

Submitted by 折月煮酒 on 2019-12-20 09:39:27

Question: Can someone help me rewrite this one function (the doTheMath function) to do the calculations on the GPU? I have spent a good few days trying to get my head around it, but to no result. I wonder if somebody could help me rewrite this function in whatever way seems fit, as long as it gives the same result at the end. I tried to use @jit from numba, but for some reason it is actually much slower than running the code as usual. With a huge sample size, the goal is to decrease the execution time
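Without the full `doTheMath` body it cannot be ported directly, but the usual first step before reaching for a GPU is to replace Python-level loops with whole-array numpy operations; the resulting vectorized form then maps almost one-to-one onto CuPy arrays or a numba `@vectorize` GPU kernel. A deliberately simple, hypothetical loop and its vectorized equivalent:

```python
import numpy as np

data = np.random.RandomState(1).rand(100_000)

def loop_version(a):
    out = np.empty_like(a)
    for i in range(len(a)):       # slow: one Python-level iteration per element
        out[i] = a[i] * 2.0 + 1.0
    return out

def vector_version(a):
    return a * 2.0 + 1.0          # one call into optimized C; GPU-portable form

# Both versions compute the same result; the vectorized one is far faster.
assert np.allclose(loop_version(data), vector_version(data))
```

Once a function is expressed this way, swapping `import numpy as np` for `import cupy as np` (hardware permitting) is often the entire GPU port.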

How can I get a 1D convolution in theano

Submitted by 北城余情 on 2019-12-19 18:18:07

Question: The only function I can find is for 2D convolutions, described here... Is there any optimised 1D function?

Answer 1: While I believe there's no conv1d in theano, Lasagne (a neural network library on top of theano) has several implementations of a Conv1D layer. Some are based on theano's conv2d function with one of the dimensions equal to 1; some use single or multiple dot products. I would try all of them; maybe the dot-product based ones will perform better than conv2d with width=1. https://github
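The dot-product trick the answer mentions can be illustrated in plain numpy: a 1D convolution is just a sequence of dot products between the (flipped) kernel and a sliding window over the signal, which is also what conv2d computes when one dimension has size 1. A sketch, assuming "valid" mode:

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """1D 'valid'-mode convolution via sliding dot products."""
    k = kernel[::-1]  # true convolution flips the kernel
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(k)] @ k for i in range(n)])

signal = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
kernel = np.array([1.0, 0.0, -1.0])

print(conv1d_valid(signal, kernel))
print(np.convolve(signal, kernel, mode="valid"))  # reference: same result
```

In a framework setting, the per-window dot products would be batched into a single matrix multiply, which is why the dot-product implementations can beat a width-1 conv2d.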