keras-2

Keras: Using weights for NCE loss

我与影子孤独终老i submitted on 2019-12-05 19:37:43
So here is the model with the standard loss function:

```python
target = Input(shape=(1,), dtype='int32')
w_inputs = Input(shape=(1,), dtype='int32')
w_emb = Embedding(V, dim, embeddings_initializer='glorot_uniform', name='word_emb')(w_inputs)
w_flat = Flatten()(w_emb)
# context
w1 = Dense(input_dim=dim, units=V, activation='softmax')  # because I want to use prediction on the valid set
w = w1(w_flat)
model = Model(inputs=[w_inputs], outputs=[w])
model.compile(loss='sparse_categorical_crossentropy', optimizer='sgd', metrics=['accuracy'])
```

It works fine. Given that NCE loss isn't available in Keras, I wrote up a…
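
As background, NCE sidesteps the V-way softmax by training the model to distinguish the true (target, context) pair from k sampled noise words. The widely used word2vec simplification of this (negative sampling, which drops NCE's log(k·q(w)) correction term) can be sketched in a few lines of pure Python; the scores below are hypothetical dot products, not output from the model above:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def negative_sampling_loss(pos_score, neg_scores):
    """Word2vec-style simplification of NCE: push the true pair's
    score up and k sampled noise words' scores down, instead of
    normalizing over the whole vocabulary V."""
    loss = -math.log(sigmoid(pos_score))        # true pair classified as "real"
    for s in neg_scores:
        loss += -math.log(sigmoid(-s))          # noise words classified as "fake"
    return loss

# hypothetical scores for one training pair and k=3 noise samples
loss = negative_sampling_loss(pos_score=2.0, neg_scores=[-1.5, 0.3, -2.0])
```

In Keras this would typically be wired up as a custom loss over the target embedding and sampled noise embeddings rather than the softmax Dense layer above.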

How to save resized images using ImageDataGenerator and flow_from_directory in Keras

岁酱吖の submitted on 2019-12-05 11:06:25
I am resizing my RGB images stored in a folder (two classes) using the following code:

```python
from keras.preprocessing.image import ImageDataGenerator

dataset = ImageDataGenerator()
dataset.flow_from_directory('/home/1', target_size=(50, 50), save_to_dir='/home/resized',
                            class_mode='binary', save_prefix='N', save_format='jpeg', batch_size=10)
```

My data tree is like the following:

```
1/
  1_1/
    img1.jpg
    img2.jpg
    ........
  1_2/
    IMG1.jpg
    IMG2.jpg
    ........
resized/
  1_1/  (here I want to save the resized images of 1_1)
  2_2/  (here I want to save the resized images of 1_2)
```

After running the code I get the following output, but no images:
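
For what it's worth, flow_from_directory returns a lazy iterator: save_to_dir only writes files when batches are actually drawn (e.g. via next() or during training). A toy pure-Python sketch of that behaviour — lazy_saver is a hypothetical stand-in, not a Keras API:

```python
import os
import tempfile

def lazy_saver(filenames, out_dir):
    """Toy stand-in for flow_from_directory(save_to_dir=...): like the
    Keras iterator, it writes a file only when a batch is drawn."""
    for i, name in enumerate(filenames):
        path = os.path.join(out_dir, 'N_%d.jpeg' % i)
        with open(path, 'w') as f:
            f.write(name)
        yield path

out_dir = tempfile.mkdtemp()
it = lazy_saver(['img1.jpg', 'img2.jpg'], out_dir)
before = len(os.listdir(out_dir))  # 0: nothing has been written yet
list(it)                           # consuming the iterator triggers the writes
after = len(os.listdir(out_dir))   # 2
```

With the real generator, something like looping `next(gen)` once per batch over the whole dataset should populate save_to_dir.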

Validation accuracy is always greater than training accuracy in Keras

 ̄綄美尐妖づ submitted on 2019-12-03 19:13:38
I am trying to train a simple neural network on the MNIST dataset. For some reason, when I get the history (the object returned from model.fit), the validation accuracy is higher than the training accuracy, which is really odd; but if I check the score when I evaluate the model, I get a higher training accuracy than test accuracy. This happens every time, no matter what the parameters of the model are. Also, if I use a custom callback and access the parameters 'acc' and 'val_acc', I find the same problem (the numbers are the same as the ones returned in the history). Please help me! What am I…
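
One common explanation (assuming the data split itself is sound): the training accuracy in the history is a running average over the batches of an epoch, computed while the weights are still improving (and with any dropout active), whereas validation accuracy is measured once, with the end-of-epoch weights. A toy illustration with made-up per-batch numbers:

```python
# hypothetical per-batch training accuracies within one epoch,
# rising as the weights improve
batch_acc = [0.50, 0.70, 0.90]

# what history['acc'] reports: the average over the epoch's batches
train_acc = sum(batch_acc) / len(batch_acc)

# validation runs once, with the final (better) weights of the epoch,
# so it can easily beat the averaged training figure
val_acc_like = batch_acc[-1]
```

Evaluating on the training set after fitting removes the averaging effect, which is why evaluate() can then show training accuracy above test accuracy again.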

Can not save model using model.save following multi_gpu_model in Keras

时光怂恿深爱的人放手 submitted on 2019-12-03 16:02:27
Following the upgrade to Keras 2.0.9, I have been using the multi_gpu_model utility, but I can't save my models or best weights using model.save('path'). The error I get is:

```
TypeError: can't pickle module objects
```

I suspect there is some problem gaining access to the model object. Is there a workaround for this issue?

Workaround: here's a patched version that doesn't fail while saving:

```python
from keras.layers import Lambda, concatenate
from keras import Model
import tensorflow as tf

def multi_gpu_model(model, gpus):
    if isinstance(gpus, (list, tuple)):
        num_gpus = len(gpus)
        target_gpu_ids = gpus
    else:
        num…
```
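
The TypeError itself is easy to reproduce outside Keras: the multi-GPU wrapper's Lambda layers close over the imported tf module, and pickle refuses to serialize module objects. A minimal sketch — WrappedModel is a hypothetical stand-in, not a Keras class:

```python
import pickle
import sys

class WrappedModel:
    """Mimics a Lambda layer whose function captured a module
    reference (as the GPU-slicing lambdas do with tf)."""
    def __init__(self):
        self.captured_module = sys  # any module reference triggers the error

err = None
try:
    pickle.dumps(WrappedModel())
except TypeError as e:
    err = e  # e.g. "cannot pickle 'module' object"
```

A commonly suggested workaround (assuming your setup allows it) is to keep a reference to the original single-GPU template model and call save() or save_weights() on that, rather than on the multi-GPU wrapper.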

Is there any way to get variable importance with Keras?

喜你入骨 submitted on 2019-12-02 19:35:13
I am looking for a proper or best way to get variable importance in a neural network created with Keras. The way I currently do it is: I take the weights (not the biases) of the variables in the first layer, on the assumption that more important variables will have higher weights there. Is there another/better way of doing it?

Since everything gets mixed up along the network, the first layer alone can't tell you about the importance of each variable. The following layers can also increase or decrease its importance, and even make one variable affect the importance of another.
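
A model-agnostic alternative that avoids reading first-layer weights is permutation importance: shuffle one input column and measure how much accuracy drops. A minimal pure-Python sketch — permutation_importance and the toy predict function are illustrative, not a Keras API:

```python
import random

def permutation_importance(predict, X, y, col, trials=10, seed=0):
    """Average accuracy drop when column `col` is shuffled:
    a large drop means the model relies on that variable."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    drops = []
    for _ in range(trials):
        shuffled = [r[col] for r in X]
        rng.shuffle(shuffled)
        Xp = [r[:col] + [v] + r[col + 1:] for r, v in zip(X, shuffled)]
        drops.append(baseline - accuracy(Xp))
    return sum(drops) / len(drops)

# toy "model" that only ever looks at column 0
predict = lambda row: row[0] > 0
X = [[1, 5], [-1, 3], [1, 2], [-1, 7]]
y = [True, False, True, False]

imp0 = permutation_importance(predict, X, y, col=0)
imp1 = permutation_importance(predict, X, y, col=1)  # column 1 is ignored
```

With a Keras model, `predict` would wrap `model.predict` on the permuted inputs; the idea is the same regardless of architecture.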

How to know which node is dropped after using the Keras dropout layer

我与影子孤独终老i submitted on 2019-12-02 07:24:21
From Nick's blog it is clear that in the dropout layer of a CNN model we drop some nodes on the basis of a Bernoulli distribution. But how do we verify this, i.e. how do we check which nodes were not selected? In DropConnect we drop some weights, so I think we can verify it with the help of model.get_weights(), but how do we do it in the case of a dropout layer?

```python
model = Sequential()
model.add(Conv2D(2, kernel_size=(3, 3), activation='relu', input_shape=input_shape))
model.add(Conv2D(4, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(8, activation='relu'))
model…
```
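
Keras does not expose the dropout mask directly, and model.get_weights() cannot show it because Dropout zeroes activations, not weights. But the dropped units are identifiable from the layer's output in training mode: they are exactly the zeros. A pure-Python sketch of inverted dropout to make that concrete — this `dropout` is an illustrative reimplementation, not the Keras layer:

```python
import random

def dropout(x, rate, rng):
    """Inverted dropout: each unit is kept with probability 1-rate
    (a Bernoulli draw) and scaled by 1/(1-rate); dropped units are 0."""
    return [xi / (1.0 - rate) if rng.random() >= rate else 0.0 for xi in x]

rng = random.Random(0)
activations = [0.5, 1.2, 0.7, 0.3]
out = dropout(activations, rate=0.5, rng=rng)

# the dropped nodes are simply the positions that came out as 0
dropped = [i for i, v in enumerate(out) if v == 0.0]
```

With a real Keras model you can do the analogous thing by evaluating the Dropout layer's output with the learning phase set to training (e.g. via a K.function with learning_phase=1) and looking for the zeroed positions.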

Keras: ImportError: `save_model` requires h5py even though the code already imported h5py

天大地大妈咪最大 submitted on 2019-12-01 07:54:41
I ran into some trouble when trying to save a Keras model. Here is my code:

```python
import h5py
from keras.models import load_model

try:
    import h5py
    print('import fine')
except ImportError:
    h5py = None

left.save('left.h5')  # creates a HDF5 file 'my_model.h5'
left_load = load_model('left.h5')
```

But I got the following error even though the code prints 'import fine':

```
import fine
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-145-b641e79036fa> in <module>()
      8     h5py = None
      9
---> 10 left.save('left.h5')  # creates a…
```
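
The likely cause: Keras checks its own h5py reference, captured once when the keras modules were first imported, so importing h5py again in your notebook does not change what Keras sees; installing h5py into the right environment and restarting the kernel usually does. A simplified stand-in for that guard — KerasSavingModule is hypothetical, not the real keras internals:

```python
import types

class KerasSavingModule:
    """Simplified stand-in for Keras's saving code: it captured h5py
    (or None, if the import failed) once, at import time."""
    def __init__(self, h5py_module):
        self._h5py = h5py_module

    def save_model(self, path):
        if self._h5py is None:
            raise ImportError('`save_model` requires h5py.')
        return 'saved to ' + path

# a kernel where h5py was missing when Keras was first imported
saving = KerasSavingModule(h5py_module=None)

my_h5py = types.ModuleType('h5py')  # importing h5py later in user code...
err = None
try:
    saving.save_model('left.h5')    # ...does not update what Keras captured
except ImportError as e:
    err = e
```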

Use "Flatten" or "Reshape" to get a 1D output from an unknown input shape in Keras

£可爱£侵袭症+ submitted on 2019-11-30 15:32:47
I want to use the Keras layer Flatten() or Reshape((-1,)) at the end of my model to output a 1D vector like [0,0,1,0,0, ... ,0,0,1,0]. Sadly there is a problem because of my unknown input shape, which is input_shape=(4, None, 1). So typically the input shape is something between [batch_size, 4, 64, 1] and [batch_size, 4, 256, 1], and the output should be batch_size x unknown dimension (for the first example above: [batch_size, 64], and for the second: [batch_size, 256]). My model looks like:

```python
model = Sequential()
model.add(Convolution2D(32, (4, 32), padding='same', input_shape=(4, None, 1)))
```
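
Conceptually, what's needed is a flatten that collapses everything after the batch dimension regardless of the variable length; in Keras, wrapping the backend's batch_flatten in a Lambda layer is one commonly suggested way to get this when Flatten rejects unknown dims. A pure-Python sketch of the semantics on nested lists — flatten_sample and batch_flatten here are illustrative helpers, not the Keras functions:

```python
def flatten_sample(x):
    """Recursively flatten one sample's nested dims into a flat list."""
    if not isinstance(x, list):
        return [x]
    out = []
    for item in x:
        out.extend(flatten_sample(item))
    return out

def batch_flatten(batch):
    """Like K.batch_flatten / Reshape((-1,)): keep the batch dim and
    collapse the rest, whatever the variable dim happens to be."""
    return [flatten_sample(sample) for sample in batch]

# a batch of shape (2, 2, 3, 1): each sample flattens to 2*3*1 = 6 values
batch = [[[[1], [2], [3]], [[4], [5], [6]]],
         [[[7], [8], [9]], [[0], [1], [2]]]]
flat = batch_flatten(batch)
```

In the model itself this would look something like `model.add(Lambda(lambda t: K.batch_flatten(t)))`, assuming the backend in use supports reshaping with a dynamic trailing dimension.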