keras-2

Keras metrics with TF backend vs tensorflow metrics

Submitted by 我是研究僧i on 2019-12-11 06:55:09
Question: When Keras 2.x removed certain metrics, the changelog said it did so because they were "batch-based" and therefore not always accurate. What is meant by this? Do the corresponding metrics included in TensorFlow suffer from the same drawback? For example: the precision and recall metrics.

Answer 1: Let's take precision as an example. The stateless version that was removed was implemented like so:

    def precision(y_true, y_pred):
        """Precision metric.

        Only computes a batch-wise average of precision.
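To make the "batch-wise" caveat concrete, here is a small illustration with made-up numbers (not from the original answer): averaging per-batch precision can differ noticeably from precision computed over the whole dataset, which is why the stateless metrics were considered misleading.

    import numpy as np

    def precision(y_true, y_pred):
        # Plain precision: true positives / predicted positives.
        tp = np.sum((y_pred == 1) & (y_true == 1))
        predicted_pos = np.sum(y_pred == 1)
        return tp / predicted_pos if predicted_pos else 0.0

    y_true = np.array([1, 1, 0, 0, 1, 0, 0, 0])
    y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1])

    # Precision over the whole set: 2 true positives / 5 predicted positives = 0.4
    print(precision(y_true, y_pred))

    # "Batch-wise": precision per batch of 4, then averaged -> (1.0 + 0.25) / 2 = 0.625
    batch1 = precision(y_true[:4], y_pred[:4])  # tp = 1, predicted positives = 1 -> 1.0
    batch2 = precision(y_true[4:], y_pred[4:])  # tp = 1, predicted positives = 4 -> 0.25
    print((batch1 + batch2) / 2)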

Report Keras model evaluation metrics every 10 epochs?

Submitted by 时光怂恿深爱的人放手 on 2019-12-10 16:14:48
Question: I'd like to know the specificity and sensitivity of my model. Currently, I'm evaluating the model only after all epochs are finished:

    from sklearn.metrics import confusion_matrix

    predictions = model.predict(x_test)
    y_test = np.argmax(y_test, axis=-1)
    predictions = np.argmax(predictions, axis=-1)
    c = confusion_matrix(y_test, predictions)
    print('Confusion matrix:\n', c)
    print('sensitivity', c[0, 0] / (c[0, 1] + c[0, 0]))
    print('specificity', c[1, 1] / (c[1, 1] + c[1, 0]))

The disadvantage of this
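One common approach (a minimal sketch I'm adding, assuming x_test and y_test are already held in memory as in the snippet above) is a custom Keras Callback that reruns the same confusion-matrix computation every 10 epochs:

    import numpy as np
    from sklearn.metrics import confusion_matrix
    from keras.callbacks import Callback

    class SensSpecCallback(Callback):
        """Prints sensitivity and specificity every `every` epochs."""

        def __init__(self, x_test, y_test, every=10):
            super(SensSpecCallback, self).__init__()
            self.x_test, self.y_test, self.every = x_test, y_test, every

        def on_epoch_end(self, epoch, logs=None):
            if (epoch + 1) % self.every != 0:
                return
            predictions = np.argmax(self.model.predict(self.x_test), axis=-1)
            truth = np.argmax(self.y_test, axis=-1)
            c = confusion_matrix(truth, predictions)
            print('epoch %d sensitivity %.3f specificity %.3f' % (
                epoch + 1,
                c[0, 0] / (c[0, 1] + c[0, 0]),
                c[1, 1] / (c[1, 1] + c[1, 0])))

    # model.fit(x_train, y_train, epochs=100, callbacks=[SensSpecCallback(x_test, y_test)])

The callback reuses the model being trained via self.model, so no extra wiring is needed beyond passing it to fit().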

How do I use categorical_hinge in Keras?

Submitted by 吃可爱长大的小学妹 on 2019-12-10 14:54:52
Question: Maybe a very dumb question, but I can't find an example of how to use categorical_hinge in Keras. I am doing classification and my target has shape (, 1) with values in [-1, 0, 1], so I have 3 categories. Using the functional API I have set up my output layer like this:

    output = Dense(1, name='output', activation='tanh', kernel_initializer='lecun_normal')(output1)

Then I apply:

    model.compile(optimizer=adam, loss={'output': 'categorical_hinge'}, metrics=['accuracy'])

The result is that the model is converging
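For reference, the usual setup for categorical_hinge is one output unit per class with one-hot targets, rather than a single tanh unit with labels in {-1, 0, 1}. A minimal sketch (hypothetical input size, not the poster's actual network):

    import numpy as np
    from keras.layers import Input, Dense
    from keras.models import Model
    from keras.utils import to_categorical

    n_features = 20  # hypothetical input size

    inputs = Input(shape=(n_features,))
    # One score per class; linear scores are the usual pairing with a hinge loss.
    scores = Dense(3, name='output', kernel_initializer='lecun_normal')(inputs)
    model = Model(inputs, scores)
    model.compile(optimizer='adam', loss='categorical_hinge', metrics=['accuracy'])

    # Labels in {-1, 0, 1} are shifted to {0, 1, 2} and one-hot encoded for the loss.
    y = np.array([-1, 0, 1, 1, -1])
    y_onehot = to_categorical(y + 1, num_classes=3)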

Keras: Using weights for NCE loss

Submitted by 懵懂的女人 on 2019-12-10 10:11:06
Question: So here is the model with the standard loss function:

    target = Input(shape=(1, ), dtype='int32')
    w_inputs = Input(shape=(1, ), dtype='int32')
    w_emb = Embedding(V, dim, embeddings_initializer='glorot_uniform', name='word_emb')(w_inputs)
    w_flat = Flatten()(w_emb)

    # context
    w1 = Dense(input_dim=dim, units=V, activation='softmax')  # because I want to use prediction on the valid set
    w = w1(w_flat)

    model = Model(inputs=[w_inputs], outputs=[w])
    model.compile(loss='sparse_categorical_crossentropy',
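One way to replace the full softmax with NCE (a hedged sketch I'm adding, not the poster's code: it assumes a TensorFlow backend and hypothetical values for V, dim, and num_sampled) is a custom layer whose output is the per-example tf.nn.nce_loss, trained with a pass-through loss:

    import tensorflow as tf
    from keras import backend as K
    from keras.layers import Input, Embedding, Flatten, Layer
    from keras.models import Model

    V, dim, num_sampled = 10000, 128, 64  # hypothetical vocabulary size, embedding dim, negatives

    class NCELoss(Layer):
        """Outputs the per-example tf.nn.nce_loss for (hidden, label) pairs."""

        def build(self, input_shape):
            hidden_dim = int(input_shape[0][-1])
            self.nce_w = self.add_weight(name='nce_w', shape=(V, hidden_dim),
                                         initializer='glorot_uniform', trainable=True)
            self.nce_b = self.add_weight(name='nce_b', shape=(V,),
                                         initializer='zeros', trainable=True)
            super(NCELoss, self).build(input_shape)

        def call(self, inputs):
            hidden, labels = inputs
            loss = tf.nn.nce_loss(weights=self.nce_w, biases=self.nce_b,
                                  labels=tf.cast(labels, tf.int64), inputs=hidden,
                                  num_sampled=num_sampled, num_classes=V)
            return K.expand_dims(loss, -1)

        def compute_output_shape(self, input_shape):
            return (input_shape[0][0], 1)

    target = Input(shape=(1, ), dtype='int32')
    w_inputs = Input(shape=(1, ), dtype='int32')
    w_emb = Embedding(V, dim, embeddings_initializer='glorot_uniform', name='word_emb')(w_inputs)
    w_flat = Flatten()(w_emb)
    loss_out = NCELoss()([w_flat, target])

    model = Model(inputs=[w_inputs, target], outputs=loss_out)
    # The model's output already is the loss, so the compiled loss just passes it through;
    # fit with dummy targets, e.g. np.zeros((num_examples, 1)).
    model.compile(optimizer='adam', loss=lambda y_true, y_pred: y_pred)

The softmax Dense layer from the original model is not used at training time in this sketch; for validation-time predictions you would score candidates against the learned nce_w/nce_b weights.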

Keras cnn model output shape doesn't match model summary

Submitted by 时光怂恿深爱的人放手 on 2019-12-08 10:32:38
Question: I am trying to use the convolutional part of the ResNet50() model, like this:

    # generate batches
    def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4,
                    class_mode='categorical', target_size=(224, 224)):
        return gen.flow_from_directory(dirname, target_size=target_size, class_mode=class_mode,
                                       shuffle=shuffle, batch_size=batch_size)

    trn_batches = get_batches("path_to_directory", shuffle=False, batch_size=4)

    # create model
    rn_mean = np.array([123.68, 116.779, 103.939], dtype=np
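For comparison, a minimal sketch (my own setup, with a hypothetical directory path) of extracting features from ResNet50's convolutional base and checking the output shape against model.summary():

    from keras.applications.resnet50 import ResNet50, preprocess_input
    from keras.preprocessing.image import ImageDataGenerator

    conv_base = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))

    gen = ImageDataGenerator(preprocessing_function=preprocess_input)
    trn_batches = gen.flow_from_directory('path_to_directory', target_size=(224, 224),
                                          class_mode='categorical', shuffle=False, batch_size=4)

    features = conv_base.predict_generator(trn_batches, steps=len(trn_batches))
    print(features.shape)  # (num_images, 7, 7, 2048) for 224x224 inputs, matching conv_base.summary()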

Concatenate input with constant vector in keras

Submitted by 房东的猫 on 2019-12-07 20:06:33
Question: I am trying to concatenate my input with a constant tensor in the Keras 2 functional API. In my real problem the constants depend on some parameters in the setup, but I think the example below shows the error I get.

    from keras.layers import *
    from keras.models import *
    from keras import backend as K
    import numpy as np

    a = Input(shape=(10, 5))
    a1 = Input(tensor=K.variable(np.ones((10, 5))))
    x = [a, a1]  # x = [a, a] works fine
    b = concatenate(x, 1)
    x += [b]  # This changes b._keras_history[0].input
    b

how to save resized images using ImageDataGenerator and flow_from_directory in keras

Submitted by 和自甴很熟 on 2019-12-07 07:32:45
Question: I am resizing my RGB images stored in a folder (two classes) using the following code:

    from keras.preprocessing.image import ImageDataGenerator

    dataset = ImageDataGenerator()
    dataset.flow_from_directory('/home/1', target_size=(50, 50), save_to_dir='/home/resized',
                                class_mode='binary', save_prefix='N', save_format='jpeg', batch_size=10)

My data tree looks like this:

    1/
        1_1/
            img1.jpg
            img2.jpg
            ........
        1_2/
            IMG1.jpg
            IMG2.jpg
            ........
    resized/
        1_1/ (here I want to save the resized images of 1_1)
        2_2/ (here I
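A detail worth noting (a hedged sketch, reusing the same paths as the question): flow_from_directory returns an iterator, and save_to_dir only writes files when batches are actually drawn, so the generator has to be iterated. Also, save_to_dir writes all copies into that single directory rather than into per-class subfolders.

    from keras.preprocessing.image import ImageDataGenerator

    dataset = ImageDataGenerator()
    batches = dataset.flow_from_directory('/home/1', target_size=(50, 50),
                                          save_to_dir='/home/resized', class_mode='binary',
                                          save_prefix='N', save_format='jpeg', batch_size=10)

    # One full pass over the iterator; each batch drawn is also written to save_to_dir.
    for _ in range(len(batches)):
        next(batches)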

Keras TimeDistributed Conv1D Error

Submitted by 末鹿安然 on 2019-12-06 10:36:28
This is my code:

    cnn_input = Input(shape=(cnn_max_length,))
    emb_output = Embedding(num_chars + 1, output_dim=32, input_length=cnn_max_length, trainable=True)(cnn_input)
    output = TimeDistributed(Convolution1D(filters=128, kernel_size=4, activation='relu'))(emb_output)

I want to train a character-level CNN sequence labeler and I keep receiving this error:

    Traceback (most recent call last):
      File "word_lstm_char_cnn.py", line 24, in <module>
        output = kl.TimeDistributed(kl.Convolution1D(filters=128, kernel_size=4, activation='relu'))(emb_output)
      File "/home/user/anaconda3/envs/thesisenv/lib/python3
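The likely cause is that the embedding output here is 3D, (batch, cnn_max_length, 32), while TimeDistributed(Conv1D) needs a 4D input with an extra axis to distribute over. A minimal sketch (hypothetical sizes, assuming the per-word character-matrix layout typical for character-level CNN labelers): give the model a (words, chars) input so TimeDistributed applies the Conv1D to each word's character embeddings; with the original flat character input, applying Conv1D directly, without TimeDistributed, is the alternative.

    from keras.layers import Input, Embedding, TimeDistributed, Convolution1D
    from keras.models import Model

    max_words, max_chars, num_chars = 50, 16, 100  # hypothetical sizes

    # 4D after the embedding: (batch, words, chars, 32), so TimeDistributed has a
    # per-word (chars, 32) slice to hand to Conv1D.
    cnn_input = Input(shape=(max_words, max_chars), dtype='int32')
    emb_output = Embedding(num_chars + 1, output_dim=32, trainable=True)(cnn_input)
    output = TimeDistributed(Convolution1D(filters=128, kernel_size=4, activation='relu'))(emb_output)

    model = Model(cnn_input, output)
    model.summary()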

Concatenate input with constant vector in keras

Submitted by て烟熏妆下的殇ゞ on 2019-12-06 09:08:34
I am trying to concatenate my input with a constant tensor in the Keras 2 functional API. In my real problem the constants depend on some parameters in the setup, but I think the example below shows the error I get.

    from keras.layers import *
    from keras.models import *
    from keras import backend as K
    import numpy as np

    a = Input(shape=(10, 5))
    a1 = Input(tensor=K.variable(np.ones((10, 5))))
    x = [a, a1]  # x = [a, a] works fine
    b = concatenate(x, 1)
    x += [b]  # This changes b._keras_history[0].input
    b = concatenate(x, 1)
    model = Model(a, b)

The error I get is:

    ValueError Traceback (most recent call last
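One pattern that sidesteps both the list aliasing and the missing batch axis on the constant (a hedged sketch I'm adding, not the accepted answer) is to build the constant inside a Lambda and broadcast it to the batch size before concatenating:

    import numpy as np
    from keras.layers import Input, Lambda
    from keras.models import Model
    from keras import backend as K

    const_value = np.ones((10, 5), dtype='float32')  # stands in for the setup-dependent constant

    a = Input(shape=(10, 5))

    def append_constant(x):
        # Broadcast the (10, 5) constant to (batch_size, 10, 5), then concatenate on axis 1.
        const = K.expand_dims(K.constant(const_value), axis=0)
        const = K.tile(const, [K.shape(x)[0], 1, 1])
        return K.concatenate([x, const], axis=1)

    b = Lambda(append_constant, output_shape=(20, 5))(a)
    model = Model(inputs=a, outputs=b)
    model.summary()

Because the constant never appears as a separate Input, the model only has the real input a, and nothing mutates the list of tensors that an earlier layer already recorded.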

Getting x_test, y_test from generator in Keras?

Submitted by 戏子无情 on 2019-12-06 07:04:27
For certain problems, the validation data can't be a generator, e.g. for TensorBoard histograms: "If printing histograms, validation_data must be provided, and cannot be a generator." My current code looks like:

    image_data_generator = ImageDataGenerator()
    training_seq = image_data_generator.flow_from_directory(training_dir)
    validation_seq = image_data_generator.flow_from_directory(validation_dir)
    testing_seq = image_data_generator.flow_from_directory(testing_dir)

    model = Sequential(..)
    # ..
    model.compile(..)
    model.fit_generator(training_seq, validation_data=validation_seq, ..)

How do I provide it
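One straightforward workaround (a minimal sketch, assuming the model, training_seq, and validation_seq from the snippet above, with a hypothetical epoch count) is to drain the validation generator into in-memory arrays once and pass those as validation_data:

    import numpy as np

    def generator_to_arrays(seq, steps):
        # Draw `steps` batches from the directory iterator and stack them into arrays.
        xs, ys = [], []
        for _ in range(steps):
            x_batch, y_batch = next(seq)
            xs.append(x_batch)
            ys.append(y_batch)
        return np.concatenate(xs), np.concatenate(ys)

    x_val, y_val = generator_to_arrays(validation_seq, len(validation_seq))
    model.fit_generator(training_seq, validation_data=(x_val, y_val), epochs=10)

This keeps training on the generator while giving TensorBoard the concrete arrays it needs; it only works if the validation set fits in memory.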