keras-2

Is there any way to get variable importance with Keras?

好久不见. Submitted on 2019-12-31 09:03:23
Question: I am looking for a proper or best way to get variable importance in a neural network created with Keras. Currently I just take the weights (not the biases) of the variables in the first layer, on the assumption that more important variables will have larger first-layer weights. Is there another/better way of doing it? Answer 1: Since everything gets mixed up along the network, the first layer alone can't tell you about the importance of each variable. The following …
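The answer above is cut off, but a common model-agnostic alternative to inspecting first-layer weights is permutation importance: shuffle one input column at a time and measure how much the prediction error grows. A minimal NumPy sketch, with a stand-in `predict` function playing the role of `model.predict`:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Mean increase in squared error when each feature column is shuffled."""
    rng = np.random.default_rng(seed)
    base_err = np.mean((predict(X) - y) ** 2)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # destroy the j-th feature only
            scores[j] += np.mean((predict(Xp) - y) ** 2) - base_err
    return scores / n_repeats

# Toy check: y depends on column 0 only, so column 0 should score highest.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = 3 * X[:, 0]
imp = permutation_importance(lambda A: 3 * A[:, 0], X, y)
```

Because it only needs a `predict` callable, the same sketch works unchanged with a fitted Keras model.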

Concatenate input with constant vector in keras. how one define the batch_size

微笑、不失礼 Submitted on 2019-12-24 17:17:12
Question: As a follow-up to this question: Concatenate input with constant vector in keras, I am trying to use the suggested solution: constant = K.variable(np.ones((1, 10, 5))) constant = K.repeat_elements(constant, rep=batch_size, axis=0) and got the following error: NameError: name 'batch_size' is not defined. I don't see how to define batch_size within the Keras model (it is not known explicitly), so that a symbolic layer and a constant layer can be concatenated and used as an input layer. Answer 1: To …
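The usual workaround is to read the batch size symbolically at call time (in Keras, `K.shape(inputs)[0]` inside a `Lambda`) rather than as a Python variable. A NumPy sketch of the shape arithmetic involved, with a hypothetical `batch_of_inputs` standing in for the symbolic input:

```python
import numpy as np

# The constant has a leading "batch" axis of 1; to concatenate it with a
# batch of inputs, repeat it along axis 0 until the batch sizes match.
constant = np.ones((1, 10, 5))
batch_of_inputs = np.zeros((32, 10, 7))   # hypothetical input batch

batch_size = batch_of_inputs.shape[0]     # in Keras: K.shape(inputs)[0], inside a Lambda
tiled = np.repeat(constant, repeats=batch_size, axis=0)

# Concatenate along the last (feature) axis: (32, 10, 5) + (32, 10, 7) -> (32, 10, 12)
merged = np.concatenate([batch_of_inputs, tiled], axis=-1)
```

The key point is that `batch_size` is derived from the incoming tensor rather than defined up front, which is exactly what the NameError indicates was missing.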

how to know which node is dropped after using keras dropout layer

佐手、 Submitted on 2019-12-24 08:40:02
Question: From Nick's blog it is clear that in the dropout layer of a CNN model we drop some nodes according to a Bernoulli distribution. But how can this be verified, i.e. how can we check which nodes were not selected? In DropConnect we drop some weights, so I think model.get_weights() lets us verify that, but what about the dropout layer? model = Sequential() model.add(Conv2D(2, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(4, (3, 3), activation='relu')) model.add(MaxPooling2D(pool …
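A Dropout layer has no persistent weights, so `get_weights()` cannot reveal the mask; dropped units only show up as zeros in the layer's output while training is active (in tf.keras, e.g. by calling the model with `training=True`). A NumPy sketch of inverted dropout, the scheme Keras applies at training time, showing how the dropped positions can be read off the output:

```python
import numpy as np

def inverted_dropout(x, rate, rng):
    """Zero each unit with probability `rate`, scale survivors by 1/(1-rate)."""
    keep = rng.random(x.shape) >= rate      # Bernoulli keep-mask
    return x * keep / (1.0 - rate), keep

rng = np.random.default_rng(0)
x = np.ones((4, 8))
out, keep = inverted_dropout(x, rate=0.5, rng=rng)

# The dropped nodes are exactly the positions where the output is zero.
dropped = (out == 0)
```

With rate 0.5, surviving units of an all-ones input come out as 2.0 (the 1/(1-rate) rescaling), which is also why averaging a dropout layer's raw output does not match the input.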

How to chain/compose layers in keras 2 functional API without specifying input (or input shape)

烈酒焚心 Submitted on 2019-12-24 00:36:43
Question: I would like to be able to chain several layers together before specifying the input, something like the following: # conv is just a layer, no application conv = Conv2D(64, (3,3), activation='relu', padding='same', name='conv') # this doesn't work: bn = BatchNormalization()(conv) Note that I don't want to specify the input or its shape if it can be avoided; I want to use this as a shared layer for multiple inputs at a later point. Is there a way to do that? The above gives the following error: > …

shouldn't model.trainable=False freeze weights under the model?

狂风中的少年 Submitted on 2019-12-18 05:54:46
Question: I am trying to freeze the pre-trained VGG16 layers ('conv_base' below) and add new layers on top of them for feature extraction. I expect to get the same prediction results from 'conv_base' before (ret1) and after (ret2) fitting the model, but I don't. Is this the wrong way to check weight freezing? Loading VGG16 and setting it to untrainable: conv_base = applications.VGG16(weights='imagenet', include_top=False, input_shape=[150, 150, 3]) conv_base.trainable = False Result before model fit: ret1 = conv_base …
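A more direct check than comparing predictions is to snapshot `conv_base.get_weights()` before fitting and compare array-by-array afterwards; predictions can differ for other reasons (for example, depending on the Keras version, BatchNormalization layers may update their moving statistics even inside a frozen model). A small NumPy helper, with hypothetical before/after weight lists in place of real `get_weights()` output:

```python
import numpy as np

def weights_unchanged(before, after, tol=0.0):
    """True if every weight array is identical (within tol) after training."""
    return all(np.allclose(b, a, atol=tol) for b, a in zip(before, after))

# In Keras: before = conv_base.get_weights(); model.fit(...); after = conv_base.get_weights()
before = [np.ones((3, 3)), np.zeros(5)]
after_frozen = [w.copy() for w in before]     # frozen layers: arrays untouched
after_trained = [w + 0.1 for w in before]     # updated weights after training
```

If this comparison passes but predictions still differ, the discrepancy comes from non-weight state or preprocessing, not from the freeze itself.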

ImportError: cannot import name '_obtain_input_shape' from keras

自作多情 Submitted on 2019-12-18 03:54:50
Question: In Keras, I'm trying to import _obtain_input_shape as follows: from keras.applications.imagenet_utils import _obtain_input_shape However, I get the following error: ImportError: cannot import name '_obtain_input_shape' I'm importing _obtain_input_shape to determine the correct input shape of the input tensor (so as to load VGG-Face), as follows: input_shape = _obtain_input_shape(input_shape, default_size=224, …
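The commonly reported cause is that this private helper moved when the applications were split out into the separate `keras_applications` package around Keras 2.2, so the fix is to change the import path (note the helper's signature also changed in that refactor, e.g. `include_top` became `require_flatten`). A hedged sketch of the import fallback plus, purely for illustration, a simplified version of the defaulting logic the helper performs:

```python
# Import fix: the helper moved out of keras.applications around Keras 2.2.
try:
    from keras_applications.imagenet_utils import _obtain_input_shape  # Keras >= 2.2
except ImportError:
    pass  # Keras not installed here; the illustration below stands alone

def default_input_shape(input_shape, default_size, min_size, data_format):
    """Simplified illustration of the resolution logic (not the real Keras code)."""
    if input_shape is None:
        input_shape = ((3, default_size, default_size)
                       if data_format == 'channels_first'
                       else (default_size, default_size, 3))
    spatial = input_shape[1:] if data_format == 'channels_first' else input_shape[:-1]
    if any(s is not None and s < min_size for s in spatial):
        raise ValueError('Input size must be at least %dx%d' % (min_size, min_size))
    return input_shape

shape = default_input_shape(None, default_size=224, min_size=48,
                            data_format='channels_last')
```

With no explicit `input_shape`, the helper falls back to the model's default size, here yielding `(224, 224, 3)` for channels-last data.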

Keras TimeDistributed Conv1D Error

旧街凉风 Submitted on 2019-12-13 12:35:28
Question: This is my code: cnn_input = Input(shape=(cnn_max_length,)) emb_output = Embedding(num_chars + 1, output_dim=32, input_length=cnn_max_length, trainable=True)(cnn_input) output = TimeDistributed(Convolution1D(filters=128, kernel_size=4, activation='relu'))(emb_output) I want to train a character-level CNN sequence labeler, and I keep receiving this error: Traceback (most recent call last): File "word_lstm_char_cnn.py", line 24, in <module> output = kl.TimeDistributed(kl.Convolution1D(filters …
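A likely cause (hedged, since the traceback is cut off): Embedding outputs a 3-D tensor of shape (batch, steps, dim), which is exactly what Conv1D consumes on its own, whereas TimeDistributed(Conv1D) would need one extra axis, i.e. a 4-D input whose slices are each a valid Conv1D input. A NumPy sketch of the shape arithmetic:

```python
import numpy as np

cnn_max_length, emb_dim, filters, kernel_size = 50, 32, 128, 4
emb_output = np.zeros((8, cnn_max_length, emb_dim))  # (batch, steps, dim) from Embedding

# Conv1D with 'valid' padding maps (batch, steps, dim) -> (batch, steps - k + 1, filters),
# so no TimeDistributed wrapper is needed on a 3-D embedding output.
out_steps = cnn_max_length - kernel_size + 1
conv1d_output_shape = (emb_output.shape[0], out_steps, filters)

# TimeDistributed(Conv1D) would instead require a tensor of one rank higher,
# e.g. (batch, words, chars_per_word, dim) for a per-word character CNN.
timedistributed_input_rank = emb_output.ndim + 1
```

So the minimal fix under this reading is to drop the TimeDistributed wrapper, unless the data is first reshaped to a per-word 4-D layout.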

Use tf.metrics in Keras?

会有一股神秘感。 Submitted on 2019-12-12 12:41:13
Question: I'm especially interested in specificity_at_sensitivity. Looking through the Keras docs: from keras import metrics model.compile(loss='mean_squared_error', optimizer='sgd', metrics=[metrics.mae, metrics.categorical_accuracy]) It looks like the metrics list must contain functions of arity 2, accepting (y_true, y_pred) and returning a single tensor value. EDIT: Currently, here is how I do things: from sklearn.metrics import confusion_matrix predictions = model.predict(x_test) y_test = np.argmax …
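One batch-independent route, in the spirit of the EDIT above, is to compute such metrics after prediction rather than inside compile(): specificity is TN / (TN + FP) on the thresholded predictions. A NumPy-only sketch, with the threshold value being an illustrative choice:

```python
import numpy as np

def specificity(y_true, y_pred, threshold=0.5):
    """TN / (TN + FP) for binary labels, computed outside the training loop."""
    y_true = np.asarray(y_true)
    y_hat = (np.asarray(y_pred) >= threshold).astype(int)
    tn = np.sum((y_true == 0) & (y_hat == 0))
    fp = np.sum((y_true == 0) & (y_hat == 1))
    return tn / (tn + fp)

# In Keras: predictions = model.predict(x_test); spec = specificity(y_test, predictions)
y_true = [0, 0, 0, 0, 1, 1]
y_pred = [0.1, 0.2, 0.8, 0.3, 0.9, 0.4]
spec = specificity(y_true, y_pred)
```

Later tf.keras versions also ship a stateful `tf.keras.metrics.SpecificityAtSensitivity` object that can go straight into `metrics=[...]`, sidestepping the arity-2 restriction entirely.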

Calculating precision, recall and F1 in Keras v2, am I doing it right?

。_饼干妹妹 Submitted on 2019-12-11 18:30:56
Question: There is already a question on how to obtain precision, recall and F1 scores in Keras v2; here is the method I'm using, but the question is: am I doing it right? First of all, F. Chollet says he removed these three metrics from version 2 of Keras because they were batch-based and hence unreliable. I'm following an idea by basque21, using a Callback with the on_epoch_end method; isn't this batch-independent, since it is computed at epoch end (i.e. after all batches have finished)? Here is the …
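Epoch-end computation is indeed batch-independent: the callback sees predictions over the whole validation set at once. The arithmetic such an on_epoch_end method needs, as a NumPy sketch (inside a real Keras Callback this would run on the validation predictions and labels):

```python
import numpy as np

def precision_recall_f1(y_true, y_pred):
    """Binary precision, recall and F1 over a full epoch's predictions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0], [1, 0, 1, 0])
```

Because the counts are accumulated over all samples before dividing, the result cannot suffer from the per-batch averaging bias that motivated removing these metrics from Keras v2.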