keras-layer

AttributeError: module 'tensorflow' has no attribute 'get_default_graph'

与世无争的帅哥 submitted on 2020-12-25 04:12:21
Question: I am working on an image-captioning task and have loaded the weights of the Inception model like this: model = InceptionV3(weights='imagenet'). But I am getting this error: AttributeError: module 'tensorflow' has no attribute 'get_default_graph'. What should I do? Please help. Here is the full output of the code above: AttributeError Traceback (most recent call last) in () 1 # Load the inception v3 model ---->
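This error typically comes from using the standalone keras package, which still calls tf.get_default_graph(), together with TensorFlow 2.x, where that function was removed. A minimal sketch of one common fix, assuming a TensorFlow 2.x install, is to load the model through the Keras API bundled with TensorFlow instead:

    # Use the tf.keras implementation of InceptionV3 rather than the standalone
    # keras package; tf.keras does not call tf.get_default_graph().
    from tensorflow.keras.applications.inception_v3 import InceptionV3

    model = InceptionV3(weights='imagenet')
    model.summary()

Staying with the standalone keras package instead usually means pinning a TensorFlow 1.x release that still provides get_default_graph.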

How to see the tensor value of a layer output in Keras

点点圈 submitted on 2020-12-13 09:27:21
Question: I have a Seq2Seq model and want to print the matrix value of the encoder output at every iteration. For example, if the encoder output has shape (?, 20), there are 5 epochs, and each epoch has 10 iterations, I would like to see 10 matrices of shape (?, 20) per epoch. I have followed several links such as here, but it still does not print the value of the matrix. The code from the link mentioned above is: import keras.backend as K k_value = K.print_tensor
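As a side note, K.print_tensor returns an identity of its input, and the printing side effect only fires when that returned tensor is actually evaluated as part of the graph, which is why calling it on its own shows nothing. One way to see the encoder values directly is to build a probe model that ends at the encoder output and run a batch through it; the sketch below is generic (the layer name, shapes, and architecture are assumptions, not the poster's Seq2Seq model):

    import numpy as np
    from tensorflow.keras import Input, Model
    from tensorflow.keras.layers import LSTM, Dense

    inputs = Input(shape=(None, 8))               # hypothetical input shape
    encoded = LSTM(20, name='encoder')(inputs)    # stands in for the real encoder
    outputs = Dense(4)(encoded)                   # stands in for the decoder/head
    model = Model(inputs, outputs)

    # A probe model that shares the same layers but stops at the encoder output.
    probe = Model(inputs, model.get_layer('encoder').output)

    batch = np.random.rand(10, 6, 8).astype('float32')
    print(probe.predict(batch))                   # a (10, 20) matrix of values

Calling probe.predict inside a callback at the end of each batch or epoch gives the per-iteration matrices described in the question.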

Unable to understand the behavior of the `build` method in tensorflow keras layers (tf.keras.layers.Layer)

拈花ヽ惹草 submitted on 2020-12-12 04:36:18
Question: Layers in tensorflow keras have a build method that is used to defer weight creation until the layer has seen what its input is going to be (a layer's build method). I have a few questions I have not been able to find the answer to: here it is said that "If you assign a Layer instance as attribute of another Layer, the outer layer will start tracking the weights of the inner layer." What does it mean to track the weights of a layer? The same link also mentions that "We recommend creating
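A small generic illustration of the deferred weight creation described above (not code from the question): the layer owns no weights until its first call, at which point build() runs with the concrete input shape, and the weights it creates become tracked by the layer and by any outer layer that holds it as an attribute.

    import tensorflow as tf

    class SimpleDense(tf.keras.layers.Layer):
        def __init__(self, units):
            super().__init__()
            self.units = units

        def build(self, input_shape):
            # Runs once, on the first call, with the concrete input shape.
            self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                     initializer='glorot_uniform', trainable=True)
            self.b = self.add_weight(shape=(self.units,),
                                     initializer='zeros', trainable=True)

        def call(self, inputs):
            return tf.matmul(inputs, self.w) + self.b

    layer = SimpleDense(4)
    print(len(layer.weights))        # 0 -- build() has not run yet
    _ = layer(tf.zeros((2, 3)))      # the first call triggers build()
    print(len(layer.weights))        # 2 -- kernel and bias are now tracked

"Tracking" here means the created weights show up in the owning layer's (and any outer layer's) weights and trainable_weights lists and are included when the model is saved or checkpointed.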

Shape of image after MaxPooling2D with padding='same' -- calculating layer-by-layer shape in a convolutional autoencoder

时光总嘲笑我的痴心妄想 submitted on 2020-12-06 18:46:32
Question: Very briefly, my question is about the image size not remaining the same as the input image size after a max-pooling layer when I use padding='same' in Keras. I am going through the Keras blog post Building Autoencoders in Keras and am building a convolutional autoencoder. The autoencoder code is as follows: input_layer = Input(shape=(28, 28, 1)) x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_layer) x = MaxPooling2D((2, 2), padding='same')(x) x = Conv2D(8, (3, 3), activation='relu',
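For reference, with padding='same' the output spatial size is ceil(input_size / stride), and MaxPooling2D defaults its strides to the pool size, so each (2, 2) pool shrinks the feature map: 28 → 14 → 7 → 4 (ceil(7/2) = 4); padding='same' only preserves the size when the stride is 1, as it is for the Conv2D layers. The sketch below prints the layer-by-layer shapes; the lines after the excerpt's truncation are assumed to follow the encoder from the cited blog post:

    from tensorflow.keras.layers import Input, Conv2D, MaxPooling2D
    from tensorflow.keras.models import Model

    inp = Input(shape=(28, 28, 1))
    x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)   # (28, 28, 16)
    x = MaxPooling2D((2, 2), padding='same')(x)                      # (14, 14, 16)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)      # (14, 14, 8)
    x = MaxPooling2D((2, 2), padding='same')(x)                      # (7, 7, 8)
    x = Conv2D(8, (3, 3), activation='relu', padding='same')(x)      # (7, 7, 8)
    encoded = MaxPooling2D((2, 2), padding='same')(x)                # (4, 4, 8)

    Model(inp, encoded).summary()   # shows each layer's output shape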

Removing layers from a pretrained Keras model gives the same output as the original model

允我心安 submitted on 2020-12-02 07:12:40
Question: During some feature extraction experiments, I noticed that the model.pop() functionality does not work as expected. For a pretrained model like VGG16, after calling model.pop(), model.summary() shows that the layer has been removed (so I expected 4096 features), but passing an image through the new model yields the same number of features (1000) as the original model. No matter how many layers are removed, including a completely empty model, it generates the same output. Looking
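A minimal sketch of the usual workaround, assuming the stock keras.applications VGG16 whose second fully connected layer is named 'fc2': rather than popping layers, define a new Model whose output is the intermediate tensor you want, so the 4096-dimensional features are returned directly.

    import numpy as np
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras.models import Model

    base = VGG16(weights='imagenet')        # final layer is the 1000-way softmax
    feature_extractor = Model(inputs=base.input,
                              outputs=base.get_layer('fc2').output)

    img = np.random.rand(1, 224, 224, 3).astype('float32')   # dummy image batch
    features = feature_extractor.predict(img)
    print(features.shape)                                     # (1, 4096)

Popping a layer off model.layers only edits the Python list of layers (which is what model.summary() reflects); the underlying graph and the model's output tensor are unchanged, which is why inference still returns the original 1000 predictions.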
