vgg-net

Keras - All layer names should be unique

Submitted by 烂漫一生 on 2019-12-04 03:14:21
Question: I combine two VGG nets in Keras for a classification task. When I run the program, it raises an error: RuntimeError: The name "predictions" is used 2 times in the model. All layer names should be unique. I was confused, because I only use a prediction layer once in my code: from keras.layers import Dense import keras from keras.models import Model model1 = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet', input_tensor=None, input_shape=None, pooling=None, classes=1000
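The error occurs because both VGG16 instances use the same fixed internal layer names ("predictions", "block1_conv1", ...), so the combined model contains every name twice. A minimal sketch of one fix: rename the layers of one branch before merging. Here weights=None is used only to skip the ImageNet download (the renaming works the same with weights='imagenet'), and ._name is a tf.keras-internal attribute:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense, concatenate
from tensorflow.keras.models import Model

# Two VGG16 branches; their internal layer names are identical by default.
branch_a = VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))
branch_b = VGG16(include_top=False, weights=None, input_shape=(224, 224, 3))

# Prefix every layer name in the second branch so all names become unique.
for layer in branch_b.layers:
    layer._name = 'b_' + layer.name  # tf.keras stores the name in ._name

x = concatenate([GlobalAveragePooling2D()(branch_a.output),
                 GlobalAveragePooling2D()(branch_b.output)])
out = Dense(10, activation='softmax', name='predictions')(x)
model = Model(inputs=[branch_a.input, branch_b.input], outputs=out)
```

Without the renaming loop, the final Model(...) call raises exactly the duplicate-name error above.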

Stop and Restart Training on VGG-16

Submitted by 元气小坏坏 on 2019-12-02 05:12:25
I am using the pre-trained VGG-16 model for image classification, adding a custom last layer because my task has 10 classes. I am training the model for 200 epochs. My question is: if I stop the training at some epoch (say, by closing the Python window at epoch 50), is there any way to resume from there? I have read about saving and reloading models, but my understanding is that this only works for custom models, not for pre-trained models like VGG-16. Answer: You can use the ModelCheckpoint callback to save your model regularly. To use it, pass a callbacks parameter to the
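ModelCheckpoint is not limited to custom models; it saves whatever Model it is attached to, including one built on pre-trained VGG-16. A minimal sketch of the save-and-resume pattern, with a tiny stand-in model so it runs quickly (the callback usage is identical for a VGG-16-based model):

```python
import os
import tempfile
import numpy as np
import tensorflow as tf

workdir = tempfile.mkdtemp()
ckpt_path = os.path.join(workdir, 'model_epoch_{epoch:02d}.h5')

# A tiny stand-in model; for the real case this would be VGG-16 plus the
# custom last layer, compiled the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    tf.keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

x = np.random.rand(32, 4).astype('float32')
y = np.random.randint(0, 10, size=(32,))

# Save a full checkpoint (weights, architecture, optimizer state) every epoch.
ckpt = tf.keras.callbacks.ModelCheckpoint(ckpt_path, save_freq='epoch')
model.fit(x, y, epochs=3, callbacks=[ckpt], verbose=0)

# Suppose training was interrupted here.  Resume from the last checkpoint,
# telling fit() which epoch to start counting from.
resumed = tf.keras.models.load_model(os.path.join(workdir, 'model_epoch_03.h5'))
history = resumed.fit(x, y, epochs=5, initial_epoch=3, verbose=0)
```

initial_epoch makes the resumed run continue counting from epoch 3, so the total still amounts to the intended number of epochs.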

Keras VGG16 fine tuning

Submitted by 大兔子大兔子 on 2019-12-01 17:03:39
There is an example of VGG16 fine-tuning on the Keras blog, but I can't reproduce it. More precisely, here is the code used to initialize VGG16 without the top layer and to freeze all blocks except the topmost: WEIGHTS_PATH_NO_TOP = 'https://github.com/fchollet/deep-learning-models/releases/download/v0.1/vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5' weights_path = get_file('vgg16_weights.h5', WEIGHTS_PATH_NO_TOP) model = Sequential() model.add(InputLayer(input_shape=(150, 150, 3))) model.add(Conv2D(64, (3, 3), activation='relu', padding='same')) model.add(Conv2D(64, (3, 3), activation='relu', padding=
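A sketch of an alternative route to the same setup that avoids rebuilding the architecture layer by layer: let keras.applications construct the no-top VGG16 and then freeze every block except the topmost (block5). weights=None is used here only to skip the download; the blog's setup would load the pre-trained weights:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.models import Model

base = VGG16(include_top=False, weights=None, input_shape=(150, 150, 3))

# Freeze blocks 1-4; only the block5 convolutions stay trainable.
for layer in base.layers:
    layer.trainable = layer.name.startswith('block5')

# Small classifier head on top, as in the blog's cats-vs-dogs example.
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(1, activation='sigmoid')(x)
model = Model(base.input, out)
model.compile(optimizer='rmsprop', loss='binary_crossentropy')
```

Freezing before compile() matters: trainable flags are baked in at compile time, so changing them afterwards requires recompiling.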

Integrating Keras model into TensorFlow

Submitted by 故事扮演 on 2019-11-29 03:58:41
I am trying to use a pre-trained Keras model within TensorFlow code, as described in this Keras blog post under section II: Using Keras models with TensorFlow. I want to use the pre-trained VGG16 network available in Keras to extract convolutional feature maps from images and add my own TensorFlow code on top of that. So I've done this: import tensorflow as tf from tensorflow.python.keras.applications.vgg16 import VGG16, preprocess_input from tensorflow.python.keras import backend as K # images = a NumPy array containing 8 images model = VGG16(include_top=False, weights='imagenet') inputs = tf
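In TF2 / tf.keras the integration is simpler than in that (TF1-era) blog post: the Keras model can be called directly on a TensorFlow tensor. A sketch with weights=None to avoid the ImageNet download (use weights='imagenet' for real feature extraction):

```python
import tensorflow as tf
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input

model = VGG16(include_top=False, weights=None)

images = tf.random.uniform((8, 224, 224, 3), maxval=255.0)  # 8 dummy images
features = model(preprocess_input(images))  # convolutional feature maps

# Any further TensorFlow code can consume the feature maps directly:
pooled = tf.reduce_mean(features, axis=[1, 2])  # global average pooling
```

With include_top=False and 224x224 inputs, the feature maps come out as (batch, 7, 7, 512).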

How to calculate the number of parameters of convolutional neural networks?

Submitted by 萝らか妹 on 2019-11-28 15:21:22
I can't get the correct number of parameters for AlexNet or VGG Net. For example, to calculate the number of parameters of a conv3-256 layer of VGG Net, the answer is 0.59M = (3*3)*(256*256), that is (kernel size) * (product of the channel counts of the two connected layers); however, that way I can't reach the 138M parameters. So could you please show me where my calculation is wrong, or show me the right calculation procedure? Answer: If you refer to the 16-layer VGG Net (table 1, column D), then 138M refers to the total number of parameters of this network, i.e. including all convolutional
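The per-layer formula only needs the bias term added and must then be summed over all layers, fully connected ones included; the fully connected layers are where most of the 138M live. A short check of the arithmetic for VGG-16, configuration D:

```python
# Conv layer parameters: 3*3*c_in*c_out weights + c_out biases.
convs = [(3, 64), (64, 64),                    # block 1
         (64, 128), (128, 128),                # block 2
         (128, 256), (256, 256), (256, 256),   # block 3
         (256, 512), (512, 512), (512, 512),   # block 4
         (512, 512), (512, 512), (512, 512)]   # block 5
conv_params = sum(3 * 3 * cin * cout + cout for cin, cout in convs)

# After five 2x2 poolings, a 224x224 input is 7x7x512 = 25088 units.
# FC layer parameters: n_in*n_out weights + n_out biases.
fcs = [(7 * 7 * 512, 4096), (4096, 4096), (4096, 1000)]
fc_params = sum(nin * nout + nout for nin, nout in fcs)

print(conv_params + fc_params)  # 138357544 -- the published "138M"
print(3 * 3 * 256 * 256 + 256)  # 590080    -- the ~0.59M conv3-256 layer
```

Note that the three fully connected layers alone contribute about 123.6M of the total, which is why summing only the convolutions falls far short of 138M.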

Caffe shape mismatch error using pretrained VGG-16 model

Submitted by 元气小坏坏 on 2019-11-26 17:18:37
Question: I am using PyCaffe to implement a neural network inspired by the VGG 16-layer network. I want to use the pre-trained model available from their GitHub page. Generally this works by matching layer names. For my "fc6" layer I have the following definition in my train.prototxt file: layer { name: "fc6" type: "InnerProduct" bottom: "pool5" top: "fc6" inner_product_param { num_output: 4096 } } Here is the prototxt file for the VGG-16 deploy architecture. Note that the "fc6" in their prototxt is
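Since Caffe copies pre-trained weights by matching layer names, a common workaround for this kind of mismatch (assuming the pre-trained "fc6" blob shape does not fit your architecture) is to rename the layer in train.prototxt; a layer whose name matches nothing in the .caffemodel is freshly initialized instead of receiving the incompatible weights. A sketch, with "fc6_custom" as a hypothetical new name:

```
layer {
  name: "fc6_custom"   # new name: no pre-trained blob is copied into it
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6_custom"
  inner_product_param { num_output: 4096 }
}
```

Any later layers whose bottom was "fc6" must be updated to "fc6_custom" as well, and the renamed layer will need training from scratch.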