transfer-learning

Keras model params are all “NaN”s after reloading

对着背影说爱祢 submitted on 2019-12-13 03:56:18

Question: I use transfer learning with ResNet50. I create a new model out of the pretrained model provided by Keras (the 'imagenet' weights). After training my new model, I save it as follows:

    # Save the Siamese Network architecture
    siamese_model_json = siamese_network.to_json()
    with open("saved_model/siamese_network_arch.json", "w") as json_file:
        json_file.write(siamese_model_json)

    # Save the Siamese Network model weights
    siamese_network.save_weights('saved_model/siamese_model_weights.h5')

And later, I …
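The usual counterpart for reloading such a model is shown below as a minimal sketch (the file names are reused from the snippet above; custom_objects would only be needed if the network contains custom layers or Lambda layers):

    from keras.models import model_from_json

    # Rebuild the architecture from the saved JSON ...
    with open("saved_model/siamese_network_arch.json", "r") as json_file:
        siamese_network = model_from_json(json_file.read())

    # ... then load the trained weights into the rebuilt model
    siamese_network.load_weights('saved_model/siamese_model_weights.h5')

If the parameters come back as NaN, it is worth verifying that they were not already NaN before saving, for example by checking siamese_network.get_weights() with numpy's isnan right after training.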

How to add custom layers inside vgg16 when doing transfer learning?

余生长醉 submitted on 2019-12-11 18:25:17

Question: I am trying to use transfer learning with VGG16. My main idea is to take the first few layers of VGG16, add my own layer, afterwards add the rest of the layers from VGG16, and add my own output layer at the end. To do this I follow this sequence: (1) load layers and freeze them, (2) add my layers, (3) load the rest of the layers (except the output layer) [THIS IS WHERE I ENCOUNTER THE FOLLOWING ERROR] and freeze them, (4) add the output layer. Is my approach OK? If not, then where am I …
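One common way to do this kind of splicing is with the Keras functional API, sketched below under stated assumptions: the cut point after 'block3_pool', the single Conv2D layer inserted in the middle, and the 10-class softmax head are illustrative choices, not taken from the question.

    from keras.applications import VGG16
    from keras.layers import Conv2D, Dense, Flatten, Input
    from keras.models import Model

    vgg = VGG16(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

    inp = Input(shape=(224, 224, 3))
    x = inp

    # (1) the first few VGG16 layers, frozen ('block1_conv1' up to 'block3_pool')
    for layer in vgg.layers[1:11]:
        layer.trainable = False
        x = layer(x)

    # (2) a custom layer in the middle; 256 output channels so the next VGG16 layer still fits
    x = Conv2D(256, (3, 3), padding='same', activation='relu', name='my_conv')(x)

    # (3) the remaining VGG16 conv layers, also frozen
    for layer in vgg.layers[11:]:
        layer.trainable = False
        x = layer(x)

    # (4) a custom output head
    x = Flatten()(x)
    out = Dense(10, activation='softmax', name='my_output')(x)

    model = Model(inp, out)
    model.compile(optimizer='adam', loss='categorical_crossentropy')

Reusing the already-built VGG16 layer objects on a new tensor keeps their ImageNet weights, which is why the inserted layer must preserve the channel count the following block expects.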

Keras Transfer Learning Issue

穿精又带淫゛_ submitted on 2019-12-11 17:27:22

Question: I have trained and saved a smaller network on my small dataset, and I want to use transfer learning. I want to use this saved network on top of the conv part of the pretrained VGG16. Specifically, I want to freeze some layers of VGG but not all, then use the fully connected part that I have already trained on my smaller dataset, and learn a model which is a combination of both with transferred weights. I am following a mish-mash of tutorials: https://blog.keras.io/building-powerful-image …
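The usual shape of that recipe, as a minimal sketch: the 150x150 input size, the 'bottleneck_fc_model.h5' file name, fine-tuning only 'block5', and the binary loss are assumptions, and the saved head is assumed to have been built for the conv base's output shape.

    from keras.applications import VGG16
    from keras.models import Model, load_model
    from keras.optimizers import SGD

    # Convolutional base with ImageNet weights and no classifier on top
    conv_base = VGG16(weights='imagenet', include_top=False, input_shape=(150, 150, 3))

    # Freeze everything except the last conv block, which will be fine-tuned
    for layer in conv_base.layers:
        layer.trainable = layer.name.startswith('block5')

    # Fully connected head trained earlier on the small dataset
    top_model = load_model('bottleneck_fc_model.h5')

    # Stack the trained head on top of the conv base
    model = Model(inputs=conv_base.input, outputs=top_model(conv_base.output))

    # A small learning rate is typical when fine-tuning pretrained conv layers
    model.compile(optimizer=SGD(lr=1e-4, momentum=0.9),
                  loss='binary_crossentropy', metrics=['accuracy'])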

How to convert a retrained model to tflite format?

喜你入骨 submitted on 2019-12-11 08:37:45

Question: I have retrained an image classifier model on MobileNet, and I have these files. Further, I used toco to compress the retrained model and convert it to .lite format, but I need it in .tflite format. Is there any way I can get to .tflite format from the existing files?

Answer 1: You can rename the .lite model to .tflite and it should work just fine. Alternatively, with toco, you can name the output as it is created:

    toco \
      --input_file=tf_files/retrained_graph.pb \
      --output_file=tf_files/optimized
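For reference, TensorFlow 1.x also exposes the converter in Python; the sketch below assumes a frozen graph from the usual retraining tutorial, and the tensor names 'input' and 'final_result' are assumptions, not taken from the question.

    import tensorflow as tf

    # Convert the retrained frozen graph directly to a .tflite flatbuffer
    converter = tf.lite.TFLiteConverter.from_frozen_graph(
        graph_def_file='tf_files/retrained_graph.pb',
        input_arrays=['input'],          # assumed input tensor name
        output_arrays=['final_result'])  # assumed output tensor name

    tflite_model = converter.convert()
    with open('tf_files/retrained_graph.tflite', 'wb') as f:
        f.write(tflite_model)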

Transfer Learning From a U-Net for Image Segmentation [Keras]

扶醉桌前 submitted on 2019-12-11 00:16:27

Question: I'm just getting started with conv nets and trying out an image segmentation problem. I got my hands on 24 images and their masks for the DSTL satellite image feature detection competition (https://www.kaggle.com/c/dstl-satellite-imagery-feature-detection/data). I thought I'd try to follow the tips here, https://blog.keras.io/building-powerful-image-classification-models-using-very-little-data.html, but I'm stuck. I downloaded the pre-trained weights for ZF_UNET_224, the 2nd place winners' approach …
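A common starting point with such pretrained weights is sketched below; the ZF_UNET_224() constructor, the 'zf_unet_224.h5' weight file, the number of frozen layers, and the loss are all assumptions based on the ZF_UNET_224 repository, not details from the question.

    from zf_unet_224_model import ZF_UNET_224   # model definition from the ZF_UNET_224 repo

    # Build the U-Net and load the published pretrained weights
    model = ZF_UNET_224()
    model.load_weights('zf_unet_224.h5')

    # Freeze the early encoder layers so 24 images only have to fine-tune the rest
    for layer in model.layers[:20]:
        layer.trainable = False

    model.compile(optimizer='adam', loss='binary_crossentropy')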

Fine-tuning and transfer learning by the example of YOLO

十年热恋 submitted on 2019-12-08 03:15:22

Question: I have a general question regarding fine-tuning and transfer learning, which came up when I tried to figure out how best to get YOLO to detect my custom object (hands). I apologize for the long text, which possibly contains lots of false information. I would be glad if someone had the patience to read it and help me clear up my confusion. After lots of googling, I learned that many people regard fine-tuning to be a sub-class of transfer learning, while others believe that they are two different …

Is it required to have predefined Image size to use transfer learning in tensorflow?

浪子不回头ぞ submitted on 2019-12-06 09:26:01

I intend to use a pre-trained model like faster_rcnn_resnet101_pets for object detection in a TensorFlow environment, as described here. I have collected several images for the training and testing sets. All these images are of varying size. Do I have to resize them to a common size? faster_rcnn_resnet101_pets uses ResNet with an input size of 224x224x3. Does this mean I have to resize all my images before sending them for training, or is it taken care of automatically by TF?

    python train.py --logtostderr --train_dir=training/ \
      --pipeline_config_path=training/faster_rcnn_resnet101_pets.config

In general, is it a good …
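For what it's worth, resizing in the Object Detection API is controlled by the image_resizer block of the pipeline config rather than by a fixed 224x224 input; the stock faster_rcnn_resnet101_pets.config contains a section along these lines (values reproduced from memory, so treat them as an assumption):

    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }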

Transfer learning why remove last hidden layer?

吃可爱长大的小学妹 submitted on 2019-12-02 12:26:35

Often when reading blogs about transfer learning it says: remove the last layer, or remove the last two layers, that is, remove the output layer and the last hidden layer. So if the transfer learning also implies changing the cost function, e.g. from cross-entropy to mean squared error, I understand that you need to change the last output layer from a 1001-way softmax layer to a Dense(1) layer which outputs a float, but: why also change the last hidden layer? What weights do the two last new layers get initialized with if using Keras and one of the predefined CNN models with imagenet weights? He…
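For reference, replacing the last two layers of a predefined ImageNet model in Keras typically looks like the sketch below (a regression head is assumed, and ResNet50, the global-average pooling, and the 256-unit hidden layer are illustrative choices). Freshly added Dense layers are initialized with Keras defaults (glorot_uniform kernels, zero biases) unless you ask for something else, e.g. He initialization via kernel_initializer='he_normal'.

    from keras.applications import ResNet50
    from keras.layers import Dense, GlobalAveragePooling2D
    from keras.models import Model

    # Pretrained conv base without the original 1000-way softmax head
    base = ResNet50(weights='imagenet', include_top=False, input_shape=(224, 224, 3))

    x = GlobalAveragePooling2D()(base.output)
    # New hidden layer: weights are freshly initialized, here explicitly with He init
    x = Dense(256, activation='relu', kernel_initializer='he_normal')(x)
    # New output layer: a single float, suitable for a mean-squared-error loss
    out = Dense(1)(x)

    model = Model(inputs=base.input, outputs=out)
    model.compile(optimizer='adam', loss='mean_squared_error')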