transfer-learning

Get output from a non-final Keras model layer

a 夏天 posted on 2021-02-18 06:53:15
Question: I am using Ubuntu with Python 3 and Keras over TensorFlow. I am trying to create a model using transfer learning from a pre-trained Keras model, as explained here. I am using the following code:

```python
import numpy as np
from keras.applications import vgg16, inception_v3, resnet50, mobilenet
from keras import Model

a = np.random.rand(1, 224, 224, 3) + 0.001
a = mobilenet.preprocess_input(a)
mobilenet_model = mobilenet.MobileNet(weights='imagenet')
mobilenet_model.summary()
inputLayer = mobilenet_model
```
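The usual answer to this question is to wrap the pre-trained network's input and an intermediate layer's output in a new `Model`. A minimal sketch, assuming MobileNet's standard layer name `conv_pw_13_relu` (any name from `summary()` works) and using `weights=None` only to keep the sketch self-contained, where the question would use `weights='imagenet'`:

```python
import numpy as np
from keras.applications import mobilenet
from keras import Model

# weights=None avoids the ImageNet download; use weights='imagenet' in practice.
base = mobilenet.MobileNet(weights=None)

# A new model whose output is an intermediate layer's activations.
# 'conv_pw_13_relu' is a late MobileNet layer; pick any name from base.summary().
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer('conv_pw_13_relu').output)

a = mobilenet.preprocess_input(np.random.rand(1, 224, 224, 3) + 0.001)
features = feature_extractor.predict(a)
# features holds intermediate activations, not class probabilities.
```

Calling `predict` on the derived model reuses the original weights, so no retraining is needed to extract features.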

PyTorch model prediction fails for a single item

生来就可爱ヽ(ⅴ<●) posted on 2021-02-08 08:52:31
Question: I use PyTorch and transfer learning to train a mobilenet_v2-based classifier. I use a batch of 20 images during training, and my test accuracy is ~80%. When I use the model on a single image for an individual prediction, the output is a wrong class. At the same time, if I take a batch from my test dataset and insert my single image into it in place of element 0, the prediction is correct: prediction 0 will be the correct class. So the model works for a batch but not for an individual item. If I

Transfer learning: model is giving unchanged loss results. Is it not training? [closed]

一笑奈何 posted on 2021-01-27 13:33:14
Question: [Closed 2 months ago. This question was flagged as opinion-based and is not accepting answers.] I'm trying to train a regression model on Inception V3. Inputs are images of size (96, 320, 3). There are 16k+ images in total, of which 12k+ are for training and the rest for validation. I have frozen all layers in Inception, but unfreezing them
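A sketch of the frozen-base pattern the question describes, with an explicitly trainable regression head. The head sizes and layer names are illustrative assumptions, and `weights=None` merely keeps the sketch self-contained (the question would use `weights='imagenet'`). If `model.trainable_weights` is empty, nothing can learn and the loss will stay flat:

```python
from keras.applications import InceptionV3
from keras import Model, layers

# weights=None avoids the download; use weights='imagenet' in practice.
base = InceptionV3(weights=None, include_top=False, input_shape=(96, 320, 3))
base.trainable = False  # freeze the convolutional base

# A small trainable regression head: these layers are what actually learn.
x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dense(64, activation='relu')(x)
out = layers.Dense(1)(x)  # linear output for regression

model = Model(base.input, out)
model.compile(optimizer='adam', loss='mse')

# Sanity check: if this list is empty, the loss can never change.
print(len(model.trainable_weights))
```

Checking `trainable_weights` after compiling is a quick way to confirm the model is actually able to train before starting a long run.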

pretrained VGG16 model misclassifies even though val accuracy is high and val loss is low [closed]

半腔热情 posted on 2020-07-22 05:50:47
Question: [Closed 8 hours ago. This question does not meet Stack Overflow guidelines and is not accepting answers.] I am new to deep learning and started with some tutorials, in which I implemented a VGG16 net from scratch. I wanted to classify integrated circuits into defect and non-defect classes. I played around with it, changed the hyperparameters, and got a really good result with
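One frequent cause of this symptom (high validation accuracy but wrong predictions on individual images) is a preprocessing mismatch: the validation generator applies VGG16's `preprocess_input` while the single test image does not, or vice versa. A hedged sketch, where `weights=None`, `classes=2`, and the random image stand in for the asker's trained model and data:

```python
import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input

# Stand-in for the trained defect/non-defect model from the question.
model = VGG16(weights=None, classes=2, input_shape=(224, 224, 3))

# Apply the exact same preprocessing at inference as during training:
# preprocess_input converts RGB to BGR and subtracts the ImageNet channel means.
img = (np.random.rand(224, 224, 3) * 255.0).astype('float32')
x = preprocess_input(img[np.newaxis, ...])  # add the batch dimension too

probs = model.predict(x)  # softmax over the two classes
```

Whatever pipeline produced the training batches (rescaling, `preprocess_input`, resizing) must be replayed exactly on each image at inference time.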

Pretraining a language model on a small custom corpus

烂漫一生 posted on 2020-07-21 07:55:47
Question: I was curious whether it is possible to use transfer learning in text generation, i.e. to re-train/pre-train a model on a specific kind of text. For example, given a pre-trained BERT model and a small corpus of medical (or any "type" of) text, make a language model that is able to generate medical text. The assumption is that you do not have a huge amount of "medical texts", which is why you have to use transfer learning. Putting it as a pipeline, I would describe this as: using a pre-trained BERT
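Continued pretraining of BERT on a small domain corpus is the masked-language-modeling objective run on your own text. A minimal sketch with Hugging Face `transformers`; the tiny config and random token IDs are placeholder assumptions (in practice you would start from `BertForMaskedLM.from_pretrained('bert-base-uncased')` and tokenize the medical corpus):

```python
import torch
from transformers import BertConfig, BertForMaskedLM

# A tiny config keeps the sketch light and offline; real continued
# pretraining would load the published checkpoint instead.
config = BertConfig(vocab_size=1000, hidden_size=64, num_hidden_layers=2,
                    num_attention_heads=2, intermediate_size=128)
model = BertForMaskedLM(config)

# input_ids/labels would come from tokenizing the medical texts; random IDs
# stand in here. Positions with label -100 are ignored by the MLM loss.
input_ids = torch.randint(0, config.vocab_size, (2, 16))
labels = input_ids.clone()
labels[torch.rand(labels.shape) > 0.15] = -100  # predict ~15% of tokens
labels[:, 0] = input_ids[:, 0]                  # keep at least one target

out = model(input_ids=input_ids, labels=labels)
out.loss.backward()  # one MLM step; an optimizer.step() would follow
```

Note that BERT is a masked (bidirectional) model, so it adapts to a domain this way but is awkward for free-form generation; a causal model such as GPT-2 is the usual choice when generation itself is the goal.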