pre-trained-model

AttributeError: 'Node' object has no attribute 'output_masks'

杀马特。学长 韩版系。学妹 submitted on 2019-12-30 09:18:17
Question: I use the Keras pre-trained VGG16 model. The problem is that after configuring TensorFlow to use the GPU, I get an error that I did not have before when using the CPU. The error is the following one:

Traceback (most recent call last):
  File "/home/guillaume/Documents/Allianz/ConstatOrNotConstatv3/train_network.py", line 109, in <module>
    model = LeNet.build(width=100, height=100, depth=3, classes=5)
  File "/home/guillaume/Documents/Allianz/ConstatOrNotConstatv3/lenet.py", line 39, in build
    output =
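This particular AttributeError is most often caused by mixing layers or models from the standalone keras package with ones from tensorflow.keras in the same graph. A minimal sketch of building on top of VGG16 with a single, consistent Keras namespace; the input size and layer widths are illustrative and not taken from the question:

# Use one Keras namespace consistently; mixing keras.* and tensorflow.keras.*
# objects in the same model is a common cause of the 'output_masks' error.
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten, Input
from keras.models import Model

inputs = Input(shape=(100, 100, 3))                  # illustrative input size
features = VGG16(weights="imagenet", include_top=False)(inputs)
x = Flatten()(features)
x = Dense(256, activation="relu")(x)                 # illustrative layer width
outputs = Dense(5, activation="softmax")(x)          # 5 classes, as in LeNet.build above
model = Model(inputs=inputs, outputs=outputs)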

Input channels equal to 6 on tensorflow

空扰寡人 submitted on 2019-12-24 15:55:52
Question: I need to merge the RGB and YCrCb channels as the input data for retraining on TensorFlow with the mobilenet_1.0_224 model. I changed the file /tensorflow/examples/image_retraining/retrain.py, in the function get_random_distorted_bottlenecks, from

bottlenecks.append(bottleneck_values)
ground_truths.append(ground_truth)

to

bottlenecks.append(bottleneck_values)
image = cv2.imread(image_path)
image_ycrcb = cv2.cvtColor(image, cv2.COLOR_RGB2YCrCb)
bottlenecks = np.dstack(bottlenecks, image_ycrcb)
ground_truths
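Note that np.dstack expects a single sequence of arrays, and cv2.imread returns images in BGR order rather than RGB, so the snippet above likely needs both fixes. A minimal, standalone sketch of building a 6-channel RGB+YCrCb array, independent of retrain.py; the file path is illustrative:

import cv2
import numpy as np

image_bgr = cv2.imread("example.jpg")                       # illustrative path; OpenCV loads BGR
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
image_ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)

# dstack takes one tuple/list of arrays and concatenates along the channel axis,
# giving an array of shape (height, width, 6).
six_channel = np.dstack((image_rgb, image_ycrcb))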

Retraining a CNN without a high-level API

家住魔仙堡 submitted on 2019-12-11 17:55:35
Question: Summary: I am trying to retrain a simple CNN for MNIST without using a high-level API. I have already succeeded in doing so by retraining the entire network, but my current goal is to retrain only the last one or two fully connected layers.

Work so far: Say I have a CNN with the following structure:

Convolutional Layer
ReLU
Pooling Layer
Convolutional Layer
ReLU
Pooling Layer
Fully Connected Layer
ReLU
Dropout Layer
Fully Connected Layer to 10 output classes

My goal is to retrain either the last
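For retraining only the last fully connected layers without a high-level API, the usual approach is to hand the optimizer an explicit var_list containing just those layers' variables, so everything else stays frozen. A minimal TF1-style sketch with a toy dense layer standing in for the convolutional stack; the scope names, sizes, and learning rate are illustrative:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

with tf.variable_scope('features'):                 # stands in for the conv/pool layers
    h = tf.layers.dense(x, 128, activation=tf.nn.relu)

with tf.variable_scope('fc_out'):                   # the layer(s) to retrain
    logits = tf.layers.dense(h, 10)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))

# Only variables created under 'fc_out' receive gradient updates.
fc_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='fc_out')
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(loss, var_list=fc_vars)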

How to convert a retrained model to tflite format?

喜你入骨 submitted on 2019-12-11 08:37:45
Question: I have retrained an image classifier model on MobileNet, and I have these files. Further, I used toco to compress the retrained model and convert it to the .lite format, but I need it in the .tflite format. Is there any way I can get to the .tflite format from the existing files?

Answer 1: You can rename the .lite model to .tflite and it should work just fine. Alternatively, with toco, you can name the output as it is created:

toco \
  --input_file=tf_files/retrained_graph.pb \
  --output_file=tf_files/optimized
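An alternative to the toco command line is the TFLite converter exposed in the TensorFlow Python API. A minimal sketch, assuming a frozen graph at tf_files/retrained_graph.pb whose input and output tensors are named 'input' and 'final_result' (common defaults of the image retraining script, but verify against your own graph); on older 1.x releases this class lives under tf.contrib.lite instead of tf.lite:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    'tf_files/retrained_graph.pb',     # frozen retrained graph
    input_arrays=['input'],            # assumed input tensor name
    output_arrays=['final_result'])    # assumed output tensor name
tflite_model = converter.convert()

with open('tf_files/retrained_graph.tflite', 'wb') as f:
    f.write(tflite_model)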

How can I get access to intermediate activation maps of the pre-trained models in NiftyNet?

蓝咒 submitted on 2019-12-11 06:34:40
Question: I could download and successfully test the brain parcellation demo of the NiftyNet package. However, this only gives me the final parcellation result of a pre-trained network, whereas I need access to the output of the intermediate layers too. According to the demo, the following line downloads a pre-trained model and a test MR volume:

wget -c https://www.dropbox.com/s/rxhluo9sub7ewlp/parcellation_demo.tar.gz -P ${demopath}

where ${demopath} is the path to the demo folder. Extracting the
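NiftyNet specifics aside, the generic TensorFlow way to inspect intermediate activations of a restored network is to fetch the corresponding tensors by name from the restored graph. A rough sketch, assuming a checkpoint plus meta graph are available; the checkpoint path and tensor names are placeholders to be replaced with the actual names from the parcellation network:

import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt.meta')   # placeholder path
    saver.restore(sess, 'model.ckpt')

    graph = tf.get_default_graph()
    # List operation names to locate the intermediate layer of interest.
    for op in graph.get_operations():
        print(op.name)

    # Fetch an intermediate activation by tensor name (placeholder names).
    intermediate = graph.get_tensor_by_name('some_layer/Relu:0')
    net_input = graph.get_tensor_by_name('input:0')
    # activations = sess.run(intermediate, feed_dict={net_input: volume_batch})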

Why does vgg.prepare() method create 9 copies of the given image?

浪尽此生 submitted on 2019-12-11 04:24:45
Question: I get this result when I apply vgg.prepare() to the following image. I use this line of code:

Image.fromarray(np.uint8(vgg.prepare(pep).reshape(224,224,3)))

and get an image which is a combination of 9 copies of the given image.

Answer 1: I finally got what you did... the only mistake is .reshape. Because the image is transposed, not reshaped, you have to re-transpose to restore the original image.

pep = pep.transpose((1, 2, 0))  # transpose
pep += [103.939, 116.779, 123.68]  # un-normalize
pep = pep
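Put together, a sketch of undoing the preparation step; it assumes vgg.prepare returned a mean-subtracted array in channels-first (3, 224, 224) layout, which is what the transpose in the answer implies:

import numpy as np
from PIL import Image

def unprepare(prepared):
    # Roughly invert VGG-style preprocessing: channels-first, mean-subtracted.
    img = prepared.transpose((1, 2, 0))           # (3, 224, 224) -> (224, 224, 3)
    img = img + [103.939, 116.779, 123.68]        # add the channel means back
    return Image.fromarray(np.clip(img, 0, 255).astype(np.uint8))

# Usage (pep is assumed to be the array returned by vgg.prepare):
# unprepare(np.asarray(pep)).show()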

ValueError: `decode_predictions` expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)

心已入冬 submitted on 2019-12-09 01:08:48
Question: I am using VGG16 with Keras for transfer learning (I have 7 classes in my new model), and as such I want to use the built-in decode_predictions method to output the predictions of my model. However, using the following code:

preds = model.predict(img)
decode_predictions(preds, top=3)[0]

I receive the following error message:

ValueError: decode_predictions expects a batch of predictions (i.e. a 2D array of shape (samples, 1000)). Found array with shape: (1, 7)

Now I wonder why it expects 1000
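decode_predictions is hard-wired to the 1000 ImageNet classes, so a 7-class model needs its own decoding step. A minimal sketch, assuming class_labels lists the 7 class names in the same order as the model's output units (the names below are placeholders):

import numpy as np

class_labels = ['class_0', 'class_1', 'class_2', 'class_3',
                'class_4', 'class_5', 'class_6']   # placeholder labels

def decode_custom_predictions(preds, top=3):
    # preds has shape (samples, 7); return (label, probability) pairs per sample.
    decoded = []
    for sample in preds:
        top_idx = np.argsort(sample)[::-1][:top]
        decoded.append([(class_labels[i], float(sample[i])) for i in top_idx])
    return decoded

# Usage:
# preds = model.predict(img)
# print(decode_custom_predictions(preds, top=3)[0])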

Tensorflow load pre-trained model use different optimizer

纵饮孤独 submitted on 2019-12-04 11:30:48
I want to load a pre-trained model (optimized by AdadeltaOptimizer) and continue training with SGD (GradientDescentOptimizer). The models are saved and loaded with the tensorlayer API.

Save model:

import tensorlayer as tl
tl.files.save_npz(network.all_params, name=model_dir + "model-%d.npz" % global_step)

Load model:

load_params = tl.files.load_npz(path=resume_dir + '/', name=model_name)
tl.files.assign_params(sess, load_params, network)

If I continue training with adadelta, the training loss (cross entropy) looks normal (it starts at a value close to that of the loaded model). However, if I change the
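A common pitfall when switching optimizers is the order of operations: the new optimizer must be added to the graph and all variables initialized before the saved parameters are assigned, otherwise the initializer (needed for the fresh optimizer variables) overwrites the restored weights. A rough sketch of that ordering with tensorlayer, reusing network, resume_dir, and model_name from the snippets above; cost stands for the cross-entropy loss tensor and the learning rate is illustrative:

import tensorflow as tf
import tensorlayer as tl

# 1. Build the graph, including the *new* optimizer, before initializing anything.
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(cost)

with tf.Session() as sess:
    # 2. Initialize all variables (network weights and optimizer slots).
    sess.run(tf.global_variables_initializer())

    # 3. Only now overwrite the network weights with the saved parameters.
    load_params = tl.files.load_npz(path=resume_dir + '/', name=model_name)
    tl.files.assign_params(sess, load_params, network)

    # ... continue the training loop with train_op ...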

Duplicate node name in graph: 'conv2d_0/kernel/Adam'

冷暖自知 submitted on 2019-12-04 11:10:20
I just saved a model with this code:

def train():
    with tf.Session() as sess:
        saver = tf.train.Saver(max_to_keep=2)
        Loss = myYoloLoss([Scale1, Scale2, Scale3], [Y1, Y2, Y3])
        opt = tf.train.AdamOptimizer(2e-4).minimize(Loss)
        init = tf.global_variables_initializer()
        sess.run(init)
        imageNum = 0
        Num = 0
        while(1):
            # get batch input
            batchImg, batchScale1, batchScale2, batchScale3 = getBatchImage(batchSize=BATCHSIZE)
            for epoch in range(75):
                _, epochloss = sess.run([opt, Loss], feed_dict={X: batchImg, Y1: batchScale1, Y2: batchScale2, Y3: batchScale3})
                if(epoch % 15 == 0):
                    print(epochloss)
                imageNum = imageNum +
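The 'Duplicate node name' error typically means the graph, including Adam's slot variables such as conv2d_0/kernel/Adam, is being constructed a second time into the same default graph, for example by importing a saved meta graph after the model has already been built in code. A rough sketch of the usual remedies; the checkpoint paths are placeholders:

import tensorflow as tf

# Option 1: clear the default graph before importing or rebuilding the model,
# so the optimizer's slot variables are only created once.
tf.reset_default_graph()
saver = tf.train.import_meta_graph('model.ckpt.meta')    # placeholder path
with tf.Session() as sess:
    saver.restore(sess, 'model.ckpt')

# Option 2: import into an explicit, fresh graph instead of the shared default one.
graph = tf.Graph()
with graph.as_default():
    saver = tf.train.import_meta_graph('model.ckpt.meta')
    with tf.Session(graph=graph) as sess:
        saver.restore(sess, 'model.ckpt')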