tf-slim

Configuration/Flags for TF-Slim across multiple GPUs/machines

Submitted by 心不动则不痛 on 2020-01-04 04:44:10
Question: I am curious if there are examples of how to run TF-Slim (models/slim) models using deployment/model_deploy.py across multiple GPUs on multiple machines. The documentation is pretty good, but I am missing a couple of pieces. Specifically, what needs to be put in for worker_device and ps_device, and what additionally needs to be run on each machine? An example like the one at the bottom of the distributed how-to page would be awesome: https://www.tensorflow.org/how_tos/distributed/

Source: https://stackoverflow.com
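For reference, here is a minimal sketch (untested) of how those pieces might fit together, assuming the DeploymentConfig from deployment/model_deploy.py in the tensorflow/models slim code; the host names, job layout, and clone counts are hypothetical:

    import tensorflow as tf
    from deployment import model_deploy

    # Hypothetical two-worker, one-ps cluster; run this script once per
    # machine with the matching job_name/task_index.
    cluster = tf.train.ClusterSpec({
        'ps': ['ps0.example.com:2222'],
        'worker': ['worker0.example.com:2222', 'worker1.example.com:2222'],
    })
    job_name, task_index = 'worker', 0
    server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

    if job_name == 'ps':
        server.join()  # parameter servers only serve variables
    else:
        # DeploymentConfig computes the worker/ps device placements for you.
        config = model_deploy.DeploymentConfig(
            num_clones=2,           # clones per worker, e.g. one per local GPU
            replica_id=task_index,  # which worker replica this process is
            num_replicas=2,         # total number of worker machines
            num_ps_tasks=1)         # variables are placed on the 'ps' job

        with tf.device(config.variables_device()):
            global_step = tf.train.get_or_create_global_step()
        # clones = model_deploy.create_clones(config, model_fn)
        # ... then build losses, optimizer, and slim.learning.train as usual ...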

how to log validation loss and accuracy using tfslim

Submitted by 我们两清 on 2019-12-23 02:48:28
Question: Is there any way to log the validation loss and accuracy to TensorBoard when using tf-slim? When I was using Keras, the following code could do this for me:

    model.fit_generator(generator=train_gen(), validation_data=valid_gen(), ...)

The model will then evaluate the validation loss and accuracy after each epoch, which is very convenient. But how do I achieve this using tf-slim? The following steps use primitive TensorFlow, which is not what I want:

    with tf.Session() as sess:
        for step in range(100000):
            sess.run(train_op, feed_dict={X: X_train, y: y_train})
            if n % batch_size * batches
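A common tf-slim pattern for this is to run evaluation in a second process with slim.evaluation.evaluation_loop, which re-evaluates every new checkpoint and writes streaming metrics as TensorBoard summaries. A minimal sketch (untested); the tensors and directories passed in are placeholders for your own pipeline:

    import tensorflow as tf
    slim = tf.contrib.slim

    def evaluate_forever(logits, labels, checkpoint_dir, eval_dir, num_evals=100):
        predictions = tf.argmax(logits, 1)

        # Streaming metrics accumulate over num_evals batches per pass.
        names_to_values, names_to_updates = slim.metrics.aggregate_metric_map({
            'eval/accuracy': slim.metrics.streaming_accuracy(predictions, labels),
            'eval/loss': slim.metrics.streaming_mean(
                tf.losses.sparse_softmax_cross_entropy(labels, logits)),
        })

        # One scalar summary per metric shows up in TensorBoard.
        for name, value in names_to_values.items():
            tf.summary.scalar(name, value)

        slim.evaluation.evaluation_loop(
            master='',
            checkpoint_dir=checkpoint_dir,  # where slim.learning.train saves
            logdir=eval_dir,                # point TensorBoard here as well
            num_evals=num_evals,
            eval_op=list(names_to_updates.values()),
            eval_interval_secs=60)          # poll for new checkpoints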

Reusing layer weights in Tensorflow

Submitted by 半城伤御伤魂 on 2019-12-21 04:51:26
Question: I am using tf.slim to implement an autoencoder. It's fully convolutional with the following architecture:

    [conv, outputs = 1] => [conv, outputs = 15] => [conv, outputs = 25] =>
    => [conv_transpose, outputs = 25] => [conv_transpose, outputs = 15] => [conv_transpose, outputs = 1]

It has to be fully convolutional and I cannot do pooling (limitations of the larger problem). I want to use tied weights, so encoder_W_3 = decoder_W_1_Transposed (so the weights of the first decoder layer are the ones of the last encoder layer, transposed). If I reuse weights the regular way tfslim lets you reuse them, i
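One workaround (a sketch, untested, not from the original thread): create the kernel yourself with tf.get_variable and drop to the raw conv ops for the tied pair, since slim.conv2d does not accept an externally supplied kernel. Conveniently, tf.nn.conv2d_transpose already applies the kernel transposed, so tying reduces to reusing one variable; shapes and names here are hypothetical:

    import tensorflow as tf

    def tied_encoder_decoder(x):
        # Last encoder layer: 15 -> 25 feature maps, 3x3 kernel.
        w3 = tf.get_variable('encoder_w3', shape=[3, 3, 15, 25])
        encoded = tf.nn.relu(tf.nn.conv2d(x, w3, strides=[1, 1, 1, 1],
                                          padding='SAME'))

        # First decoder layer reuses the very same variable; conv2d_transpose
        # maps 25 channels back to 15 with the transposed kernel.
        decoded = tf.nn.relu(tf.nn.conv2d_transpose(
            encoded, w3, output_shape=tf.shape(x),
            strides=[1, 1, 1, 1], padding='SAME'))
        return encoded, decoded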

How to use evaluation_loop with train_loop in tf-slim

Submitted by 自闭症网瘾萝莉.ら on 2019-12-20 10:38:33
Question: I'm trying to implement a few different models and train them on CIFAR-10, and I want to use TF-Slim to do this. It looks like TF-Slim has two main loops that are useful during training: train_loop and evaluation_loop. My question is: what is the canonical way to use these loops? As a follow-up: is it possible to use early stopping with train_loop? Currently I have a model, and my training file train.py looks like this:

    import ...
    train_log_dir = ...
    with tf.device("/cpu:0"):
        images, labels,
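On the follow-up, one way to get early stopping (a sketch, untested, not from the original thread) is a custom train_step_fn passed to slim.learning.train that delegates to the default slim.learning.train_step and flips should_stop once the loss stops improving; the patience logic here is hypothetical:

    import tensorflow as tf
    slim = tf.contrib.slim

    best_loss = [float('inf')]
    bad_steps = [0]
    PATIENCE = 1000  # give up after this many non-improving steps

    def train_step_fn(session, train_op, global_step, train_step_kwargs):
        # Run the normal slim training step first.
        total_loss, should_stop = slim.learning.train_step(
            session, train_op, global_step, train_step_kwargs)

        # Simple patience counter on the training loss.
        if total_loss < best_loss[0]:
            best_loss[0] = total_loss
            bad_steps[0] = 0
        else:
            bad_steps[0] += 1
            if bad_steps[0] > PATIENCE:
                should_stop = True
        return total_loss, should_stop

    # train_op = slim.learning.create_train_op(total_loss, optimizer)
    # slim.learning.train(train_op, train_log_dir, train_step_fn=train_step_fn)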

Decoding tfrecord with tfslim

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-12 07:15:59
Question: I use Python 2.7.13 and TensorFlow 1.3.0 on CPU. I want to use DenseNet (https://github.com/pudae/tensorflow-densenet) for a regression problem. My data contains 60000 JPEG images with 37 float labels for each image. I saved my data into tfrecords files by:

    def Read_Labels(label_path):
        labels_csv = pd.read_csv(label_path)
        labels = np.array(labels_csv)
        return labels[:, 1:]

    def load_image(addr):
        # read an image and resize to (224, 224)
        img = cv2.imread(addr)
        img = cv2.resize(img, (224, 224),
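For the decode side, a minimal sketch (untested) of the usual tf-slim pieces, assuming each record stores a JPEG under 'image/encoded' and the 37 floats under 'image/label'; the feature keys and file pattern are hypothetical:

    import tensorflow as tf
    slim = tf.contrib.slim

    keys_to_features = {
        'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
        'image/format': tf.FixedLenFeature((), tf.string, default_value='jpeg'),
        'image/label': tf.FixedLenFeature([37], tf.float32),
    }
    items_to_handlers = {
        'image': slim.tfexample_decoder.Image(),          # decodes the JPEG
        'label': slim.tfexample_decoder.Tensor('image/label'),
    }
    decoder = slim.tfexample_decoder.TFExampleDecoder(
        keys_to_features, items_to_handlers)

    dataset = slim.dataset.Dataset(
        data_sources='/tmp/train-*.tfrecord',  # hypothetical file pattern
        reader=tf.TFRecordReader,
        decoder=decoder,
        num_samples=60000,
        items_to_descriptions={'image': 'A 224x224 RGB image.',
                               'label': '37 float regression targets.'})

    provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
    image, label = provider.get(['image', 'label'])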

Tensorflow (tf-slim) Model with is_training True and False

Submitted by 你。 on 2019-12-10 17:22:06
Question: I would like to run a given model both on the train set (is_training=True) and on the validation set (is_training=False), specifically with respect to how dropout is applied. Right now the prebuilt models expose a parameter is_training that is passed to the dropout layer when building the network. The issue is that if I call the method twice with different values of is_training, I will get two different networks that do not share weights (I think?). How do I go about getting the two networks to
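The usual answer is to build the graph twice inside one variable scope, with reuse on the second call, so both versions share every weight while dropout differs. A minimal sketch (untested); my_model is a hypothetical slim model function with an is_training argument:

    import tensorflow as tf
    slim = tf.contrib.slim

    def my_model(inputs, is_training):
        net = slim.fully_connected(inputs, 64, scope='fc1')
        net = slim.dropout(net, 0.5, is_training=is_training, scope='drop1')
        return slim.fully_connected(net, 10, activation_fn=None, scope='logits')

    train_inputs = tf.placeholder(tf.float32, [None, 32])
    valid_inputs = tf.placeholder(tf.float32, [None, 32])

    with tf.variable_scope('model'):
        train_logits = my_model(train_inputs, is_training=True)
    with tf.variable_scope('model', reuse=True):
        # Same variables as above, but dropout is disabled on this copy.
        valid_logits = my_model(valid_inputs, is_training=False)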

How to get misclassified files in TF-Slim's eval_image_classifier.py?

Submitted by ☆樱花仙子☆ on 2019-12-08 08:16:38
Question: I'm using a script that comes with TF-Slim to validate my trained model. It works fine, but I'd like to get a list of the misclassified files. The script makes use of https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/slim/python/slim/evaluation.py, but even there I cannot find any options for printing the misclassified files. How can I achieve that?

Answer 1: At a high level, you need to do 3 things: 1) Get your filename from the data loader. If you are using a tf-slim dataset from tfrecords, it is likely that the filenames are not stored in the tfrecord, so you may be out of luck
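To illustrate the idea behind that first step, a minimal sketch (untested): carry a filename tensor alongside each image through the pipeline, then fetch it wherever the prediction disagrees with the label. The filenames, predictions, and labels tensors are hypothetical stand-ins for your own pipeline and model:

    import tensorflow as tf

    def report_misclassified(sess, filenames, predictions, labels, num_batches):
        for _ in range(num_batches):
            names, preds, truth = sess.run([filenames, predictions, labels])
            for name, p, t in zip(names, preds, truth):
                if p != t:
                    print('misclassified: %s predicted=%d actual=%d'
                          % (name, p, t))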
