tensorflow-estimator

Tensorflow v1.10: store images as byte strings or per channel?

Submitted by 心已入冬 on 2019-12-18 07:08:18
Question: Context: it is known that, at the moment, TF's TFRecord documentation leaves something to be desired. My question concerns what is optimal for storing a sequence, its per-element class probabilities, and some (context?) information (e.g. the name of the sequence) as a TFRecord. Namely, this question considers storing the sequence and class probabilities as channels versus as a byte string, and whether the meta information should go in as features of a tf.train.Example or as the context of a tf.train.SequenceExample…
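
For reference, a minimal sketch of the two encodings being weighed; the feature names, shapes, and dtypes here are assumptions, not from the original post:

    import numpy as np
    import tensorflow as tf

    # Hypothetical data: (T, C) per-element class probabilities for one sequence.
    probs = np.random.rand(100, 5).astype(np.float32)

    # Option A: store as a flattened float-list feature; the (T, C) shape must be
    # re-imposed at parse time (e.g. tf.reshape after tf.parse_single_example).
    as_floats = tf.train.Example(features=tf.train.Features(feature={
        'probs': tf.train.Feature(float_list=tf.train.FloatList(value=probs.ravel())),
        'name': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'seq_0001'])),
    }))

    # Option B: store the raw buffer as a single byte string; compact to write,
    # but dtype and shape must be known (or stored alongside) to tf.decode_raw it.
    as_bytes = tf.train.Example(features=tf.train.Features(feature={
        'probs_raw': tf.train.Feature(bytes_list=tf.train.BytesList(value=[probs.tobytes()])),
        'name': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'seq_0001'])),
    }))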

Test a tensorflow cnn model after the training

Submitted by 余生长醉 on 2019-12-13 08:19:04
Question: I created a convolutional neural network model and implemented the training, and now I have to create a function to run the model in test mode, but I have no idea how I could do it. I have two datasets, one for training and one for testing, so I need to find a way to test the model on the test dataset. I could load the test dataset in the same way as the training dataset, but then I would not know how to run the test on the already-trained model. This is the model function…
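
With the Estimator API, testing a trained model is usually a separate call to evaluate() (for metrics) or predict() (for raw outputs), each with its own input function. A minimal sketch, assuming the same model_fn and parse_fn used for training and a TFRecord test file (all three names are assumptions):

    import tensorflow as tf

    def test_input_fn():
        dataset = tf.data.TFRecordDataset(['test.tfrecords'])  # assumed test file
        dataset = dataset.map(parse_fn)  # the same parse_fn used for training
        return dataset.batch(32)

    estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='model_dir')
    metrics = estimator.evaluate(input_fn=test_input_fn)  # runs the EVAL branch
    print(metrics)

Because the Estimator reloads the latest checkpoint from model_dir, no extra code is needed to load the trained weights.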

Invalid Argument error while using keras model API inside an estimator model_fn

Submitted by 为君一笑 on 2019-12-12 16:44:35
Question: The model_fn for the custom estimator which I have built is shown below:

    def _model_fn(features, labels, mode):
        """Mask RCNN model function."""
        self.keras_model = self.build_graph(mode, config)
        outputs = self.keras_model(features)  # ERROR STATEMENT
        # outputs = self.keras_model(list(features.values()))  # Same ERROR with this statement

        # Predictions
        if mode == tf.estimator.ModeKeys.PREDICT:
            ...  # Defining Prediction Spec

        # Training
        if mode == tf.estimator.ModeKeys.TRAIN:
            # Defining Loss and…
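
A common source of this error is calling a multi-input Keras model with a feature dict whose ordering does not match the model's inputs. One hedged sketch is to order the features explicitly by the model's input names (this assumes the feature keys match those names):

    # inside _model_fn, after build_graph has created the Keras model
    input_names = [inp.name.split(':')[0] for inp in self.keras_model.inputs]
    ordered = [features[name] for name in input_names]  # assumed key/name match
    outputs = self.keras_model(ordered)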

TensorFlow: What are the input nodes for tf.Estimator models

Submitted by 梦想的初衷 on 2019-12-12 14:30:00
Question: I trained a Wide & Deep model using the pre-made Estimator class (DNNLinearCombinedClassifier), essentially by following the tutorial on tensorflow.org. I wanted to do inference/serving, but without using tensorflow-serving. This basically comes down to feeding some test data to the correct input tensor and retrieving the output tensor. However, I am not sure what the input nodes/layer should be. In the tensorflow graph (graph.pbtxt), the following nodes seem relevant. But they are also…
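
One way to avoid guessing node names in graph.pbtxt is to export a SavedModel, whose signature records the input and output tensor names explicitly. A minimal sketch, assuming the training-time feature columns are available in a list called feature_columns:

    import tensorflow as tf

    feature_spec = tf.feature_column.make_parse_example_spec(feature_columns)
    serving_input_fn = (
        tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec))
    export_dir = estimator.export_savedmodel('export_base', serving_input_fn)

    # Then inspect the exact input/output tensor names from the shell:
    #   saved_model_cli show --dir <export_dir> --all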

Tensorflow Estimator Graph Size Limitation for large dimensions of input

Submitted by 故事扮演 on 2019-12-12 06:49:29
Question: I think my entire training data is being stored inside the graph, which is hitting the 2 GB limit. How can I use feed_dict with the Estimator API? FYI, I am using the tensorflow Estimator API down the line for training my model.

Input function:

    def input_fn(X_train, epochs, batch_size):
        '''input X_train is the scipy sparse matrix of large input
        dimensions (200000) and number of rows = 600000'''
        X_train_tf = tf.data.Dataset.from_tensor_slices(
            (convert_sparse_matrix_to_sparse_tensor(X_train, tf.float32)))…
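
Dataset.from_tensor_slices embeds its arguments in the graph as constants, which is exactly what hits the 2 GB protobuf limit. A common workaround is tf.data.Dataset.from_generator, which streams rows in at run time instead; a minimal sketch for a SciPy sparse matrix, densifying one row at a time (the dimensions come from the question, the rest is an assumption):

    import tensorflow as tf

    def input_fn(X_train, epochs, batch_size):
        n_features = X_train.shape[1]  # 200000 in the question

        def row_generator():
            for i in range(X_train.shape[0]):
                yield X_train[i].toarray().ravel()  # one dense row at a time

        dataset = tf.data.Dataset.from_generator(
            row_generator,
            output_types=tf.float32,
            output_shapes=tf.TensorShape([n_features]))
        return dataset.repeat(epochs).batch(batch_size)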

Loop in tensorflow

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-11 18:12:42
Question: I changed my question to explain my issue better. I have a function output_image = my_func(x), where x should be of shape (1, 4, 4, 1). Please help me fix the error in this part:

    out = tf.Variable(tf.zeros([1, 4, 4, 3]))
    index = tf.constant(0)

    def condition(index):
        return tf.less(index, tf.subtract(tf.shape(x)[3], 1))

    def body(index):
        out[:, :, :, index].assign(my_func(x[:, :, :, index]))
        return tf.add(index, 1), out

    out = tf.while_loop(condition, body, [index])

    ValueError: The two structures…
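
The ValueError arises because body returns two values, (index + 1, out), while loop_vars lists only [index]; tf.while_loop requires the two structures to match. For a per-channel map like this, a loop may not be needed at all; a minimal sketch using tf.map_fn instead (my_func here is a stand-in for the real op, assumed shape-preserving):

    import tensorflow as tf

    def my_func(channel):
        return channel * 2.0  # stand-in for the real per-channel function

    x = tf.placeholder(tf.float32, [1, 4, 4, 3])

    channels_first = tf.transpose(x, [3, 0, 1, 2])  # (3, 1, 4, 4)
    mapped = tf.map_fn(my_func, channels_first)     # my_func sees (1, 4, 4) slices
    out = tf.transpose(mapped, [1, 2, 3, 0])        # back to (1, 4, 4, 3)

If my_func expects a trailing channel axis of 1, add tf.expand_dims/tf.squeeze around the call inside the mapped function.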

Estimator.predict() has Shape Issues?

Submitted by 烈酒焚心 on 2019-12-11 16:25:32
Question: I can train and evaluate a Tensorflow Estimator model without any problems. When I do prediction, this error arises:

    InvalidArgumentError (see above for traceback): output_shape has incorrect
    number of elements: 68 should be: 2
    [[Node: output = SparseToDense[T=DT_INT32, Tindices=DT_INT32,
      validate_indices=true, _device="/job:localhost/replica:0/task:0/device:CPU:0"]
      (ToInt32, ToInt32_1, ToInt32_2, bidirectional_rnn/bidirectional_rnn/fw/fw/time)]]

All of the model functions use the same…
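
Errors like this often trace back to a SparseToDense op whose output_shape was fixed from the training-time batch and no longer matches the batch seen by predict(). A hedged sketch of deriving the shape dynamically instead (sp is a stand-in name for the sparse tensor being densified):

    # Instead of a fixed output_shape captured at training time:
    #   dense = tf.sparse_to_dense(sp.indices, fixed_shape, sp.values)
    # let the shape come from the sparse tensor itself, so any batch size works:
    dense = tf.sparse_tensor_to_dense(sp, default_value=0)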

How do I implement early stopping with Estimator API for distributed training?

Submitted by 怎甘沉沦 on 2019-12-11 16:04:29
Question: I'm using the Tensorflow 1.4 Estimator and Dataset APIs for distributed training on Google Cloud Platform. I want to implement early stopping to prevent overfitting during training, and looked at the early stopping hooks documented below:

https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/make_early_stopping_hook
https://www.tensorflow.org/api_docs/python/tf/estimator/experimental/stop_if_no_decrease_hook

But none of these hooks supports distributed training, so the question…
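
For reference, the single-machine usage these hooks target looks roughly like the sketch below (the metric name and patience values are assumptions); the distributed complication the question raises is that the hook must run where it can see up-to-date evaluation metrics:

    import tensorflow as tf

    early_stopping = tf.estimator.experimental.stop_if_no_decrease_hook(
        estimator,
        metric_name='loss',               # assumed metric
        max_steps_without_decrease=1000,  # assumed patience
        min_steps=100)

    train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn,
                                        hooks=[early_stopping])
    eval_spec = tf.estimator.EvalSpec(input_fn=eval_input_fn)
    tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)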

How to remove untrainable variables when saving checkpoint with tensorflow estimator?

Submitted by 青春壹個敷衍的年華 on 2019-12-11 14:22:14
Question: I have a tf model to train with a non-trainable embedding layer whose size is larger than 10 GB. I do not want to save this variable to my checkpoint file, because it takes too much time and space. Is it possible for me to save a ckpt without this non-trainable variable and still use tf.estimator normally? When training the model in distributed mode, the parameter server saves this variable, and it takes too much time to synchronize it. Is it possible to avoid this problem? Values of this variable…
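
With an Estimator, the set of variables written to the checkpoint can be narrowed by passing a custom Saver through a Scaffold in the EstimatorSpec. A minimal sketch that saves only trainable variables (loss and train_op are the usual model_fn values):

    # inside model_fn, after the graph is built
    saver = tf.train.Saver(var_list=tf.trainable_variables())  # omits the frozen embedding
    scaffold = tf.train.Scaffold(saver=saver)
    return tf.estimator.EstimatorSpec(
        mode=mode, loss=loss, train_op=train_op, scaffold=scaffold)

On restore, the embedding then has to be initialized by other means, for example a Scaffold init_fn that loads it from its own file; the parameter-server synchronization cost raised in the question is a separate issue.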

Hyperparameter tuning locally — Tensorflow Google Cloud ML Engine

Submitted by 吃可爱长大的小学妹 on 2019-12-11 06:08:50
Question: Is it possible to tune hyperparameters using ML Engine to train the model locally? The documentation only mentions training with hyperparameter tuning in the cloud (submitting a job), with no mention of doing so locally. Otherwise, is there another commonly used hyperparameter tuner that passes command-line arguments to task.py, as in the census estimator tutorial? https://github.com/GoogleCloudPlatform/cloudml-samples/tree/master/census

Answer 1: You cannot perform HPTuning (Bayesian…
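
Locally, the same contract the tutorial relies on (hyperparameters arriving as command-line arguments to task.py) can be driven by a plain loop; a minimal grid-search sketch, assuming task.py accepts --learning-rate, --hidden-units, and --job-dir flags (flag names are assumptions modeled on the census sample):

    import itertools
    import subprocess

    learning_rates = [0.001, 0.01, 0.1]
    hidden_units = ['64,32', '128,64']

    for i, (lr, units) in enumerate(itertools.product(learning_rates, hidden_units)):
        subprocess.check_call([
            'python', '-m', 'trainer.task',
            '--learning-rate', str(lr),
            '--hidden-units', units,
            '--job-dir', 'output/run_%d' % i,  # separate output dir per trial
        ])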