tensorflow-estimator

estimator.predict raises “ValueError: None values not supported”

Submitted by 狂风中的少年 on 2019-12-11 02:53:45
Question: So I basically copy-pasted the code from the TensorFlow tutorial and adapted it to this model, which tries to train a neural network to identify a "stairs" shape, as shown here: (source: gormanalysis.com)

    import numpy as np
    import tensorflow as tf
    import _pickle as cPickle

    with open("var_x.txt", "rb") as fp:  # Unpickling
        var_x = cPickle.load(fp)
    with open("var_y.txt", "rb") as fp:  # Unpickling
        var_y = cPickle.load(fp)

    # Declare list of features, we only have one real-valued feature
    def model_fn
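A common cause of this particular error, for what it's worth: during predict() the labels argument of model_fn is None, so any op built from labels (typically the loss) raises "ValueError: None values not supported". A minimal sketch of a model_fn that returns before touching labels in PREDICT mode, assuming a single numeric feature named "x" (the names are illustrative, not taken from the post):

    import tensorflow as tf

    def model_fn(features, labels, mode):
        # Toy linear model over the single feature "x".
        w = tf.get_variable("w", [1], dtype=tf.float64)
        b = tf.get_variable("b", [1], dtype=tf.float64)
        predictions = w * features["x"] + b

        # Return BEFORE using `labels`: it is None during predict().
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(mode, predictions=predictions)

        loss = tf.reduce_sum(tf.square(predictions - labels))
        train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)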

EXCLUDED from export because they cannot be served via TensorFlow Serving APIs

Submitted by 狂风中的少年 on 2019-12-11 02:43:39
Question: TensorFlow version 1.10, using DNNClassifier and tf.estimator.FinalExporter. I'm using the Iris example from the TF blog. I defined the following code:

    # The CSV features in our training & test data.
    COLUMN_NAMES = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']
    FEATURE_COLUMNS = COLUMN_NAMES[:4]
    INPUT_COLUMNS = [
        tf.feature_column.numeric_column(column) for column in COLUMN_NAMES
    ]

    def serving_input_receiver_fn():
        """Build the serving inputs."""
        inputs = {}
        for feat in INPUT
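One thing worth checking, offered as a guess from the excerpt: INPUT_COLUMNS is built from COLUMN_NAMES, which still contains the label 'Species'. A serving input receiver should expose only the features; tensors tied to the label (or to training-only ops) are exactly the kind of output that gets excluded from the export. A sketch over the four feature columns only:

    def serving_input_receiver_fn():
        """Build the serving inputs from the four feature columns only."""
        inputs = {
            feat: tf.placeholder(shape=[None], dtype=tf.float32, name=feat)
            for feat in FEATURE_COLUMNS  # deliberately excludes 'Species'
        }
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)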

TensorFlow: how to export estimator using TensorHub module?

Submitted by 扶醉桌前 on 2019-12-10 17:04:18
Question: I have an estimator using a TensorHub text_embedding column, like so:

    my_dataframe = pandas.DataFrame(columns=["title"])
    # populate data
    labels = []  # populate labels with 0|1

    embedded_text_feature_column = hub.text_embedding_column(
        key="title",
        module_spec="https://tfhub.dev/google/nnlm-en-dim128-with-normalization/1")

    estimator = tf.estimator.LinearClassifier(
        feature_columns=[embedded_text_feature_column],
        optimizer=tf.train.FtrlOptimizer(
            learning_rate=0.1,
            l1_regularization_strength
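One common approach, sketched under the assumption that the model takes raw "title" strings: give the exporter a serving input receiver whose placeholder is a string tensor, so the hub module runs inside the exported graph.

    def serving_input_receiver_fn():
        # Raw title strings come in; the hub text_embedding_column embeds
        # them inside the exported graph.
        titles = tf.placeholder(tf.string, shape=[None], name="title")
        return tf.estimator.export.ServingInputReceiver(
            features={"title": titles},
            receiver_tensors={"title": titles})

    estimator.export_savedmodel("export_dir", serving_input_receiver_fn)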

How to create a tf.feature_column by multiplying two other tf.feature_columns?

Submitted by 落爺英雄遲暮 on 2019-12-10 10:49:42
Question: In TensorFlow there is already a function for creating a feature by crossing columns, tf.feature_column.crossed_column, but it is meant for categorical data. What about numeric data? For example, there are already two columns:

    age = tf.feature_column.numeric_column("age")
    education_num = tf.feature_column.numeric_column("education_num")

If I want to create a third and a fourth feature column based on age and education_num like this:

    my_feature = age * education_num
    my_another_feature = age * age

How can it
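Feature column objects can't be multiplied directly. One workaround, sketched with assumed names ("age_x_edu" is invented here): compute the product inside the input_fn and expose it as an ordinary numeric column; for a transform of a single column, numeric_column's normalizer_fn also works.

    import tensorflow as tf

    def input_fn(df):  # df: a pandas DataFrame with 'age' and 'education_num'
        features = {
            "age": df["age"].values,
            "education_num": df["education_num"].values,
        }
        # Derived feature computed up front, then treated as a normal column.
        # (Features only here; pair with labels for training.)
        features["age_x_edu"] = features["age"] * features["education_num"]
        return tf.data.Dataset.from_tensor_slices(features).batch(32)

    my_feature = tf.feature_column.numeric_column("age_x_edu")
    # Single-column transform: square 'age' via normalizer_fn.
    my_another_feature = tf.feature_column.numeric_column(
        "age", normalizer_fn=lambda x: x * x)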

Custom eval_metric_ops in Estimator in Tensorflow

Submitted by 寵の児 on 2019-12-10 06:45:04
Question: I am trying to add the r-squared to the eval_metric_ops in my estimator like this:

    def model_fn(features, labels, mode, params):
        predict = prediction(features, params, mode)
        loss = my_loss_fn
        eval_metric_ops = {
            'rsquared': tf.subtract(
                1.0,
                tf.div(tf.reduce_sum(tf.squared_difference(labels, predict)),
                       tf.reduce_sum(tf.squared_difference(labels, tf.reduce_mean(labels)))),
                name='rsquared')
        }
        train_op = tf.contrib.layers.optimize_loss(
            loss=loss, global_step=global_step, learning_rate=0.1, optimizer
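The usual stumbling block here: every value in eval_metric_ops must be a (value_tensor, update_op) pair as returned by the tf.metrics functions; a bare tensor like the tf.subtract above is rejected. A hedged sketch that wraps the per-batch r-squared in a streaming mean:

    def r_squared_metric(labels, predictions):
        ss_res = tf.reduce_sum(tf.squared_difference(labels, predictions))
        ss_tot = tf.reduce_sum(tf.squared_difference(labels, tf.reduce_mean(labels)))
        # tf.metrics.mean returns the (value, update_op) tuple Estimator expects.
        return tf.metrics.mean(1.0 - ss_res / ss_tot)

    eval_metric_ops = {'rsquared': r_squared_metric(labels, predict)}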

Tensorflow Estimator: using predict() function in separate script

Submitted by 馋奶兔 on 2019-12-08 12:51:01
Question: I have successfully (I hope) trained and evaluated a model using tf.Estimator, reaching a train/eval accuracy of around 83-85%. Now I would like to test my model on a separate dataset using the predict() function call in the Estimator class, preferably from a separate script. I've looked at this, which says that I need to export as a SavedModel, but is this really necessary? Looking at the documentation for the Estimator class, it seems like I can just pass the path
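Exporting a SavedModel isn't strictly required for this: rebuilding the Estimator with the same model_fn and model_dir in the new script lets predict() restore the latest checkpoint automatically. A sketch, where the my_model module and the feature name "x" are assumed placeholders:

    import tensorflow as tf
    from my_model import model_fn  # assumed module containing your model_fn

    estimator = tf.estimator.Estimator(model_fn=model_fn,
                                       model_dir='/path/to/model_dir')

    def predict_input_fn():
        # Assumed feature name "x"; replace with your model's features.
        return tf.data.Dataset.from_tensor_slices({"x": [[0.5], [1.2]]}).batch(2)

    for pred in estimator.predict(input_fn=predict_input_fn):
        print(pred)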

How to get the last global_step from a tf.estimator.Estimator

Submitted by 梦想的初衷 on 2019-12-07 13:20:09
Question: How can I obtain the last global_step from a tf.estimator.Estimator after train(...) finishes? For instance, a typical Estimator-based training routine might be set up like this:

    n_epochs = 10
    model_dir = '/path/to/model_dir'

    def model_fn(features, labels, mode, params):
        # some code to build the model
        pass

    def input_fn():
        ds = tf.data.Dataset()  # obviously with specifying a data source
        # manipulate the dataset
        return ds

    run_config = tf.estimator.RunConfig(model_dir=model_dir)
    estimator = tf
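Two options that should work after train(...) returns, sketched here; both read the variable under the conventional name 'global_step' that Estimator creates:

    # 1) Through the estimator itself:
    last_step = estimator.get_variable_value('global_step')

    # 2) Straight from the latest checkpoint on disk:
    ckpt = tf.train.latest_checkpoint(model_dir)
    last_step = tf.train.load_variable(ckpt, 'global_step')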

Tensorflow MNIST Estimator: batch size affects the graph's expected input?

Submitted by 只谈情不闲聊 on 2019-12-07 02:07:33
Question: I have followed the TensorFlow MNIST Estimator tutorial and trained my MNIST model. It seems to work fine, but if I visualize it on TensorBoard I see something weird: the input shape that the model requires is 100 x 784. Here is a screenshot: as you can see in the right box, the expected input size is 100x784. I thought I would see ?x784 there. Now, I did use 100 as the batch size in training, but in the Estimator model function I also specified that the amount of input samples size is
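What is likely going on, hedged: the 100 is baked in by the training input_fn's batching, not by the model itself, and the graph TensorBoard shows is the training graph. If the goal is an exported graph that accepts any batch size, a serving input receiver with a None batch dimension is the usual route. A sketch, assuming the tutorial's feature key "x":

    def serving_input_receiver_fn():
        # Leave the batch dimension unspecified so the exported graph
        # accepts ?x784 rather than 100x784.
        images = tf.placeholder(tf.float32, shape=[None, 784], name='images')
        return tf.estimator.export.ServingInputReceiver(
            features={'x': images}, receiver_tensors={'images': images})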

Graph optimizations on a TensorFlow servable created using tf.Estimator

Submitted by 百般思念 on 2019-12-06 18:27:50
Question: Context: I have a simple classifier based on tf.estimator.DNNClassifier that takes text and outputs probabilities over intent tags. I am able to train and export the model to a servable, as well as serve it using TensorFlow Serving. The problem is that this servable is too big (around 1GB), so I wanted to try some TensorFlow graph transforms to reduce the size of the files being served. Problem: I understand how to take the saved_model.pb and use freeze_model.py to create a
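If the freeze step already works, the Graph Transform tool can be applied to the frozen GraphDef before rebuilding a SavedModel around it. A sketch with an assumed transform list and assumed input/output tensor names:

    from tensorflow.tools.graph_transforms import TransformGraph

    # quantize_weights is typically the big size win (~4x on float weights).
    transforms = ['strip_unused_nodes',
                  'fold_constants(ignore_errors=true)',
                  'fold_batch_norms',
                  'quantize_weights']

    optimized_graph_def = TransformGraph(
        frozen_graph_def,          # the GraphDef from your freeze step
        inputs=['input_tensor'],   # assumed names; use your graph's real ones
        outputs=['scores'],
        transforms=transforms)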

Tensorflow : Predict in Recurrent Neural Networks for Drawing Classification tutorial

Submitted by 三世轮回 on 2019-12-06 12:01:06
I used the tutorial code from https://www.tensorflow.org/tutorials/recurrent_quickdraw and everything works fine until I tried to make a prediction instead of just evaluating. I wrote a new input function for prediction, based on the code in create_dataset.py:

    def predict_input_fn():
        def parse_line(stroke_points):
            """Parse an ndjson line and return ink (as np array) and classname."""
            inkarray = json.loads(stroke_points)
            stroke_lengths = [len(stroke[0]) for stroke in inkarray]
            total_points = sum(stroke_lengths)
            np_ink = np.zeros((total_points, 3), dtype=np.float32)
            current_t = 0
            for stroke in inkarray
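For the predict() call itself, one hedged sketch: once parse_line has produced the np_ink array, wrap it in a one-example input_fn. The 'ink' and 'shape' feature names below follow the quickdraw tutorial's convention and are assumed here, as is the already-built estimator:

    import numpy as np
    import tensorflow as tf

    def make_predict_input_fn(np_ink):
        def input_fn():
            # A batch of one drawing; input_fn may return features alone
            # (no labels) when used with predict().
            return {
                'ink': tf.constant(np_ink[np.newaxis, ...]),
                'shape': tf.constant([np_ink.shape], dtype=tf.int64),
            }
        return input_fn

    predictions = estimator.predict(input_fn=make_predict_input_fn(np_ink))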