tensorflow2.0

Run tensorflow model in CPP

Submitted by 我只是一个虾纸丫 on 2021-01-29 04:53:14

Question: I trained my model using tf.keras and converted it to '.pb' with:

    import os
    import tensorflow as tf
    from tensorflow.keras import backend as K
    K.set_learning_phase(0)
    from tensorflow.keras.models import load_model
    model = load_model('model_checkpoint.h5')
    model.save('model_tf2', save_format='tf')

This creates a folder 'model_tf2' containing 'assets', 'variables', and saved_model.pb. I'm trying to load this model in C++. Referring to many other posts (mainly, Using Tensorflow checkpoint to restore

Alternative function for tf.contrib.layers.flatten(x) Tensor Flow

Submitted by 家住魔仙堡 on 2021-01-29 00:12:50

Question: I am using TensorFlow 0.8.0 on a Jetson TK1 with CUDA 6.5 on a 32-bit ARM architecture. Because of that I can't upgrade TensorFlow, and I am having trouble with the flatten function:

    x = tf.placeholder(dtype = tf.float32, shape = [None, 28, 28])
    y = tf.placeholder(dtype = tf.int32, shape = [None])
    images_flat = tf.contrib.layers.flatten(x)

The error I get at this point is AttributeError: 'module' object has no attribute 'flatten'. Is there any alternative to this function that may be
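The usual replacement for tf.contrib.layers.flatten on old TensorFlow versions is a plain reshape, e.g. tf.reshape(x, [-1, 28 * 28]), where -1 keeps the batch dimension variable. A minimal pure-Python sketch of what flatten does (the function and toy data here are illustrative stand-ins, not TensorFlow APIs):

```python
# Sketch of "flatten": collapse all dimensions except the first (batch) one.
# Equivalent in TF 0.8 terms: images_flat = tf.reshape(x, [-1, 28 * 28])

def flatten(batch):
    """Turn each [H, W] image in the batch into a single flat list."""
    return [[px for row in image for px in row] for image in batch]

# A batch of two 3x3 "images" standing in for the [None, 28, 28] placeholder.
batch = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]],
         [[9, 8, 7], [6, 5, 4], [3, 2, 1]]]

flat = flatten(batch)
print(len(flat), len(flat[0]))  # 2 9
```

The batch dimension is untouched; only the trailing image dimensions are merged, which is exactly what the -1 in the reshape expresses.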

reading a protobuf created with TF2 using TF1

Submitted by 生来就可爱ヽ(ⅴ<●) on 2021-01-28 21:13:21

Question: I have a model stored as an HDF5 file which I export to a protobuf (PB) file using saved_model.save, like this:

    from tensorflow import keras
    import tensorflow as tf
    model = keras.models.load_model("model.hdf5")
    tf.saved_model.save(model, './output_dir/')

This works fine, and the result is a saved_model.pb file which I can later view with other software with no issues. However, when I try to import this PB file using TensorFlow 1, my code fails. As PB is supposed to be a universal format, this

Keras Nan value when computing the loss

Submitted by 耗尽温柔 on 2021-01-28 21:10:38

Question: My question is related to this one. I am working to implement the method described in the article https://drive.google.com/file/d/1s-qs-ivo_fJD9BU_tM5RY8Hv-opK4Z-H/view . The final algorithm to use is here (it is on page 6): d are unit vectors, xi is a non-zero number, and D is the loss function (sparse cross-entropy in my case). The idea is to do adversarial training by modifying the data in the direction where the network is most sensitive to small changes, and training the network with the
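The adversarial step described above (perturb the input by xi along the normalized input gradient d, then train on the perturbed data) can be sketched on a toy loss. The quadratic loss and analytic gradient below are hypothetical stand-ins for the sparse cross-entropy and backprop gradient in the question:

```python
import math

def loss(x):
    # Toy stand-in for D: a simple quadratic loss.
    return sum(v * v for v in x)

def grad(x):
    # Analytic gradient of the toy loss (would come from backprop in Keras).
    return [2.0 * v for v in x]

def adversarial_example(x, xi=0.1):
    """x' = x + xi * d, with d the unit vector along the input gradient."""
    g = grad(x)
    norm = math.sqrt(sum(v * v for v in g))
    d = [v / norm for v in g]            # direction of maximal sensitivity
    return [v + xi * dv for v, dv in zip(x, d)]

x = [3.0, 4.0]
x_adv = adversarial_example(x)
print(x_adv)                 # input nudged along d
print(loss(x_adv) > loss(x)) # the loss increases along d, as intended
```

Training then proceeds on x_adv with the original (unmodified) labels, which is what makes this an adversarial-training scheme rather than label noise.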

Keras gradient wrt something else

Submitted by 亡梦爱人 on 2021-01-28 11:24:36

Question: I am working to implement the method described in the article https://drive.google.com/file/d/1s-qs-ivo_fJD9BU_tM5RY8Hv-opK4Z-H/view . The final algorithm to use is here (it is on page 6): d are unit vectors, xi is a non-zero number, and D is the loss function (sparse cross-entropy in my case). The idea is to do adversarial training by modifying the data in the direction where the network is most sensitive to small changes, and training the network with the modified data but with the same
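The distinctive part of this method is differentiating the loss with respect to the *input* rather than the weights (in TF 2.x one would watch the input tensor with tf.GradientTape). A central-finite-difference sketch on a toy model, with all names hypothetical:

```python
def numeric_grad(f, x, eps=1e-5):
    """Central finite-difference gradient of f with respect to x."""
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += eps
        xm = list(x); xm[i] -= eps
        g.append((f(xp) - f(xm)) / (2 * eps))
    return g

# Fixed "weights" w; the loss is differentiated w.r.t. the input x, not w.
w = [1.0, -2.0]

def loss(x):
    return (w[0] * x[0] + w[1] * x[1]) ** 2

g = numeric_grad(loss, [1.0, 1.0])
print(g)  # close to the analytic gradient 2*(w.x)*w = [-2.0, 4.0]
```

The same pattern holds in Keras: freeze nothing, just request the gradient of D with respect to the input batch instead of the trainable variables.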

How to initialize the model with certain weights?

Submitted by 让人想犯罪 __ on 2021-01-28 11:22:27

Question: I am using the "stateful_clients" example from the tensorflow-federated examples. I want to use my pretrained model weights to initialize the model, so I use model.load_weights(init_weight), but it doesn't seem to work: the validation accuracy in the first round is still low. How can I solve the problem?

    def tff_model_fn():
        """Constructs a fully initialized model for use in federated averaging."""
        keras_model = get_five_layers_cnn([28, 28, 1])
        keras_model.load_weights(init_weight)

Computing gradient of the model with modified weights

Submitted by 别等时光非礼了梦想. on 2021-01-28 11:16:47

Question: I was implementing Sharpness-Aware Minimization (SAM) using TensorFlow. The algorithm, simplified, is as follows:

    1. Compute the gradient using the current weights W
    2. Compute ε according to the equation in the paper
    3. Compute the gradient using the weights W + ε
    4. Update the model using the gradient from step 3

I have implemented steps 1 and 2 already, but I am having trouble implementing step 3 with the code below:

    def train_step(self, data, rho=0.05, p=2, q=2):
        if (1 / p) + (1 / q) != 1:
            raise tf.python.framework.errors
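The four SAM steps above can be sketched end to end on a toy quadratic loss. This is a hand-rolled illustration with an analytic gradient, not the asker's Keras train_step; rho and the p = q = 2 norm match the question's defaults, under which ε = rho * g / ||g||₂:

```python
import math

def grad(w):
    # Analytic gradient of the toy loss L(w) = sum(w_i^2).
    return [2.0 * v for v in w]

def sam_step(w, lr=0.1, rho=0.05):
    g = grad(w)                                          # step 1: grad at W
    norm = math.sqrt(sum(v * v for v in g))
    eps = [rho * v / norm for v in g]                    # step 2: epsilon (p=2)
    g_adv = grad([wi + ei for wi, ei in zip(w, eps)])    # step 3: grad at W + eps
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]    # step 4: descend with g_adv

w = [3.0, 4.0]
w_new = sam_step(w)
print(w_new)
```

The key point for step 3 is that the perturbed weights W + ε are used only to *evaluate* the gradient; the update in step 4 is applied to the original W, so in Keras one must restore the unperturbed variables before calling the optimizer.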

Converting saved_model.pb to model.tflite

Submitted by 旧街凉风 on 2021-01-28 08:48:43

Question: TensorFlow version: 2.2.0. OS: Windows 10. I am trying to convert a saved_model.pb to a tflite file. Here is the code I am running:

    import tensorflow as tf
    # Convert
    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir='C:\Data\TFOD\models\ssd_mobilenet_v2_quantized')
    tflite_model = converter.convert()
    fo = open("model.tflite", "wb")
    fo.write(tflite_model)
    fo.close()

This code gives an error while converting: File "C:\Users\Mr.Ace\AppData\Roaming\Python\Python38\site-packages

How to generate custom mini-batches using Tensorflow 2.0, such as those in the paper “In defense of the triplet loss”?

Submitted by 给你一囗甜甜゛ on 2021-01-28 07:03:55

Question: I want to implement a custom mini-batch generator in TensorFlow 2.0 using the tf.data.Dataset API. Concretely, I have image data: 100 classes with ~200 examples each. For each mini-batch, I want to randomly sample P classes, and K images from each class, for a total of P*K examples in a mini-batch (as described in the paper In Defense of the Triplet Loss for Person Re-Identification). I've been searching through the documentation for tf.data.Dataset, but can't seem to find the right method. I've
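The P×K sampling scheme itself is straightforward to write as a plain Python generator, which in TF 2.0 could then be wrapped with tf.data.Dataset.from_generator. A sketch with a hypothetical dataset laid out as {class_label: [example, ...]}:

```python
import random

def pk_batch(dataset, P=4, K=3, rng=random):
    """Sample P distinct classes, then K distinct examples from each.

    Returns a list of P*K (label, example) pairs, as in the batch
    construction of "In Defense of the Triplet Loss".
    """
    classes = rng.sample(list(dataset), P)         # P distinct classes
    batch = []
    for c in classes:
        batch.extend((c, ex) for ex in rng.sample(dataset[c], K))  # K per class
    return batch

# Toy dataset: 10 classes with 5 examples each (stands in for 100 x ~200).
dataset = {c: [f"img_{c}_{i}" for i in range(5)] for c in range(10)}
batch = pk_batch(dataset)
print(len(batch))  # 12
```

Each batch is then guaranteed to contain K positives per anchor (needed for batch-hard triplet mining), which is the property a uniform random shuffle would not provide.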