tensorflow2.0

module 'tensorflow' has no attribute 'random'

我怕爱的太早我们不能终老 submitted on 2021-01-29 14:39:07
Question: I'm trying to set a random seed using TensorFlow 2.1, but this error appears: module 'TensorFlow' has no attribute 'random'.

Answer 1: I suspect that you think you're using TensorFlow 2.x but are actually running an earlier version of TensorFlow. It's easy to check:

    import tensorflow as tf
    print(tf.__version__)

This should produce output like:

    2.1.0

If not, you have the wrong TensorFlow version installed. It's also possible that you have multiple versions of Python installed. For example: Python
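A minimal sketch of the check described in the answer, plus the TF 2.x seeding call the question is after; the fallback branch for 1.x installs is my assumption, not part of the original answer:

    import tensorflow as tf

    print(tf.__version__)  # expect something like 2.1.0

    # tf.random.set_seed exists only in TensorFlow 2.x; old 1.x builds that
    # lack the tf.random module raise the AttributeError from the question.
    if hasattr(tf, "random") and hasattr(tf.random, "set_seed"):
        tf.random.set_seed(42)   # TF 2.x
    else:
        tf.set_random_seed(42)   # TF 1.x fallback (assumes a 1.x install)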

ValueError: Input 0 of layer cu_dnnlstm is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: [None, 175]

核能气质少年 submitted on 2021-01-29 13:58:06
Question: I am experimenting with CuDNNLSTMs and, even though I am following a tutorial, I get this weird error that I can understand but can't debug. I have a 4073-timestep x 175-feature array and I am trying to pass those 175 features, one timestep at a time, to a CuDNNLSTM layer in a Sequential model so that the model can learn something from them. "AlvoH" is the target of the RNN. The code:

    train_x, train_y = trainDF, trainDF["AlvoH"]
    validation_x, validation_y =
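Not from the original post: an LSTM layer expects 3-D input of shape (batch, timesteps, features), while the [None, 175] in the error message is 2-D. A minimal sketch of the usual fix, reshaping the array before feeding it to the layer (layer width and the dummy data are placeholders):

    import numpy as np
    import tensorflow as tf

    # Dummy data standing in for the 4073 x 175 array from the question.
    train_x = np.random.rand(4073, 175).astype("float32")
    train_y = np.random.rand(4073).astype("float32")

    # LSTM layers expect (batch, timesteps, features); add a timesteps axis of 1.
    train_x = train_x.reshape(train_x.shape[0], 1, train_x.shape[1])

    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(1, 175)),  # placeholder width
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(train_x, train_y, epochs=1, batch_size=32)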

How do I get reproducible results with Tensorflow 2.0?

时光总嘲笑我的痴心妄想 submitted on 2021-01-29 10:23:12
Question: I have seen this FAQ and this Stack Overflow post about reproducibility in Keras and TF 1.x. How do I do something similar in TF 2.0, which no longer has tf.Session? I know I could still set the graph seed and the seed for each initializer in a layer by passing something like tf.keras.initializers.GlorotNormal(seed=10). However, I am wondering if there is something more convenient.

Answer 1: Consider using tf.random.set_seed(seed) at startup. In my use cases it provides reproducible
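A sketch of a typical seeding block; only the tf.random.set_seed call comes from the answer above, the stdlib/NumPy seeding is my addition for completeness:

    import os
    import random

    import numpy as np
    import tensorflow as tf

    SEED = 10

    # Note: PYTHONHASHSEED only takes effect if set before the interpreter
    # starts; shown here for completeness.
    os.environ["PYTHONHASHSEED"] = str(SEED)
    random.seed(SEED)          # Python stdlib RNG
    np.random.seed(SEED)       # NumPy RNG (shuffling, synthetic data)
    tf.random.set_seed(SEED)   # TensorFlow graph-level and op-level seeds

    # Per-layer seeds, as in the question, remain possible but optional:
    init = tf.keras.initializers.GlorotNormal(seed=SEED)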

Tensorflow error - tensorflow.python.framework.errors_impl.NotFoundError - on running command rasa init --no-prompt

喜欢而已 submitted on 2021-01-29 09:57:25
Question: When I run rasa init --no-prompt I get the above error. I am not able to work out its cause. These are the commands I used to install Rasa:

    pip3 install rasa
    pip3 install --upgrade tensorflow rasa
    pip3 install --upgrade tensorflow-addons rasa
    pip install --upgrade pip
    pip3 install --upgrade tensorflow-addons rasa --use-feature=2020-resolver

Details of the versions used:

    Rasa version: 1.10.10
    Python version: 3.6.9
    Operating system: Ubuntu 18.04.4 64-bit
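Not part of the question: a small diagnostic snippet to confirm which versions actually ended up installed after those overlapping pip commands. A tensorflow / tensorflow-addons mismatch introduced by the --upgrade calls is an assumption on my part, but it is a common trigger for NotFoundError:

    import importlib

    # Rasa 1.10.x pins a specific TensorFlow version, so upgrading TensorFlow
    # independently can leave incompatible packages side by side.
    for name in ("tensorflow", "tensorflow_addons", "rasa"):
        try:
            module = importlib.import_module(name)
            print(name, getattr(module, "__version__", "unknown"))
        except ImportError as exc:
            print(name, "not importable:", exc)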

How to use smart reply custom ops in python or tfjs?

可紊 submitted on 2021-01-29 09:49:17
Question: I'm trying to run the smart reply TFLite model in Python or TFJS, but it uses custom ops. Please refer to https://github.com/tensorflow/examples/tree/master/lite/examples/smart_reply/android/app/libs/cc. How can I build those custom ops separately and use them in Python or TFJS?

Source: https://stackoverflow.com/questions/59644961/how-to-use-smart-reply-custom-ops-in-python-or-tfjs
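The snippet above carries no answer. For context only, the general pattern for using a compiled custom op from Python is tf.load_op_library; the smart-reply ops are TFLite custom ops that would instead need to be compiled into the TFLite interpreter, so treat this as an illustrative sketch with a hypothetical library path and op name:

    import tensorflow as tf

    # Hypothetical path: custom ops compiled into a shared library (e.g. with
    # Bazel or g++ against the TensorFlow headers).
    try:
        custom_ops = tf.load_op_library("./libsmart_reply_ops.so")
        # The loaded module exposes registered ops as Python functions; the
        # name below is a placeholder, not the real smart-reply op.
        # result = custom_ops.some_custom_op(inputs)
    except tf.errors.NotFoundError as exc:
        print("Shared library not found or ops not registered:", exc)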

Tensorflow 2.0 turn off tf.function retracing for prediction

|▌冷眼眸甩不掉的悲伤 submitted on 2021-01-29 08:11:21
Question: I am trying to generate prediction intervals for a simple RNN using dropout. I'm using the functional API with training=True to enable dropout at test time. To try different dropout levels, I defined a small function that edits the model config:

    from keras.models import Model, Sequential

    def dropout_model(model, dropout):
        conf = model.get_config()
        for layer in conf['layers']:
            if layer["class_name"] == "Dropout":
                layer["config"]["rate"] = dropout
            elif "dropout" in layer["config"].keys():
                layer[
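Not from the original post: the retracing warning in the title usually comes from repeatedly calling predict() on freshly rebuilt models. A common workaround (an assumption about the asker's setup) is to call the model directly inside one reused tf.function, with training=True so dropout stays active for Monte Carlo prediction intervals:

    import numpy as np
    import tensorflow as tf

    inputs = tf.keras.Input(shape=(10, 1))
    x = tf.keras.layers.SimpleRNN(16)(inputs)
    x = tf.keras.layers.Dropout(0.3)(x, training=True)  # dropout active at test time
    outputs = tf.keras.layers.Dense(1)(x)
    model = tf.keras.Model(inputs, outputs)

    # One reused traced function instead of a new trace per predict() call.
    @tf.function
    def mc_forward(batch):
        return model(batch, training=True)

    data = np.random.rand(32, 10, 1).astype("float32")
    samples = np.stack([mc_forward(data).numpy() for _ in range(50)])
    lower, upper = np.percentile(samples, [2.5, 97.5], axis=0)  # 95% interval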

tf.keras.predict() is much slower than Keras predict()

橙三吉。 submitted on 2021-01-29 06:03:24
Question: When using the Keras that comes embedded in TensorFlow (TensorFlow 2), I noticed a severe increase in computation time with predict() from the Keras embedded inside TensorFlow compared to predict() from standalone Keras. See the toy code below:

    import tensorflow
    import keras
    import numpy as np
    import time

    test = np.array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.1, 0., 0.1, 0.2]])

    # Keras from inside Tensorflow
    model_1 = tensorflow.keras.Sequential([
        tensorflow.keras.layers.Dense(1
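Not from the original post: for single tiny batches, most of the time in tf.keras goes into the machinery around predict() rather than the forward pass. A sketch comparing predict() against calling the model directly (the model and loop counts are placeholders):

    import time

    import numpy as np
    import tensorflow as tf

    test = np.array([[0.1, 0.1, 0.1, 0.1, 0.1, 0.5, 0.1, 0., 0.1, 0.2]],
                    dtype="float32")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1, input_shape=(10,)),
    ])

    # model.predict() wraps the call in a tf.data pipeline and a tf.function,
    # which adds noticeable overhead for a single small batch.
    start = time.time()
    for _ in range(100):
        model.predict(test, verbose=0)
    print("predict():", time.time() - start)

    # Calling the model directly skips that machinery.
    start = time.time()
    for _ in range(100):
        model(test, training=False)
    print("model(x): ", time.time() - start)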

Tf 2: Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR

落爺英雄遲暮 submitted on 2021-01-29 05:54:29
Question: I am getting the above error (Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR) when I execute the code below. I have checked that my GPU is working using tf.test.is_gpu_available.

    # coding: utf-8
    import tensorflow as tf
    import numpy as np
    import keras
    from models import *
    import os
    import gc

    TF_FORCE_GPU_ALLOW_GROWTH = True
    np.random.seed(1000)

    # Paths
    MODEL_CONF = "../models/conf/"
    MODEL_WEIGHTS = "../models/weights/"

    # Model information
    N_CLASSES = 3

    def load_array(name):
        return np
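Not part of the question: this CUDNN error often means the GPU ran out of memory when the handle was created. TF_FORCE_GPU_ALLOW_GROWTH in the snippet above is assigned as a plain Python variable, so TensorFlow never sees it; a sketch of the two usual ways to actually enable memory growth, under the assumption that memory is the cause here:

    import os

    # Must be set in the environment before TensorFlow is imported.
    os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

    import tensorflow as tf

    # Equivalent programmatic route: let the allocator grow instead of grabbing
    # all GPU memory up front.
    for gpu in tf.config.experimental.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)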

TensorFlow summary Scalar written to event log as Tensor in example

佐手、 submitted on 2021-01-29 05:18:14
Question: TensorFlow version = 2.0.0. I am following the example of how to use the TensorFlow summary module at https://www.tensorflow.org/api_docs/python/tf/summary; the first one on the page, which for completeness I paste below:

    writer = tf.summary.create_file_writer("/tmp/mylogs")
    with writer.as_default():
        for step in range(100):
            # other model code would go here
            tf.summary.scalar("my_metric", 0.5, step=step)
        writer.flush()

Running this is fine, and I get event logs that I can view in
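Not from the original post: a small way to inspect how the scalar is actually serialized. In TF 2.x, tf.summary.scalar writes through the tensor field of the summary proto rather than simple_value, which is why it appears as a Tensor in the event log. The glob path below assumes the snippet above has been run:

    import glob

    import tensorflow as tf

    # Inspect the newest event file produced by the writer above.
    event_file = sorted(glob.glob("/tmp/mylogs/events.out.tfevents.*"))[-1]

    for event in tf.compat.v1.train.summary_iterator(event_file):
        for value in event.summary.value:
            # The scalar lives in value.tensor, not value.simple_value.
            print(value.tag, tf.make_ndarray(value.tensor))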

Run tensorflow model in CPP

烂漫一生 submitted on 2021-01-29 04:56:00
Question: I trained my model using tf.keras. I convert this model to '.pb' with:

    import os
    import tensorflow as tf
    from tensorflow.keras import backend as K
    K.set_learning_phase(0)
    from tensorflow.keras.models import load_model

    model = load_model('model_checkpoint.h5')
    model.save('model_tf2', save_format='tf')

This creates a folder 'model_tf2' containing 'assets', 'variables', and saved_model.pb. I'm trying to load this model in C++. Referring to many other posts (mainly, Using Tensorflow checkpoint to restore
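Not part of the question: before moving to C++, it helps to confirm on the Python side which signatures the exported SavedModel actually carries, since the C++ loader has to be given the same directory, tag and signature key. The 'model_tf2' path is taken from the snippet above:

    import tensorflow as tf

    # Reload the exported SavedModel and list its serving signatures; a C++
    # LoadSavedModel call typically uses the "serve" tag and the
    # "serving_default" signature shown here.
    loaded = tf.saved_model.load("model_tf2")
    print(list(loaded.signatures.keys()))

    infer = loaded.signatures["serving_default"]
    print(infer.structured_input_signature)
    print(infer.structured_outputs)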