tensorflow2.x

Why is Tensorflow Official CNN example stuck at 10 percent accuracy (= random prediction) on my machine?

Submitted by 岁酱吖の on 2021-02-11 14:18:26
Question: I am running the CNN example from the TensorFlow official website (https://www.tensorflow.org/tutorials/images/cnn). I ran the notebook as-is, without any modifications whatsoever. My training accuracy is stuck at 10%. I tried to overfit by using only the first 10 (image, label) pairs, but the result is still the same: the network just does not learn. Here is my model.summary() (truncated):

Model: "sequential"
_________________________________________________________________
Layer (type)
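The question excerpt above is truncated before any answer, but a frequent cause of chance-level accuracy (10% on CIFAR-10's ten classes) is a loss/activation mismatch, e.g. applying softmax in the output layer while also letting the loss apply it, or vice versa. This is a hypothetical illustration, not necessarily the asker's bug: a NumPy sketch showing how applying softmax twice flattens confident predictions toward uniform, which starves the gradients.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Confident 10-class logits for one sample (class 0 strongly favored).
logits = np.array([[4.0, 1.0] + [0.0] * 8])

p_once = softmax(logits)    # correct: softmax applied once to logits
p_twice = softmax(p_once)   # bug: softmax applied again to probabilities

print(p_once.max())   # ~0.84: a confident prediction
print(p_twice.max())  # ~0.20: nearly uniform, close to the 0.1 chance level
```

With the tutorial's setup, the fix is to keep a linear final layer and use SparseCategoricalCrossentropy(from_logits=True), or a softmax final layer with from_logits=False, but never both softmax and from_logits=True.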

How to run inference using Tensorflow 2.2 pb file?

Submitted by 谁说胖子不能爱 on 2021-02-11 06:25:15
Question: I followed this guide: https://leimao.github.io/blog/Save-Load-Inference-From-TF2-Frozen-Graph/ However, I still do not know how to run inference with frozen_func (see my code below). Please advise how to run inference from a .pb file in TensorFlow 2.2. Thanks.

import tensorflow as tf

def wrap_frozen_graph(graph_def, inputs, outputs, print_graph=False):
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, []
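The excerpt cuts off before the inference call, but with the linked guide's wrap_frozen_graph, inference is just calling the pruned ConcreteFunction on a tensor. A self-contained sketch (it freezes a tiny in-memory model as a stand-in for the asker's .pb file, and reads the tensor names from the frozen function rather than hard-coding "x:0"/"Identity:0"):

```python
import numpy as np
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2)

def wrap_frozen_graph(graph_def, inputs, outputs):
    # Import the GraphDef into a wrapped function and prune it to
    # the requested input/output tensors.
    def _imports_graph_def():
        tf.compat.v1.import_graph_def(graph_def, name="")
    wrapped = tf.compat.v1.wrap_function(_imports_graph_def, [])
    graph = wrapped.graph
    return wrapped.prune(
        tf.nest.map_structure(graph.as_graph_element, inputs),
        tf.nest.map_structure(graph.as_graph_element, outputs))

# Tiny model frozen in-memory; in the real case, load graph_def from the .pb
# with tf.io.gfile.GFile + GraphDef.ParseFromString instead.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)),
                             tf.keras.layers.Dense(3)])
full = tf.function(lambda x: model(x)).get_concrete_function(
    tf.TensorSpec([None, 4], tf.float32))
frozen = convert_variables_to_constants_v2(full)
graph_def = frozen.graph.as_graph_def()

frozen_func = wrap_frozen_graph(
    graph_def,
    inputs=[t.name for t in frozen.inputs],    # e.g. "x:0"
    outputs=[t.name for t in frozen.outputs])  # e.g. "Identity:0"

# Inference: call the function with a batch; it returns a list of outputs.
preds = frozen_func(tf.constant(np.ones((2, 4), np.float32)))
print(preds[0].shape)  # (2, 3)
```

The key point for the asker: frozen_func is callable directly, frozen_func(tf.constant(batch)), with no session needed in TF 2.x.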

error: Illegal instruction (core dumped) - tensorflow==2.1.0

Submitted by 爷,独闯天下 on 2021-02-05 06:50:34
Question: I am importing TensorFlow in Python on my Ubuntu machine (Lenovo 110 IdeaPad laptop) as follows:

(tfx-test) chandni@mxnet:~/Chandni/TFX$ python
Python 3.6.9 (default, Nov 7 2019, 10:44:02) [GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
Illegal instruction (core dumped)

And the program exits. Please let me know the reason.

Answer 1: You may need to downgrade to the CPU-only 1.5 build.
# Try running pip uninstall tensorflow
# And then pip
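The answer's downgrade suggestion works because prebuilt TensorFlow wheels from 1.6 onward are compiled with AVX instructions, which older CPUs lack; executing one triggers "Illegal instruction". A quick Linux-only sketch to check whether the CPU advertises AVX before deciding (the parsing helper is ours, not part of any library):

```python
import os

def has_avx(cpuinfo_text):
    """Return True if a 'flags' line in /proc/cpuinfo lists the avx feature."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "avx" in line.split()
    return False

if os.path.exists("/proc/cpuinfo"):  # Linux only
    with open("/proc/cpuinfo") as f:
        print("AVX supported:", has_avx(f.read()))
```

If AVX is missing, the options are the last pre-AVX wheel (presumably pip install tensorflow==1.5, completing the truncated answer), a community non-AVX build, or compiling TensorFlow from source for the local CPU.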

How to get intermediate outputs in TF 2.3 Eager with learning_phase?

Submitted by 两盒软妹~` on 2021-01-20 09:46:18
Question: The example below works in 2.2; K.function changed significantly in 2.3, which now builds a Model in eager execution, so we're passing Model(inputs=[learning_phase, ...]). I do have a workaround in mind, but it's hackish and a lot more complex than K.function; if no one can show a simple approach, I'll post mine.

from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model
from tensorflow.python.keras import backend as K
import numpy as np

ipt = Input((16,))
x = Dense
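The excerpt ends before any answer; one TF2-idiomatic alternative (a common workaround, not necessarily the solution the asker posted) is to skip K.function entirely: build a sub-Model that maps the inputs to the intermediate tensor, and pass training=True/False in place of the old learning_phase flag. A sketch:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model

ipt = Input((16,))
x = Dropout(0.5)(Dense(8)(ipt))  # Dropout so the learning phase matters
out = Dense(4)(x)
model = Model(ipt, out)

# Replacement for K.function([ipt, learning_phase], [x]): a sub-Model
# fetching the intermediate tensor; training= is the learning phase.
extractor = Model(ipt, x)

data = np.random.randn(32, 16).astype(np.float32)
feats_train = extractor(data, training=True)   # dropout active
feats_infer = extractor(data, training=False)  # dropout disabled
print(feats_infer.shape)  # (32, 8)
```

This runs eagerly in 2.3, needs no explicit learning_phase input, and works for any intermediate tensor reachable from the model's inputs.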

TensorFlow model serving on Google AI Platform online prediction too slow with instance batches

Submitted by 可紊 on 2020-12-12 02:54:46
Question: I'm trying to deploy a TensorFlow model to Google AI Platform for Online Prediction, and I'm having latency and throughput issues. The model runs in less than 1 second for a single image on my machine (with only an Intel Core i7-4790K CPU). I deployed it to AI Platform on a machine with 8 cores and an NVIDIA T4 GPU. When running the model on AI Platform with that configuration, it takes a little less than a second when sending only one image. If I start sending many requests, each with
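The excerpt is cut off, but the title points at per-request overhead when instances are sent one at a time. Online Prediction accepts a list of instances per request, so batching client-side amortizes the HTTP, serialization, and scheduling cost across images. A stdlib-only sketch of the payload shape (the input name "image" is an assumption; it must match the deployed model's input):

```python
import json

def make_request_body(images, input_name="image"):
    """Build one Online Prediction request carrying a batch of instances.

    input_name is hypothetical here; use your model's actual input key.
    """
    return json.dumps({"instances": [{input_name: img} for img in images]})

# Three images in ONE request instead of three requests of one image each.
batch = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
body = make_request_body(batch)

parsed = json.loads(body)
print(len(parsed["instances"]))  # 3
```

On the server side, TF Serving can then run the batch through the GPU in one pass, which is where a T4 actually pays off; single-instance requests mostly measure round-trip overhead.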

Should I use @tf.function for all functions?

Submitted by 风流意气都作罢 on 2020-08-21 06:30:48
Question: An official tutorial on @tf.function says: To get peak performance and to make your model deployable anywhere, use tf.function to make graphs out of your programs. Thanks to AutoGraph, a surprising amount of Python code just works with tf.function, but there are still pitfalls to be wary of. The main takeaways and recommendations are: Don't rely on Python side effects like object mutation or list appends. tf.function works best with TensorFlow ops, rather than NumPy ops or Python primitives.
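A minimal sketch of the first pitfall the tutorial quotes: Python side effects inside a @tf.function run while the function is being traced, not on every call, so a list append fires once per trace rather than once per invocation.

```python
import tensorflow as tf

trace_log = []

@tf.function
def double(x):
    trace_log.append("traced")  # Python side effect: runs only during tracing
    return x * 2

a = double(tf.constant(1))
b = double(tf.constant(2))  # same input signature -> cached trace is reused

print(int(a), int(b))   # 2 4: the TF ops run every call
print(len(trace_log))   # 1: the Python append ran once, at trace time
```

This is also why the usual guidance is to decorate only the outer training step rather than every helper: inner Python functions called from a tf.function are traced into the same graph anyway, and leaving them undecorated keeps them debuggable in eager mode.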
