tensorflow2.0

What is the difference in purpose between tf.py_function and tf.function?

主宰稳场 submitted on 2020-12-30 01:43:46
Question: The difference between the two is muddled in my head, notwithstanding the nuances of what is eager and what isn't. From what I gather, the @tf.function decorator has two benefits: (1) it converts functions into TensorFlow graphs for performance, and (2) it allows for a more Pythonic style of coding by interpreting many (but not all) commonplace Python operations as tensor operations, e.g. if into tf.cond, etc. From the definition of tf.py_function, it seems that it does just #2 above. Hence,
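
A minimal sketch contrasting the two (the function names and dummy values below are my own illustration): tf.function traces Python code into a graph, while tf.py_function wraps an arbitrary eager Python function so it can run as a single op inside a graph.

    import tensorflow as tf

    @tf.function  # traced into a TensorFlow graph; AutoGraph rewrites `if` into tf.cond
    def graph_fn(x):
        if x > 0:
            return x * 2.0
        return x - 2.0

    def eager_fn(x):
        # Arbitrary eager Python runs here (e.g. .numpy()), which traced graph code cannot do.
        return x.numpy() ** 2

    @tf.function
    def graph_calling_python(x):
        # tf.py_function embeds eager_fn as one opaque op inside the graph.
        return tf.py_function(func=eager_fn, inp=[x], Tout=tf.float32)

    print(graph_fn(tf.constant(3.0)))              # 6.0
    print(graph_calling_python(tf.constant(3.0)))  # 9.0

The trade-off: the tf.py_function op still executes eagerly under the hood, so it keeps Python flexibility but gives up graph-level optimization and easy serialization.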

Deploy python app to Heroku “Slug Size too large”

杀马特。学长 韩版系。学妹 submitted on 2020-12-29 13:20:53
Question: I'm trying to deploy a Streamlit app written in Python to Heroku. My whole directory is 4.73 MB, of which 4.68 MB is my ML model. My requirements.txt looks like this: absl-py==0.9.0 altair==4.0.1 astor==0.8.1 attrs==19.3.0 backcall==0.1.0 base58==2.0.0 bleach==3.1.3 blinker==1.4 boto3==1.12.29 botocore==1.15.29 cachetools==4.0.0 certifi==2019.11.28 chardet==3.0.4 click==7.1.1 colorama==0.4.3 cycler==0.10.0 decorator==4.4.2 defusedxml==0.6.0 docutils==0.15.2 entrypoints==0.3 enum-compat==0.0.3
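
The usual culprit is not the 4.73 MB of app code but the installed dependencies: the full tensorflow wheel alone is large enough to push a slug past Heroku's 500 MB limit. A common fix, sketched below under the assumption that the app only runs inference on CPU, is to switch to the CPU-only wheel and prune packages the app never imports (versions here are illustrative, not taken from the question):

    # requirements.txt (trimmed; versions illustrative)
    streamlit==0.56.0
    tensorflow-cpu==2.3.1   # CPU-only wheel, far smaller than the full `tensorflow` package

Running `pip list --not-required` in a clean virtualenv that installs only what the app actually imports is a quick way to generate a minimal list.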

Should the custom loss function in Keras return a single loss value for the batch or an array of losses for every sample in the training batch?

不想你离开。 submitted on 2020-12-23 09:40:26
Question: I'm learning the Keras API in TensorFlow (2.3). In this guide on the TensorFlow website, I found an example of a custom loss function: def custom_mean_squared_error(y_true, y_pred): return tf.math.reduce_mean(tf.square(y_true - y_pred)) The reduce_mean call in this custom loss function returns a scalar. Is it right to define a loss function like this? As far as I know, the first dimension of the shapes of y_true and y_pred is the batch size. I think the loss function should return loss values for
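
For comparison, here is a sketch of the per-sample variant the question is driving at: reducing over every axis except the batch axis returns one loss value per sample, and Keras then applies sample weighting and its own final reduction on top. Both forms train, but only the per-sample form cooperates with sample_weight (the model below is an illustrative placeholder):

    import tensorflow as tf

    def custom_mse_scalar(y_true, y_pred):
        # Reduces over all axes, batch included: returns a single scalar.
        return tf.math.reduce_mean(tf.square(y_true - y_pred))

    def custom_mse_per_sample(y_true, y_pred):
        # Reduces only over the feature axis: returns shape (batch_size,).
        return tf.math.reduce_mean(tf.square(y_true - y_pred), axis=-1)

    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss=custom_mse_per_sample)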

Slow training on CPU and GPU in a small network (tensorflow)

怎甘沉沦 submitted on 2020-12-13 03:11:47
Question: Here is the original script I'm trying to run on both CPU and GPU. I'm expecting much faster training on the GPU, however it's taking almost the same time. I made the following modification to main() (the first 4 lines) because the original script does not activate / use the GPU. Suggestions ... ? def main(): physical_devices = tf.config.experimental.list_physical_devices('GPU') if len(physical_devices) > 0: tf.config.experimental.set_memory_growth(physical_devices[0], True) print('GPU activated
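
For a network this small the result is expected: per-step kernel-launch and host-to-device copy overhead dominates, so the GPU never receives enough work per step to pull ahead. A quick sanity check that the GPU is actually being used (a standalone sketch, independent of the script in the question) is to log device placement and time one large matmul:

    import time
    import tensorflow as tf

    print(tf.config.list_physical_devices('GPU'))  # should list at least one GPU
    tf.debugging.set_log_device_placement(True)    # logs the device each op runs on

    x = tf.random.normal([4096, 4096])
    start = time.time()
    y = tf.matmul(x, x)
    _ = y.numpy()  # force execution to finish before stopping the clock
    print('matmul took', time.time() - start, 's')

If that matmul is clearly faster on the GPU, the setup is fine and the fix is to give the GPU more work per step, e.g. a larger batch size or a bigger model.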

Unable to understand the behavior of method `build` in tensorflow keras layers (tf.keras.layers.Layer)

拈花ヽ惹草 submitted on 2020-12-12 04:36:18
Question: Layers in TensorFlow Keras have a build method that is used to defer weight creation until you have seen what the input is going to be (a layer's build method). I have a few questions I have not been able to find the answer to: here it is said that "If you assign a Layer instance as attribute of another Layer, the outer layer will start tracking the weights of the inner layer." What does it mean to track the weights of a layer? The same link also mentions that We recommend creating
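
A minimal sketch of what build defers (the layer name below is mine): the weight shapes depend on input_shape, which is only known the first time the layer is called.

    import tensorflow as tf

    class MyDense(tf.keras.layers.Layer):
        def __init__(self, units):
            super().__init__()
            self.units = units

        def build(self, input_shape):
            # Called automatically on first use; only now is input_shape known.
            self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                     initializer="glorot_uniform", trainable=True)
            self.b = self.add_weight(shape=(self.units,),
                                     initializer="zeros", trainable=True)

        def call(self, inputs):
            return tf.matmul(inputs, self.w) + self.b

    layer = MyDense(3)
    print(layer.weights)                     # [] -- build has not run yet
    _ = layer(tf.ones((2, 5)))               # first call triggers build
    print([w.shape for w in layer.weights])  # [(5, 3), (3,)]

"Tracking" in the quoted sentence means exactly this bookkeeping: if MyDense is assigned as an attribute of an outer Layer, the two variables above also show up in the outer layer's .weights, so they are saved, restored, and trained along with it.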

How to create a One-hot Encoded Matrix from a PNG for Per Pixel Classification in Tensorflow 2

血红的双手。 submitted on 2020-12-07 14:46:20
Question: I'm attempting to train a U-Net to provide each pixel of a 256x256 image with a label, similar to the tutorial given here. In the example, the predictions of the U-Net are a (128x128x3) output, where the 3 denotes one of the classifications assigned to each pixel. In my case, I need a (256x256x10) output with 10 different classifications (essentially a one-hot encoded array for each pixel in the image). I can load the images, but I'm struggling to convert each image's corresponding segmentation
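
Assuming the segmentation PNG stores the class index (0-9) for each pixel in a single channel, a sketch of the conversion with tf.one_hot (the path handling and class count are placeholders):

    import tensorflow as tf

    NUM_CLASSES = 10

    def load_one_hot_mask(path):
        png = tf.io.read_file(path)
        mask = tf.io.decode_png(png, channels=1)   # (256, 256, 1), integer class ids
        mask = tf.squeeze(mask, axis=-1)           # (256, 256)
        # One channel per class: output shape (256, 256, 10)
        return tf.one_hot(tf.cast(mask, tf.int32), depth=NUM_CLASSES)

An equivalent alternative is to skip the one-hot step entirely and train against the integer mask with tf.keras.losses.SparseCategoricalCrossentropy, which is cheaper on memory.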

tfRecords shown faulty in TF2

筅森魡賤 submitted on 2020-12-05 11:38:54
Question: I have a couple of tfrecord files that I made myself. They work perfectly in TF1; I used them in several projects. However, if I want to use them in the TensorFlow Object Detection API with TF2 (running the model_main_tf2.py script), I see the following in TensorBoard: tensorboard images tab. It totally messes up the images. (Running the /work/tfapi/research/object_detection/model_main.py script, or even legacy_train, the images look fine.) Is TF2 using a different kind of encoding in tfrecords? Or
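
One way to rule out the records themselves is to decode a few examples directly in TF2, outside of TensorBoard. A sketch, assuming the feature keys follow the Object Detection API convention (adjust 'image/encoded' and the placeholder path if yours differ):

    import tensorflow as tf

    ds = tf.data.TFRecordDataset('train.record')   # placeholder path
    for raw in ds.take(2):
        example = tf.train.Example.FromString(raw.numpy())
        encoded = example.features.feature['image/encoded'].bytes_list.value[0]
        img = tf.io.decode_image(encoded)
        print(img.shape, img.dtype)  # sane shapes here mean the records decode fine in TF2

If these decode cleanly, the records are fine and the garbled TensorBoard previews are more likely a visualization or preprocessing issue in the TF2 pipeline than a tfrecord encoding difference.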