google-cloud-tpu

File system scheme '[local]' not implemented in Google Colab TPU

。_饼干妹妹 submitted on 2021-01-02 19:13:11
Question: I am using the TPU runtime in Google Colab, but am having problems reading files (not sure). I initialized the TPU using:

import tensorflow as tf
import os
import tensorflow_datasets as tfds

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
# This is the TPU initialization code that has to be at the beginning.
tf.tpu.experimental.initialize_tpu_system(resolver)
print("All devices: ", tf.config
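This error usually means the data lives on the Colab VM's local disk, which the remote TPU workers cannot read. A minimal sketch of the common workaround, reading from Google Cloud Storage instead; the gs:// bucket path and the dataset name below are placeholders, not taken from the question:

# The TPU workers only see paths they can reach themselves, so data is
# normally staged in GCS rather than on the Colab VM's local filesystem.
import tensorflow as tf
import tensorflow_datasets as tfds

# Option 1: let TFDS stream a prepared dataset from its public GCS mirror.
ds = tfds.load('mnist', split='train', try_gcs=True)

# Option 2: read your own records from a gs:// path the TPU can access.
file_ds = tf.data.TFRecordDataset('gs://your-bucket/path/train.tfrecord')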

Google Colab: Why is CPU faster than TPU?

久未见 submitted on 2020-07-19 06:45:18
Question: I'm using a Google Colab TPU to train a simple Keras model. Removing the distributed strategy and running the same program on the CPU is much faster than on the TPU. How is that possible?

import timeit
import os
import tensorflow as tf
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

# Load Iris dataset
x = load_iris().data
y
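For a dataset as small as Iris, the per-step cost of dispatching work to the remote TPU tends to dwarf the compute itself, so the CPU run wins. A rough sketch of how the TPU side of such a comparison is usually set up, assuming a TF 2.x runtime, with a tf.data pipeline and a larger per-step batch (batch size and layer sizes are chosen arbitrarily):

# Sketch assuming TF 2.x: the TPU only pays off once each step does enough
# work (model size, batch size) to amortize the dispatch overhead.
import os
import tensorflow as tf
from sklearn.datasets import load_iris

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

x, y = load_iris(return_X_y=True)
# Feed a tf.data pipeline so each TPU step processes a full, fixed-size batch.
ds = (tf.data.Dataset.from_tensor_slices((x, y))
      .repeat().shuffle(150).batch(128, drop_remainder=True))

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu', input_shape=(4,)),
        tf.keras.layers.Dense(3, activation='softmax')])
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

model.fit(ds, steps_per_epoch=50, epochs=5)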

Mask R-CNN for TPU on Google Colab

∥☆過路亽.° submitted on 2020-06-15 06:41:10
Question: We are trying to build an image segmentation deep learning model using a Google Colab TPU. Our model is Mask R-CNN.

TPU_WORKER = 'grpc://' + os.environ['COLAB_TPU_ADDR']

import tensorflow as tf
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
    model.keras_model,
    strategy=tf.contrib.tpu.TPUDistributionStrategy(
        tf.contrib.cluster_resolver.TPUClusterResolver(TPU_WORKER)))

However, I am running into issues while converting our Mask R-CNN model to a TPU model, as pasted below.

ValueError: Layer <keras
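tf.contrib.tpu.keras_to_tpu_model only ever supported a narrow subset of Keras models and was removed together with contrib after TF 1.x. The usual alternative, sketched below on the TF 2.x API rather than the question's contrib path, is to build the model inside a TPUStrategy scope instead of converting it afterwards; build_model here is a trivial stand-in for the real Mask R-CNN constructor:

# Sketch (TF 2.x): create the model under a TPUStrategy scope rather than
# converting an already-built model with contrib's keras_to_tpu_model.
import os
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

def build_model():
    # Placeholder standing in for the Mask R-CNN graph; the real network
    # would be assembled from tf.keras layers here.
    inputs = tf.keras.Input(shape=(1024, 1024, 3))
    x = tf.keras.layers.Conv2D(8, 3, activation='relu')(inputs)
    outputs = tf.keras.layers.GlobalAveragePooling2D()(x)
    return tf.keras.Model(inputs, outputs)

with strategy.scope():
    model = build_model()
    model.compile(optimizer='sgd', loss='mse')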

Memory reduction Tensorflow TPU v2/v3 bfloat16

早过忘川 submitted on 2020-05-30 03:14:33
Question: My model is too big to get a batch size >64 on the normal v2 TPU devices. The troubleshooting site mentions that upcoming TensorFlow versions will have bfloat16 support. Are the newly supported TF versions 1.9-1.12 able to use bfloat16 now, and if so, is there a limited set of optimizers I can use? I did not find any further documentation on this, but I saw bfloat16 being used in the tensor2tensor model, so I guess there must be a way. Furthermore, I read that TPU v3 supports bigger
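In the TF 1.x line, the pattern used by tensor2tensor is to run the forward pass under tf.contrib.tpu.bfloat16_scope and cast back to float32 before the loss; the memory savings come mainly from halving activation storage rather than from the optimizer. A rough sketch under that assumption, with arbitrary layer sizes:

# Sketch of the TF 1.x bfloat16 pattern on TPU: compute activations in
# bfloat16, keep the loss (and typically the variables) in float32.
import tensorflow as tf  # TF 1.x, where tf.contrib is still available

def forward(features):
    with tf.contrib.tpu.bfloat16_scope():
        x = tf.cast(features, tf.bfloat16)
        x = tf.layers.dense(x, 1024, activation=tf.nn.relu)
        logits = tf.layers.dense(x, 10)
    # Cast back so the loss and optimizer run in float32 for stability.
    return tf.cast(logits, tf.float32)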

tf.data.Dataset: The `batch_size` argument must not be specified for the given input type

老子叫甜甜 submitted on 2020-05-08 06:48:37
Question: I'm using Talos and a Google Colab TPU to run hyperparameter tuning of a Keras model. Note that I'm using TensorFlow 1.15.0 and Keras 2.2.4-tf.

import os
import tensorflow as tf
import talos as ta
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split

def iris_model(x_train, y_train, x_val, y_val, params):
    # Specify a distributed strategy to use TPU
    resolver = tf
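The error in the title is raised when model.fit receives a tf.data.Dataset (or a distributed dataset) together with a batch_size argument; the usual fix is to batch inside the dataset and drop batch_size from fit. A minimal sketch, where params['batch_size'] and params['epochs'] are assumed keys in the Talos search space:

# Sketch: when fit() is given a tf.data.Dataset, batching must be done in the
# dataset itself, and batch_size must not be passed to fit().
import tensorflow as tf

def make_datasets(x_train, y_train, x_val, y_val, batch_size):
    train_ds = (tf.data.Dataset.from_tensor_slices((x_train, y_train))
                .shuffle(len(x_train))
                .batch(batch_size, drop_remainder=True))
    val_ds = (tf.data.Dataset.from_tensor_slices((x_val, y_val))
              .batch(batch_size, drop_remainder=True))
    return train_ds, val_ds

# Inside iris_model() this would replace the raw arrays:
#   train_ds, val_ds = make_datasets(x_train, y_train, x_val, y_val, params['batch_size'])
#   out = model.fit(train_ds, validation_data=val_ds, epochs=params['epochs'])  # no batch_size kwarg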

Huggingface Bert TPU fine-tuning works on Colab but not in GCP

拟墨画扇 submitted on 2020-02-06 07:55:10
Question: I'm trying to fine-tune a Huggingface transformers BERT model on a TPU. It works in Colab but fails when I switch to a paid TPU on GCP. The Jupyter notebook code is as follows:

[1] model = transformers.TFBertModel.from_pretrained('bert-large-uncased-whole-word-masking-finetuned-squad')  # works

[2] cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
        tpu='[My TPU]', zone='us-central1-a', project='[My Project]')
    tf.config.experimental_connect_to_cluster(cluster_resolver)
    tf.tpu
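The excerpt is cut off right after connecting to the cluster; for context, the remaining setup on a Cloud TPU usually looks like the sketch below. The bracketed TPU name, zone, and project stay as placeholders, and one common source of Colab-vs-GCP differences is that the TensorFlow version on the VM generally has to match the version configured on the Cloud TPU node:

# Sketch of the rest of a typical Cloud TPU setup; bracketed names are placeholders.
import tensorflow as tf
import transformers

cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='[My TPU]', zone='us-central1-a', project='[My Project]')
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)
strategy = tf.distribute.TPUStrategy(cluster_resolver)

# Building the model inside the strategy scope places its weights on the
# TPU workers instead of only on the notebook VM.
with strategy.scope():
    model = transformers.TFBertModel.from_pretrained(
        'bert-large-uncased-whole-word-masking-finetuned-squad')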