huggingface-transformers

Huggingface Bert TPU fine-tuning works on Colab but not in GCP

Submitted by 拟墨画扇 on 2020-02-06 07:55:10
Question: I'm trying to fine-tune a Hugging Face transformers BERT model on a TPU. It works in Colab but fails when I switch to a paid TPU on GCP. The Jupyter notebook code is as follows:

```python
# [1] works
model = transformers.TFBertModel.from_pretrained(
    'bert-large-uncased-whole-word-masking-finetuned-squad'
)

# [2]
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='[My TPU]', zone='us-central1-a', project='[My Project]'
)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu
```
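The snippet above is cut off at `tf.tpu`. For context, here is a minimal sketch of the usual TF 2.x Cloud TPU setup it appears to be following, assuming the standard initialize-then-strategy sequence; the TPU name, zone, and project values are placeholders, and wrapping the model load in the strategy scope is an assumption about the intended workflow rather than part of the original question:

```python
import tensorflow as tf
import transformers

# Resolve and connect to the Cloud TPU; the tpu/zone/project values below
# are placeholders and must match your own TPU node.
cluster_resolver = tf.distribute.cluster_resolver.TPUClusterResolver(
    tpu='my-tpu-name', zone='us-central1-a', project='my-gcp-project'
)
tf.config.experimental_connect_to_cluster(cluster_resolver)
tf.tpu.experimental.initialize_tpu_system(cluster_resolver)

# TF 2.1/2.2 API; newer releases expose this as tf.distribute.TPUStrategy.
strategy = tf.distribute.experimental.TPUStrategy(cluster_resolver)

# Variables must be created inside the strategy scope so the weights are
# placed on the TPU workers rather than on the local VM.
with strategy.scope():
    model = transformers.TFBertModel.from_pretrained(
        'bert-large-uncased-whole-word-masking-finetuned-squad'
    )
```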