google-cloud-ml

How do I change the Signatures of my SavedModel without retraining the model?

允我心安 submitted on 2019-12-19 04:16:55
Question: I just finished training my model, only to find out that I exported a model for serving that had problems with its signatures. How do I update them? (One common problem is setting the wrong shape for CloudML Engine.)
Answer 1: Don't worry -- you don't need to retrain your model. That said, there is a little work to be done. You're going to create a new (corrected) serving graph, load the checkpoints into that graph, and then export this graph. For example, suppose you added a placeholder but didn't
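A minimal sketch of that flow in TF 1.x, assuming a hypothetical build_inference_graph() function that reconstructs your network and a checkpoint directory ./checkpoints (adjust names and shapes to your model):

    import tensorflow as tf

    # Rebuild the serving graph with the corrected placeholder shape.
    graph = tf.Graph()
    with graph.as_default():
        # Hypothetical input name and shape -- use whatever your model expects.
        image = tf.placeholder(tf.float32, shape=[None, 28, 28], name='image')
        scores = build_inference_graph(image)  # your own model-building function
        saver = tf.train.Saver()

    with tf.Session(graph=graph) as sess:
        # Load the weights you already trained.
        saver.restore(sess, tf.train.latest_checkpoint('./checkpoints'))
        # Re-export with the corrected signature.
        tf.saved_model.simple_save(
            sess,
            export_dir='./fixed_export',
            inputs={'image': image},
            outputs={'scores': scores})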

Retrained inception_v3 model deployed in Cloud ML Engine always outputs the same predictions

两盒软妹~ submitted on 2019-12-18 13:37:49
Question: I followed the TensorFlow For Poets codelab for transfer learning with inception_v3. It generates retrained_graph.pb and retrained_labels.txt files, which can be used to make predictions locally (running label_image.py). Then I wanted to deploy this model to Cloud ML Engine so that I could make online predictions. For that, I had to export retrained_graph.pb to the SavedModel format. I managed to do it by following the indications in this answer from Google's @rhaertel80 and this python file
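For reference, a hedged sketch of one way to wrap a frozen retrained_graph.pb in a SavedModel with a serving signature (TF 1.x APIs; the tensor names below follow the TensorFlow For Poets retrain script and should be verified against your own graph):

    import tensorflow as tf

    graph_pb = 'retrained_graph.pb'
    export_dir = './saved_model/1'

    with tf.gfile.GFile(graph_pb, 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Session(graph=tf.Graph()) as sess:
        tf.import_graph_def(graph_def, name='')
        # Assumed tensor names; list sess.graph.get_operations() to confirm.
        inp = sess.graph.get_tensor_by_name('DecodeJpeg/contents:0')
        out = sess.graph.get_tensor_by_name('final_result:0')

        builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
        signature = tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={'image_bytes': inp}, outputs={'prediction': out})
        builder.add_meta_graph_and_variables(
            sess,
            [tf.saved_model.tag_constants.SERVING],
            signature_def_map={tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
        builder.save()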

Which Google Cloud Platform service is the easiest for running Tensorflow?

£可爱£侵袭症+ submitted on 2019-12-18 10:20:06
Question: While working on the Udacity Deep Learning assignments, I ran into memory problems, so I need to switch to a cloud platform. I have worked with AWS EC2 before, but now I would like to try Google Cloud Platform (GCP). I will need at least 8GB of memory. I know how to use Docker locally but have never tried it in the cloud. Is there any ready-made solution for running Tensorflow on GCP? If not, which service (Compute Engine or Container Engine) would make it easier to get started? Any other tip is also

How to make correct predictions on a JPEG image in Cloud ML

本小妞迷上赌 submitted on 2019-12-17 19:57:20
Question: I want to run predictions on a JPEG image in Cloud ML. My training model is the Inception model, and I would like to send the input to the first layer of the graph: 'DecodeJpeg/contents:0' (where I have to send a JPEG image). I have set this layer as a possible input by adding the following to retrain.py: inputs = {'image_bytes': 'DecodeJpeg/contents:0'} tf.add_to_collection('inputs', json.dumps(inputs)) Then I save the results of the training in two files (export and export.meta) with: saver.save(sess, os.path.join
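As a hedged aside: when the export maps an input alias ending in _bytes to a raw-bytes tensor such as DecodeJpeg/contents:0, Cloud ML Engine online prediction expects the JPEG to be base64-encoded and wrapped in a {"b64": ...} object. A small sketch of building such a request file (file names are hypothetical):

    import base64
    import json

    with open('test.jpg', 'rb') as f:
        encoded = base64.b64encode(f.read()).decode('utf-8')

    # One JSON instance per line, suitable for
    # `gcloud ml-engine predict --json-instances=request.json`.
    instance = {'image_bytes': {'b64': encoded}}
    with open('request.json', 'w') as f:
        f.write(json.dumps(instance) + '\n')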

In Tensorflow, when serving a model, what is the serving input function supposed to do exactly?

[亡魂溺海] submitted on 2019-12-17 19:04:34
Question: I've been struggling to understand what the main task of a serving_input_fn() is when a trained model is exported in Tensorflow for serving purposes. There are some examples online that explain it, but I'm having problems defining it for myself. The problem I'm trying to solve is a regression problem with 29 inputs and one output. Is there a template for creating a corresponding serving input function for that? What if I use a one-class classification problem? Would my serving
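As a rough template rather than a definitive answer, a serving input function for 29 raw float inputs could look like the sketch below, using the TF 1.x Estimator export API; the feature name 'inputs' is hypothetical and must match what your model_fn expects:

    import tensorflow as tf

    def serving_input_fn():
        # At serving time, accept a batch of rows with 29 float features each.
        inputs = tf.placeholder(tf.float32, shape=[None, 29], name='inputs')
        return tf.estimator.export.ServingInputReceiver(
            features={'inputs': inputs},
            receiver_tensors={'inputs': inputs})

    # estimator.export_savedmodel('export_dir', serving_input_fn)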

Google Storage (gs) wrapper for file input/output in Cloud ML?

二次信任 submitted on 2019-12-17 10:42:13
Question: Google recently announced Cloud ML (https://cloud.google.com/ml/), and it's very useful. However, one limitation is that the input/output of a Tensorflow program should support gs://. If we use only Tensorflow APIs to read/write files, that should be OK, since these APIs support gs://. However, if we use native file IO APIs such as open, it does not work, because they don't understand gs://. For example: with open(vocab_file, 'wb') as f: cPickle.dump(self.words, f) This code won't work in Google
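The usual workaround is TensorFlow's file_io wrapper, which understands both local paths and gs:// URLs. A sketch adapted from the snippet above (Python 2 / TF 1.x style, matching the cPickle usage; the bucket path is hypothetical):

    import cPickle
    from tensorflow.python.lib.io import file_io

    vocab_file = 'gs://my-bucket/vocab.pkl'   # hypothetical bucket path
    words = ['hello', 'world']                # stand-in for self.words

    # file_io.FileIO transparently handles gs:// paths as well as local files.
    with file_io.FileIO(vocab_file, mode='wb') as f:
        cPickle.dump(words, f)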

Google Cloud ML and GCS Bucket issues

谁都会走 submitted on 2019-12-17 06:51:01
Question: I'm using open source Tensorflow implementations of research papers, for example DCGAN-tensorflow. Most of the libraries I'm using are configured to train the model locally, but I want to use Google Cloud ML to train the model since I don't have a GPU on my laptop. I'm finding it difficult to change the code to support GCS buckets. At the moment, I'm saving my logs and models to /tmp and then running a 'gsutil' command to copy the directory to gs://my-bucket at the end of training (example
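One hedged sketch of writing checkpoints and summaries straight to a bucket instead of /tmp, since TF 1.x's tf.gfile, tf.summary.FileWriter and tf.train.Saver all accept gs:// paths (the bucket name below is hypothetical):

    import tensorflow as tf

    log_dir = 'gs://my-bucket/logs'               # hypothetical bucket path
    ckpt_path = 'gs://my-bucket/model/model.ckpt'

    # A dummy variable just so there is something to checkpoint in this sketch.
    step = tf.Variable(0, name='global_step')
    saver = tf.train.Saver()

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Logs and checkpoints go straight to the bucket, no gsutil copy needed.
        tf.gfile.MakeDirs(log_dir)
        writer = tf.summary.FileWriter(log_dir, sess.graph)
        saver.save(sess, ckpt_path)
        writer.close()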

Google Cloud AI Platform Notebook Instance won't use GPU with Jupyter

和自甴很熟 submitted on 2019-12-13 10:26:53
Question: I'm using the pre-built AI Platform Jupyter Notebook instances to train a model with a single Tesla K80 card. The issue is that I don't believe the model is actually training on the GPU. nvidia-smi returns "No running processes found" during training, yet the volatile GPU utilization is 100%. Something seems strange... and the training is excruciatingly slow. A few days ago, I was having issues with the GPU not being released after each notebook run.
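A quick sanity check, offered as a sketch for the TF 1.x runtime these instances shipped with, is to confirm TensorFlow sees the GPU and to log device placement:

    import tensorflow as tf

    # Does TensorFlow see a GPU at all?
    print(tf.test.is_gpu_available())

    # Place a small op on the GPU and log where every op actually runs.
    with tf.device('/device:GPU:0'):
        a = tf.random_normal([1000, 1000])
        b = tf.matmul(a, a)

    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        sess.run(b)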

Google Cloud ML returns empty predictions with object detection model

眉间皱痕 submitted on 2019-12-13 03:57:56
Question: I am deploying a model to Google Cloud ML for the first time. I have trained and tested the model locally; it still needs work, but it works OK. I have uploaded it to Cloud ML and tested it with the same example images I test with locally, which I know produce detections (using this tutorial). When I do this, I get no detections. At first I thought I had uploaded the wrong checkpoint, but I tested and the same checkpoint works with these images offline. I don't know how to debug further. When I look at
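One way to narrow this down, offered as a sketch rather than a fix, is to load the exported SavedModel locally with the TF 1.x loader and compare its signature (input/output aliases and tensor names) with what the Cloud ML request sends; the export path below is hypothetical:

    import tensorflow as tf

    export_dir = './exported_model/saved_model'   # hypothetical path

    with tf.Session(graph=tf.Graph()) as sess:
        meta_graph = tf.saved_model.loader.load(
            sess, [tf.saved_model.tag_constants.SERVING], export_dir)
        for name, sig in meta_graph.signature_def.items():
            inputs = {k: v.name for k, v in sig.inputs.items()}
            outputs = {k: v.name for k, v in sig.outputs.items()}
            print('%s\n  inputs : %s\n  outputs: %s' % (name, inputs, outputs))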