google-cloud-ml

ML engine serving seems to not be working as intended

Posted by 最后都变了 on 2019-12-24 20:17:22
Question: While using the following code and running gcloud ml-engine local predict, I get: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype string and shape [?] [[Node: Placeholder = Placeholder[dtype=DT_STRING, shape=[?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]] (Error code: 2) tf_files_path = './tf' # os.makedirs(tf_files_path) # temp dir estimator = tf.keras.estimator.model_to_estimator(keras_model_path="model_data

How do I use Google Cloud Machine Learning Engine Client Library for Java for prediction

Posted by 我的梦境 on 2019-12-24 19:14:57
Question: I have a working ML model uploaded on Google Cloud Platform (tested via Python and gcloud ml-engine predict). I am currently trying to get predictions from Android using this library: Client Library for Java with this javadoc. I use a service account for access, and Android code in an AsyncTask that looks like this: JsonFactory jsonFactory = JacksonFactory.getDefaultInstance(); HttpTransport httpTransport = new com.google.api.client.http.javanet.NetHttpTransport(); GoogleCredential credential =
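Whatever client you use, the Java library above ultimately issues the same projects.predict REST call. A minimal sketch of the URL and JSON body it constructs, with hypothetical project, model, and input-tensor names (replace them with your own):

```python
import json

# Assumptions: "my-project" and "my_model" stand in for a real GCP project ID
# and deployed model name; "input" stands in for the model's input tensor name.
project = "my-project"
model = "my_model"

# The online-prediction endpoint for the default model version.
url = "https://ml.googleapis.com/v1/projects/%s/models/%s:predict" % (project, model)

# The request body wraps the inputs in an "instances" list, one dict per example.
body = json.dumps({"instances": [{"input": [1.0, 2.0, 3.0]}]})

print(url)
print(body)
```

The Java client adds OAuth credentials on top of this; the URL and body shape are the same.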

ML Engine Online Prediction - Unexpected tensor name: values

Posted by 吃可爱长大的小学妹 on 2019-12-24 11:15:43
Question: I get the following error when trying to make an online prediction against my ML Engine model: the key "values" is not correct (error shown in a screenshot in the original post). I have already tested with raw image data: {"image_bytes":{"b64": base64.b64encode(jpeg_data)}} and with the data converted to a numpy array. Currently I have the following code: from googleapiclient import discovery import base64 import os from PIL import Image import json import numpy as np os.environ["GOOGLE_APPLICATION
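A "Unexpected tensor name" error usually means the key in each instance does not match the input name in the model's serving signature. A minimal sketch of the payload shape, assuming the serving input tensor is named "image_bytes" (that name must come from your own signature):

```python
import base64
import json

# Stand-in for real JPEG bytes read from a file.
jpeg_data = b"\xff\xd8\xff\xe0fake-jpeg-bytes"

# The nested {"b64": ...} form tells the service to base64-decode the value
# before feeding it to the string tensor; the outer key ("image_bytes" here)
# must exactly match the input name in the serving signature.
instance = {"image_bytes": {"b64": base64.b64encode(jpeg_data).decode("utf-8")}}
body = json.dumps({"instances": [instance]})
print(body)
```

If the signature's input is named "values" (or anything else), the outer key has to be that name instead.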

Cloud ML Engine batch predictions - How to simply match returned predictions with input data?

Posted by 我是研究僧i on 2019-12-24 07:36:38
Question: According to the ML Engine documentation, an instance key is required to match the returned predictions with the input data. For simplicity, I would like to use a DNNClassifier, but apparently canned estimators don't support instance keys yet (only custom or TensorFlow core estimators do). So I looked at the Census code examples for custom/TensorFlow-core estimators, but they look quite complex for what I am trying to achieve. I would prefer using a similar approach as described in
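The reason keys are needed is that batch prediction output files are not guaranteed to preserve input order. Once the model passes a key through, the join itself is simple. A pure-Python sketch (field names "key", "age", and "predicted_class" are illustrative assumptions):

```python
import json

# Inputs as submitted for batch prediction, each with a unique key.
inputs = [
    {"key": 1, "age": 34},
    {"key": 2, "age": 51},
]

# Stand-in for the prediction output file: JSON lines, order shuffled,
# with the key echoed back by the model.
output_lines = [
    '{"key": 2, "predicted_class": "yes"}',
    '{"key": 1, "predicted_class": "no"}',
]

# Index predictions by key, then join each input with its prediction.
predictions = {rec["key"]: rec for rec in map(json.loads, output_lines)}
joined = [{**inp, **predictions[inp["key"]]} for inp in inputs]
print(joined)
```

The harder part, as the question notes, is getting a canned estimator to echo the key; the join step above is the same regardless of how that is achieved.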

Google Cloud ML Engine Error 429 Out of Memory

Posted by 杀马特。学长 韩版系。学妹 on 2019-12-24 01:04:52
Question: I uploaded my model to ML Engine and when trying to make a prediction I receive the following error: ERROR: (gcloud.ml-engine.predict) HTTP request failed. Response: { "error": { "code": 429, "message": "Prediction server is out of memory, possibly because model size is too big.", "status": "RESOURCE_EXHAUSTED" } } My model size is 151.1 MB. I have already taken all the actions suggested on the Google Cloud website, such as quantizing the model. Is there a possible solution, or any other thing I could do to make it
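Note that the on-disk SavedModel size is only a lower bound on serving memory (the runtime also allocates buffers for the loaded graph), which is why a model well under the documented size limit can still hit RESOURCE_EXHAUSTED. A small sketch for checking the directory size before deploying; the limit value is an assumption based on the documentation of the time (reportedly 250 MB by default):

```python
import os

ASSUMED_SIZE_LIMIT_MB = 250.0  # assumption: default online-prediction limit

def dir_size_mb(path):
    """Total size in MB of all files under path (e.g. a SavedModel directory)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / (1024.0 * 1024.0)

# Example: check the current directory in place of a real SavedModel export.
size = dir_size_mb(".")
print("%.1f MB (limit assumed %.0f MB)" % (size, ASSUMED_SIZE_LIMIT_MB))
```

Passing this check does not guarantee the model serves; it only rules out the most common cause of the 429.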

Unable to deploy a Cloud ML model

Posted by 拟墨画扇 on 2019-12-23 18:34:27
Question: When I try to deploy my trained model to Google Cloud ML, I get the following error: Create Version failed. Model validation failed: Model metagraph does not have inputs collection. What does this mean and how do I get around it? Answer 1: The TensorFlow model deployed on Cloud ML did not have a collection named "inputs". This collection should name all the input tensors for your graph. Similarly, a collection named "outputs" is required to name the output tensors for your graph. Assuming your graph

Why does online prediction fail with “Unable to get element from the feed as bytes”?

Posted by 半城伤御伤魂 on 2019-12-23 16:17:05
Question: Online prediction is failing with "Unable to get element from the feed as bytes". What does this mean and how can I fix it? I'm generating predictions using the following code: request_data = [{ 'examples' : {'pickup_longitude': -73.885262, 'pickup_latitude': 40.773008, 'dropoff_longitude': -73.987232, 'dropoff_latitude': 40.732403, 'fare_amount': 0, 'passenger_count': 2}}] parent = 'projects/%s/models/%s/versions/%s' % ('some project', 'taxifare', 'v1') response = api.projects().predict
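This error class typically points at a mismatch between the instance shape and what the serving input function expects: a signature that takes serialized tf.Example bytes cannot consume a nested dict like the one above. A sketch of the flat payload shape for a signature that takes the features directly (feature names taken from the question; whether this or serialized-Example bytes is correct depends on the model's serving input function):

```python
import json

# Each instance is a flat dict of feature name -> value, not nested under
# an 'examples' key; the predict API then wraps the list in "instances".
request_data = [{
    "pickup_longitude": -73.885262,
    "pickup_latitude": 40.773008,
    "dropoff_longitude": -73.987232,
    "dropoff_latitude": 40.732403,
    "passenger_count": 2,
}]
body = json.dumps({"instances": request_data})
print(body)
```

If the signature instead expects serialized tf.Example protos, each instance must be a single base64-encoded bytes value rather than a dict.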

Distributed Tensorflow device placement in Google Cloud ML engine

Posted by 瘦欲@ on 2019-12-23 13:16:33
Question: I am running a large distributed TensorFlow model in Google Cloud ML Engine and want to use machines with GPUs. My graph consists of two main parts: the input/data-reader function and the computation part. I wish to place the variables in the PS task, the input part on the CPU, and the computation part on the GPU. The function tf.train.replica_device_setter automatically places variables on the PS server. This is what my code looks like: with tf.device(tf.train.replica_device_setter(cluster

Prediction failed: contents must be scalar

Posted by 老子叫甜甜 on 2019-12-23 12:27:22
Question: I have successfully trained, exported and uploaded my 'retrained_graph.pb' to ML Engine. My export script is as follows: import tensorflow as tf from tensorflow.python.saved_model import signature_constants from tensorflow.python.saved_model import tag_constants from tensorflow.python.saved_model import builder as saved_model_builder input_graph = 'retrained_graph.pb' saved_model_dir = 'my_model' with tf.Graph().as_default() as graph: # Read in the export graph with tf.gfile.FastGFile(input
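"Contents must be scalar" usually means a non-scalar string reached an op such as tf.image.decode_jpeg, which accepts exactly one serialized image: either the export graph wires the whole [None]-shaped batch placeholder into the decode op (commonly fixed with tf.map_fn over the batch), or an instance in the request carries a list where a single string is expected. A sketch of the two payload shapes, with the tensor name "image_bytes" as an assumption:

```python
import base64
import json

img = base64.b64encode(b"fake-image-bytes").decode("utf-8")

# Wrong: a list under "b64" yields a rank-1 tensor, not the scalar
# string that decode_jpeg requires.
wrong = {"instances": [{"image_bytes": {"b64": [img]}}]}

# Right: one scalar base64 string per instance; batching happens across
# instances, not inside one.
right = {"instances": [{"image_bytes": {"b64": img}}]}
print(json.dumps(right))
```

If the payload is already per-instance scalar and the error persists, the batch-into-decode wiring in the export graph is the likelier culprit.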