google-cloud-ml

Solved: "No space left on device" in Google Cloud ML BASIC tier. What is the disk size of each tier in Cloud ML?

Submitted by 怎甘沉沦 on 2020-01-16 16:30:21
Question: While training my model on more than 20 GB of data in the BASIC tier of Cloud ML, my jobs fail because the Cloud ML machines run out of disk space, and I cannot find any disk-size details in the Cloud ML documentation [https://cloud.google.com/ml-engine/docs/tensorflow/machine-types]. I need help choosing a tier for my training jobs; the utilisation shown in the Job Details graphs is also very low.

{ insertId: "1klpt2" jsonPayload: { created: 1554434546.3576794
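One way around the BASIC tier's limits (a sketch, not from the original thread) is to submit the job with a CUSTOM scale tier and pick a larger machine type from the machine-types page linked above. All names below are placeholders:

```python
# Training-job spec with an explicit scale tier, for submission through the
# ML Engine REST API (projects.jobs.create) or an equivalent --config file.
# Job id, bucket, and module names are placeholders; the machine-type name
# comes from the machine-types documentation.
job_spec = {
    "jobId": "my_training_job",
    "trainingInput": {
        "scaleTier": "CUSTOM",            # instead of the default BASIC
        "masterType": "complex_model_m",  # a machine with more resources
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
    },
}
```

Whether a given machine type has enough local disk for a 20 GB dataset still has to be verified empirically, since the disk sizes are not spelled out in the linked docs.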

ml-engine - no module named trainer

Submitted by ♀尐吖头ヾ on 2020-01-16 15:47:27
Question: I have the following directory layout:

ml_engine/
    setup.py
    trainer/
        __init__.py
        task.py
        model.py

I have a custom model built with sklearn mixins, which lets me use it as a sklearn model. However, when I try to upload the model with the command below, I get the "no module named trainer" error:

gcloud alpha ml-engine versions create m_0_03 \
    --model model_9281830085_204245556_prophet \
    --origin gs://BUCKET/9281830085_204245556/2018-08-23T13:37:00.000218 \
    --runtime-version=1.9 \
    --framework SCIKIT_LEARN \
    --python-version=3.5 \
    --package-uris=[

Understanding inputs for google ai platform custom prediction routines

Submitted by ◇◆丶佛笑我妖孽 on 2020-01-15 03:40:07
Question: I am following the documentation on custom prediction routines and I am trying to understand what the inputs to a custom prediction routine look like. The code that sends the input looks like this:

instances = [
    [6.7, 3.1, 4.7, 1.5],
    [4.6, 3.1, 1.5, 0.2],
]
service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(project, model)
if version is not None:
    name += '/versions/{}'.format(version)
response = service.projects().predict(
    name=name,
    body={'instances': instances}
)
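On the receiving side, a custom prediction routine implements a Predictor class, and the list sent as body={'instances': ...} is handed to its predict() method unchanged. A toy sketch of that interface; the "model" here is just a stand-in that sums each feature vector:

```python
class MyPredictor(object):
    """Toy custom prediction routine. `instances` arrives exactly as the
    list sent in the request body; the model here is a stand-in."""

    def __init__(self, model):
        self._model = model

    def predict(self, instances, **kwargs):
        # instances is a plain Python list, one entry per instance sent.
        return [self._model(row) for row in instances]

    @classmethod
    def from_path(cls, model_dir):
        # A real routine would deserialize a trained model from model_dir.
        return cls(model=sum)


predictor = MyPredictor.from_path("/tmp/model")  # path is illustrative
predictions = predictor.predict([[6.7, 3.1, 4.7, 1.5], [4.6, 3.1, 1.5, 0.2]])
```

So the nesting simply means "two instances of four features each"; each inner list becomes one row handed to the predictor.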

Error message while submitting Google Cloud ML Training job for Tensorflow Object Detection

Submitted by 大兔子大兔子 on 2020-01-14 10:34:14
Question: I am trying to submit a Google Cloud ML training job for a TensorFlow Object Detection task, following the official guideline. This is the job I am submitting:

export CONFIG=trainer/cloud.yaml
export TRAIN_DIR=kt-1000/training
export PIPELINE_CONFIG=kt-1000/training/ssd_mobilenet_v1_pets.config

gcloud ml-engine jobs submit training object_detection_`date +%s` \
    --job-dir=gs://${TRAIN_DIR} \
    --packages dist/object_detection-0.1.tar.gz,slim/dist/slim-0.1.tar.gz \
    --module-name object

Exception during xgboost prediction: can not initialize DMatrix from DMatrix

Submitted by 半城伤御伤魂 on 2020-01-13 16:27:19
Question: I trained an xgboost model in Python using the scikit-learn Python API and serialized it with the pickle library. I uploaded the model to ML Engine, but when I try to run online predictions I get the following exception:

Prediction failed: Exception during xgboost prediction: can not initialize DMatrix from DMatrix

An example of the JSON I'm using for prediction:

{ "instances": [ [ 24.90625, 21.6435643564356, 20.3762376237624, 24.3679245283019, 30.2075471698113, 28.0947368421053,

How do I use pandas.read_csv on Google Cloud ML?

Submitted by 风流意气都作罢 on 2020-01-12 10:53:08
Question: I'm trying to deploy a training script on Google Cloud ML. Naturally, I've uploaded my datasets (CSV files) to a bucket on GCS. I used to import my data with read_csv from pandas, but it doesn't seem to work with a GCS path. How should I proceed (I would like to keep using pandas)?

import pandas as pd
data = pd.read_csv("gs://bucket/folder/file.csv")

Output:

ERROR 2018-02-01 18:43:34 +0100 master-replica-0 IOError: File gs://bucket/folder/file.csv does not exist

Answer 1: You will need to use
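The answer is truncated, but the usual approaches are TensorFlow's file_io wrapper (preinstalled on Cloud ML workers) or installing gcsfs, which lets newer pandas versions read gs:// URLs directly. A sketch with the TensorFlow import deferred so that local paths need only pandas:

```python
import pandas as pd


def read_csv_any(path):
    """Read a CSV from a local path or a gs:// GCS path.

    For gs:// paths this uses TensorFlow's file_io, which understands GCS.
    Alternatively, with gcsfs installed, pd.read_csv("gs://...") works as-is
    in newer pandas versions.
    """
    if path.startswith("gs://"):
        from tensorflow.python.lib.io import file_io  # only needed for GCS
        with file_io.FileIO(path, mode="r") as f:
            return pd.read_csv(f)
    return pd.read_csv(path)
```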

gcloud components update permission denied

Submitted by 随声附和 on 2020-01-11 08:14:05
Question: All of a sudden I started getting "Permission Denied" errors when trying to run any gcloud command, such as gcloud components update. The problem goes away if I run sudo gcloud components update, but it's not clear to me why sudo is suddenly required. I had actually been trying to run a GCMLE experiment that hit the same error/warning, so I tried updating components and ran into this issue. I have been travelling for a couple of days and did not make any changes since these

gcloud ml-engine returns error on large files

Submitted by 浪尽此生 on 2020-01-04 07:18:12
Question: I have a trained model that takes a somewhat large input, which I generally supply as a numpy array of shape (1, 473, 473, 3). Serialized to JSON this comes to about a 9.2 MB file, and even converting it to a base64 encoding inside the JSON still leaves a rather large input. ml-engine predict rejects my request when I send the JSON file, with the following error:

(gcloud.ml-engine.predict) HTTP request failed. Response: { "error": { "code": 400, "message": "Request payload size exceeds
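The truncated error is the online-prediction payload quota, documented at the time as roughly 1.5 MB per request. Some stdlib arithmetic (sizes computed locally, not measured against the service) shows why a float array of this shape cannot fit, and why base64 only helps so much:

```python
import base64
import json

# Raw size of a (1, 473, 473, 3) float32 array: 4 bytes per value.
raw_float_bytes = 1 * 473 * 473 * 3 * 4      # 2,684,748 bytes (~2.7 MB)

# base64 inflates binary data by a factor of 4/3, so even the raw float
# bytes would encode to roughly 3.6 MB of text.
demo = base64.b64encode(b"\x00" * 1000)      # 1000 bytes -> 1336 characters

# What counts against the quota is the JSON body actually sent on the wire:
body = {"instances": [{"b64": base64.b64encode(b"\x00" * 100).decode()}]}
wire_size = len(json.dumps(body))
```

If the input is image-like, sending uint8 pixels (one byte per value, ~671 KB raw for this shape) instead of floats, or switching to batch prediction with inputs read from GCS, are the usual escape hatches.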

Using CloudML prediction API in production without gcloud

Submitted by 廉价感情. on 2020-01-04 02:09:10
Question: What is the best way to use the CloudML prediction API in a production service? I've seen https://cloud.google.com/ml/docs/quickstarts/prediction, but it relies on the gcloud tool. I'm looking for a solution that doesn't depend on having gcloud installed and initialized on the machine making the request. Ideally it would work on GCP, AWS, and possibly other clouds. Thanks.

Answer 1: I'll show you how to authenticate your production environment to use CloudML online prediction. The CloudML
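A sketch of the direction that answer takes (this is not the answer's actual code): the google-api-python-client library with Application Default Credentials replaces gcloud entirely, and a service-account JSON key pointed at by the GOOGLE_APPLICATION_CREDENTIALS environment variable makes the same code work from AWS or any other host. Names below are placeholders:

```python
def model_resource_name(project, model, version=None):
    """Build the REST resource name the CloudML predict API expects."""
    name = "projects/{}/models/{}".format(project, model)
    if version:
        name += "/versions/{}".format(version)
    return name


def online_predict(project, model, instances, version=None):
    """Call CloudML online prediction without the gcloud CLI.

    Requires: pip install google-api-python-client google-auth
    Credentials come from Application Default Credentials, e.g. a
    service-account key file named by GOOGLE_APPLICATION_CREDENTIALS.
    """
    from googleapiclient import discovery  # deferred; needs network access
    service = discovery.build("ml", "v1")
    response = service.projects().predict(
        name=model_resource_name(project, model, version),
        body={"instances": instances},
    ).execute()
    if "error" in response:
        raise RuntimeError(response["error"])
    return response["predictions"]
```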

Deploying Keras model to Google Cloud ML for serving predictions

Submitted by 江枫思渺然 on 2020-01-03 05:33:32
Question: I need to understand how to deploy models on Google Cloud ML. My first task is to deploy a very simple text classifier on the service. I do it in the following steps (this could perhaps be shortened to fewer steps; if so, feel free to let me know):

1. Define the model using Keras and export it to YAML
2. Load the YAML and export it as a TensorFlow SavedModel
3. Upload the model to Google Cloud Storage
4. Deploy the model from storage to Google Cloud ML
5. Set the uploaded model version as the default on the models website
6. Run model
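Steps 1-2 above (YAML back to a Keras model, then out to a SavedModel) can be sketched with the TF 1.x builder API; every path and signature name here is illustrative:

```python
def export_keras_model(yaml_path, weights_path, export_dir):
    """Rebuild a Keras model from its YAML definition plus saved weights and
    export it as a TensorFlow SavedModel for Cloud ML Engine serving.
    Requires: pip install "tensorflow<2" keras
    """
    import tensorflow as tf
    from keras import backend as K
    from keras.models import model_from_yaml

    with open(yaml_path) as f:
        model = model_from_yaml(f.read())
    model.load_weights(weights_path)

    # Wrap the Keras input/output tensors in a serving signature.
    signature = tf.saved_model.signature_def_utils.predict_signature_def(
        inputs={"inputs": model.input}, outputs={"outputs": model.output})

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    builder.add_meta_graph_and_variables(
        sess=K.get_session(),
        tags=[tf.saved_model.tag_constants.SERVING],
        signature_def_map={
            tf.saved_model.signature_constants
            .DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature,
        })
    builder.save()
    return export_dir
```

The resulting export_dir is what gets copied to Cloud Storage in step 3 (e.g. gsutil cp -r) and referenced when creating the model version in step 4.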