google-cloud-ml

Upgrade to tf.dataset not working properly when parsing csv

Submitted by 我是研究僧i on 2019-12-23 08:47:15
Question: I have a GCMLE experiment and I am trying to upgrade my input_fn to use the new tf.data functionality. I have created the following input_fn based on this sample:

def input_fn(...):
    dataset = tf.data.Dataset.list_files(filenames).shuffle(num_shards)  # shuffle the list of input files
    dataset = dataset.interleave(  # mix together records from cycle_length shards
        lambda filename: tf.data.TextLineDataset(filename).skip(1).map(lambda row: parse_csv(row, hparams)),
        cycle_length=5)
    if …
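For reference, a minimal, self-contained sketch of this style of input_fn against the TF 1.x tf.data API. The parse_csv body, the column names, and the batch_size/num_epochs hyperparameters below are illustrative assumptions, not the questioner's actual code:

import tensorflow as tf

def parse_csv(row, hparams):
    # Hypothetical CSV parser: adjust record_defaults and column names to your schema.
    columns = tf.decode_csv(row, record_defaults=[[0.0], [0.0], [0]])
    features = dict(zip(['f1', 'f2'], columns[:-1]))
    label = columns[-1]
    return features, label

def input_fn(filenames, hparams, num_shards=5):
    # Shuffle the list of input files, then interleave records from several shards.
    dataset = tf.data.Dataset.list_files(filenames).shuffle(num_shards)
    dataset = dataset.interleave(
        lambda filename: tf.data.TextLineDataset(filename)
            .skip(1)  # drop the CSV header of each shard
            .map(lambda row: parse_csv(row, hparams)),
        cycle_length=5)
    dataset = dataset.shuffle(buffer_size=10000)
    dataset = dataset.repeat(hparams.num_epochs)
    dataset = dataset.batch(hparams.batch_size)
    features, labels = dataset.make_one_shot_iterator().get_next()
    return features, labels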

No module named trainer

Submitted by 人盡茶涼 on 2019-12-23 08:06:20
Question: I have a very simple trainer that follows the sample directory structure:

/dist
    __init__.py
    setup.py
    /trainer
        __init__.py
        task.py

From the /dist directory, it runs fine locally:

$ gcloud ml-engine local train --package-path=trainer --module-name=trainer.task

Now, when trying to deploy it from the /dist directory with this command:

$ gcloud ml-engine jobs submit training testA --package-path=trainer --module-name=trainer.task --staging-bucket=$JOB_DIR --region us-central1

it gives me an error …
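A frequent cause of "No module named trainer" on the service is that the trainer package never makes it into the staged source distribution. A minimal setup.py sketch for this layout (the package name, version, and description are placeholder assumptions):

from setuptools import find_packages
from setuptools import setup

setup(
    name='trainer',
    version='0.1',
    packages=find_packages(),  # picks up trainer/ as long as trainer/__init__.py exists
    include_package_data=True,
    description='Cloud ML Engine training package')

Submitting from the directory that contains setup.py (here, /dist) is what lets gcloud find it when building the staged package.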

Google cloudml Always Gives Me The Same Results

Submitted by 为君一笑 on 2019-12-22 18:10:14
Question: I'm working on machine learning and I would like to use the Google Cloud ML service. At the moment, I have trained my model with TensorFlow's retrain.py code (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py#L103) and I have exported the results for Cloud ML (export and export.meta files). However, when I try to make a prediction on new data with the command (https://cloud.google.com/ml/reference/commandline/predict) gcloud beta ml predict, it …
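For context, a hedged sketch of how a prediction instance for an image model is typically prepared for the Cloud ML online prediction service. The input alias image_bytes, the file names, and the model name are illustrative assumptions that depend on how the graph was exported:

import base64
import json

# Binary inputs are sent using the {"b64": ...} encoding expected by the prediction service.
with open('new_image.jpg', 'rb') as f:
    encoded = base64.b64encode(f.read()).decode('utf-8')

# One JSON instance per line.
with open('instances.json', 'w') as f:
    f.write(json.dumps({'image_bytes': {'b64': encoded}}) + '\n')

# Then, roughly: gcloud beta ml predict --model=my_model --json-instances=instances.json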

Google Cloud ML Tensorflow Version

Submitted by 旧街凉风 on 2019-12-22 05:16:25
Question: The docs for setting up Google Cloud ML suggest installing TensorFlow version r0.11. I've observed that TensorFlow functions newly available in r0.12 raise exceptions when run on Cloud ML. Is there a timeline for Cloud ML supporting r0.12? Will switching between r0.11 and r0.12 be optional or mandatory?
Answer 1: Yes, you can specify --runtime-version=0.12 to get a 0.12 build. This is a new feature and is documented at https://cloud.google.com/ml/docs/concepts/runtime-version-list Note, however, …
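For example, the flag is passed at job submission time; everything in this command other than --runtime-version is a placeholder borrowed from the other questions on this page:

$ gcloud ml-engine jobs submit training my_job --runtime-version=0.12 --package-path=trainer --module-name=trainer.task --staging-bucket=gs://my-bucket --region us-central1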

Using Training TFRecords that are stored on Google Cloud

Submitted by 前提是你 on 2019-12-21 07:14:51
Question: My goal is to use training data (format: TFRecords) stored on Google Cloud Storage when I run my TensorFlow training app locally. (Why locally? I am testing before I turn it into a training package for Cloud ML.) Based on this thread, I shouldn't have to do anything, since the underlying TensorFlow APIs should be able to read a gs:// URL. However, that's not the case, and the errors I see are of the form:

2017-06-06 15:38:55.589068: I tensorflow/core/platform/cloud/retrying_utils.cc:77] The …
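A minimal sketch of reading TFRecords directly from GCS in a local run. The bucket and path are placeholders, and this assumes Application Default Credentials are available (e.g. via gcloud auth application-default login or the GOOGLE_APPLICATION_CREDENTIALS environment variable), since TensorFlow's gs:// filesystem uses them:

import tensorflow as tf

# Glob the shards on GCS and read them with the usual TFRecord pipeline.
filenames = tf.gfile.Glob('gs://my-bucket/data/train-*.tfrecord')
dataset = tf.data.TFRecordDataset(filenames)

iterator = dataset.make_one_shot_iterator()
record = iterator.get_next()
with tf.Session() as sess:
    print(sess.run(record))  # first serialized tf.Example read straight from gs://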

How can I get the Cloud ML service account programmatically in Python?

Submitted by 余生颓废 on 2019-12-20 02:28:17
Question: The Cloud ML instructions show how to obtain the service account using shell commands. How can I do this programmatically in Python, e.g. in Datalab?
Answer 1: You can use Google Cloud's Python client libraries to issue the getConfig request.

from googleapiclient import discovery
from googleapiclient import http
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
ml_client = discovery.build(
    'ml', 'v1beta1', requestBuilder=http.HttpRequest, …
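Continuing that idea, an untested sketch of the full call; PROJECT_ID is a placeholder, and the response field names follow the projects.getConfig documentation:

from googleapiclient import discovery
from googleapiclient import http
from oauth2client.client import GoogleCredentials

credentials = GoogleCredentials.get_application_default()
ml_client = discovery.build(
    'ml', 'v1beta1', requestBuilder=http.HttpRequest,
    credentials=credentials)

# projects.getConfig returns the Cloud ML service account for the project.
config = ml_client.projects().getConfig(name='projects/PROJECT_ID').execute()
print(config['serviceAccount'])
print(config['serviceAccountProject'])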

How do I convert a CloudML Alpha model to a SavedModel?

Submitted by 南笙酒味 on 2019-12-19 10:49:08
Question: In the alpha release of CloudML's online prediction service, the format for exporting a model was:

inputs = {"x": x, "y_bytes": y}
g.add_to_collection("inputs", json.dumps(inputs))
outputs = {"a": a, "b_bytes": b}
g.add_to_collection("outputs", json.dumps(outputs))

I would like to convert this to a SavedModel without retraining my model. How can I do that?
Answer 1: We can convert this to a SavedModel by importing the old model, creating the Signatures, and re-exporting it. This code is untested, …
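In that spirit, an untested sketch of the conversion under the TF 1.x SavedModel API. It assumes the "inputs"/"outputs" collections hold JSON maps from alias to tensor name (e.g. "x:0"), and that export / export.meta and the output directory are placeholders:

import json
import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # Restore the old alpha-format export into a fresh graph.
    saver = tf.train.import_meta_graph('export.meta')
    with tf.Session() as sess:
        saver.restore(sess, 'export')

        # The collections were written with json.dumps({alias: tensor_name}).
        inputs = json.loads(graph.get_collection('inputs')[0])
        outputs = json.loads(graph.get_collection('outputs')[0])
        input_info = {k: tf.saved_model.utils.build_tensor_info(graph.get_tensor_by_name(v))
                      for k, v in inputs.items()}
        output_info = {k: tf.saved_model.utils.build_tensor_info(graph.get_tensor_by_name(v))
                       for k, v in outputs.items()}

        # Build a serving signature and re-export as a SavedModel.
        signature = tf.saved_model.signature_def_utils.build_signature_def(
            inputs=input_info, outputs=output_info,
            method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
        builder = tf.saved_model.builder.SavedModelBuilder('converted_savedmodel')
        builder.add_meta_graph_and_variables(
            sess, [tf.saved_model.tag_constants.SERVING],
            signature_def_map={
                tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature})
        builder.save()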