amazon-sagemaker

Lambda function not able to invoke SageMaker endpoint

Submitted by 本秂侑毒 on 2020-01-06 04:54:05

Question:

    predict = [0.1, 0.2]
    payload = {"instances": [{"features": predict}]}
    response = linear_regressor.predict(json.dumps(payload))
    predictions = json.loads(response)
    print(json.dumps(predictions, indent=2))

The above code is able to invoke the linear-learner endpoint and gives the result below:

    {
      "predictions": [
        {
          "score": 0.13421717286109924
        }
      ]
    }

But when I try to invoke the endpoint using the Lambda function below,

    import json
    import io
    import boto3

    client = boto3.client

AWS SageMaker SKLearn entry point: allow multiple scripts

Submitted by 萝らか妹 on 2020-01-03 18:19:09

Question: I am trying to follow the tutorial here to implement a custom inference pipeline for feature preprocessing. It uses the Python SKLearn SDK to bring in a custom preprocessing pipeline from a script. For example:

    from sagemaker.sklearn.estimator import SKLearn

    script_path = 'preprocessing.py'
    sklearn_preprocessor = SKLearn(
        entry_point=script_path,
        role=role,
        train_instance_type="ml.c4.xlarge",
        sagemaker_session=sagemaker_session)

However, I can't find a way to send multiple files. The reason I
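One way to ship more than one file (a sketch; the directory layout, `src`, and the role ARN are illustrative) is the SDK's `source_dir` parameter: the whole directory is packaged and uploaded, and `entry_point` names the script inside it, so that script can import its sibling modules.

```python
# Hypothetical layout uploaded to the training container:
#   src/
#     preprocessing.py   <- entry point; can simply `import features`
#     features.py        <- helper module shipped alongside it
estimator_kwargs = dict(
    entry_point="preprocessing.py",  # path relative to source_dir
    source_dir="src",                # the whole directory is packaged
    role="arn:aws:iam::123456789012:role/ExampleRole",  # placeholder
    train_instance_type="ml.c4.xlarge",
)

# With the SageMaker SDK installed and a session configured:
# from sagemaker.sklearn.estimator import SKLearn
# sklearn_preprocessor = SKLearn(**estimator_kwargs,
#                                sagemaker_session=sagemaker_session)
```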

Errors running SageMaker Batch Transform with LDA model

Submitted by 不羁岁月 on 2019-12-25 04:22:32

Question: I've successfully trained an LDA model with SageMaker and been able to set up an inference API, but it limits how many records I can query at a time. I need to get predictions for a large file and have been trying to use Batch Transform, but I am running into a roadblock. My input data is in application/x-recordio-protobuf content type; the code is as follows:

    # Initialize the transformer object
    transformer = sagemaker.transformer.Transformer(
        base_transform_job_name='Batch
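For recordio-protobuf input, the `split_type` passed to the transform call usually has to match the record format, or the job fails or hits payload limits. A sketch of the settings that commonly matter (the model name and S3 paths are placeholders):

```python
# Transformer settings, mirroring the SageMaker Python SDK parameter names.
transformer_kwargs = dict(
    model_name="lda-model",                         # placeholder
    instance_count=1,
    instance_type="ml.m4.xlarge",
    strategy="MultiRecord",                         # batch several records per request
    output_path="s3://example-bucket/lda-output",   # placeholder
)

transform_kwargs = dict(
    data="s3://example-bucket/lda-input/data.pbr",  # placeholder
    content_type="application/x-recordio-protobuf",
    split_type="RecordIO",  # how the service splits the file into records
)

# With the SDK available and AWS credentials configured:
# transformer = sagemaker.transformer.Transformer(**transformer_kwargs)
# transformer.transform(**transform_kwargs)
```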

SageMaker Node.js SDK is not locking the API version

Submitted by 纵然是瞬间 on 2019-12-24 14:48:38

Question: I am running some code in AWS Lambda that dynamically creates SageMaker models. I am locking SageMaker's API version like so:

    const sagemaker = new AWS.SageMaker({apiVersion: '2017-07-24'});

And here's the code to create the model:

    await sagemaker.createModel({
        ExecutionRoleArn: 'xxxxxx',
        ModelName: sageMakerConfigId,
        Containers: [{ Image: ecrUrl }]
    }).promise()

This code runs just fine locally with aws-sdk 2.418.0. However, when this code is deployed to Lambda, it doesn't work due to

Invalid .lst file in SageMaker

Submitted by 落花浮王杯 on 2019-12-24 10:48:16

Question: The folder structure of my S3 bucket is:

    Bucket
      -> training-set
        -> medium
          -> img1.jpeg
          -> img2.jpeg
          -> img3.PNG

My training-set.lst file looks like this:

    1 \t 1 \t medium/img1.jpeg
    2 \t 1 \t medium/img2.jpeg
    3 \t 1 \t medium/img3.PNG

I created it using an Excel sheet.

Error:

    Training failed with the following error: ClientError: Invalid lst file: training-set.lst

    "InputDataConfig": [
        {
            "ChannelName": "train",
            "CompressionType": "None",
            "ContentType": "application/x-image",
            "DataSource": {
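A common cause of `Invalid lst file` is that a spreadsheet export writes literal `\t` text, spaces, or CRLF line endings instead of real tab characters. Writing the file programmatically sidesteps that; a small script mirroring the rows above:

```python
import csv

# .lst format: index <TAB> class label <TAB> relative image path, one per line,
# with actual tab characters and plain "\n" line endings.
rows = [
    (1, 1, "medium/img1.jpeg"),
    (2, 1, "medium/img2.jpeg"),
    (3, 1, "medium/img3.PNG"),
]

with open("training-set.lst", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t", lineterminator="\n")
    writer.writerows(rows)
```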

Is there some kind of persistent local storage in AWS SageMaker model training?

Submitted by 坚强是说给别人听的谎言 on 2019-12-24 04:05:14

Question: I did some experimentation with AWS SageMaker, and the download time of large data sets from S3 is very problematic, especially while the model is still in development and you want initial feedback relatively fast. Is there some kind of local storage, or another way to speed things up?

EDIT: I am referring to the batch training service, which allows you to submit a job as a Docker container. While this service is intended for already-validated jobs that typically run for a long time (which
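One option during development (a sketch; the image URI, role, and data path are placeholders, and `image_uri` is the parameter name in recent SDK versions) is SageMaker local mode: with `instance_type="local"`, the training container runs on the notebook machine itself and can read data from a `file://` channel, skipping the S3 download entirely.

```python
# Estimator settings for local-mode iteration (all values are placeholders).
estimator_kwargs = dict(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest",
    role="arn:aws:iam::123456789012:role/ExampleRole",
    instance_count=1,
    instance_type="local",  # run the container locally via Docker
)

# file:// inputs are mounted from local disk instead of downloaded from S3.
fit_inputs = {"train": "file:///home/ec2-user/data/train"}

# With the SageMaker SDK and Docker available:
# from sagemaker.estimator import Estimator
# Estimator(**estimator_kwargs).fit(fit_inputs)
```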

How to use the SageMaker Java API to invoke an endpoint?

Submitted by 和自甴很熟 on 2019-12-24 00:43:16

Question: I was trying to run this example: tensorflow_abalone_age_predictor_using_layers, in which abalone_predictor.predict(tensor_proto) is used to call the endpoint and make the prediction. I was trying to use the Java API AmazonSageMakerRuntime to achieve the same effect, but I don't know how to specify the body and contentType for the InvokeEndpointRequest. The documentation is not detailed about the format of the request. Any help is greatly appreciated!

Answer 1: I have not tried the specific

SageMaker and TensorFlow 2.0

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-23 07:47:56

Question: What is the best way to run TensorFlow 2.0 with AWS SageMaker? As of today (Aug 7th, 2019), AWS does not provide TensorFlow 2.0 SageMaker containers, so my understanding is that I need to build my own. What is the best base image to use? Example Dockerfile?

Answer 1: Here is an example Dockerfile that uses the underlying SageMaker Containers library (this is what is used in the official pre-built Docker images):

    FROM tensorflow/tensorflow:2.0.0b1

    RUN pip install sagemaker-containers

    # Copies the

More efficient way than JSON to send requests to a deployed TensorFlow model in SageMaker?

Submitted by 爷,独闯天下 on 2019-12-23 03:50:13

Question: I have trained a tf.estimator-based TensorFlow model in SageMaker and deployed it, and it works fine. But I can only send requests to it in JSON format. I need to send some big input tensors, and this seems very inefficient; it also quickly breaks InvokeEndpoint's 5 MB request limit. Is it possible to use a more efficient format against the TensorFlow Serving based endpoint? I tried sending a protobuf-based request:

    from sagemaker.tensorflow.serving import Model
    from sagemaker.tensorflow
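For numeric tensors, a CSV body is typically much smaller than the equivalent JSON, which helps against the 5 MB limit. A sketch of serializing a batch as CSV and invoking the endpoint with it; whether `text/csv` is accepted depends on the serving container version, so treat the content type and endpoint name as assumptions to verify.

```python
import io

def to_csv_body(rows):
    """Flatten a batch of numeric rows into a CSV request body; for large
    tensors this is usually far more compact than JSON."""
    buf = io.StringIO()
    for row in rows:
        buf.write(",".join(repr(v) for v in row) + "\n")
    return buf.getvalue()

body = to_csv_body([[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]])

# With AWS credentials configured (endpoint name is a placeholder):
# import boto3
# runtime = boto3.client("sagemaker-runtime")
# response = runtime.invoke_endpoint(
#     EndpointName="my-tf-endpoint",
#     ContentType="text/csv",
#     Body=body,
# )
# print(response["Body"].read())
```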