amazon-sagemaker

How to use a pretrained model from s3 to predict some data?

Submitted by 送分小仙女 on 2020-08-09 05:41:05
Question: I have trained a semantic segmentation model using SageMaker, and the output has been saved to an S3 bucket. I want to load this model from S3 to predict some images in SageMaker. I know how to predict if I leave the notebook instance running after the training, as it's just an easy deploy, but that doesn't really help if I want to use an older model. I have looked at these sources and been able to come up with something myself, but it doesn't work, hence me being here: https://course.fast.ai
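One common approach (a sketch, not the asker's solution) is to re-create a deployable `Model` object directly from the `model.tar.gz` that the training job wrote to S3, re-using the built-in semantic-segmentation container image. The bucket/key below and the default output layout are assumptions; substitute your own training job's output path.

```python
def model_artifact_uri(bucket, job_name):
    """Build the default S3 path SageMaker training jobs write to
    (this layout is an assumption; check your job's OutputDataConfig)."""
    return f"s3://{bucket}/{job_name}/output/model.tar.gz"

def deploy_saved_model(model_data, instance_type="ml.p3.2xlarge"):
    """Deploy a previously trained model straight from its S3 artifact.
    sagemaker imports are deferred so this file loads without the SDK installed."""
    import sagemaker
    from sagemaker.model import Model

    sess = sagemaker.Session()
    role = sagemaker.get_execution_role()
    # Look up the built-in semantic-segmentation algorithm image for this region
    image_uri = sagemaker.image_uris.retrieve("semantic-segmentation",
                                              sess.boto_region_name)
    model = Model(image_uri=image_uri, model_data=model_data,
                  role=role, sagemaker_session=sess)
    return model.deploy(initial_instance_count=1, instance_type=instance_type)
```

Once deployed, the returned predictor can be used exactly as the one you get right after training, so no notebook instance needs to stay running between training and inference.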

Is there a way to turn on SageMaker model endpoints only when I am receiving inference requests

Submitted by 左心房为你撑大大i on 2020-07-10 10:25:50
Question: I have created a model endpoint which is InService and deployed on an ml.m4.xlarge instance. I am also using API Gateway to create a RESTful API. Questions: Is it possible to have my model endpoint InService (or on standby) only when I receive inference requests? Maybe by writing a Lambda function or something that turns off the endpoint (so that it does not keep accumulating the per-hour charges). If Q1 is possible, would this have some weird latency issues on the end users? Because it
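A sketch of the Lambda-based idea: since SageMaker bills per endpoint-hour, "standby" is usually implemented by deleting the endpoint and re-creating it from its saved endpoint configuration on demand. The event shape (`action`, `endpoint_name`, `endpoint_config_name` keys) is an assumption for illustration; boto3 is imported lazily so the handler can be unit-tested without AWS access.

```python
def stop_endpoint(endpoint_name):
    """Delete the endpoint itself (the model and endpoint config survive),
    which stops the per-hour instance charge."""
    import boto3
    boto3.client("sagemaker").delete_endpoint(EndpointName=endpoint_name)

def start_endpoint(endpoint_name, endpoint_config_name):
    """Re-create the endpoint from its saved config; it typically takes
    several minutes to become InService again."""
    import boto3
    boto3.client("sagemaker").create_endpoint(
        EndpointName=endpoint_name,
        EndpointConfigName=endpoint_config_name,
    )

def lambda_handler(event, context):
    """Toggle the endpoint based on an 'action' key (event shape assumed)."""
    action = event.get("action")
    if action == "stop":
        stop_endpoint(event["endpoint_name"])
    elif action == "start":
        start_endpoint(event["endpoint_name"], event["endpoint_config_name"])
    return {"status": action}
```

On the latency question: the re-create step is the catch. A deleted endpoint takes minutes, not milliseconds, to come back InService, so the first request after standby would either fail or wait a long time; this pattern suits scheduled or batch-style traffic rather than interactive users.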

Feature Importance for XGBoost in Sagemaker

Submitted by 三世轮回 on 2020-06-27 12:51:27
Question: I have built an XGBoost model using Amazon SageMaker, but I was unable to find anything which will help me interpret the model and validate that it has learned the right dependencies. Generally, we can see feature importance for XGBoost via the get_fscore() function in the Python API (https://xgboost.readthedocs.io/en/latest/python/python_api.html). I see nothing of that sort in the SageMaker API (https://sagemaker.readthedocs.io/en/stable/estimators.html). I know I can build my own model and then
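One workaround (a sketch under assumptions): download the training job's `model.tar.gz` from S3, extract the serialized booster, and call `get_fscore()` locally. Older built-in XGBoost containers pickled the booster to a file named `xgboost-model`; that filename and serialization format are assumptions that vary by container version, so check the archive contents first.

```python
import os
import pickle
import tarfile
import tempfile

def load_booster(local_tar_path):
    """Extract a SageMaker XGBoost model.tar.gz and load the booster.
    The 'xgboost-model' filename and pickle format are assumptions
    (newer containers may use booster.save_model instead)."""
    import xgboost  # needed so pickle can resolve the Booster class
    out_dir = tempfile.mkdtemp()
    with tarfile.open(local_tar_path) as tar:
        tar.extractall(out_dir)
    with open(os.path.join(out_dir, "xgboost-model"), "rb") as f:
        return pickle.load(f)

def top_features(fscore, k=10):
    """Sort the dict returned by booster.get_fscore() into the k most
    important features, highest split count first."""
    return sorted(fscore.items(), key=lambda kv: kv[1], reverse=True)[:k]
```

With the booster in hand, `top_features(booster.get_fscore())` gives the same interpretability view you would get from a locally trained model.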

AWS Sagemaker - Install External Library and Make it Persist

Submitted by 自闭症网瘾萝莉.ら on 2020-06-12 04:53:52
Question: I have a SageMaker instance up and running, and I have a few libraries that I frequently use with it, but each time I restart the instance they get wiped and I have to reinstall them. Is it possible to install my libraries into one of the Anaconda environments and have the change remain? Answer 1: The supported way to do this for SageMaker notebook instances is with Lifecycle Configurations. You can create an onStart lifecycle hook that can install the required packages into the respective Conda
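A sketch of registering such an OnStart hook via boto3 (the package names, config name, and the `python3` Conda environment are placeholder assumptions; the shell commands follow the common pattern of activating a Conda env as `ec2-user` before installing):

```python
import base64

# Shell commands the notebook instance runs at every start.
# The 'python3' env and package names are examples, not a prescription.
ON_START = """#!/bin/bash
set -e
sudo -u ec2-user -i <<'EOF'
source /home/ec2-user/anaconda3/bin/activate python3
pip install --quiet lightgbm shap
source /home/ec2-user/anaconda3/bin/deactivate
EOF
"""

def create_on_start_config(name="install-my-libs"):
    """Register the script as an OnStart lifecycle configuration.
    boto3 is imported lazily so this file loads without AWS access."""
    import boto3
    boto3.client("sagemaker").create_notebook_instance_lifecycle_config(
        NotebookInstanceLifecycleConfigName=name,
        OnStart=[{"Content": base64.b64encode(ON_START.encode()).decode()}],
    )
```

After creating the configuration, attach it to the notebook instance (console or `update_notebook_instance`); the packages are then reinstalled automatically on every restart instead of by hand.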

'no SavedModel bundles found!' on tensorflow_hub model deployment to AWS SageMaker

Submitted by 不问归期 on 2020-05-31 04:59:05
Question: I am attempting to deploy the universal-sentence-encoder model to an AWS SageMaker endpoint and am getting the error raise ValueError('no SavedModel bundles found!'). I have shown my code below; I have a feeling that one of my paths is incorrect.

    import tensorflow as tf
    import tensorflow_hub as hub
    import numpy as np
    from sagemaker import get_execution_role
    from sagemaker.tensorflow.serving import Model

    def tfhub_to_savedmodel(model_name, uri):
        tfhub_uri = uri
        model_path = 'encoder_model/' + model
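A likely culprit worth checking (an assumption based on how TensorFlow Serving discovers models): the `model.tar.gz` handed to SageMaker must contain the SavedModel under a numbered version directory (e.g. `1/saved_model.pb`), and 'no SavedModel bundles found!' is the typical symptom when that version directory is missing. A minimal repackaging sketch:

```python
import os
import tarfile
import tempfile

def package_for_tf_serving(export_dir, version="1"):
    """Repackage a SavedModel directory as model.tar.gz whose top level is a
    numbered version directory ('1/saved_model.pb', '1/variables/', ...).
    This layout requirement is assumed from TF Serving's model discovery."""
    archive = os.path.join(tempfile.mkdtemp(), "model.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(export_dir, arcname=version)
    return archive
```

Uploading the resulting archive to S3 and pointing `sagemaker.tensorflow.serving.Model(model_data=...)` at it should then let the container find a serving bundle.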