Question
I have a Python script and a pre-trained model saved as model.pkl in the same directory as the code. I now need to run or deploy this on AWS SageMaker, but I haven't found a solution, since SageMaker only supports the two commands train and serve (for training and deploying respectively).
Currently I run the program with "python filename.py" and it works; I want the same thing to run on AWS SageMaker.
Any solution?
I tried uploading the model to S3 and fetching it at deploy time, but I don't know whether that's the right approach.
Answer 1:
If you have a pretrained model and a file filename.py that you want to run on SageMaker Endpoints, you just need to package them up as a Docker image. That gives you a model which you can then deploy to an Endpoint and make invocations against.
To do this, I'm following this guide from the AWS documentation on using your own inference code.
The steps will be:
- Create the model code
- Create a Docker image out of the code
- Create our Endpoint with this image
Step 1: Create the model code
Let's take this simple Flask app as an example. Save it as model.py, since that's the file the Dockerfile below will run:
from flask import Flask, request

app = Flask(__name__)

@app.route('/ping')
def ping():
    # SageMaker calls this for health checks; any 200 response will do
    return 'ok'

@app.route('/invocations', methods=['POST'])  # SageMaker sends POST requests
def invoke():
    return 'should do inference with your model here'

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=8080)
Here's our requirements.txt:
Flask==0.10.1
Step 2: Create the Docker image
We need a Dockerfile to build our image. Here's the one I used:
Dockerfile:
FROM ubuntu:16.04
RUN apt-get update -y && apt-get install -y python-pip python-dev
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
RUN pip install -r requirements.txt
COPY . /app
EXPOSE 8080
ENTRYPOINT ["python"]
CMD ["model.py"]
We can build the image by running: docker build -t simple-model:latest .
This will create the image and now we can test it by running it:
docker run -d -p 8080:8080 simple-model
If it is running, you should be able to curl any of the endpoints:
curl localhost:8080/ping
> ok
Now we need to publish it to ECR, since SageMaker pulls the model image from ECR. I'm following this guide from AWS.
Grab the image ID by running docker images and use it in the commands below. For convenience I'm using us-west-2; replace that with your chosen region:
docker tag <image id> <aws account id>.dkr.ecr.us-west-2.amazonaws.com/simple-model
Now we should push it to ECR:
docker push <aws account id>.dkr.ecr.us-west-2.amazonaws.com/simple-model
Step 3: Create Endpoint
Now we can create a model with this image. First, you need a SageMaker Execution role. This will be used to access your image and other resources. You can set that up here on this AWS doc page.
Second, you need to have the AWS CLI set up.
Let's get started.
Let's create the model first. This will point to your ECR image you created in the last step. Substitute the role name you created above in this command:
aws sagemaker create-model --model-name "SimpleModel" --execution-role-arn "arn:aws:iam::<aws account id>:role/<role name>" --primary-container "{
\"ContainerHostname\": \"ModelHostname\",
\"Image\": \"<aws account id>.dkr.ecr.us-west-2.amazonaws.com/simple-model:latest\"
}"
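The same call can be made from Python via boto3. Here's a sketch where the client is passed in (in real use, sm_client = boto3.client("sagemaker", region_name=region)); the account ID, role name, and the create_simple_model function name are placeholders of mine, not part of the AWS API:

```python
def create_simple_model(sm_client, account_id, role_name, region="us-west-2"):
    """boto3 equivalent of the create-model CLI call above (sketch).

    sm_client is a boto3 SageMaker client; names match the CLI example.
    """
    return sm_client.create_model(
        ModelName="SimpleModel",
        ExecutionRoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
        PrimaryContainer={
            "ContainerHostname": "ModelHostname",
            "Image": f"{account_id}.dkr.ecr.{region}.amazonaws.com/simple-model:latest",
        },
    )
```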
That'll create your model. Now we need to create an EndpointConfig, which tells your SageMaker Endpoint how it should be configured:
aws sagemaker create-endpoint-config --endpoint-config-name "SimpleConfig" --production-variants "[
{
\"VariantName\" : \"SimpleVariant\",
\"ModelName\" : \"SimpleModel\",
\"InitialInstanceCount\" : 1,
\"InstanceType\" : \"ml.t2.medium\"
}
]"
And now finally, we can create our Endpoint using that config:
aws sagemaker create-endpoint --endpoint-name "SimpleEndpoint" --endpoint-config-name "SimpleConfig"
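These two CLI calls also have straightforward boto3 equivalents; here's a sketch that creates the config, creates the endpoint, and then blocks on boto3's endpoint_in_service waiter (the function name is mine; in real use sm_client = boto3.client("sagemaker")):

```python
def create_simple_endpoint(sm_client):
    """Sketch of create-endpoint-config + create-endpoint via boto3."""
    sm_client.create_endpoint_config(
        EndpointConfigName="SimpleConfig",
        ProductionVariants=[{
            "VariantName": "SimpleVariant",
            "ModelName": "SimpleModel",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.t2.medium",
        }],
    )
    response = sm_client.create_endpoint(
        EndpointName="SimpleEndpoint",
        EndpointConfigName="SimpleConfig",
    )
    # Block until the endpoint reaches InService (same as polling describe-endpoint)
    sm_client.get_waiter("endpoint_in_service").wait(EndpointName="SimpleEndpoint")
    return response
```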
If all that works, wait until aws sagemaker describe-endpoint --endpoint-name SimpleEndpoint says it is InService.
Once it is, we can now call invocations against it:
aws sagemaker-runtime invoke-endpoint --endpoint-name SimpleEndpoint --body "empty" output.json
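You can make the same invocation from Python with the SageMaker runtime client; a sketch (the function name and the application/json content type are my choices — in real use smrt_client = boto3.client("sagemaker-runtime")):

```python
def invoke_simple_endpoint(smrt_client, payload):
    """Sketch of invoke-endpoint via the boto3 runtime client.

    payload is the raw request body (bytes or str) passed to /invocations.
    """
    response = smrt_client.invoke_endpoint(
        EndpointName="SimpleEndpoint",
        ContentType="application/json",
        Body=payload,
    )
    # The response Body is a streaming object; read it to get the bytes
    return response["Body"].read()
```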
Conclusion
If that all worked, you'll have your own endpoint. The next step is to customize that Python script to do inference with your own model. SageMaker can also pull your model artifacts in automatically, so you don't have to bake them into your container image. See the documentation here.
Hopefully that helps!
Source: https://stackoverflow.com/questions/58300841/how-to-run-a-python-file-inside-a-aws-sagemaker-using-dockerfile