I am calling a SageMaker endpoint using the Java SageMaker SDK. The data that I am sending needs a little cleaning before the model can use it for prediction. How can I do that in SageMaker?
There is a new feature in SageMaker called inference pipelines. It lets you build a linear sequence of two to five containers that pre- and post-process requests, and the whole pipeline is then deployed behind a single endpoint.
https://docs.aws.amazon.com/sagemaker/latest/dg/inference-pipelines.html
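As a rough sketch (the container images, S3 locations, role ARN, and names below are all placeholders), creating a two-container pipeline model with boto3 and putting it behind one endpoint might look like this:

```python
import boto3

sm = boto3.client("sagemaker")

# First container cleans the request, second container holds the actual model.
sm.create_model(
    ModelName="my-pipeline-model",
    ExecutionRoleArn="arn:aws:iam::123456789012:role/SageMakerRole",
    Containers=[
        {
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/preprocessor:latest",
            "ModelDataUrl": "s3://my-bucket/preprocessor/model.tar.gz",
        },
        {
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-model:latest",
            "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",
        },
    ],
)

# The pipeline model is deployed like any other model, on a single endpoint.
sm.create_endpoint_config(
    EndpointConfigName="my-pipeline-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "my-pipeline-model",
        "InstanceType": "ml.m4.xlarge",
        "InitialInstanceCount": 1,
    }],
)
sm.create_endpoint(EndpointName="my-pipeline-endpoint",
                   EndpointConfigName="my-pipeline-config")
```

Your Java client then calls invoke-endpoint on that one endpoint exactly as before; the request passes through the containers in order.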
You need to write a script and supply it while creating your model. That script can define an input_fn where you do your preprocessing. Please refer to the AWS docs for more details.
https://docs.aws.amazon.com/sagemaker/latest/dg/mxnet-training-inference-code-template.html
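For example, a minimal sketch of such an entry-point script (the exact hook names and signatures depend on the framework container and its version, so check the docs linked above; the JSON payload and column name are hypothetical):

```python
import json

import mxnet as mx
import pandas as pd


def input_fn(request_body, request_content_type):
    """Clean the incoming payload before it reaches the model.

    Sketch only: assumes a JSON list of records and that pandas is
    available inside the serving container.
    """
    if request_content_type != "application/json":
        raise ValueError("Unsupported content type: %s" % request_content_type)

    df = pd.DataFrame(json.loads(request_body))
    df = df.fillna(0)                           # example cleanup step
    df["amount"] = df["amount"].clip(lower=0)   # hypothetical column
    return mx.nd.array(df.values)

# model_fn / predict_fn / output_fn can be overridden the same way;
# the container's defaults are used for any hook you do not define.
```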
The SageMaker MXNet container is open source.
You can add pandas to the Docker container here: https://github.com/aws/sagemaker-mxnet-containers/blob/master/docker/1.1.0/Dockerfile.gpu#L4
The repo has instructions on how to build the container as well: https://github.com/aws/sagemaker-mxnet-containers#building-your-image
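Once the modified image is built and pushed to ECR, pointing the SageMaker Python SDK at it is just a matter of passing the image URI when you create the model (a sketch; every name here is a placeholder, and older SDK versions call the parameter image rather than image_uri):

```python
from sagemaker.mxnet import MXNetModel

model = MXNetModel(
    model_data="s3://my-bucket/model/model.tar.gz",          # placeholder
    role="arn:aws:iam::123456789012:role/SageMakerRole",     # placeholder
    entry_point="inference.py",                              # your serving script
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/sagemaker-mxnet-with-pandas:1.1.0-gpu-py2",
)
predictor = model.deploy(initial_instance_count=1, instance_type="ml.p2.xlarge")
```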
One option is to put your pre-processing code into an AWS Lambda function and have that Lambda call SageMaker's invoke-endpoint once the pre-processing is done. AWS Lambda supports Python, so it should be easy to reuse the code you already have in your Jupyter notebook inside that Lambda function. You can also use the Lambda to call external services such as DynamoDB for lookups and data enrichment.
You can find more information in the SageMaker documentation: https://docs.aws.amazon.com/sagemaker/latest/dg/getting-started-client-app.html
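A minimal sketch of such a Lambda handler (the endpoint name, payload format, and cleanup steps are placeholders):

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT_NAME = "my-endpoint"  # placeholder


def lambda_handler(event, context):
    # Pre-process the raw record before it reaches the model.
    record = json.loads(event["body"]) if "body" in event else event
    record = {k: (v if v is not None else 0) for k, v in record.items()}  # example cleanup

    response = runtime.invoke_endpoint(
        EndpointName=ENDPOINT_NAME,
        ContentType="application/json",
        Body=json.dumps(record),
    )
    prediction = response["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": prediction}
```

Your Java application would then call the Lambda (for example through API Gateway or the Lambda Invoke API) instead of calling the SageMaker endpoint directly.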