
Amazon SageMaker now supports inference endpoint testing from SageMaker Studio

You can now get real-time inference results from your models hosted by Amazon SageMaker directly from Amazon SageMaker Studio. Amazon SageMaker is a fully managed service that gives developers and data scientists the ability to quickly build, train, and deploy machine learning (ML) models. Amazon SageMaker Studio provides a single, web-based visual interface where you can perform all ML development steps.

Once a model is deployed to a SageMaker real-time endpoint, customers can get predictions from it. As part of model development and verification, customers want to confirm that the endpoint returns the inferences they expect. Previously, customers used third-party tooling such as curl, or wrote code in Jupyter Notebooks, to invoke the endpoints for inference. Now, customers can provide a JSON payload, send the inference request to the endpoint, and receive results directly from SageMaker Studio. The results are displayed in SageMaker Studio and can be downloaded for further analysis.

This feature is generally available in all regions where SageMaker and SageMaker Studio are available. To see where SageMaker is available, review the [AWS region table](/about-aws/global-infrastructure/regional-product-services/). To learn more about this feature, please see our [documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/realtime-endpoints-test-endpoints.html). To learn more about SageMaker, visit our [product page](/sagemaker/).
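For comparison, the programmatic route the announcement mentions looks roughly like the sketch below: build a JSON inference request and send it to a real-time endpoint with the `sagemaker-runtime` client from boto3. The endpoint name and payload shape here are hypothetical placeholders; your model's input schema determines the actual payload.

```python
import json

def build_invoke_args(endpoint_name, payload):
    """Assemble the keyword arguments for a sagemaker-runtime
    invoke_endpoint call with a JSON payload."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        "Body": json.dumps(payload),
    }

# Hypothetical endpoint name and payload for illustration only.
args = build_invoke_args("my-endpoint", {"instances": [[1.5, 2.0, 3.7]]})
print(args["Body"])

# With AWS credentials configured, the request would be sent like this:
#   import boto3
#   runtime = boto3.client("sagemaker-runtime")
#   response = runtime.invoke_endpoint(**args)
#   result = json.loads(response["Body"].read())
```

The Studio UI performs the equivalent of this call for you and renders the response, so no notebook code or curl command is needed for a quick sanity check.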