Amazon Bedrock batch inference now supports the Converse API format
Amazon Bedrock batch inference now supports the Converse API as a model invocation type, enabling you to use a consistent, model-agnostic input format for your batch workloads.
Previously, batch inference required model-specific request formats using the InvokeModel API. Now, when creating a batch inference job, you can select Converse as the model invocation type and structure your input data using the standard Converse API request format. Output for Converse batch jobs follows the Converse API response format. With this feature, you can use the same unified request format for both real-time and batch inference, simplifying prompt management and reducing the effort needed to switch between models. You can configure the Converse model invocation type through both the Amazon Bedrock console and the API.
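As a sketch of what this looks like in practice, the snippet below builds a JSON Lines batch input file whose `modelInput` bodies follow the Converse API request shape (`messages` plus `inferenceConfig`). The record IDs and prompt text are illustrative, not from the announcement; consult the linked User Guide pages for the authoritative field names.

```python
import json

# Each line of a Bedrock batch input file is one standalone JSON record.
# With the Converse invocation type, "modelInput" uses the same request
# shape as a real-time Converse call. Record IDs and prompts below are
# hypothetical examples.
records = [
    {
        "recordId": "rec-001",
        "modelInput": {
            "messages": [
                {"role": "user",
                 "content": [{"text": "Summarize this ticket in one sentence."}]}
            ],
            "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
        },
    },
    {
        "recordId": "rec-002",
        "modelInput": {
            "messages": [
                {"role": "user",
                 "content": [{"text": "Classify this review as positive or negative."}]}
            ],
            "inferenceConfig": {"maxTokens": 256, "temperature": 0.5},
        },
    },
]

# Batch inference expects JSON Lines: one record per line, no outer array.
with open("batch_input.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

Because the same `messages`/`inferenceConfig` structure works for a real-time Converse call, the same prompt-building code can feed both paths, which is the prompt-management simplification the announcement describes.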
This capability is available in all AWS Regions that support Amazon Bedrock batch inference. To get started, see [Create a batch inference job](https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-create.html) and [Format and upload your batch inference data](https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-data.html) in the Amazon Bedrock User Guide.