Amazon Bedrock batch inference now supports the Converse API format
Amazon Bedrock batch inference now supports the Converse API as a model invocation type, enabling you to use a consistent, model-agnostic input format for your batch workloads.
Previously, batch inference required model-specific request formats using the InvokeModel API. Now, when creating a batch inference job, you can select Converse as the model invocation type and structure your input data using the standard Converse API request format. Output for Converse batch jobs follows the Converse API response format. With this feature, you can use the same unified request format for both real-time and batch inference, simplifying prompt management and reducing the effort needed to switch between models. You can configure the Converse model invocation type through both the Amazon Bedrock console and the API.
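As a rough illustration of what a Converse-format batch input might look like, the sketch below builds one line of a JSONL input file. It assumes the documented batch input shape (a `recordId` plus a `modelInput` field) with the standard Converse API `messages` body inside `modelInput`; the record IDs, prompts, and inference settings here are placeholder values, not prescribed ones.

```python
import json

def make_record(record_id: str, prompt: str) -> str:
    """Build one JSONL line for a Converse-format batch inference input.

    Assumes the batch record wraps a standard Converse request body
    (messages + optional inferenceConfig) under "modelInput".
    """
    record = {
        "recordId": record_id,  # caller-chosen ID, echoed back in the output
        "modelInput": {
            "messages": [
                {"role": "user", "content": [{"text": prompt}]}
            ],
            "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
        },
    }
    return json.dumps(record)

prompts = ["Summarize our Q3 report.", "Translate 'hello' to French."]
# Each line of the .jsonl file is one self-contained Converse request.
batch_input = "\n".join(
    make_record(f"REC{i:04d}", p) for i, p in enumerate(prompts)
)
```

Because each line is an independent Converse request, the same message structure used for real-time `Converse` calls can be reused verbatim here.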
This capability is available in all AWS Regions that support Amazon Bedrock batch inference. To get started, see [Create a batch inference job](https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-create.html) and [Format and upload your batch inference data](https://docs.aws.amazon.com/bedrock/latest/userguide/batch-inference-data.html) in the Amazon Bedrock User Guide.