Amazon EMR Serverless introduces Job Run Concurrency and Queuing controls

[Amazon EMR Serverless](https://aws.amazon.com/emr/serverless/) is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, or scaling clusters or servers. Today, we are excited to announce job run admission control on Amazon EMR Serverless, with support for job run concurrency and queuing controls. With this feature, you configure the maximum number of concurrent job runs for an application, and any additional submitted job runs are automatically queued and processed as concurrency slots become available.

This prevents job run failures that occur when a spike in submissions exceeds API limits, or when resources are exhausted because the account's or application's maximum concurrent vCPUs limit or an underlying subnet's IP address limit is reached. Job run queuing also simplifies job run management by eliminating the need to build custom queuing systems to retry jobs that fail due to limit errors (for example, maximum concurrent vCPUs or subnet IP address limits), ensuring efficient resource utilization.

Amazon EMR Serverless job run concurrency and queuing is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) Regions and excluding the China Regions. To learn more, visit [Job concurrency and queuing](https://docs.aws.amazon.com/emr/latest/EMR-Serverless-UserGuide/applications-concurrency-queuing.html) in the EMR Serverless documentation.
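As a rough illustration, the sketch below shows how an application's concurrency and queuing settings might be configured from Python with boto3. The application ID is a placeholder, and the `schedulerConfiguration` field names (`maxConcurrentRuns`, `queueTimeoutMinutes`) are assumptions based on the EMR Serverless documentation linked above; verify them against the current SDK reference before use.

```python
# Minimal sketch: set job run concurrency and queuing on an existing
# EMR Serverless application. Field names are assumptions to verify
# against the EMR Serverless API reference.
import boto3

client = boto3.client("emr-serverless", region_name="us-east-1")

response = client.update_application(
    applicationId="00f1abcdexample",  # placeholder application ID
    schedulerConfiguration={
        "maxConcurrentRuns": 10,      # run at most 10 job runs concurrently
        "queueTimeoutMinutes": 360,   # queued runs time out after 6 hours
    },
)

# Inspect the applied scheduler settings returned by the service.
print(response["application"].get("schedulerConfiguration"))
```

Once this is in place, job runs submitted beyond the configured maximum are queued by the service rather than failing, and they start automatically as running jobs complete and free up concurrency slots.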