Amazon Elastic Inference introduces new Accelerators with higher GPU memory
Amazon Elastic Inference has introduced new Elastic Inference Accelerators called EIA2, with up to 8GB of GPU memory. Customers can now use Amazon Elastic Inference with larger models, or models with larger input sizes, for image processing, object detection, image classification, automated speech processing, natural language processing, and other deep learning use cases.
Amazon Elastic Inference allows you to attach just the right amount of GPU-powered acceleration to any Amazon EC2 instance, Amazon SageMaker instance, or Amazon ECS task to reduce the cost of running deep learning inference by up to 75%. With Amazon Elastic Inference, you can choose the instance type that is best suited to the overall CPU and memory needs of your application, and separately configure the amount of inference acceleration that you need, with no code changes. Until now, you could provision a maximum of 4GB of GPU memory per accelerator. Now, you can choose among three new accelerator types with 2GB, 4GB, and 8GB of GPU memory, respectively. Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX models, with more frameworks coming soon.
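As an illustration, here is a minimal sketch of selecting one of the new accelerator sizes when deploying a TensorFlow model to a SageMaker endpoint with the SageMaker Python SDK. The S3 path, IAM role, and framework version below are placeholder assumptions, not values from this announcement.

```python
# Minimal sketch: pair a CPU instance sized for the application with an
# EIA2 accelerator sized for the model. Placeholders: model_data, role,
# and framework_version are hypothetical values for illustration only.
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data="s3://my-bucket/model.tar.gz",          # placeholder S3 path
    role="arn:aws:iam::123456789012:role/MySMRole",    # placeholder IAM role
    framework_version="1.14",                          # placeholder version
)

# Choose the accelerator independently of the instance type:
# ml.eia2.medium (2GB), ml.eia2.large (4GB), or ml.eia2.xlarge (8GB).
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    accelerator_type="ml.eia2.medium",
)
```

The same sizes are exposed as `eia2.medium`, `eia2.large`, and `eia2.xlarge` (without the `ml.` prefix) when attaching an accelerator to an Amazon EC2 instance or an Amazon ECS task.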
The new Elastic Inference Accelerators are available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Seoul), and EU (Ireland). Support for other regions is coming soon.
For more information, see the Amazon Elastic Inference [product page](/machine-learning/elastic-inference/) and [documentation](/machine-learning/elastic-inference/resources/).