Amazon EC2 Inf1 instances now support TensorFlow 2
[AWS Neuron](/machine-learning/neuron/), the SDK for running machine learning inference on [AWS Inferentia](/machine-learning/inferentia/)-based Amazon EC2 Inf1 instances, now supports TensorFlow 2. Starting with Neuron 1.15.0, you can execute your TensorFlow 2 BERT-based models on Inf1 instances, with support for additional models coming soon. To learn more about Neuron TensorFlow 2 support, visit our [TensorFlow 2 FAQ](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-frameworks/tensorflow-neuron/tf2%5Ffaq.html) page.
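As a rough illustration of the workflow, the sketch below compiles a TensorFlow 2 model for Inferentia with the `tensorflow.neuron` tracing API. This is a minimal sketch, not a definitive recipe: it assumes the Neuron SDK (1.15.0 or later) is installed, the small Dense model is a stand-in for a real BERT-based model, and running the compiled model requires an actual Inf1 instance.

```python
# Sketch: compiling a TensorFlow 2 model for Inferentia with tensorflow-neuron.
# Assumes the AWS Neuron SDK (>= 1.15.0) is installed; inference requires
# an Inf1 instance. The exact API surface may vary across Neuron releases.
import tensorflow as tf
import tensorflow.neuron as tfn

# A tiny Keras model stands in here for a BERT-based model; any traceable
# tf.keras model or tf.function can be compiled the same way.
model = tf.keras.Sequential([tf.keras.layers.Dense(10, input_shape=(128,))])

# Example input with the shapes the compiled model will be served with.
example_input = tf.random.uniform((1, 128))

# tfn.trace compiles the model ahead of time for the Inferentia NeuronCores.
model_neuron = tfn.trace(model, example_input)

# Save the compiled model; it can then be loaded and served on Inf1,
# for example behind Amazon SageMaker hosting.
model_neuron.save("./neuron_saved_model")
```

The compilation step happens offline, so the same saved artifact can be deployed to any Inf1 instance without recompiling.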
We have also updated our resources with new documentation: a tutorial that [helps you get started with TensorFlow 2](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/tensorflow/huggingface%5Fbert/huggingface%5Fbert.html), a tutorial that guides you through [deploying a HuggingFace BERT model container](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/src/examples/pytorch/byoc%5Fsm%5Fbert%5Ftutorial/sagemaker%5Fcontainer%5Fneuron.html) on Inferentia using Amazon SageMaker hosting, the [inference performance page](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/benchmark/index.html) to help you compare and replicate our results, and a new application note to help you discover the [types of deep learning architectures](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/models/models-inferentia.html) that perform well out of the box on Inferentia.
AWS Neuron is natively integrated with popular ML frameworks such as TensorFlow, PyTorch, and Apache MXNet. It includes a deep learning compiler, runtime, and tools that help you extract the best performance for your applications. To learn more, visit the [AWS Neuron](/machine-learning/neuron/) page and the [AWS Neuron documentation](https://awsdocs-neuron.readthedocs-hosted.com/en/latest/).
Amazon EC2 Inf1 instances deliver the lowest cost for deep learning inference in the cloud and are available in 23 regions: US East (N. Virginia, Ohio), US West (Oregon, N. California), AWS GovCloud (US-East, US-West), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm), Asia Pacific (Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), Middle East (Bahrain), South America (São Paulo), and China (Beijing, Ningxia). You can use Amazon EC2 Inf1 instances in the region that best meets your real-time latency requirements for machine learning inference. To learn more, visit the [Amazon EC2 Inf1 instance page](/ec2/instance-types/inf1/).