Powered by a foundation model, Amazon Transcribe now supports over 100 languages
Today, we are excited to announce the next generation of Amazon Transcribe: a multi-billion-parameter, speech foundation model-powered system that expands automatic speech recognition (ASR) to over [100 languages](https://docs.aws.amazon.com/transcribe/latest/dg/supported-languages.html). [Amazon Transcribe](https://aws.amazon.com/transcribe/) is a fully managed ASR service that makes it easy for customers to add speech-to-text capabilities to their applications. Our speech foundation model is trained using best-in-class self-supervised algorithms to learn the inherent universal patterns of human speech across languages and accents.
With the advent of generative AI, thousands of enterprises are using Amazon Transcribe to unlock rich insights from their audio content, as well as increase the accessibility and discoverability of their audio and video content. For instance, contact centers transcribe and analyze customer calls to identify insights and subsequently, improve customer experience and agent productivity. Content producers and media distributors automatically generate subtitles using Amazon Transcribe to improve content accessibility.
All existing and new customers using Amazon Transcribe in batch mode realize these accuracy improvements across 100+ languages without any change to the API endpoint or input parameters. These new languages are available in the following [AWS Regions](https://docs.aws.amazon.com/general/latest/gr/transcribe.html#transcribe%5Fregion): US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), and South America (São Paulo). To get started, go to the [service console](https://console.aws.amazon.com/transcribe/), [create an audio transcript with a 10-minute tutorial](https://aws.amazon.com/getting-started/hands-on/create-audio-transcript-transcribe/), or, to learn more, see the [blog post](https://aws.amazon.com/blogs/machine-learning/amazon-transcribe-announces-a-new-speech-foundation-model-powered-asr-system-that-expands-support-to-over-100-languages/) and [documentation](https://docs.aws.amazon.com/transcribe/latest/dg/what-is.html).
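As a rough illustration of the "no change to the API" point, here is a minimal sketch of starting a batch transcription job with the AWS SDK for Python (boto3); the job name, S3 URIs, and bucket below are placeholders, and `IdentifyLanguage` simply asks the service to detect the spoken language among its supported set:

```python
def build_job_params(job_name, media_uri, output_bucket):
    """Assemble parameters for StartTranscriptionJob.

    Setting IdentifyLanguage instead of a fixed LanguageCode lets the
    service detect the spoken language automatically; no new parameters
    are needed to benefit from the expanded language support.
    """
    return {
        "TranscriptionJobName": job_name,
        "Media": {"MediaFileUri": media_uri},
        "IdentifyLanguage": True,
        "OutputBucketName": output_bucket,
    }

def start_job(params):
    # boto3 is imported here so the parameter-building sketch above
    # stays usable without AWS credentials or the SDK installed.
    import boto3

    client = boto3.client("transcribe")
    return client.start_transcription_job(**params)
```

Usage would look like `start_job(build_job_params("my-job", "s3://my-bucket/audio.mp3", "my-bucket"))`, after which the finished transcript JSON appears in the specified output bucket.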