Amazon Transcribe now supports Custom Language Models for German and Japanese languages
Today, we are excited to announce that Amazon Transcribe Custom Language Models (CLM) now support German and Japanese languages in both batch and streaming mode. [Amazon Transcribe](/transcribe/) is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. CLM allows you to use pre-existing data to build a custom speech engine for your specific batch and streaming transcription use cases. No prior machine learning experience is required to create your CLM.
CLM uses text data that you already possess, such as website content, instruction manuals, and other assets that cover your domain’s unique lexicon and vocabulary. Upload this training dataset to create a CLM, then run transcription jobs with your new model. Amazon Transcribe CLM is meant for customers who operate in domains as diverse as law, finance, hospitality, insurance, and media. CLMs are designed to improve transcription accuracy for domain-specific speech, that is, content beyond what you would hear in normal, everyday conversation. For example, if you're transcribing the proceedings of a scientific conference, a standard model is unlikely to recognize many of the scientific terms used by presenters. Using Amazon Transcribe CLM, you can train a custom language model to recognize the specialized terms of your discipline.
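As a rough sketch of that workflow using the AWS SDK for Python (boto3): train a CLM from text data in an S3 bucket with `create_language_model`, then reference it in a batch job via `ModelSettings`. The bucket, role ARN, and model names below are placeholders, not values from this announcement.

```python
def build_clm_request(model_name, language_code, s3_uri, role_arn):
    """Assemble the arguments for transcribe.create_language_model()."""
    return {
        "LanguageCode": language_code,      # e.g. "de-DE" or "ja-JP"
        "BaseModelName": "WideBand",        # use "NarrowBand" for telephony-quality audio
        "ModelName": model_name,
        "InputDataConfig": {
            "S3Uri": s3_uri,                # S3 prefix holding your plain-text training data
            "DataAccessRoleArn": role_arn,  # IAM role Transcribe assumes to read the bucket
        },
    }

if __name__ == "__main__":
    import boto3

    transcribe = boto3.client("transcribe")

    # 1. Train the custom language model from text you already own.
    transcribe.create_language_model(
        **build_clm_request(
            "my-german-clm",
            "de-DE",
            "s3://my-bucket/clm-training-data/",
            "arn:aws:iam::123456789012:role/TranscribeClmAccess",
        )
    )

    # 2. Once the model's status is COMPLETED, reference it in a batch job.
    transcribe.start_transcription_job(
        TranscriptionJobName="german-conference-talk",
        LanguageCode="de-DE",
        Media={"MediaFileUri": "s3://my-bucket/audio/talk.wav"},
        ModelSettings={"LanguageModelName": "my-german-clm"},
    )
```

Model training can take a while; poll `describe_language_model` until the status is `COMPLETED` before starting jobs against it. For streaming transcriptions, the equivalent setting is the `LanguageModelName` parameter of the StartStreamTranscription API.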
CLM now supports German and Japanese for batch and streaming transcriptions and is available in all [AWS Regions](https://docs.aws.amazon.com/general/latest/gr/transcribe.html#transcribe_region) where Amazon Transcribe operates. To start building your own custom speech recognition model, log in to the [Amazon Transcribe service console](https://console.aws.amazon.com/transcribe/home). For more details about the CLM feature, see the “[Building custom language models to supercharge speech-to-text performance for Amazon Transcribe](https://aws.amazon.com/blogs/machine-learning/building-custom-language-models-to-supercharge-speech-to-text-performance-for-amazon-transcribe/)” post. You can learn more on the [Amazon Transcribe documentation page](https://docs.aws.amazon.com/transcribe/latest/dg/custom-language-models.html).