
Amazon Chime SDK now supports live transcription with automatic language identification



[Amazon Chime SDK](/chime/chime-sdk/) lets developers add real-time audio, video, and screen share to their web and mobile applications. Live transcription uses an integration with [Amazon Transcribe](/transcribe/) to generate live audio transcriptions for use as subtitles or transcripts. Starting today, developers can use automatic language identification to detect the spoken language and generate transcriptions in that language.

Live transcription uses your Amazon Transcribe account to process the audio from the top two active talkers, and delivers user-attributed transcriptions to every meeting participant via data messages. Previously, the language of the meeting had to be selected manually. Now, with at least three seconds of audio, Amazon Transcribe can automatically identify the dominant language for transcript generation. The identified language is included in the data messages so it can be incorporated into the user interface as desired. Developers can access all the streaming languages supported by Amazon Transcribe, as well as features such as vocabulary filters, content identification, custom vocabularies, and custom language models. Standard [Amazon Transcribe costs](/transcribe/pricing/) apply.

To learn more about the Amazon Chime SDK and live transcription with Amazon Transcribe, review the following resources:

* [Amazon Chime SDK](/chime/chime-sdk/) and [Amazon Transcribe](/transcribe/) websites
* Using live transcription in the Amazon Chime SDK Developer Guide
* Live transcription APIs in the Amazon Chime SDK API Reference
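As a rough sketch of how this could look server-side, the snippet below builds the `TranscriptionConfiguration` payload for the `StartMeetingTranscription` API with automatic language identification enabled. The helper function and the candidate language list are illustrative assumptions; the `IdentifyLanguage`, `LanguageOptions`, and `PreferredLanguage` fields are part of the `EngineTranscribeSettings` structure in the Amazon Chime SDK Meetings API.

```python
def build_transcription_configuration(language_options, preferred_language=None):
    """Build a TranscriptionConfiguration payload (illustrative helper, not an SDK API).

    With IdentifyLanguage enabled, Amazon Transcribe identifies the dominant
    language from the candidates in language_options once it has at least
    three seconds of audio, instead of requiring a fixed LanguageCode.
    """
    settings = {
        "Region": "auto",  # let the service choose the Amazon Transcribe Region
        "IdentifyLanguage": True,  # enable automatic language identification
        "LanguageOptions": ",".join(language_options),  # candidate languages
    }
    if preferred_language:
        # Optional hint used when identification is ambiguous
        settings["PreferredLanguage"] = preferred_language
    return {"EngineTranscribeSettings": settings}


# Example payload for a meeting where English or US Spanish may be spoken
config = build_transcription_configuration(
    ["en-US", "es-US"], preferred_language="en-US"
)
print(config["EngineTranscribeSettings"]["LanguageOptions"])
```

With AWS credentials configured, the payload could then be passed to the `chime-sdk-meetings` boto3 client, e.g. `client.start_meeting_transcription(MeetingId=..., TranscriptionConfiguration=config)`; the identified language subsequently arrives in the transcription data messages delivered to meeting participants.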