Speech-to-Text (STT) APIs enable developers to embed automatic transcription into any voice-enabled app. Our APIs are built on top of highly accurate and trainable deep-learning ASR models, and we support both batch and streaming use cases.
Invoke our STT APIs using our highly scalable cloud service, or deploy a containerized version of Voicegain in your VPC or datacenter. Our APIs can convert audio/video files in batch or transcribe a real-time media stream, and we support 40+ audio formats.
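To make the batch use case concrete, here is a minimal Python sketch of uploading a recording over REST. The base URL, endpoint path, authentication header, and response fields are illustrative assumptions, not the actual Voicegain API; consult our API reference for the real contract.

```python
# Minimal sketch of batch transcription over REST. The base URL,
# endpoint path, auth header, and response shape are hypothetical.
import requests

API_BASE = "https://api.example.com/v1"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"

def transcribe_file(path: str) -> str:
    """Upload an audio file and return the transcript text."""
    with open(path, "rb") as audio:
        response = requests.post(
            f"{API_BASE}/transcribe",
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"audio": audio},
        )
    response.raise_for_status()
    return response.json()["transcript"]

print(transcribe_file("call-recording.wav"))
```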
- On a broad benchmark, our accuracy of 89% is on par with the very best.
- Talk to us in English, Spanish, German, Portuguese, and Korean (more coming).
- Tested on compute instances on Google, AWS, Azure, IBM, and Oracle.
- Integrates with Twilio, Genesys, FreeSWITCH, and other CCaaS and CPaaS platforms.
This article outlines the evaluation criteria for selecting a real-time Speech-to-Text/ASR engine for LLM-powered AI Co-pilots and real-time agent-assist applications in the contact center. It is intended for Product Managers and Engineering leads at Contact Center AI SaaS companies, and for CIO/CDO organizations in enterprises looking to build such AI Co-pilots.
A very popular use case for Generative AI and LLMs is the AI Co-pilot, or real-time agent assist, in contact centers. By transcribing an agent-customer conversation in real time and feeding the transcript to modern LLMs like OpenAI's GPT, Meta's Llama 2, or Google's Gemini, contact centers can guide their agents to handle calls more effectively and efficiently.
An AI Co-pilot can deliver significant business benefits. It can improve CSAT and NPS because the AI can quickly search the knowledge base and present relevant articles to the agent, making them more knowledgeable and productive. It can also reduce agent FTE costs by lowering average handle time (AHT) and eliminating wrap time.
In addition, by building a library of "gold-standard" calls across key call types, an LLM can deliver personalized, automated coaching to agents. Companies are finding that while Gen AI-powered Co-pilots are especially beneficial to new hires, they also deliver benefits to tenured agents.
Building an AI-powered Co-pilot requires three main components: 1) a real-time ASR/Speech-to-Text engine for transcription, 2) an LLM to understand the transcript, and 3) agent- and supervisor/manager-facing web applications. The focus of this blog post is on the first component, the real-time ASR/Speech-to-Text engine.
Here are the four key factors to consider when evaluating the real-time ASR/Speech-to-Text engine.
The first step for any AI Co-pilot is to stream the agent and customer real-time media to an ASR that supports streaming Speech-to-Text. This is easily the most involved engineering design decision in the process.
There are two main approaches: 1) Streaming audio from the server side. In an enterprise contact center, that would mean forking the media from either an enterprise Session Border Controller or the contact center platform (the IP-PBX). 2) Streaming audio from the client side, i.e., from the agent desktop. An agent desktop can be an OS-based thick client or a browser-based thin client, depending on the actual CCaaS/contact-center platform being used.
Selecting the method of integration is an involved decision. While there are advantages and disadvantages to both approaches, server-side streaming has been the preferred option, because it avoids the need to install client software and to plan for compute resources on each agent desktop.
However, if you have an on-premise contact center platform from Avaya, Cisco, or Genesys, the integration can become more involved. Each platform has its own mechanism for forking media streams, and you also need to install the ASR/STT behind the corporate firewall (or open the firewall to access a cloud-based ASR/STT).
Net-net, there is still a case to be made for client-side streaming, because not every company has the in-house expertise for server-side integration.
Modern CCaaS platforms like Amazon Connect, Twilio Flex, Genesys Cloud, and Five9 offer APIs/programmable access to the media streams; you are in luck if you have one of these platforms. Likewise, if PSTN access is through a programmable CPaaS platform like Twilio, SignalWire, or Telnyx, then forking the media streams is quite straightforward.
Once you finalize a method to fork the audio, you need to consider the standard protocols supported by the ASR/Speech-to-Text engine. Ideally, the engine should be flexible and support multiple options. One of the most common approaches today is to stream audio over WebSockets; it is important to confirm that the ASR/Speech-to-Text vendor supports two-channel/stereo audio submission over WebSockets. Other approaches include streaming audio over gRPC or raw RTP.
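To illustrate the WebSocket approach, here is a minimal Python sketch using the open-source websockets library. The URL, configuration message, end-of-stream event, and response fields are hypothetical; the actual framing depends on the vendor's protocol documentation.

```python
# Sketch of client-side stereo streaming over a WebSocket.
# Endpoint and message formats are hypothetical examples.
import asyncio
import json
import websockets

ASR_WS_URL = "wss://asr.example.com/v1/stream"  # hypothetical endpoint

async def stream_stereo_audio(pcm_frames):
    """Send interleaved stereo PCM (agent + caller) and print interim results."""
    async with websockets.connect(ASR_WS_URL) as ws:
        # Hypothetical session config: stereo audio, one speaker per channel.
        await ws.send(json.dumps({
            "sample_rate": 8000,
            "encoding": "L16",
            "channels": 2,          # channel 0 = agent, channel 1 = caller
            "interim_results": True,
        }))

        async def receive():
            # Print word-by-word interim results as they arrive.
            async for message in ws:
                result = json.loads(message)
                print(result.get("channel"), result.get("transcript"))

        receiver = asyncio.create_task(receive())
        for frame in pcm_frames:  # e.g. 20 ms chunks of raw PCM bytes
            await ws.send(frame)
        await ws.send(json.dumps({"event": "end_of_stream"}))  # hypothetical
        await receiver  # ends when the server closes the connection
```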
The next big consideration is the latency of the real-time ASR/Speech-to-Text model, which in turn depends on the underlying neural network architecture. To provide timely recommendations to the agent, target ASRs that can deliver a word-by-word transcript in under one second, and ideally in about 500 milliseconds. This matters because there is additional latency in collecting and submitting the transcript to the LLM and then delivering the insights to the agent desktop.
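As a back-of-the-envelope illustration of why ASR latency matters, the sketch below totals an end-to-end latency budget. Every number is an assumption for illustration, not a measurement of any particular vendor.

```python
# Illustrative end-to-end latency budget for one agent-assist suggestion.
# All numbers are assumptions, not vendor measurements.
asr_latency_ms = 500    # word-by-word interim transcript from the ASR
llm_latency_ms = 1500   # prompt assembly plus LLM completion
ui_latency_ms = 200     # pushing the insight to the agent desktop

total_ms = asr_latency_ms + llm_latency_ms + ui_latency_ms
print(f"Suggestion appears ~{total_ms / 1000:.1f}s after the words were spoken")
```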
Last but not least, the price of real-time transcription must be affordable in order to build a strong business case for the AI Co-pilot. Confirm that the agent and caller channels are not priced independently, since paying twice for a single call very often kills the business case.
If you are building an LLM-powered AI Co-pilot and would like to engage in a deeper discussion, please give us a shout! You can reach us at email@example.com.
Since June 2020, Voicegain has published benchmarks of the accuracy of its Speech-to-Text relative to big tech ASR/Speech-to-Text engines from Amazon, Google, IBM, and Microsoft.
The benchmark dataset for this comparison is published by an independent third party and includes a wide variety of audio data – audiobooks, YouTube videos, podcasts, phone conversations, Zoom meetings, and more.
Here are links to some of the benchmarks we have published:
1. Link to June 2020 Accuracy Benchmark
2. Link to Sep 2020 Accuracy Benchmark
3. Link to June 2021 Accuracy Benchmark
4. Link to Oct 2021 Accuracy Benchmark
5. Link to June 2022 Accuracy Benchmark
Through this process, we have gained insights into what it takes to deliver high accuracy for a specific use case.
We are now introducing an industry-first relative Speech-to-Text accuracy benchmark for our clients. "Relative" means that Voicegain's accuracy (measured by Word Error Rate) is compared with that of the big tech player the client is evaluating us against. Voicegain provides an SLA that its accuracy vis-à-vis this big tech player will be practically on par.
We follow a four-step process to calculate the relative accuracy SLA:
In partnership with the client, Voicegain selects a benchmark audio dataset that is representative of the actual data the client will process; usually this is a randomized selection of the client's audio. We also recommend that clients retain their own independent benchmark dataset, not shared with Voicegain, to validate our results.
Voicegain partners with industry-leading manual labeling companies to generate a 99%-accurate, human-generated transcript of this benchmark dataset. We refer to this as the golden reference.
On this benchmark dataset, Voicegain provides scripts that enable clients to run a Word Error Rate (WER) comparison between the Voicegain platform and any one of the industry-leading ASR providers the client is comparing us to.
Currently, Voicegain calculates the following two (2) KPIs:
a. Median Word Error Rate: the median WER across all the audio files in the benchmark dataset, for both ASRs.
b. Fourth Quartile Word Error Rate: after sorting the audio files in the benchmark dataset in increasing order of the Big Tech ASR's WER, we compute and compare the average WER of the fourth quartile (the hardest 25% of files) for both Voicegain and the Big Tech ASR.
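For illustration, here is a minimal Python sketch of how these two KPIs could be computed from per-file transcripts. The data layout and the use of the open-source jiwer package for WER are our assumptions, not the actual SLA scripts.

```python
# Sketch of the two relative-accuracy KPIs, assuming per-file transcript
# strings are available. jiwer (pip install jiwer) computes WER.
from statistics import median

import jiwer

def file_wers(references, hypotheses):
    """Per-file WER between golden-reference and ASR transcripts."""
    return [jiwer.wer(ref, hyp) for ref, hyp in zip(references, hypotheses)]

def relative_kpis(refs, voicegain_hyps, bigtech_hyps):
    vg = file_wers(refs, voicegain_hyps)
    bt = file_wers(refs, bigtech_hyps)
    # KPI 1: median WER across all files, for each ASR.
    kpi1 = {"voicegain": median(vg), "bigtech": median(bt)}
    # KPI 2: sort files by the Big Tech ASR's WER and average the
    # fourth quartile (the hardest 25% of files) for both engines.
    order = sorted(range(len(bt)), key=lambda i: bt[i])
    q4 = order[-max(1, len(order) // 4):]
    kpi2 = {
        "voicegain": sum(vg[i] for i in q4) / len(q4),
        "bigtech": sum(bt[i] for i in q4) / len(q4),
    }
    return kpi1, kpi2
```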
We contractually guarantee that Voicegain's accuracy on the above two KPIs, relative to the other ASR, will be within a threshold that is acceptable to the client.
Voicegain measures this accuracy SLA twice in the first year of the contract and once annually from the second year onwards.
If Voicegain does not meet the terms of the relative accuracy SLA, we will retrain the underlying acoustic model to meet it, and we will bear the expenses associated with labeling and training. Voicegain guarantees that it will meet the accuracy SLA within 90 days of the date of measurement.
1. Click here for instructions to access our live demo site.
2. If you are building a cool voice app and want to test our APIs, click here to sign up for a developer account and receive $50 in free credits.
3. If you want to take Voicegain as your own AI Transcription Assistant to meetings, click here.