This article outlines the evaluation criteria for selecting a real-time Speech-to-Text/ASR engine for LLM-powered AI Co-Pilots and real-time agent assist applications in the contact center. It is intended for Product Managers and Engineering leads at Contact Center AI SaaS companies, and for CIO/CDO organizations in enterprises looking to build such AI Co-Pilots.
The buzz around Gen AI-powered Co-Pilots & Real-time Agent Assist
A very popular use-case for Generative AI & LLMs is the AI Co-Pilot or real-time agent assist in contact centers. By transcribing the agent-customer conversation in real time and feeding the transcript to modern LLMs like OpenAI's GPT, Meta's Llama 2 or Google's Gemini, contact centers can guide their agents to handle calls more effectively and efficiently.
An AI Co-Pilot can deliver significant business benefits. It can improve CSAT and NPS because the AI can quickly search the knowledge base and present relevant articles to the agent, making them more knowledgeable and productive. It can also save agent FTE costs by reducing average handle time (AHT) and eliminating after-call wrap time.
In addition, by building a library of "gold-standard" calls across key call types, an LLM can deliver personalized coaching to agents in an automated way using Generative AI. Companies are finding that while Gen AI-powered Co-Pilots are especially beneficial to new hires, they deliver benefits to tenured agents too.
Building an AI-powered Co-Pilot requires three main components: 1) a real-time ASR/Speech-to-Text engine for transcription, 2) an LLM to understand the transcript, and 3) agent- and supervisor/manager-facing web applications. The focus of this blog post is on the first component - the real-time ASR/Speech-to-Text engine.
Here are the four key factors you should look at when evaluating a real-time ASR/Speech-to-Text engine.
1. Ease of Integration with Audio Source
The first step for any AI Co-Pilot is to stream the agent and customer real-time media to an ASR that supports streaming Speech-to-Text. This is easily the most involved engineering design decision in this process.
There are two main approaches: 1) Streaming audio from the server side. In an enterprise contact center, that means forking the media from either an enterprise Session Border Controller or the contact center platform (the IP-PBX). 2) Streaming audio from the client side, i.e. from the agent desktop. An agent desktop can be an OS-based thick client or a browser-based thin client, depending on the actual CCaaS/contact center platform being used.
Selecting the method of integration is an involved decision. While there are advantages and disadvantages to both approaches, server-side streaming has generally been the preferred option because it avoids installing client software and provisioning compute resources on every agent desktop.
However, if you have an on-premise contact center platform like Avaya, Cisco or Genesys, the integration can become more involved. Each platform has its own mechanism for forking media streams, and you also need to install the ASR/STT engine behind the corporate firewall (or open the firewall to reach a cloud-based ASR/STT).
Net-net, there is still a case to be made for client-side streaming, because not every company has the in-house expertise for server-side media forking.
Modern CCaaS platforms like Amazon Connect, Twilio Flex, Genesys Cloud and Five9 offer APIs/programmable access to the media streams; you are in luck if you have one of these platforms. Likewise, if PSTN access is through a programmable CPaaS platform like Twilio, SignalWire or Telnyx, then the integration is quite straightforward as well.
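To make this concrete, below is a minimal sketch of a server-side websocket endpoint that receives a CCaaS media stream and hands the decoded audio off for transcription. It uses Twilio Media Streams as the example source (Twilio delivers base64-encoded 8 kHz mu-law frames inside JSON messages); the forward_to_asr() function is just a placeholder for whatever streaming API your ASR vendor actually exposes.

```python
# Sketch only: receive a Twilio Media Streams connection and decode its audio.
# forward_to_asr() is a placeholder for your ASR vendor's streaming client.
import asyncio
import base64
import json

import websockets  # pip install websockets


async def handle_media_stream(websocket):
    """Handle one Media Streams websocket connection from the CCaaS/CPaaS platform."""
    async for message in websocket:
        msg = json.loads(message)
        event = msg.get("event")
        if event == "start":
            print("Stream started:", msg["start"].get("streamSid"))
        elif event == "media":
            # Twilio sends 8 kHz mu-law audio, base64-encoded, roughly 20 ms per frame.
            # msg["media"]["track"] ("inbound"/"outbound") tells you caller vs. agent leg.
            audio_chunk = base64.b64decode(msg["media"]["payload"])
            await forward_to_asr(audio_chunk)  # placeholder for the ASR streaming API
        elif event == "stop":
            print("Stream ended")
            break


async def forward_to_asr(audio_chunk: bytes) -> None:
    # Placeholder: open a streaming session with your ASR vendor and push audio here.
    pass


async def main():
    async with websockets.serve(handle_media_stream, "0.0.0.0", 8080):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```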
2. Protocol Support from the ASR/STT
Once you finalize a method to fork the audio, you need to consider the standard protocols supported by the ASR/Speech-to-Text engine. Ideally, the ASR/STT engine should be flexible and support multiple options. One of the most common approaches today is to stream audio over websockets. It is important to confirm that the ASR/Speech-to-Text vendor supports two-channel/stereo audio submission over websockets. Other approaches include streaming audio over gRPC or over raw RTP.
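For illustration, here is a hedged sketch of what two-channel streaming over a websocket can look like from the client side. The URL, query parameters, end-of-stream marker and JSON result shape are hypothetical; substitute your vendor's documented streaming API.

```python
# Sketch: stream interleaved stereo PCM (agent + caller) to a hypothetical ASR
# websocket endpoint and print transcripts as they arrive.
import asyncio
import json

import websockets  # pip install websockets

# Hypothetical endpoint and parameters; use your vendor's actual API.
ASR_URL = "wss://asr.example.com/v1/stream?channels=2&sample_rate=8000&encoding=pcm_s16le"

CHUNK_BYTES = 3200  # 100 ms of 8 kHz, 16-bit, 2-channel interleaved PCM


async def stream_file(path: str) -> None:
    async with websockets.connect(ASR_URL) as ws:

        async def send_audio():
            with open(path, "rb") as f:
                while chunk := f.read(CHUNK_BYTES):
                    await ws.send(chunk)       # binary frame of interleaved stereo PCM
                    await asyncio.sleep(0.1)   # pace the file at real time
            await ws.send(json.dumps({"event": "end_of_stream"}))  # hypothetical end marker

        async def receive_transcripts():
            async for message in ws:
                result = json.loads(message)
                # Hypothetical result shape: channel 0 = agent, channel 1 = caller.
                print(result.get("channel"), result.get("transcript"))

        await asyncio.gather(send_audio(), receive_transcripts())


if __name__ == "__main__":
    asyncio.run(stream_file("stereo_call_8khz.raw"))
```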
3. Speed/Latency of the ASR/Speech-to-Text Model
The next big consideration is the latency of the real-time ASR/Speech-to-Text model, which in turn depends on the model's underlying neural network architecture. To provide timely recommendations to the agent, target ASRs that can deliver a word-by-word transcript in under one second, and ideally in about 500 milliseconds. This matters because there is additional latency for collecting and submitting the transcript to the LLM and then delivering the insights to the agent desktop.
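If you want to quantify this during a proof of concept, one simple approach is to stream a recorded call at real-time pace and compare each word's end time on the audio timeline against the wall-clock time its result arrives. The sketch below assumes the ASR returns word-level timestamps in its interim results; the "words", "end" and "text" field names are hypothetical.

```python
# Sketch: estimate word-level latency of a streaming ASR session.
# Only meaningful when audio is streamed at real-time pace, so the audio-timeline
# offsets are comparable to wall-clock time.
import json
import time


class LatencyTracker:
    """Compare audio-timeline word end times against wall-clock arrival times."""

    def __init__(self):
        self.stream_start = time.monotonic()  # call this when the first audio byte is sent
        self.samples = []

    def on_result(self, message: str) -> None:
        arrival = time.monotonic() - self.stream_start
        result = json.loads(message)
        for word in result.get("words", []):      # hypothetical field
            lag = arrival - word["end"]           # hypothetical field: word end offset, seconds
            self.samples.append(lag)
            print(f"{word['text']:>15s}  latency ≈ {lag * 1000:.0f} ms")

    def summary(self) -> None:
        if self.samples:
            ordered = sorted(self.samples)
            p50 = ordered[len(ordered) // 2]
            p95 = ordered[int(len(ordered) * 0.95)]
            print(f"median ≈ {p50 * 1000:.0f} ms, p95 ≈ {p95 * 1000:.0f} ms")
```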
4. Affordability
Last but not least, the price of real-time transcription has to be affordable in order to build a strong business case for the AI Co-Pilot. In particular, confirm that the agent and caller channels are not priced as two independent audio streams, since doubling the per-minute cost very often kills the business case.
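A quick back-of-the-envelope calculation shows why this matters. The numbers below are purely illustrative (a hypothetical 200-agent operation and a hypothetical per-minute rate), but the pattern holds: billing the agent and caller channels separately doubles the transcription bill.

```python
# Illustrative cost comparison with hypothetical numbers:
# 200 concurrent agents, 6 hours of talk time per agent per day, 22 working days.
minutes_per_month = 200 * 6 * 60 * 22           # ~1.58M transcribed call minutes

rate_per_min = 0.01                              # hypothetical $/min for real-time STT

cost_stereo_priced = minutes_per_month * rate_per_min        # both channels in one priced stream
cost_per_channel   = minutes_per_month * rate_per_min * 2    # agent and caller billed separately

print(f"single stereo stream: ${cost_stereo_priced:,.0f}/month")   # ~$15,840
print(f"per-channel pricing:  ${cost_per_channel:,.0f}/month")     # ~$31,680
```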
If you are building an LLM-powered AI Co-pilot and would like to engage in a deeper discussion, please give us a shout! You can reach us at sales@voicegain.ai.