Voicegain is excited to announce the launch of Voicegain Casey, a payer-focused AI Voice Agent that transforms the end-to-end call center experience with the power of generative AI. Voicegain Casey is a suite of the following three Voice AI SaaS applications that help a health plan or TPA call center improve operational efficiency and increase CSAT (Customer Satisfaction) and NPS (Net Promoter Score):
The AI Voice Agent replaces a touch-tone IVR with a modern, LLM-powered, human-like conversational voice experience. It can answer all calls received at a health plan or TPA call center. It engages callers in a natural conversation and automates routine telephone calls like claims status, benefits inquiries, and eligibility verifications. There is a very compelling business case for automating provider phone calls in health plan and TPA call centers, and Voicegain Casey has been designed and developed specifically for this goal. The AI Voice Agent is also trained to perform HIPAA validation and call triaging: if it has not been trained to answer a specific question, it routes the call to the call center for live assistance.
Voicegain AI Co-Pilot is a browser extension that runs as a side-panel of the call center agent's CRM. The Co-Pilot is integrated with the Contact Center/CCaaS platform used in the call center. When a call transferred by the AI Voice Agent is answered by a live agent, all the information collected by the AI Voice Agent is presented as a "screen-pop" on the live agent's desktop (a feature also referred to as CTI). This ensures that front-line call center staff can continue the conversation from where the AI Voice Agent left off. In addition to the screen-pop, the AI Co-Pilot guides front-line staff in real time by listening to, transcribing, and analyzing the conversation. The AI Co-Pilot also generates a summary of the conversation within five seconds of the end of the call. This automated summarization easily saves 1-2 minutes of wrap-up time (after-call work), which is very common in health plan and TPA call centers.
Voicegain AI QA & Coach is a browser-based AI SaaS application used by team leaders, QA call coaches/analysts, and operations managers in a call center. It records, transcribes, and analyzes the entire conversation, measures caller sentiment, and computes a QA score. Voicegain uses the latest open-source reasoning LLMs (like Llama 3 and Gemma) as well as closed-source reasoning models like o3 from OpenAI. With the power of modern reasoning models, almost the entire QA scorecard (approximately 80% of the questions) can be answered using AI. The app also provides a database of whole-call recordings of the entire customer conversation - including the AI Voice Agent portion, the transfer to the specific call center queue, and the full conversation between the live agent and the caller.
Voicegain Casey requires the following three key integrations to enable automation and real-time assistance.
Voicegain Casey integrates with modern CCaaS platforms. Current integrations include Aircall, Five9, and Genesys Cloud. Planned integrations include RingCentral, NICE CXone, and Dialpad.
Voicegain Casey integrates with the CRM software of the health plan or TPA. This can be an off-the-shelf CRM like Zendesk or Salesforce, or a proprietary/homegrown CRM; as long as the CRM is a browser-based SaaS application, this is not an issue. The Voicegain Casey AI Co-Pilot is a browser extension installed in the side-panel of the same browser tab as the CRM. At the end of the call, a summary is automatically generated and available in the browser extension within five seconds.
Voicegain Casey needs access to the member eligibility and claims data.
For further information on Voicegain Casey, including a demo, please visit this link.
If you would like to understand Voicegain Casey in more detail or if you would prefer a detailed product demo over a Zoom video call, please do not hesitate to send us an email. You can reach us at sales@voicegain.ai or support@voicegain.ai
This article provides an overview of the Voicegain SIP Media Stream Back-to-Back User Agent (B2BUA), a contact-center-platform-agnostic solution that forks real-time SIP media streams from premise-based contact centers to Voicegain Speech-to-Text for real-time transcription. In SIP telephony, a B2BUA is a network element that can both terminate and originate media streams; this is explained further below.
The Voicegain SIP Media B2BUA is a containerized solution that is deployed in the same network as the Contact Center platform. Once configured, Developers or enterprise customers can get real-time access to speaker-separated transcripts (over a Websocket connection).
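As an illustration of what consuming such a speaker-separated transcript stream might look like, here is a minimal Python sketch. The websocket URL and the JSON message shape are assumptions for illustration only, not the documented Voicegain API:

```python
import json

def format_transcript_event(raw: str) -> str:
    """Render one incoming transcript message as a speaker-labeled line.

    Assumes each message is a JSON object carrying the channel
    (e.g. "caller" or "agent") and the recognized text -- the real
    message schema may differ.
    """
    msg = json.loads(raw)
    return f"[{msg.get('channel')}] {msg.get('text')}"

async def consume_transcripts(url: str) -> None:
    """Connect to the (hypothetical) transcript websocket and print
    speaker-separated transcripts as they arrive."""
    import websockets  # third-party: pip install websockets
    async with websockets.connect(url) as ws:
        async for raw in ws:
            print(format_transcript_event(raw))
```

In a real integration the session URL would come from the B2BUA/API when a call is bridged, and the consumer would typically forward each event to the downstream Voice AI application rather than print it.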
The Voicegain SIP Media B2BUA is a solution for enterprises and SaaS ISVs looking to extend their real-time LLM-powered Voice AI application to premise-based contact centers. Examples of such on-premise contact center platforms include Avaya, Genesys or Cisco. The Generative Voice AI applications supported include Real-Time Agent Assist or Voice AI Co-Pilots, Real-time sentiment analysis, voice biometrics and other types of real-time speech analytics apps.
Most premise-based Contact Center platforms - whether it is from Avaya, Genesys, or Cisco - do not provide programmatic access to real-time media streams. While these systems are reliable for call routing and management, they were not designed for modern LLM-powered AI applications.
Traditionally, forking of media streams has been supported by a Session Border Controller (SBC), a separate network element that sits "in front" of the contact center platform. SBCs rely on a protocol called SIPREC to fork media streams. However, SIPREC is primarily intended for network-based compliance call recording, and commercial compliance recording vendors like NICE and Verint leverage the SIPREC protocol to access real-time media streams from premise-based contact center platforms.
However, there are several pain points:
1) Only large enterprises have implemented Session Border Controllers.
2) Even if an enterprise has an SBC, the forking media capacity is used up by the call recording solution. Adding an additional streaming option for generative AI requires upgrades to hardware and software licensing on the SBC.
Voicegain offers a highly scalable, reliable and fully contained SIP Media Stream B2BUA to address the challenge discussed above. This B2BUA is a containerized network element that is deployed in the same network as the premise-based contact center. From a SIP Protocol standpoint, this Media Stream Back-to-Back User Agent (B2BUA) acts as a transparent media relay while forking SIP RTP media streams to real-time Speech-to-Text.
What is a B2BUA? Unlike a simple SIP proxy that only handles signaling, a Back-to-Back User Agent terminates and re-originates both signaling and media streams, allowing it to manipulate call flows while maintaining access to the audio content.
The Voicegain SIP Media Stream B2BUA is highly scalable, reliable, and fully containerized.
The diagram above showcases the call flow used to fork media streams for real-time transcription.
At a high-level, most SIP-based Contact Center ACDs like Avaya Communication Manager, Genesys Engage and Cisco UCCE support creation of SIP Trunks. The Voicegain SIP Media Stream B2BUA is a SIP Server/SIP Peer that connects to the premise-based ACD over a dedicated SIP trunk. It can receive calls from and make calls to the premise-based ACD.
The overall call flow has the following steps: (1) the contact center ACD transfers the inbound call to the Voicegain B2BUA over the dedicated SIP trunk (a SIP INVITE); (2) the B2BUA bridges the call to the DID of the target contact center queue or IVR (a SIP INVITE to the destination SIP URI) over the same trunk; and (3) while relaying the call, the B2BUA forks the two RTP streams (caller and agent) to Voicegain Speech-to-Text in real time.
To summarize, what is needed is to configure a SIP trunk on the current contact center ACD and use it to transfer calls to the Voicegain Media Stream B2BUA (the equivalent of initiating a SIP INVITE). The SIP Media Stream B2BUA in turn bridges the DID of the contact center queue or IVR (a SIP INVITE to the destination SIP URI) over the same SIP trunk and then forks the two RTP streams (caller & agent) to Voicegain STT.
The Voicegain SIP Media Stream B2BUA has been deployed in a production application for a leading healthcare customer with an on-premise Avaya contact center.
To deploy the Voicegain SIP Media Stream Proxy, you'll need:
If you have an on-premise contact center and you would like to discuss getting access to the real-time media stream, please contact us at support@voicegain.ai
This article outlines the evaluation criteria involved in selecting a real-time Speech-to-Text or ASR for LLM-powered AI Copilots and Real-time agent assist applications in the contact center. This article is intended for Product Managers and Engineering leads in Contact Center AI SaaS companies and CIO/CDO organizations in enterprises that are looking to build such AI co-pilots.
A very popular use-case for generative AI & LLMs is the AI Co-Pilot or real-time agent assist in contact centers. By transcribing an agent-customer conversation in real time and feeding the transcript to modern LLMs like OpenAI's GPT, Meta's Llama 2, or Google's Gemini, contact centers can guide their agents to handle calls more effectively and efficiently.
An AI Co-Pilot can deliver great business benefits. It can improve CSAT and NPS, as the AI can quickly search and present relevant knowledge-base content to the agent, making them more knowledgeable and productive. It can also save agent FTE costs by reducing AHT (Average Handle Time) and eliminating wrap time.
In addition, by building a library of "gold-standard" calls across key call types, an LLM can also deliver personalized coaching to agents in an automated way. Companies are finding that while Gen AI-powered Co-Pilots are especially beneficial to new hires, they deliver benefits to tenured agents too.
Building an AI-powered Co-Pilot requires three main components: a) a real-time ASR/Speech-to-Text engine for transcription, b) an LLM to understand the transcript, and c) agent- and supervisor/manager-facing web applications. The focus of this blog post is on the first component - the real-time ASR/Speech-to-Text engine.
Now here are the four key factors that you should look at while evaluating the real-time ASR/Speech-to-Text engine.
The first step for any AI Co-Pilot is to stream the agent and customer real-time media to an ASR that supports streaming Speech-to-Text. This is easily the most involved engineering design decision in this process.
There are two main approaches: 1) streaming audio from the server side - in an enterprise contact center, that means forking the media from either an enterprise Session Border Controller or the contact center platform (the IP-PBX); or 2) streaming audio from the client side, i.e. from the agent desktop. An agent desktop can be an OS-based thick client or a browser-based thin client, depending on the actual CCaaS/contact center platform being used.
Selecting the method of integration is an involved decision. While there are advantages and disadvantages to both approaches, server-side approaches have been the preferred option. This is because you would avoid the need to install client software and plan for compute resources at the agent desktop level.
However if you have an on-premise contact center like an Avaya, Cisco or Genesys, the integration can become more involved. This is because each platform has its own mechanism to fork these media streams and you also need to install the ASR/STT behind the corporate firewall (or open it up to access a Cloud-based ASR/STT).
Net-net, there is a case to be made for client-side streaming too, because not all companies have the required networking expertise available in-house.
There are modern CCaaS platforms like Amazon Connect, Twilio Flex, Genesys Cloud, and Five9 that offer APIs/programmatic access to the media streams. You are in luck if you have one of these platforms. Likewise, if PSTN access is through a programmable CPaaS platform - like Twilio, SignalWire, Telnyx, etc. - then getting programmatic access to the media streams is similarly straightforward.
Once you finalize a method to fork the audio, you would need to consider the standard protocols supported by the ASR/Speech-to-Text engine. Ideally, the ASR/STT engine should be flexible and support multiple options. One of the most common approaches today is to stream audio over websockets. It is important to confirm that the ASR/Speech-to-Text vendor supports two-channel/stereo audio submission over websockets. Other approaches include streaming audio over gRPC or raw RTP.
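To make the two-channel point concrete: if the audio arrives as interleaved 16-bit stereo PCM, it can either be submitted as-is when the vendor supports stereo over websockets, or split into two mono streams first. Here is a minimal Python sketch of the split; the channel assignment (caller on the left, agent on the right) is an assumption for illustration:

```python
import struct

def split_stereo_pcm(frame: bytes) -> tuple[bytes, bytes]:
    """Split interleaved 16-bit little-endian stereo PCM (L R L R ...)
    into two mono byte streams, e.g. caller (left) and agent (right)."""
    n = len(frame) // 2  # number of 16-bit samples in the frame
    samples = struct.unpack(f"<{n}h", frame)
    left = struct.pack(f"<{n // 2}h", *samples[0::2])
    right = struct.pack(f"<{n // 2}h", *samples[1::2])
    return left, right
```

Keeping the channels separated end to end is what enables speaker-separated transcripts without running diarization on a mixed-down signal.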
The next big consideration is the latency of the real-time ASR/Speech-to-Text model, which in turn depends on the underlying neural network architecture of the model. In order to provide timely recommendations to the agent, it is important to target ASRs that can deliver a word-by-word transcript in less than one second, and ideally in about 500 milliseconds. This is because there is additional latency associated with collecting and submitting the transcript to LLMs and then delivering the insights onto the agent desktop.
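The latency reasoning above can be made concrete with a simple budget. The ASR figure comes from the text; the LLM and UI figures are illustrative assumptions, not measurements:

```python
# Illustrative end-to-end latency budget for an AI Co-Pilot.
ASR_MS = 500    # target word-final ASR latency (per the text above)
LLM_MS = 1200   # assumed round-trip to the LLM for a recommendation
UI_MS = 300     # assumed delivery/render time on the agent desktop

def total_latency_ms(asr_ms: float = ASR_MS,
                     llm_ms: float = LLM_MS,
                     ui_ms: float = UI_MS) -> float:
    """Delay from a spoken word to guidance appearing on the agent's screen."""
    return asr_ms + llm_ms + ui_ms
```

Under these assumed numbers, a 500 ms ASR puts guidance on screen in about 2 seconds, while an ASR that takes 2-3 seconds per word pushes the total well past the point where a recommendation is still useful mid-conversation.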
Last but not least, it is really important that the price of real-time transcription is affordable in order to build a strong business case for the AI Co-Pilot. It is important to confirm that the agent and caller channels are not priced independently, as that very often kills the business case.
If you are building an LLM-powered AI Co-pilot and would like to engage in a deeper discussion, please give us a shout! You can reach us at sales@voicegain.ai.
This blog post is intended for anyone responsible for upgrading or migrating an MRCP-based Nuance ASR nearing EOL (End of Life). It explains how Voicegain ASR simplifies and economically extends the life of existing speech-IVR platforms by serving as a 'drop-in' replacement for the grammar-based Nuance ASR.
There are several hundred (if not thousands of) telephony-based speech-enabled IVRs that act as the 'front door' for all customer service phone calls for enterprises of all sizes. These speech-enabled IVRs are built on platforms like Genesys Voice Portal (GVP), Genesys Engage, Avaya Aura Experience Portal (AAEP)/Avaya Voice Portal, Cisco Voice Portal (CVP), Aspect, the Voxeo Prophecy VoiceXML platform, and several other VoiceXML-based IVR solutions. These systems predominantly use Nuance ASR as the speech recognition engine.
Unlike contemporary large-vocabulary neural-network-based ASR/STT engines, the traditional Nuance ASR is a grammar-based ASR. It uses the MRCP protocol to talk to VoiceXML-based IVR platforms. Most of these systems were purchased in the last two decades (2000s and 2010s). Customers typically paid a port-based perpetual license fee (the IVR platforms were also licensed similarly). Most enterprises have software maintenance/AMC contracts for the Nuance ASR, usually bundled with the IVR platform. The Nuance Recognizer versions in the market vary between 9.0 and 11.0. As of June 2022, Nuance had announced end of support for Nuance 10.0. It is our understanding from speaking with customers that the last version sold - Nuance Recognizer 11.0 - will reach either end-of-life or end-of-orderability sometime in 2025*.
Also in speaking with customers, we have understood that customers who currently license the MRCP grammar-based Nuance ASR would have to upgrade to Nuance's Krypton engine, the new deep-learning-based ASR, in 2025. Nuance Krypton can only be accessed using a modern gRPC-based API and not over MRCP, which makes this upgrade expensive and time-consuming. Because most legacy IVR platforms do not support gRPC, customers would need to upgrade not just the ASR but also the entire IVR platform. The existing call flow logic - likely written in a VoiceXML app studio, or written in a build tool and generated as VoiceXML pages - would also need to be ported.
All of the above steps make the upgrade process very challenging. While there is a strong case to be made for the merits of upgrading to a deep-learning-based ASR to support conversational interactions (better automation rates and a more natural user experience), it is critical for customers that this upgrade/migration is done on the customer's timeline and not under the gun on the vendor's clock.
Voicegain offers a drop-in replacement for the Nuance grammar-based ASR. We are the only modern deep-learning (neural-network-based) ASR in the market that natively supports both traditional speech grammars (GRXML, SRGS) and large-vocabulary conversational interactions. We are also one of the very few ASR vendors that can be accessed both over a traditional telephony-based protocol like MRCP and over a modern web-based method like websockets (or gRPC). The same neural-network model supports both the old and the new protocols. This gives you a future-proof way of replacing Nuance ASR with minimal effort while safeguarding the investment for the long term.
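For readers unfamiliar with grammar-based recognition: a speech grammar enumerates exactly what the caller may say at a given dialog state. A minimal illustrative SRGS/GRXML grammar for a yes/no prompt might look like the following (an illustrative example, not taken from any particular deployment):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://www.w3.org/2001/06/grammar"
         version="1.0" xml:lang="en-US" root="yesno">
  <rule id="yesno" scope="public">
    <one-of>
      <item>yes<tag>out = "yes";</tag></item>
      <item>yeah<tag>out = "yes";</tag></item>
      <item>no<tag>out = "no";</tag></item>
      <item>nope<tag>out = "no";</tag></item>
    </one-of>
  </rule>
</grammar>
```

An ASR that natively supports grammars like this over MRCP can be swapped in under an existing VoiceXML application without rewriting the dialog logic, which is the essence of the drop-in replacement described here.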
Net-net, by just "pointing" the ASR resource on the VoiceXML platform to the IP address of the Voicegain MRCP ASR in your network, you can replace the entire Nuance ASR with the Voicegain ASR. Customers do not need to change or modify even a single line of the speech-IVR application logic.
In other words, a client can retain the existing telephony/IVR setup and just perform a "drop-in replacement" of Nuance MRCP ASR with Voicegain MRCP ASR.
Longer term, the same Voicegain ASR can perform large-vocabulary transcription because it is a neural-network-based ASR; when the customer is ready to replace the directed-dialog speech IVR with a conversational interaction, the Voicegain platform will already support it.
To discuss your upgrade situation in more detail, please contact us over email at sales@voicegain.ai. We can answer any questions that you have. You can also get started with a free developer account by following these instructions; no credit card is required, and we offer 1,500 hours of usage for free. After you sign up, please contact us at support@voicegain.ai to request MRCP access.
* Nuance ASR and Nuance Krypton are trademarks of Nuance, Inc., which is now part of Microsoft. Please confirm the end-of-life announcement and the protocol capabilities directly with the company. The information in this blog post is anecdotal and has not been verified with Nuance.
This article describes how users on free or unpaid Zoom plans can get AI-generated meeting transcripts, summaries, and action items.
There are many compelling generative-AI-powered SaaS offerings for transcription, summarization, and action item extraction for Zoom meetings. These include companies like Otter, Grain, Read, Fireflies, Krisp, Superhuman, and others. However, all these cloud-based Meeting AI SaaS solutions require paid Zoom accounts; this is because they integrate with Zoom Cloud Recording, which is a feature of the paid Zoom plans.
Now, paid Zoom plans are quite affordable - the Zoom Pro plan (as of the date of this post) is priced at $16/month. However, many businesses - whether a small startup, a mid-size business, or an enterprise - use free Zoom plans for the vast majority of their employees. In speaking with prospective customers, we estimate that for many businesses only 5-10% of the employee base has a paid Zoom plan.
Meetings on a free Zoom plan are limited to 40 minutes, which is adequate for most meetings, so the free plan works quite well for a large segment of users. But if these meetings need to be transcribed and summarized, users would need to upgrade to a paid plan. For many businesses, since 90%+ of the users are on free Zoom plans, upgrading all of them to a paid plan can be a very significant expense.
Voicegain Transcribe is an AI meeting assistant that integrates with Zoom Local Recording. Zoom Local Recording allows users to save the Zoom recording to their local computer instead of Zoom’s Cloud. A big advantage of Zoom Local Recording is that it is available on free Zoom plans. As a result, there is no need to upgrade to a paid Zoom license. Voicegain Transcribe also has a free tier that is good for 5 hours (300 minutes) every month. As a result, users that host or attend up to 10 half-hour Zoom meetings can get transcription and LLM-powered insights like summarization and action item extraction for free.
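By default, Zoom saves local recordings under a Zoom folder in the user's Documents directory. A meeting assistant that works off local recordings therefore starts by locating those files. A minimal Python sketch follows; the folder layout and file extensions are the common Zoom defaults, but verify them against your Zoom client's recording settings:

```python
from pathlib import Path

def find_local_recordings(root: str) -> list[Path]:
    """Collect Zoom local-recording media files under `root`
    (e.g. ~/Documents/Zoom), ready to upload for transcription."""
    exts = {".m4a", ".mp4"}  # Zoom's default audio/video formats
    return sorted(p for p in Path(root).rglob("*")
                  if p.suffix.lower() in exts)
```

Each meeting typically lands in its own dated subfolder, so a recursive scan like this picks up every recording regardless of how many meetings were held.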
Of course, the other major benefit of local recording is data privacy. Many businesses do not like to store sensitive meeting content on Zoom's cloud - or, for that matter, on any other vendor's cloud - but they are forced to do so for lack of options. Especially in the age of AI and LLMs, there is a lot of concern around proprietary information being used to train AI models.
While any business can start a trial with Voicegain's multi-tenant cloud SaaS offering, our entire solution can also be deployed as a single-tenant solution in your private cloud. Voicegain Transcribe can operate fully independently, without the need to connect to our cloud for any service.
You can get started and evaluate our offering by clicking here. As shared above, we offer 5 hours (300 minutes) of free transcription and LLM powered summarization every month.
If you have any questions, please send us an email to support@voicegain.ai
This article describes ideas for a business with a speech-enabled IVR to plan its upgrade/transition to a modern generative AI powered conversational Voice Bot on its own timeline and at an affordable cost.
Businesses of all sizes have an IVR system that acts as the front door for all their customer voice conversations. In terms of functionality, these IVR systems vary widely; they can range from performing basic call routing and triaging to automating simple calls - like taking payments, scheduling appointments, or providing account balances. While most of them accept touch-tone/DTMF input, the more advanced ones also accept natural language speech as input and hence are referred to as speech-enabled IVRs.
However, these IVRs are becoming obsolete, and there is growing demand to upgrade to a more conversational experience.
Traditionally, speech IVR applications were deployed on-premise, built on the same platform as the main contact center ACD/switch. But soon IVRs were deployed in the cloud too. On-premise IVR vendors include Avaya, Genesys, and Cisco, while cloud-based IVR vendors include Five9, RingCentral, Mitel, and 8x8.
For speech recognition, the most popular option in the past had been Nuance. Nuance's ASR technology - which gained popularity in the early 2000s - preceded today's neural-network-based engines. It was pre-Alexa and pre-Siri, so both the vocabulary (i.e. what the customer could actually say in response to a prompt) and the accuracy were limited compared to today's neural-network-based Speech-to-Text. In addition, the protocol for communication between Nuance and the telephony stack was MRCP, a protocol that has not been actively developed for many years now.
The modern conversational AI stack for voice bots includes a neural ASR/Speech-to-Text engine, neural Text-to-Speech, and an NLU-based bot framework. It is much more capable than what was available to build directed-dialog speech IVRs in the past.
Today’s neural ASR/STT engines can transcribe not just a few words or phrases, but entire sentences and they also do it very accurately. As consumers get used to such experiences with their voice assistants at home or in their cars, they expect the same when they contact a business over the phone.
There have also been significant advances in modern no-code NLU bot frameworks that are used to build the bot logic and conversation flow. These bot frameworks are also evolving with the advent of generative AI technologies like ChatGPT.
While the above two paragraphs describe good reasons to upgrade IVRs, there are some key factors that are driving a rather rushed timeline for businesses to plan this IVR migration.
Companies with on-premise contact centers are increasingly migrating to the cloud, and even the on-premise contact center vendors are focused on migrating their install base to the cloud. So when an enterprise plans to migrate the contact center platform to the cloud, it would need to migrate the IVRs too.
As explained above, modern AI/neural-network-based ASR/STT engines are more accurate and support a conversational experience. Hence ASR/STT vendors are focused on selling these newer offerings. However, it is not possible for businesses to use these newer ASRs with their existing telephony stack. Both the protocol support (websockets and gRPC vs. MRCP) and the application development method (grammar-based recognition vs. large-vocabulary transcription with intent capture) are very different.
In the past, companies built the application logic for chatbots and IVRs independently; very often, different vendors provided the chatbot and the voice bot. However, given the powerful and flexible conversational AI platforms now available in the market, companies want to use the same platform to drive the conversation turns of both chatbot and voice bot interactions.
As explained above, migrating from the traditional IVR stack to a modern Conversational AI stack entails not just rewriting the application logic but it is also likely to involve moving the infrastructure from on-premise to the cloud. This can be an expensive undertaking.
At Voicegain, we think companies should be able to do this on their own timeline.
We have developed an ASR that supports both (a) grammar-based recognition over MRCP and (b) large-vocabulary transcription on audio streamed using modern protocols like websockets. Our platform can also be deployed on-premise or in your VPC. So it supports an existing application without any rewrite, while also being capable of supporting a conversational voice bot when one is developed at some point in the future.
As a result, customers can take control of when to migrate or upgrade their IVRs. Most importantly, they would not be forced to invest in an upgrade/migration of their entire IVR application just because an existing ASR vendor stops supporting an older version of the software.
If you have any questions or you would like to schedule a discussion to understand your IVR upgrade options, contact us on support@voicegain.ai.
To test our MRCP grammar-based ASR or our large vocabulary ASR, please sign up for a free developer account. Instructions are provided here.
Voicegain, the leading Edge Voice AI platform for enterprises and Voice SaaS companies, is thrilled to announce the successful completion of a System and Organizational Control (SOC) 2 Type 1 Audit performed by Sensiba LLP.
Developed by the American Institute of Certified Public Accountants (AICPA), the SOC 2 information security audit provides a report on the examination of controls relevant to the trust services criteria categories covering security, availability, processing integrity, confidentiality, and privacy. A SOC 2 Type 1 report describes a service organization's systems and whether the design of specified controls meets the relevant trust services categories. Voicegain's SOC 2 Type 1 report did not have any noted exceptions and was therefore issued with a "clean" audit opinion from Sensiba.
"As a privacy-first Voice AI platform, we take security very seriously here at Voicegain. As a developer using our APIs or as a user of our platform, you shouldn't have to worry about the controls in place for your sensitive voice data," said Dr. Jacek Jarmulak, Co-founder, CTO & CISO of Voicegain.
"At Voicegain, we have maintained a robust information security program for over a decade, and it has been communicated throughout our organization for quite some time. Earlier this year we achieved PCI-DSS compliance for our developer platform, and today's successful completion of the SOC 2 Type 1 audit marks a significant milestone in our security and compliance journey," continued Dr. Jarmulak.
Service Organization Control 2 (SOC 2) is a set of criteria established by the American Institute of Certified Public Accountants (AICPA) to assess controls relevant to the security, availability, and processing integrity of the systems a service organization uses to process users' data, and the confidentiality and privacy of the information processed by these systems. SOC 2 compliance is important for Voice AI platforms like Voicegain, as it demonstrates that we have implemented controls to safeguard users' data.
There are two types of SOC 2 compliance: Type 1, which examines whether controls are suitably designed at a point in time, and Type 2, which examines whether those controls operate effectively over a period of time.
From a functional standpoint, achieving SOC 2 Type 1 compliance doesn’t change anything. Our APIs and Apps will work exactly as they always have and as expected. However SOC 2 Type 1 compliance means that we have established a set of controls and processes to ensure the security of our users’ data. This compliance demonstrates that we have the necessary measures in place to protect sensitive information from unauthorized access and disclosure.
Our commitment to security doesn't end with SOC 2 Type 1. We are already working towards achieving SOC 2 Type 2 compliance, which we plan to accomplish in Q1 2024. This will further validate that we maintain the highest levels of security, ensuring that our users can continue to rely on and trust Voicegain.
Voicegain's speech recognition technology has been widely recognized for its innovation and impact across industries. From call centers and customer service applications to transcription of Zoom Meetings in enterprise and healthcare and transcription of classroom lectures, Voicegain's solutions have demonstrated their ability to transform audio data into actionable insights. The attainment of SOC 2 Type 1 compliance further solidifies Voicegain's position as a reliable and responsible provider of cutting-edge speech recognition services.
"We understand that in today's digital landscape, data security is non-negotiable," added Arun Santhebennur, Co-founder & CEO of Voicegain. "By achieving SOC 2 Type 1 compliance, we aim to set an industry standard for ensuring the confidentiality and integrity of the data entrusted to us. Our customers can have full confidence that their sensitive information is protected throughout its lifecycle."
To request a copy of our SOC 2 Type 1 report, please email security.it@voicegain.ai