Our Blog

News, Insights, sample code & more!

ASR
Announcing the launch of Voicegain Whisper ASR/Speech Recognition API for Gen AI developers

Today we are excited to announce the launch of Voicegain Whisper, an optimized version of OpenAI's Whisper speech recognition/ASR model that runs on Voicegain's managed cloud infrastructure and is accessible through Voicegain APIs. Developers can use the same well-documented, robust APIs and infrastructure that process over 60 million minutes of audio every month for leading enterprises like Samsung and Aetna, and for innovative startups like Level.AI, Onvisource and DataOrb.

The Voicegain Whisper API is a robust and affordable batch Speech-to-Text API for developers who are looking to integrate conversation transcripts with LLMs like GPT-3.5 and GPT-4 (from OpenAI), PaLM 2 (from Google), Claude (from Anthropic), LLaMA 2 (open source from Meta), and their own private LLMs to power generative AI apps. OpenAI has open-sourced several versions of the Whisper models. With today's release, Voicegain supports Whisper-medium, Whisper-small and Whisper-base. Voicegain now supports transcription in the multiple languages that Whisper supports.

Here is a link to our product page
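To make the integration concrete, here is a minimal sketch of how a batch transcription request might be assembled and its transcript handed to an LLM. The endpoint shape, field names, and helper functions below are illustrative assumptions, not the documented Voicegain API schema; consult the actual API reference for the real request format.

```python
# Illustrative sketch only: the field names and request shape below are
# hypothetical placeholders, not the documented Voicegain API schema.

def build_transcription_request(audio_url: str, model: str = "whisper-medium") -> dict:
    """Assemble a JSON body for a hypothetical async/batch transcription call."""
    return {
        "audio": {"source": {"fromUrl": {"url": audio_url}}},
        "settings": {
            "asr": {"model": model},           # e.g. whisper-medium / -small / -base
            "formatters": [{"type": "text"}],  # request a plain-text transcript
        },
    }

def build_summary_prompt(transcript: str) -> str:
    """Wrap the returned transcript in a prompt for a downstream LLM."""
    return f"Summarize the following call transcript:\n\n{transcript}"
```

In a real integration, the request body would be POSTed to the transcription endpoint and the resulting transcript fed to the LLM of your choice.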


There are four main reasons for developers to use Voicegain Whisper over other offerings:

1. Support for Private Cloud/On-Premise deployment (integrate with Private LLMs)

While developers can use Voicegain Whisper on our multi-tenant cloud offering, a big differentiator for Voicegain is our support for the Edge. The Voicegain platform has been architected and designed for single-tenant private cloud and datacenter deployment. In addition to the core deep-learning-based Speech-to-Text model, our platform includes our REST API services, logging and monitoring systems, auto-scaling, and offline task and queue management. Today, these same APIs enable Voicegain to process over 60 million minutes a month. We bring this practical, real-world experience of running AI models at scale to our developer community.

Since the Voicegain platform is deployed on Kubernetes clusters, it is well suited for modern AI SaaS product companies and innovative enterprises that want to integrate with their private LLMs.

2. Affordable pricing - 40% less expensive than Open AI 

At Voicegain, we have optimized Whisper for higher throughput. As a result, we are able to offer access to the Whisper model at a price that is 40% lower than what Open AI offers.
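As a rough illustration of what that discount means at scale (assuming OpenAI's published Whisper API rate of $0.006 per minute at the time of writing; check current pricing pages for exact figures):

```python
# Back-of-the-envelope price comparison; rates are illustrative assumptions.
OPENAI_PRICE_PER_MIN = 0.006   # USD, OpenAI's published Whisper API rate
DISCOUNT = 0.40                # the "40% lower" claim from this post

voicegain_price_per_min = OPENAI_PRICE_PER_MIN * (1 - DISCOUNT)  # 0.0036 USD

def monthly_cost(minutes: int, price_per_min: float) -> float:
    """Total monthly transcription spend for a given usage volume."""
    return minutes * price_per_min

# At 1 million minutes/month: $6,000 vs roughly $3,600
```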

3. Enhanced features for Contact Centers & Meetings.

Voicegain also offers critical features for contact centers and meetings. Our APIs support two-channel stereo audio, which is common in contact center recording systems. Word-level timestamps are another important feature of our API, needed to map audio to text. Enhanced diarization, a feature of the Voicegain models that is required for contact center and meeting use-cases, will soon be made available on Whisper as well.
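To illustrate why word-level timestamps matter, here is a small sketch of mapping a time window in the audio back to the words spoken in it. The record shape is typical of STT responses in general, not a verbatim Voicegain schema.

```python
# Illustrative word-level timestamp records; field names are typical of
# STT APIs, not a verbatim Voicegain response schema. Times in milliseconds.
words = [
    {"word": "thank",   "start": 120, "end": 380},
    {"word": "you",     "start": 390, "end": 540},
    {"word": "for",     "start": 550, "end": 700},
    {"word": "calling", "start": 710, "end": 1100},
]

def words_in_window(words, start_ms, end_ms):
    """Return the words fully contained in [start_ms, end_ms] of the audio."""
    return [w["word"] for w in words if w["start"] >= start_ms and w["end"] <= end_ms]

# words_in_window(words, 0, 600) -> ["thank", "you"]
```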

4. Premium Support and uptime SLAs.

We also offer premium support and uptime SLAs for our multi-tenant cloud offering. These APIs today process over 60 million minutes of audio every month for our enterprise and startup customers.

About OpenAI-Whisper Model

OpenAI Whisper is an open-source automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. The model's architecture is an encoder-decoder transformer, and it has shown significant performance improvements over previous models because it was trained on a variety of speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection.

[Figure: OpenAI Whisper model encoder-decoder transformer architecture]

Getting Started with Voicegain Whisper

Learn more about Voicegain Whisper by clicking here. Any developer - whether a one-person startup or a large enterprise - can access the Voicegain Whisper model by signing up for a free developer account. We offer 15,000 minutes of free credits when you sign up today.

There are two ways to test Voicegain Whisper; they are outlined here. If you would like more information or have any questions, please drop us an email at support@voicegain.ai.

Read more → 
Contact Center
How to select a Speech-to-Text/ASR for LLM-powered Realtime Agent Assist and AI Copilots

This article outlines the evaluation criteria for selecting a real-time Speech-to-Text/ASR engine for LLM-powered AI Copilots and real-time agent assist applications in the contact center. It is intended for Product Managers and Engineering leads at Contact Center AI SaaS companies, and for CIO/CDO organizations in enterprises looking to build such AI co-pilots.

The buzz around Gen AI-powered Co-Pilot & Realtime Agent Assist  

A very popular use-case for generative AI and LLMs is the AI co-pilot, or real-time agent assist, in contact centers. By transcribing an agent-customer conversation in real time and feeding the transcript to modern LLMs like OpenAI's GPT, Meta's LLaMA 2 or Google's Gemini, contact centers can guide their agents to handle calls more effectively and efficiently.

An AI co-pilot can deliver great business benefits. It can improve CSAT and NPS, as the AI can quickly search and present relevant knowledge-base content to the agent, enabling them to be more knowledgeable and productive. It can also save agent FTE costs by reducing AHT (average handle time) and eliminating wrap time.

In addition, by building a library of "gold-standard" calls across key call types, an LLM can deliver personalized coaching to agents in an automated way using generative AI. Companies are finding that while Gen AI-powered co-pilots are especially beneficial to new hires, they also deliver benefits to tenured agents.

Building an AI-powered co-pilot requires three main components: 1) a real-time ASR/Speech-to-Text engine for transcription, 2) an LLM to understand the transcript, and 3) agent- and supervisor/manager-facing web applications. The focus of this blog post is on the first component - the real-time ASR/Speech-to-Text engine.

Here are the four key factors you should consider while evaluating a real-time ASR/Speech-to-Text engine.

1. Ease of Integration with Audio Source

The first step for any AI Co-Pilot is to stream the agent and customer real-time media to an ASR that supports streaming Speech-to-Text. This is easily the most involved engineering design decision in this process.

There are two main approaches: 1) Streaming audio from the server side. In an enterprise contact center, that would mean forking the media from either an enterprise Session Border Controller or the contact center platform (the IP-PBX). 2) Streaming audio from the client side, i.e., from the agent desktop. An agent desktop can be an OS-based thick client or a browser-based thin client, depending on the actual CCaaS/contact-center platform being used.

Selecting the method of integration is an involved decision. While there are advantages and disadvantages to both approaches, server-side streaming has been the preferred option, because it avoids the need to install client software and plan for compute resources at the agent desktop.

However if you have an on-premise contact center like an Avaya, Cisco or Genesys, the integration can become more involved. This is because each platform has its own mechanism to fork these media streams and you also need to install the ASR/STT behind the corporate firewall (or open it up to access a Cloud-based ASR/STT).

Net-net, there is a case to be made for client-side streaming too, because not all companies have this server-side telephony expertise in house.

There are modern CCaaS platforms like Amazon Connect, Twilio Flex, Genesys Cloud and Five9 that offer APIs/programmable access to the media streams. You are in luck if you have one of these platforms. Also, if PSTN access is through a programmable CPaaS platform like Twilio, SignalWire or Telnyx, then accessing the media streams is also quite straightforward.

2. Protocol support from the ASR/STT 

Once you finalize a method to fork the audio, you need to consider the standard protocols supported by the ASR/Speech-to-Text engine. Ideally, the engine should be flexible and support multiple options. One of the most common approaches today is to stream audio over WebSockets. It is important to confirm that the ASR/Speech-to-Text vendor supports two-channel/stereo audio submission over WebSockets. Other approaches include streaming audio over gRPC and over raw RTP.
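As a concrete illustration of two-channel streaming, here is a sketch of how one might size audio frames before sending them over a websocket. The sample rate, chunk duration, and framing are illustrative assumptions, not requirements of any particular vendor's protocol.

```python
# Sketch: computing frame sizes for streaming two-channel (stereo) PCM audio
# over a websocket. The 100 ms chunk duration is a common choice, not a
# requirement of any particular vendor.
SAMPLE_RATE = 8000    # Hz, typical telephony audio
CHANNELS = 2          # agent + caller on separate channels
BYTES_PER_SAMPLE = 2  # 16-bit linear PCM
CHUNK_MS = 100

def chunk_size_bytes() -> int:
    """Bytes of interleaved stereo PCM in one chunk."""
    samples_per_chunk = SAMPLE_RATE * CHUNK_MS // 1000
    return samples_per_chunk * CHANNELS * BYTES_PER_SAMPLE

def split_into_chunks(pcm: bytes) -> list:
    """Split a raw PCM buffer into websocket-sized frames."""
    size = chunk_size_bytes()
    return [pcm[i:i + size] for i in range(0, len(pcm), size)]

# chunk_size_bytes() -> 3200 bytes per 100 ms of 8 kHz stereo audio
```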

3. Speed/Latency of the ASR/Speech-to-Text model

The next big consideration is the latency of the real-time ASR/Speech-to-Text model, which in turn depends on the model's underlying neural network architecture. To provide timely recommendations to the agent, target ASRs that can deliver a word-by-word transcript in less than one second, and ideally in about 500 milliseconds. This is because additional latency is incurred in collecting and submitting the transcript to LLMs and then delivering the insights to the agent desktop.
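A simple latency-budget calculation makes this concrete. The component figures below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope latency budget for an agent-assist pipeline.
# All figures are illustrative assumptions, not measurements.
BUDGET_MS = 2000  # assumed end-to-end target for a hint to still be useful

def total_latency(asr_ms: int, llm_ms: int, delivery_ms: int) -> int:
    """End-to-end latency: transcription + LLM call + delivery to the desktop."""
    return asr_ms + llm_ms + delivery_ms

# A 500 ms ASR leaves room for a 1000 ms LLM call plus 300 ms of delivery
# overhead; a 2-second ASR blows the same budget before the LLM even runs.
assert total_latency(500, 1000, 300) <= BUDGET_MS
assert total_latency(2000, 1000, 300) > BUDGET_MS
```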

4. Affordability

Last but not least, it is really important that the price of real-time transcription be affordable in order to build a strong business case for the AI co-pilot. Confirm that the agent and caller channels are not priced independently, as that very often kills the business case.

If you are building an LLM-powered AI Co-pilot and would like to engage in a deeper discussion, please give us a shout! You can reach us at sales@voicegain.ai.

Read more → 
ASR
Voicegain: A Seamless Drop-in Replacement for Nuance Grammar-based ASR

This blog post is intended for anyone responsible for upgrading or migrating an MRCP-based Nuance ASR nearing EOL (End of Life). It explains how the Voicegain ASR can simply and economically extend the life of existing speech-IVR platforms by serving as a 'drop-in' replacement for the grammar-based Nuance ASR.

Nuance ASR reaching End of Life

There are several hundred (if not thousands of) telephony-based speech-enabled IVRs that act as the 'front door' for customer service phone calls at enterprises of all sizes. These speech-enabled IVRs are built on platforms like Genesys Voice Portal (GVP), Genesys Engage, Avaya Aura Experience Portal (AAEP)/Avaya Voice Portal, Cisco Voice Portal (CVP), Aspect, or the Voxeo Prophecy VoiceXML platform, among several other VoiceXML-based IVR solutions. These systems predominantly use the Nuance ASR as the speech recognition engine.

Unlike contemporary large-vocabulary neural-network-based ASR/STT engines, the traditional Nuance ASR is a grammar-based ASR. It uses the MRCP protocol to talk to VoiceXML-based IVR platforms. Most of these systems were purchased in the last two decades (the 2000s and 2010s). Customers typically paid a port-based perpetual license fee (the IVR platforms were licensed similarly). Most enterprises have a software maintenance/AMC contract for the Nuance ASR, usually bundled with the IVR platform. The Nuance Recognizer versions in the market vary between 9.0 and 11.0. As of June 2022, Nuance had announced end of support for Nuance 10.0. Our understanding from speaking with customers is that the last version sold - Nuance Recognizer 11.0 - will reach either end-of-life or end-of-orderability sometime in 2025*.

Nuance upgrade path is challenging 

Also in speaking with customers, we have learned that customers who currently license the MRCP grammar-based Nuance ASR would have to upgrade to Nuance's Krypton engine, the new deep-learning-based ASR, in 2025. Nuance Krypton can only be accessed using a modern gRPC-based API, not over MRCP, which makes this upgrade expensive and time-consuming. Customers would need to upgrade not just the ASR but the entire IVR platform, because most legacy IVR platforms do not support gRPC. The existing call flow logic - likely written in a VoiceXML app studio, or built in a tool that generates VoiceXML pages - would also need to be ported.

All of the above makes the upgrade process very challenging. While there is a strong case for upgrading to a deep-learning-based ASR to support conversational interactions (better automation rates and a more natural user experience), it is critical that this upgrade/migration happen on the customer's timeline, not under the gun on the vendor's clock.

Voicegain as a future-proof drop-in replacement for Nuance ASR

Voicegain offers a drop-in replacement for the Nuance grammar-based ASR. We are the only modern deep-learning (neural-network-based) ASR on the market that natively supports both traditional speech grammars (GRXML, SRGS) and large-vocabulary conversational interactions. We are also one of the very few ASR vendors that can be accessed both over a traditional telephony protocol like MRCP and over modern web-based methods like WebSockets or gRPC. The same neural-network model supports both the old and the new protocols. This gives you a future-proof way to replace the Nuance ASR with minimal effort while safeguarding the investment for the long term.
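For readers unfamiliar with speech grammars: a grammar constrains recognition to a fixed set of phrases and attaches a semantic result to each. A minimal SRGS/GRXML yes/no grammar, of the kind such IVR applications typically load, might look like this (illustrative example, not taken from any particular deployment):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<grammar xmlns="http://www.w3.org/2001/06/grammar" xml:lang="en-US"
         version="1.0" root="yesno" mode="voice"
         tag-format="semantics/1.0">
  <rule id="yesno" scope="public">
    <one-of>
      <item>yes <tag>out="yes";</tag></item>
      <item>yeah <tag>out="yes";</tag></item>
      <item>no <tag>out="no";</tag></item>
      <item>nope <tag>out="no";</tag></item>
    </one-of>
  </rule>
</grammar>
```

A grammar-based ASR will only return one of the listed phrases (with its semantic tag), which is what makes directed-dialog IVR applications predictable; the same grammar files can be pointed at a compatible replacement ASR without rewriting the application.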

Net-net, just by "pointing" the ASR resource on the VoiceXML platform to the IP address of the Voicegain MRCP ASR in your network, you can replace the entire Nuance ASR with the Voicegain ASR. Customers do not need to change a single line of the speech-IVR application logic.

In other words, a client can retain the existing telephony/IVR setup and just perform a "drop-in replacement" of Nuance MRCP ASR with Voicegain MRCP ASR.

Longer-term, the same Voicegain ASR can perform large-vocabulary transcription because it is a neural-network-based ASR; so when the customer is ready to replace the directed-dialog speech IVR with conversational interactions, the Voicegain platform will already support them.

Get Started for free today

To discuss your upgrade situation in more detail, please contact us over email at sales@voicegain.ai. We can answer any questions that you have. You can also get started with a free developer account by following these instructions. No credit card is required, and we offer 1500 hours of usage for free. Here is a link to the instructions; after you sign up, please contact us at support@voicegain.ai and request MRCP access.

Nuance ASR and Nuance Krypton are trademarks of Nuance, Inc which is now part of Microsoft. Please confirm the End of Life announcement and the protocol capability directly with the company. Our information in this blog post is anecdotal and has not been verified with Nuance.

Read more → 
Transcription
AI Meeting Transcription and Summaries on free Zoom accounts

This article describes how users on free or unpaid Zoom plans can get AI-generated meeting transcripts, summaries, and action items.

Cloud-based Meeting AI SaaS solutions do not work on free Zoom accounts

There are many compelling generative-AI-powered SaaS offerings for transcription, summarization, and action item extraction for Zoom Meetings. These include companies like Otter, Grain, Read, Fireflies, Krisp, Superhuman and others. However, all of these cloud-based Meeting AI SaaS solutions require paid Zoom accounts, because they integrate with Zoom Cloud Recording, a feature of the paid Zoom plans.

Why is this a bigger problem than it seems?

Paid Zoom plans are quite affordable - the Zoom Pro plan (as of the date of this post) is priced at $16/month. However, many businesses - whether a small startup, a mid-size business or an enterprise - use free Zoom plans for the vast majority of their employees. In speaking with prospective customers, we estimate that at many businesses only 5-10% of the employee base has a paid Zoom plan.

Meetings on a free Zoom plan are limited to 40 minutes, which is adequate for most meetings, so the free plan works quite well for a large segment of users. But if these meetings need to be transcribed and summarized, users would need to upgrade to a paid plan. Since 90%+ of users at many businesses are on free Zoom plans, upgrading all of them to a paid plan can be a very significant expense.
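A quick back-of-the-envelope calculation shows the scale of that expense, using the $16/user/month Pro price quoted above (the employee counts here are purely illustrative):

```python
# Illustrative cost of upgrading every free-plan employee to a paid Zoom plan.
PRO_PRICE_PER_MONTH = 16  # USD per user, Pro plan price quoted in this post

def annual_upgrade_cost(total_employees: int, paid_fraction: float) -> int:
    """Yearly cost of moving the currently-free users onto a paid plan."""
    free_users = round(total_employees * (1 - paid_fraction))
    return free_users * PRO_PRICE_PER_MONTH * 12

# A 500-person company where 10% already pay: 450 users * $16 * 12 = $86,400/yr
```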

How does Voicegain Transcribe address this challenge?

Voicegain Transcribe is an AI meeting assistant that integrates with Zoom Local Recording. Zoom Local Recording allows users to save the Zoom recording to their local computer instead of Zoom’s Cloud. A big advantage of Zoom Local Recording is that it is available on free Zoom plans. As a result, there is no need to upgrade to a paid Zoom license. Voicegain Transcribe also has a free tier that is good for 5 hours (300 minutes) every month. As a result, users that host or attend up to 10 half-hour Zoom meetings can get transcription and LLM-powered insights like summarization and action item extraction for free.

The added benefit – Data Privacy

Of course, the other major benefit of local recording is data privacy. Many businesses do not want to store sensitive meeting content on Zoom's cloud - or, for that matter, on any other vendor's cloud - but are forced to do so for lack of options. Especially in the age of AI and LLMs, there is a lot of concern around proprietary information being used to train AI models.

While any business can start a trial with Voicegain's multi-tenant cloud SaaS offering, our entire solution can also be deployed as a single-tenant solution in your private cloud. Voicegain Transcribe can operate fully independently, without needing to connect to our cloud for any service.

Sign up today with a free plan of Voicegain Transcribe!

You can get started and evaluate our offering by clicking here. As shared above, we offer 5 hours (300 minutes) of free transcription and LLM-powered summarization every month.

If you have any questions, please send us an email to support@voicegain.ai

Read more → 
Enterprise
Building an affordable and unhurried upgrade path from IVRs to conversational Voice bots

This article describes ideas for a business with a speech-enabled IVR to plan its upgrade/transition to a modern generative AI powered conversational Voice Bot on its own timeline and at an affordable cost.

Businesses of all sizes have an IVR system that acts as the front door for their customer voice conversations. In terms of functionality, these IVR systems vary widely: they range from basic call routing and triaging to automating simple calls - taking payments, scheduling appointments, or providing account balances. While most accept touch-tone/DTMF input, the more advanced ones also accept natural language speech as input and hence are referred to as speech-enabled IVRs.

However, these IVRs are becoming obsolete, and there is growing demand to upgrade to a more conversational experience.

1. Traditional IVR/ASR Stack is getting obsolete

Traditionally, speech IVR applications were deployed on-premise, built on the same platform as the main contact center ACD/switch. But soon IVRs were deployed in the cloud too. On-premise IVR vendors include Avaya, Genesys and Cisco; cloud-based IVR vendors include Five9, RingCentral, Mitel and 8x8.

For speech recognition, the most popular option in the past was Nuance. Nuance's ASR technology - which gained popularity in the early 2000s - preceded today's neural-network-based engines. It was pre-Alexa and pre-Siri, so both the vocabulary (i.e., what the customer could actually say in response to a prompt) and the accuracy were limited compared to today's neural-network-based Speech-to-Text. In addition, the protocol for communication between Nuance and the telephony stack was MRCP, a protocol that has not been actively developed for many years.

2. Modern Conversational AI Stack is being reimagined with Gen AI

The modern conversational AI stack for voice bots includes a modern neural ASR/Speech-to-Text engine, neural Text-to-Speech, and an NLU-based bot framework. It is much more capable than what was available to build directed-dialog speech IVRs in the past.

Today’s neural ASR/STT engines can transcribe not just a few words or phrases, but entire sentences and they also do it very accurately. As consumers get used to such experiences with their voice assistants at home or in their cars, they expect the same when they contact a business over the phone.

There have also been significant advances in modern no-code NLU bot frameworks that are used to build the bot logic and conversation flow. These bot frameworks are also evolving with the advent of generative AI technologies like ChatGPT.

While the above describes good reasons to upgrade IVRs, there are some key factors driving a rather rushed timeline for businesses to plan this IVR migration.

3. Factors  driving a rather rushed timeline for IVR Migration

Time is running out for IVR Migration

a. Contact Center platforms focused on Cloud sales

Companies with on-premise contact centers are increasingly migrating to the cloud, and even the on-premise contact center vendors are focused on migrating their install base to the cloud. When an enterprise migrates its contact center platform to the cloud, it needs to migrate its IVRs too.

b. Modern ASR/STTs focused on selling their AI/neural-network-based offerings

As explained above, modern AI/neural-network-based ASR/STT engines are more accurate and support a conversational experience, so ASR/STT vendors are focused on selling these newer offerings. However, it is not possible for businesses to use these newer ASRs with their existing telephony stack: both the protocol support (WebSockets and gRPC vs. MRCP) and the application development method (grammar-based recognition vs. large-vocabulary transcription with intent capture) are very different.

c. Demand to use a single Application/Bot Framework for both Chat and Voice

In the past, companies built the application logic for the chatbot and the IVR independently; very often different vendors provided the chatbot and the voice bot. However, given the powerful and flexible conversational AI platforms now available in the market, companies want to use the same platform to drive the conversation turns of both chatbot and voice bot interactions.

4. Taking Control of when to upgrade the IVR

As explained above, migrating from the traditional IVR stack to a modern Conversational AI stack entails not just rewriting the application logic but it is also likely to involve moving the infrastructure from on-premise to the cloud. This can be an expensive undertaking.

At Voicegain, we think companies should be able to do this on their own timeline.

We have developed an ASR that supports both (a) grammar-based recognition over MRCP and (b) large-vocabulary transcription on audio streamed using modern protocols like WebSockets. Our platform can also be deployed on-premise or in your VPC. So our platform supports an existing application without any rewrite, while also being capable of supporting a conversational voice bot when one is developed at some point in the future.

As a result, customers can take control of when to migrate/upgrade their IVRs. Most importantly, they are not forced to invest in an upgrade/migration of their entire IVR application just because an existing ASR vendor stops supporting an older version of its software.

If you have any questions, or would like to schedule a discussion to understand your IVR upgrade options, contact us at support@voicegain.ai.

To test our MRCP grammar-based ASR or our large vocabulary ASR, please sign up for a free developer account. Instructions are provided here.

Read more → 
Enterprise
Voicegain Achieves SOC2 Type 1 Compliance, Reinforcing Commitment to Data Security and Privacy

Voicegain, the leading Edge Voice AI platform for enterprises and Voice SaaS companies, is thrilled to announce the successful completion of a System and Organizational Control (SOC) 2 Type 1 Audit performed by Sensiba LLP.

Developed by the American Institute of Certified Public Accountants (AICPA), the SOC 2 information security audit provides a report on the examination of controls relevant to the trust services criteria categories covering security, availability, processing integrity, confidentiality, and privacy. A SOC 2 Type 1 report describes a service organization's systems and whether the design of specified controls meets the relevant trust services categories. Voicegain's SOC 2 Type 1 report had no noted exceptions and was therefore issued with a "clean" audit opinion from Sensiba.

"As a Privacy first Voice AI Platform, we take security very seriously here at Voicegain. As a developer using our APIs or as a user of our platform, you shouldn’t have to worry about the controls in place for your sensitive voice data." said Dr Jacek Jarmulak, Co-founder, CTO & CISO Of Voicegain.

"At Voicegain, we have maintained a robust information security program for over a decade now and this has been communicated throughout our organization for quite some time now. Earlier this year, we achieved PCI-DSS compliance for our Developer platform and today's successful completion of the SOC 2 Type 1 Audit marks a significant milestone in our security and compliance journey." continued Dr Jarmulak.

What Is SOC 2?

Service Organization Control 2 (SOC 2) is a set of criteria established by the American Institute of Certified Public Accountants (AICPA) to assess controls relevant to the security, availability, and processing integrity of the systems a service organization uses to process users' data, and the confidentiality and privacy of the information processed by these systems. SOC 2 compliance is important for Voice AI platforms like Voicegain, as it demonstrates that we have implemented controls to safeguard users' data.

There are two types of SOC 2 compliance:

  1. SOC 2 Type 1: Validates that an organization has established appropriate controls at a specific point in time. Voicegain's successful audit established this as of July 14, 2023.
  2. SOC 2 Type 2: Confirms that an organization has maintained and operated those controls over a period of time, typically 6 to 12 months.

Implications for Voicegain Users

From a functional standpoint, achieving SOC 2 Type 1 compliance doesn't change anything: our APIs and apps will work exactly as they always have. However, SOC 2 Type 1 compliance means that we have established a set of controls and processes to ensure the security of our users' data. It demonstrates that we have the necessary measures in place to protect sensitive information from unauthorized access and disclosure.

What’s Next? SOC 2 Type II

Our commitment to security doesn't end with SOC 2 Type 1. We are already working towards achieving SOC 2 Type 2 compliance, which we plan to accomplish in Q1 2024. This will further validate that we maintain the highest levels of security, ensuring that our users can continue to rely on and trust Voicegain.

Voicegain's speech recognition technology has been widely recognized for its innovation and impact across industries. From call centers and customer service applications to transcription of Zoom Meetings in enterprise and healthcare and transcription of classroom lectures, Voicegain's solutions have demonstrated their ability to transform audio data into actionable insights. The attainment of SOC 2 Type 1 compliance further solidifies Voicegain's position as a reliable and responsible provider of cutting-edge speech recognition services.

"We understand that in today's digital landscape, data security is non-negotiable," added Arun Santhebennur, Co-founder & CEO of Voicegain. "By achieving SOC 2 Type 1 compliance, we aim to set an industry standard for ensuring the confidentiality and integrity of the data entrusted to us. Our customers can have full confidence that their sensitive information is protected throughout its lifecycle."

To request a copy of our SOC 2 Type 1 report, please email security.it@voicegain.ai

Read more → 
Sign up for an app today
* No credit card required.

Enterprise

Interested in customizing the ASR or deploying Voicegain on your infrastructure?

Contact Us → 
Voicegain - Speech-to-Text
Under Your Control