Our Blog

News, Insights, sample code & more!

ASR
Announcing the launch of Voicegain Whisper ASR/Speech Recognition API for Gen AI developers

Today we are really excited to announce the launch of Voicegain Whisper, an optimized version of OpenAI's Whisper speech recognition/ASR model that runs on Voicegain's managed cloud infrastructure and is accessible through Voicegain APIs. Developers can use the same well-documented, robust APIs and infrastructure that process over 60 million minutes of audio every month for leading enterprises like Samsung and Aetna and innovative startups like Level.AI, Onvisource and DataOrb.

The Voicegain Whisper API is a robust and affordable batch Speech-to-Text API for developers who are looking to integrate conversation transcripts with LLMs like GPT-3.5 and GPT-4 (from OpenAI), PaLM 2 (from Google), Claude (from Anthropic), LLaMA 2 (open source from Meta), and their own private LLMs to power generative AI apps. OpenAI has open-sourced several versions of the Whisper model. With today's release, Voicegain supports Whisper-medium, Whisper-small and Whisper-base, and offers transcription in the many languages that Whisper supports. A minimal sketch of the transcribe-then-prompt flow is shown below.

Here is a link to our product page
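
Here is a minimal sketch of that flow: submit a recording for batch transcription, then hand the transcript to an LLM. The endpoint path, request fields, model name, and response shape below are illustrative assumptions - consult the Voicegain API reference for the exact schema.

```python
import requests

VOICEGAIN_JWT = "<JWT from the Voicegain web console>"

# 1) Submit an offline/batch transcription request (payload shape is illustrative).
resp = requests.post(
    "https://api.voicegain.ai/v1/asr/transcribe",
    headers={"Authorization": f"Bearer {VOICEGAIN_JWT}"},
    json={
        "audio": {"source": {"fromUrl": {"url": "https://example.com/call.wav"}}},
        "settings": {"asr": {"model": "whisper-medium"}},  # hypothetical model selector
    },
    timeout=30,
)
transcript = resp.json()["result"]["transcript"]  # hypothetical response shape

# 2) Pass the transcript to an LLM - OpenAI shown here; any of the LLMs above would work.
llm = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": "Bearer <OPENAI_API_KEY>"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "Summarize this call transcript."},
            {"role": "user", "content": transcript},
        ],
    },
    timeout=60,
)
print(llm.json()["choices"][0]["message"]["content"])
```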


There are four main reasons for developers to use Voicegain Whisper over other offerings:

1. Support for Private Cloud/On-Premise deployment (integrate with Private LLMs)

While developers can use Voicegain Whisper on our multi-tenant cloud offering, a big differentiator for Voicegain is our support for the Edge. The Voicegain platform has been architected and designed for single-tenant private cloud and datacenter deployment. In addition to the core deep-learning-based Speech-to-Text model, our platform includes our REST API services, logging and monitoring systems, auto-scaling, and offline task and queue management. Today the same APIs enable Voicegain to process over 60 million minutes a month. We can bring this practical, real-world experience of running AI models at scale to our developer community.

Since the Voicegain platform is deployed on Kubernetes clusters, it is well suited for modern AI SaaS product companies and innovative enterprises that want to integrate with their private LLMs.

2. Affordable pricing - 40% less expensive than OpenAI

At Voicegain, we have optimized Whisper for higher throughput. As a result, we are able to offer access to the Whisper model at a price that is 40% lower than what OpenAI charges.

3. Enhanced features for Contact Centers & Meetings

Voicegain also offers critical features for contact centers and meetings. Our APIs support two-channel stereo audio, which is common in contact center recording systems. Word-level timestamps, needed to map audio to text, are another important feature our API offers. Enhanced diarization, a feature our Voicegain models already provide and one that is required for contact center and meeting use-cases, will soon be made available on Whisper.

4. Premium Support and uptime SLAs

We also offer premium support and uptime SLAs for our multi-tenant cloud offering. These APIs today process over 60 million minutes of audio every month for our enterprise and startup customers.

About the OpenAI Whisper Model

OpenAI Whisper is an open-source automatic speech recognition (ASR) system trained on 680,000 hours of multilingual and multitask supervised data collected from the web. The model's architecture is based on encoder-decoder transformers, and it has shown significant performance improvements over previous models because it was trained on a variety of speech processing tasks, including multilingual speech recognition, speech translation, spoken language identification, and voice activity detection.

Figure: OpenAI Whisper model encoder-decoder transformer architecture (Source)

Getting Started with Voicegain Whisper

Learn more about Voicegain Whisper by clicking here. Any developer - whether a one-person startup or a large enterprise - can access the Voicegain Whisper model by signing up for a free developer account. We offer 15,000 minutes of free credits when you sign up today.

There are two ways to test Voicegain Whisper; they are outlined here. If you would like more information or have any questions, please drop us an email at support@voicegain.ai.

Languages
New Languages Available in Voicegain Speech-to-Text

[Updated: 5/27/2022]

In addition to the current support for English, Spanish, Hindi, and German languages in its Speech-to-Text platform, Voicegain is releasing support for many new languages over the next couple of months.

Languages Generally Available right now
  • English - blend of mainly US, UK, Irish accents - includes punctuation and digit/time/currency formatting support
  • Spanish - focused on Latin America accents - includes punctuation and digit/time/currency formatting support
  • Hindi
  • German

You can access these languages right now from the Web Console, via our Transcribe App, or via the API.

Languages available right now in Alpha early access program

Upon request, we will make these languages available for your testing. Generally, they can be available within hours of receiving a request. Please contact us at support@voicegain.ai.

  • Portuguese - blend of European and Brazilian Portuguese
  • Polish
  • Korean
  • Dutch
  • Ukrainian
What does "Alpha early access" mean?

The Alpha early access models differ from full-featured production models in the following ways:

  • They are not good at rejecting background noise, music, etc.
  • The vocabulary may be limited - they may not be good at recognizing names of products, people, places, etc. Generally, the vocabulary is the core everyday vocabulary of a given language.
  • They will not be good at recognizing heavy or unusual accents.
  • Punctuation and capitalization are not available.
  • Formatting of digits, time, dates, currencies is not available.
  • For languages not using the Latin alphabet, there could be occasional glitches in the characters in the transcript.
  • Initially most of those models are available in offline/batch mode only. We are working on training the real-time/streaming models.

As the alpha models are trained on additional data, their accuracy will improve. We are also working on punctuation, capitalization, and formatting for each of those models.

Languages that will be available in the first half of June
  • French - blend of European French (Metropolitan) and Canadian French (Quebec)

We will update this post as soon as these languages are available in the Alpha early access program.

Languages available by end of June
  • Arabic
  • Italian
Do not see a language that you need?

Since our language models are created exclusively with end-to-end deep learning, we can perform transfer learning from one language to another and quickly support new languages and dialects to better meet your use case. Don't see your language listed above? Contact us at support@voicegain.ai, as new languages and dialects are released frequently.


Model Training
Acoustic Model Training delivers big gains in ASR Accuracy

This is a case study of training the acoustic model of a deep-learning-based Speech-to-Text/ASR engine for a voice bot that takes orders for Indian food.

The Problem

The client approached Voicegain because they were experiencing very low speech recognition accuracy in a specific telephony-based voice bot for food ordering.

The voice bot had to recognize Indian food dishes with acceptable accuracy, so that the dialog could be conducted in a natural conversational manner rather than having to fall back to rigid call flows, e.g., enumerating through a list.

The spoken responses would be provided by speakers of South Asian Indian origin. This meant that in addition to having to recognize unique names, the accent would be a problem too.

The out-of-the-box accuracy of Voicegain and other prominent ASR engines was considered too low. Our accuracy was particularly low because our training datasets did not contain any examples of Indian dish names spoken with heavy Indian accents.

With the use of Hints, the results improved significantly and we achieved an accuracy of over 30%. However, 30% was still far from good enough.

The Approach

Voicegain first collected relevant training data (audio and transcripts) and trained the acoustic model of our deep-learning-based ASR. We have had good success with this approach in the past, in particular with our latest DNN architecture - see, e.g., our post about recognition of UK postcodes.

We used a third-party data generation service to initially collect over 11,000 samples of Indian food utterances - 75 utterances per participant. The quality varied widely, but that was actually desirable because it reflected the audio quality that would be encountered in a real application. Later we collected an additional 4,600 samples.

We trained two models:

  • A "balanced" model - where the Food Dish training data was combine with our complete training set to train the model.
  • A "focused" model - there the Food Dish data was combined with just a small subset of our other training data set.

We first trained on the 10k set, collected the benchmark results, and then trained on the additional 5k of data.

We randomly selected 12 sets of 75 utterances (894 total, after some bad recordings were removed) as a benchmark set and used the remaining 10k+ for training. We plan to share a link to the test data set here in a few days.

The Results - A 75% improvement in accuracy!

We compared our accuracy against Google and Amazon AWS, both before and after training, and the results are presented in the chart below. The accuracy presented here is the accuracy of recognizing the whole dish name correctly: if one word of several in a dish name was mis-recognized, it was counted as a failure to recognize the dish name. The same rule applied if an extra word was recognized, except for additional words that can easily be ignored, e.g., "a", "the", etc. We also allowed for reasonable variances in spelling that would not introduce ambiguity, e.g., "biryani" was considered a match for "biriyani". A minimal sketch of this scoring rule follows.
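
To make the scoring rule concrete, here is a minimal sketch in Python. The filler-word and spelling-variant lists are just the examples from the description above, not the full lists used in the benchmark.

```python
# Whole-dish-name scoring: a result counts as correct only if every word
# matches, after dropping ignorable filler words and mapping known spelling
# variants to a canonical form.
FILLERS = {"a", "the"}
VARIANTS = {"biriyani": "biryani"}  # map acceptable variants to one spelling

def normalize(text: str) -> list:
    words = [w for w in text.lower().split() if w not in FILLERS]
    return [VARIANTS.get(w, w) for w in words]

def dish_correct(expected: str, recognized: str) -> bool:
    # One mis-recognized, missing, or (non-filler) extra word fails the match.
    return normalize(expected) == normalize(recognized)

assert dish_correct("chicken biryani", "the chicken biriyani")
assert not dish_correct("chicken biryani", "chicken curry")
```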

Note that the tests on the Voicegain recognizer were run with various audio encodings:

  • PCMU 8kHz - telephony-quality audio
  • L16 16kHz - closer to the audio quality you would expect from most WebRTC applications; delivers better accuracy

Also, the AWS test was done in offline mode (which generally delivers better accuracy), while Google and Voicegain tests were done in streaming (real-time) mode.

We did a similar set of tests with the use of hints (we did not include AWS because our test script did not support AWS hints at that time).



This shows that huge benefits can be achieved by targeted model training for speech recognition. For this domain, which was new to our model, we increased accuracy by over 75% (from 10.18% to 86.24%) as a result of training.

As you can see, after training we exceeded the Speech-to-Text accuracy of Google by over 45% (86.24% vs 40.38%) when no hints were used. With the use of hints, we were better than Google STT by about 36% (87.58% vs 61.30%).

We examined the cases where mistakes were still made, and they fell into 3 broad categories:

  • Recordings missing the end part of the last word, because the stop-record button was pressed while the last word was still being spoken. The recorded part of the last word is generally recognized correctly, e.g., instead of "curry" we recognize "cu". (We plan to manually review the benchmark set, modify the expected values according to what is actually being said, and then recompute the accuracy numbers.)
  • Really bad quality recordings, where the volume of the audio is barely above the background noise level. In these cases we usually missed some words or parts of words. This also explains why hints do not give more improvement - there are no sufficiently good partial hypotheses for the hints to boost.
  • Loud background speech noise. In these cases we usually recognized additional words beyond what was expected.

We think the first type of problem can be overcome by training on additional data, and that is what we are planning to do, hoping to eventually get accuracy close to 85% (for L16 16kHz audio). The second type could potentially be resolved by post-processing in the application logic if we return the dB values of the recognized words.

Interested?

If your speech application also suffers from low accuracy, and using hints or text-based language models is not working well enough, then acoustic model training could be the answer. Send us an email at info@voicegain.ai and we can discuss a project to show how a Voicegain-trained model can achieve the best accuracy on your domain.

Benchmark
Getting high Speech Recognition Accuracy on Alphanumeric Sequences: A Case Study with UK Zip Codes

It is common knowledge among AI/ML developers working with speech recognizers and ASR software that achieving high real-world accuracy on sequences of alphanumerics is very difficult. Examples of alphanumeric sequences are serial numbers of various products, policy numbers, case numbers, and postcodes (e.g., UK and Canadian).

Some reasons why ASRs have a hard time recognizing alphanumerics are:

  • some letters sound very similar, e.g. P and B, T and D
  • A and 8 sound very similar
  • combinations of letters and digits sound like words, e.g. "E Z" sounds like "easy", "B 9" sounds like "benign", etc.

Another reason why the overall accuracy is poor is simply that errors compound: the longer the sequence, the more likely it is that at least one symbol will be misrecognized and thus the whole sequence will be wrong. If the accuracy of a single symbol is 90%, then the accuracy on a sequence of 6 symbols will be only 53% (assuming the errors are independent) - see the quick calculation below. Because of this, major recognizers deliver poor results on alphanumerics. In our interactions with customers and prospects, we have consistently heard about the challenges they have encountered in getting good accuracy on alphanumeric sequences. Some of them post-process the large-vocabulary results, in particular if a set of hypotheses is returned. We used such approaches back when we built IVR systems as Resolvity and had to use a 3rd-party ASR. In fact, we were awarded a patent for one such post-processing approach.
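
The arithmetic is worth seeing once: with independent errors, sequence accuracy is the per-symbol accuracy raised to the sequence length.

```python
# Per-symbol accuracy of 90% compounds quickly over a sequence.
p_symbol = 0.90
for n in (1, 4, 6, 8):
    print(f"{n} symbols: {p_symbol ** n:.1%}")
# 1 symbols: 90.0%  /  4 symbols: 65.6%  /  6 symbols: 53.1%  /  8 symbols: 43.0%
```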

Case Study: British Postcodes

While working on a project aiming to improve recognition of UK postcodes, we collected over 9,000 sample recordings of various people speaking randomly selected valid UK postcodes. About 1/3 of the speakers had a British accent, while the rest had a variety of other accents, e.g., Indian, Chinese, Nigerian, etc.

Out of that data set, we reserved some recordings for testing. The results reported here are from a 250-postcode test set (we will soon provide a link to this test set on our GitHub). As of the date of this blog post, Google Speech-to-Text achieved only 43% accuracy on this test set and Amazon 58%.

At Voicegain we use two approaches to achieve high accuracy on alphanumerics: (a) training the recognizer on realistic data sets containing sample alphanumeric sequences, and (b) using grammars to constrain the possible recognitions. Depending on the scenario, we can use one or the other, or even both approaches.

Here is a summary of the results that we achieved on the UK postcodes set.


Improving Recognition with Acoustic Model Training

We used the data set described above in our most recent training round for our English Model and have achieved significant improvement in accuracy when testing on a set of 250 UK postcodes which were not used in training.

  • For unconstrained large-vocabulary recognition, the accuracy improved from 51.60% to 63.60% (a gain of 12%). The training helped both the acoustic part of our model (e.g., letters which were skipped by the base recognizer because they were not enunciated well enough were picked up after training - 8 was recognized correctly instead of H, etc.) and the language part of our model (e.g., correctly recognizing "two" instead of "to" because of the context).
  • For grammar-based recognition (more about it in the section below), the accuracy improved from 79.31% to 84.03% (a gain of 4.72%). Because in grammar-based recognition the language model is fully defined by the grammar, the gain here came from being able to distinguish more acoustic nuances between various letters and numbers (e.g., someone's long R is no longer recognized as "A R", "L P" is now correctly recognized instead of "A P", etc.).

Improving Recognition with the use of Grammars

The Voicegain DNN recognizer has the ability to use grammars for speech recognition - a somewhat unique feature among modern speech recognizers. We support the GRXML and JSGF grammar formats. Grammars are used during the search - they are not merely applied to the result of large-vocabulary recognition - and this gives us the best possible results. (BTW, we can also combine grammar-based recognition with large-vocabulary recognition; see this blog post for more details.)

For UK postcode recognition we defined a grammar which captures all the ways in which valid UK postcodes can be said. You can see the exact grammar that we used here. For a rough feel of the sequences such a grammar permits, see the sketch below.
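
We cannot reproduce the full grammar in this post, but a regular expression gives an idea of the set of strings it allows. The pattern below is a simplified approximation of the standard UK postcode layout, for illustration only - the production GRXML/JSGF grammar is more precise and, unlike a regex, constrains the recognizer's search itself.

```python
# Simplified approximation of the UK postcode layout, for illustration only.
# Outward code: area (1-2 letters) + district (digit, then optional digit or letter);
# inward code: sector (one digit) + unit (two letters).
import re

POSTCODE = re.compile(r"^[A-Z]{1,2}[0-9][0-9A-Z]? [0-9][A-Z]{2}$")

for s in ["SW1A 1AA", "M1 1AE", "CR2 6XH", "NOT A CODE"]:
    print(s, bool(POSTCODE.match(s)))
```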


Grammar-based UK postcode recognition gives significantly better results than large-vocabulary recognition.

  • On our base model, before training, the difference was 27.71% (79.31% vs 51.60%)
  • On the trained model the difference was smaller, but still very large: 20.43% (84.03% vs 63.60%)
  • Compared to Amazon recognizer we were 25.62% better after training (84.03% vs 58.40%)
What if the possible set of alphanumeric sequences cannot be defined using a grammar?

We have come across scenarios where the alphanumeric sequences are difficult to define exhaustively using grammars, e.g., some serial numbers. In those cases our recognizer supports the following approach (a minimal sketch follows the list):

  • Define a grammar that matches a superset of valid sequences,
  • Use a lookup table to match against a known list of valid and likely-to-occur sequences. For example, if these are serial numbers and the application deals with warranty registration, we can narrow down the set of possible serial numbers that we may have to recognize.
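
Here is a minimal sketch of that two-step idea in Python. The serial number format, the regex standing in for the superset grammar, and the lookup table are all invented for illustration; in the real system the superset is a GRXML/JSGF grammar applied during recognition, not a post-hoc regex.

```python
import re

SERIAL_SUPERSET = re.compile(r"^[A-Z]{2}-\d{4}$")  # hypothetical SN format
KNOWN_SERIALS = {"AB-1234", "XK-9921", "QT-0007"}   # e.g., loaded from a warranty DB

def validate(recognized: str):
    """Accept a recognized string only if it is well-formed AND a known serial."""
    sn = recognized.strip().upper()
    if SERIAL_SUPERSET.match(sn) and sn in KNOWN_SERIALS:
        return sn
    return None  # no match -> the application would reprompt the caller

print(validate("ab-1234"))   # AB-1234
print(validate("ZZ-9999"))   # None -> well-formed but not a known serial
```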

Want to test your alphanumeric use case?

We are always ready to help prospective customers solve their speech recognition challenges. If your current recognizer does not deliver satisfactory results on sequences of alphanumerics, start a conversation over email at info@voicegain.ai. We are always interested in accuracy.

Voice Bot
Voicegain as a single ASR for both Speech IVRs & Voice Bots

This post highlights how Voicegain's deep learning based ASR supports both speech-enabled IVRs and conversational Voice Bots.

This can help Enterprise IT organizations simplify their transition from directed dialog telephony IVR to a modern conversational Voice Bot.

This is because of a very important feature of Voicegain: Voicegain's ASR can be accessed in two ways.

1) MRCP ASR for Speech IVR - the traditional way: Voicegain ASR can be invoked over MRCP from a VoiceXML IVR application developed using Speech grammars. Voicegain is a "drop-in" replacement for the ASR used in most of these IVRs.

2) Speech-to-Text/ASR for Bots - the modern way: Voicegain offers APIs that integrate with (a) SIP telephony or CPaaS platforms and (b) Bot Frameworks that present a REST endpoint. Examples of supported bot frameworks include Google Dialogflow, RASA and Azure Bot Service.

Directed Dialog IVRs are not going away any time soon!

When it comes to voice self-service, enterprises understand that they will need to maintain and operate traditional Speech IVRs for many years.

This is because existing users have been trained over the years and have become proficient with these speech-enabled IVRs. They would prefer not having to learn a new user interface like Voice Bots if they can avoid it. Also, enterprises have made substantial investments in developing these IVRs, and they would like to continue to support them as long as they generate adequate usage.

However, an increasing "digital-native" segment of customers demands Alexa-like conversational experiences, as they provide a much better user experience than IVRs. This is driving substantial interest among enterprises in developing Voice Bots as a long-term replacement for IVRs.

Net-net, even as enterprises develop new conversational Voice Bots for the long term, in the near term they will need to support and operate these IVRs.

Bots & IVRs use different ASRs, protocols and App tech stacks

ASR: While both Voice Bots and IVRs require ASR/Speech-to-Text, the ASRs that support conversational voice bots are different from the ASRs used in directed dialog IVRs. The ASRs that support IVRs are based on HMMs (Hidden Markov Models), and the apps use speech grammars when invoking the ASR. Voice bots, on the other hand, work with large-vocabulary deep-learning-based STT models.

Protocol: The communication protocols between the ASR & the app are also very different. An IVR App, usually written in VoiceXML, communicates with the ASR over MRCP; modern Bot Frameworks communicate with ASRs over modern web-based protocols like WebSockets and gRPC.

App Stack: The app logic of a directed dialog IVR is built on a VoiceXML-compliant application IDE. Popular vendors in this space include Avaya Aura Experience Portal (AAEP), Cisco Voice Portal (CVP) and Genesys Voice Portal/Genesys Engage. This article explores this in more detail.

Modern Voice Bots, on the other hand, require Bot Frameworks like Google Dialogflow, Kore.ai, RASA, AWS Lex and others. These use modern NLU technology to extract intent from transcribed text. Bot Frameworks also offer sophisticated dialog management to dynamically determine conversation turns, and they allow integration with other enterprise systems like CRM and billing.

When it comes to Voice Bots, most enterprises want to "voice-enable" the chatbot interaction logic that is already developed on the same Bot Framework and then integrate it with telephony - so a caller can use a phone number to "dial" the chatbot and interact using Speech-to-Text and Text-to-Speech.

The Solution: Use Voicegain ASR to support both IVRs & Bots

The Voicegain platform is the first and currently the only ASR/Speech-to-Text platform in the market that can support both a directed dialog Speech IVR and a conversational Voice Bot using a single acoustic and language model.

Cloud Speech-to-Text APIs from Google, Amazon and Microsoft support large vocabulary speech recognition and can support voice bots. However, they cannot be a "drop-in" replacement for the MRCP ASR functionality in a directed dialog IVR.

And traditional MRCP ASRs that supported directed dialog IVRs (e.g., Nuance, LumenVox, etc.) do not support large vocabulary transcription.

Integration with Bot Frameworks and Telephony

Voicegain offers Telephony Bot APIs that provide Bot developers with the "mouth" and the "ear" of the Bot.

These are callback-style APIs that an enterprise can use along with a Bot Framework of its choice.

In addition to the actual ASR, Voicegain also embeds a telephony/PSTN interface. There are 3 possibilities:

1. Integration with modern CPaaS platforms like Twilio, SignalWire and Telnyx: with such an integration, callers can "dial and talk" to a chatbot over a phone number.

2. SIP INVITE from a CCaaS or CPaaS platform: the Bot developer can transfer call control to Voicegain using a SIP INVITE. After the call has been transferred, the Bot Framework can interact using the above-mentioned APIs. At the end of the bot interaction, you can end the Bot session and continue the live conversation on the CCaaS/CPaaS platform.

3. Voicegain embedded CPaaS: Voicegain has also embedded the Amazon Chime CPaaS, so developers can purchase a phone number and start building their voice bot in a matter of minutes.

Essentially, by using Telephony Bot APIs alongside any Bot Framework, an enterprise can have a Bot Framework and an ASR that serve all 3 self-service mediums - chatbots, voice bots and directed dialog IVRs.

To explore this idea further, please send us an email at info@voicegain.ai

Benchmark
Speech-to-Text Accuracy Benchmark - October 2021

[UPDATE 1/23/22: After training on additional data, the Voicegain recognizer now achieves an average WER of 11.89% (an improvement of 0.35%) and a median WER of 10.82% (an improvement of 0.21%) on this benchmark.

Voicegain is now better than Google Enhanced on 44 files (previously 39).

Voicegain is now the most accurate recognizer on 12 of the files (previously 10).

We have additional data on which we will be training soon and will then provide a complete new set of results and comparison.]

It has been over 4 months since we published our last speech recognition accuracy benchmark. Back then, the results were as follows (from most accurate to least accurate): Amazon, Microsoft (a close 2nd), then Google Enhanced and Voicegain (a close 4th), and then, far behind, IBM Watson and Google Standard.

Since then we have tweaked the architecture of our model and trained it on more data. This resulted in a further increase in the accuracy of our model. As far as the other recognizers are concerned, Microsoft improved the accuracy of their model the most, while the accuracy of the others stayed more or less the same.

Methodology

We repeated the test using a methodology similar to before: we used 44 files from the Jason Kincaid data set and 20 files published by rev.ai, and removed all files where the best recognizer could not achieve a Word Error Rate (WER) lower than 25%. Note: previously we used 20% as the threshold, but this time we decided to keep more low-accuracy files to illustrate the differences between recognizers on that type of file.

Only three files were so difficult that none of the recognizers could achieve 25% WER; the removed files were radio phone interviews with bad recording quality.
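
For reference, here is a minimal sketch of how WER is computed: (substitutions + deletions + insertions) divided by the number of reference words, via a standard word-level edit distance. This is a simplified illustration, not the exact harness used for the numbers in this post.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # d[i][j] = edit distance between the first i ref words and first j hyp words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167 (1 sub / 6 words)
```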

Voicegain now better than Google Enhanced

As you can see in the results chart above, Voicegain is now better than Google Enhanced on both average and median WER. Looking at the individual files, the results also show that Voicegain's accuracy is in most cases better than Google's:

  • Voicegain was better than Google Enhanced on 39 files
  • Google Enhanced was better on 20 files
  • They were tied on 2 files.

Other results

Key observations about other results:

  • Considering average and median WER, Voicegain is essentially tied with Amazon: our median value is better by 0.07%, but our average is worse by 0.76%
  • Considering average and median WER, the Microsoft recognizer is better than Amazon, with the average better by 0.49% and the median better by 0.69%
  • Looking at the individual audio files, the best-scoring recognizers were:
  • Amazon - was best on 29 files
  • Microsoft - was best on 20 files
  • Voicegain - was best on 10 files
  • Google Enhanced - was best on 2 files

As you can see, the field is very close and you get different results on different files (the average and median do not paint the whole picture). As always, we invite you to review our apps, sign up, and test our accuracy with your data.

Out-of-the-box accuracy is not everything

When you have to select speech recognition/ASR software, there are other factors beyond out-of-the-box recognition accuracy, for example:

  • Ability to customize the Acoustic Model - the Voicegain model may be trained on your audio data; we have demonstrated accuracy improvements of 7-10%. In fact, for one of our customers with adequate training data and good-quality audio, we were able to achieve a WER of 0.5% (99.5% accuracy)
  • Ease of integration - many Speech-to-Text providers offer limited APIs, especially for developers building applications that require interfacing with telephony or on-premise contact center platforms.
  • Price - Voicegain is 60%-75% less expensive than other Speech-to-Text/ASR software providers while offering almost comparable accuracy. This makes it affordable to transcribe and analyze speech in large volumes.
  • Support for On-Premise/Edge Deployment - the cloud Speech-to-Text service providers offer limited support for deploying their speech-to-text software in client datacenters or on the private clouds of other providers. Voicegain, on the other hand, can be installed on any Kubernetes cluster - whether managed by a large cloud provider or by the client.

Take Voicegain for a test drive!

1. Click here for instructions to access our live demo site.

2. If you are building a cool voice app and you are looking to test our APIs, click here to sign up for a developer account and receive $50 in free credits

3. If you want to take Voicegain as your own AI Transcription Assistant to meetings, click here.

Voice Bot
How to build a Voicebot using Voicegain, Twilio, RASA, and AWS Lambda

You can find the complete code (minus the RASA logic - you will have to supply your own) in our GitHub repository.

What does it do?

The setup allows you to call a phone number and then interact with a Voicebot that uses RASA as the dialog logic engine.

How does it work?

The Components

  • Twilio Programmable Voice - We configure a Twilio phone number to point to a TwiML App that has the AWS Lambda function as the callback URL.
  • AWS Lambda function - a single Node.js function with an API Gateway trigger (simple HTTP API type).
  • Voicegain STT API - we use the /asr/transcribe/async API with input via a WebSocket stream and output via a callback. The callback goes to the same AWS Lambda function, but the Voicegain callback is a POST while the Twilio callback is a GET.
  • RASA - dialog logic is provided by a RASA NLU dialog server accessible over the RestInput API.
  • AWS S3 - used for storing the transcription results at each dialog turn.

November 2021 Update: we do not recommend S3 and AWS Lambda for a production setup. A more up-to-date review of various options to build a Voice Bot is described here. You should consider replacing the functionality of S3 and AWS Lambda with a web server that can maintain state - like Node.js or Python Flask.

The Steps

The sequence diagram is provided below. Basically, the sequence of operations is as follows:

  1. Call a Twilio phone number
  2. Twilio makes an initial callback to the Lambda function
  3. Lambda function sends "Hi" to RASA, and RASA responds with the initial dialog prompt
  4. Lambda function calls Voicegain to start an async transcription session. Voicegain responds with the URL of a WebSocket for audio streaming
  5. Lambda function responds to Twilio with a TwiML <Connect><Stream> command to open a Media Stream to Voicegain. The command also contains the text of the question prompt
  6. Voicegain uses TTS to generate an audio prompt from the text of the RASA question and streams it via the WebSocket to Twilio for playback
  7. The caller hears the prompt and says something in response
  8. Twilio streams the caller's audio to the Voicegain ASR for speech recognition
  9. Voicegain ASR transcribes the speech to text and makes a callback to the Lambda function with the transcription result
  10. Lambda function stores the transcription result in S3
  11. Voicegain closes the WebSocket session with Twilio
  12. Twilio notices the end of the ASR session and makes a callback to the Lambda function to find out what to do next
  13. Lambda function retrieves the recognition result from S3 and passes it to RASA
  14. RASA processes the answer and generates the next question in the dialog
  15. The next turn continues the same as from step 4
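
Below is a minimal sketch of the callback server side of this sequence, written as a stateful Python Flask app (per the November 2021 update above) rather than Lambda + S3. The route paths and the Voicegain payload field names are illustrative assumptions; the RASA RestInput request/response shape and Twilio's CallSid parameter are as documented by those projects.

```python
import requests
from flask import Flask, request

app = Flask(__name__)
RASA_URL = "http://localhost:5005/webhooks/rest/webhook"  # RASA RestInput channel
sessions = {}  # per-call dialog state, replacing S3 from the Lambda version

@app.route("/twilio", methods=["GET", "POST"])
def twilio_callback():
    call_id = request.values["CallSid"]
    # First turn: send "Hi" to RASA (step 3); later turns: send the last transcript (step 13).
    user_text = sessions.get(call_id, "Hi")
    rasa = requests.post(RASA_URL, json={"sender": call_id, "message": user_text})
    prompt = rasa.json()[0]["text"]
    # Simplification: we answer with TwiML <Say>. The flow above instead starts a
    # Voicegain async ASR session and returns <Connect><Stream> (steps 4-6).
    twiml = f"<Response><Say>{prompt}</Say></Response>"
    return twiml, 200, {"Content-Type": "text/xml"}

@app.route("/voicegain", methods=["POST"])
def voicegain_callback():
    # Step 9: Voicegain POSTs the transcription result; keep it for the next turn.
    body = request.get_json()
    sessions[body["sessionId"]] = body["result"]["transcript"]  # hypothetical field names
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```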


