Jacek Jarmulak

Unique feature: RTP streaming support


The Voicegain speech-to-text platform has supported RTP streaming from the very beginning. One of our first applications, several years ago, was live transcription that used the ffmpeg utility to capture audio from a device and stream it to the Voicegain platform over RTP. Over time we added more robust protocols and RTP was rarely used. Recently, however, in one of our deployments we came across a use case where RTP streaming allowed our customer to integrate with a call-center telephony stack in a very straightforward way.


The Voicegain platform does support more advanced streaming protocols for call-center use, like SIPREC or SIP/RTP (SIP Invite). In this particular case, however, we were able to stream from a Cisco CUBE directly to Voicegain using plain RTP. Upon receiving an incoming call, a script is triggered which uses HTTP to establish a new Voicegain transcription session. The session response returns the ip:port of an RTP receiver specific to that session, and these values are passed to the CUBE to establish a direct RTP connection.
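In python, the script's side of that exchange boils down to one HTTP POST and reading two fields out of the response. A minimal sketch (the helper name is ours; the response shape matches the session response shown later in this post — the actual POST call, endpoint URL, and auth are omitted here):

```python
def rtp_endpoint(session_response: dict) -> tuple[str, int]:
    """Extract the session-specific RTP receiver address from the
    JSON response of the session-creation request."""
    stream = session_response["audio"]["stream"]
    return stream["ip"], stream["port"]

# In the real script this dict would come from e.g.
#   requests.post(VOICEGAIN_API_URL, json=payload, headers=auth).json()
# Here we use a fragment shaped like the response shown later in this post:
sample = {"audio": {"stream": {"ip": "35.243.129.143", "port": 5102}}}

ip, port = rtp_endpoint(sample)
# ip and port are then handed to the CUBE (or any other RTP sender)
```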


RTP used like this provides no authentication or security, which would make it generally unsuitable for use over the public Internet. In this particular case, however, our customer benefits from the fact that the entire Voicegain stack can be deployed on-prem. Because it sits on the same isolated network as the CUBE, there are no issues with security or packet loss.


An example

You can visit our github to see a python code example which shows how to establish the speech-to-text session, how to point the RTP sender to the receiver endpoint, and how to receive real-time transcription results via a websocket.


The request body to establish the session is as simple as this:

{
 "sessions": [{
      "asyncMode": "REAL-TIME",
      "websocket": {
        "adHoc": true,
        "useSTOMP": false,
        "minimumDelay": 0
      },
      "content": {
        "incremental": ["words"],
        "full": []
      }
 }],
 "audio": {
    "source": { "stream": { "protocol": "RTP" } },
    "format": "PCMU",
    "channel": "mono",
    "rate": 8000
 },
 "settings": {
    "asr": { "noInputTimeout": 60000 }
 }
}

The audio section defines the RTP streaming part, and the websocket section defines how the results will be sent back over a websocket.


The response looks like this:

{
  "sessions":[{
    "sessionId":"0-0kkg07ha811xfmazivibswr3yt56",
    "asyncMode":"REAL-TIME",
    "websocket":{
      "url":"wss://api.ascalon.ai/v1/0/plain/0-0kkg07ha811xfmazivibswr3yt56"
    },
    "content":{
      "full":[],
      "incremental":["Words"]
    }
  }],
  "audio": {
    "stream":{"ip":"35.243.129.143","port":5102}}
}

In the github example, the stream.ip and stream.port values are passed to ffmpeg, which is used as the RTP streaming client. The example further illustrates how to process the messages with incremental transcription results sent in real time over the websocket.
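To make the handoff concrete, here is a sketch of building such an ffmpeg invocation from the response fields. The codec flags match the PCMU/mono/8000 format requested in the session body; the helper name and input file are illustrative assumptions, and the exact flags in the github example may differ:

```python
def ffmpeg_rtp_command(ip: str, port: int, audio_file: str) -> list[str]:
    """Build an ffmpeg command that streams a local audio file as
    8 kHz mono PCMU (G.711 u-law) RTP to the session's receiver."""
    return [
        "ffmpeg",
        "-re",                    # read input at its native rate (real time)
        "-i", audio_file,         # source audio (file or capture device)
        "-ar", "8000",            # resample to 8 kHz
        "-ac", "1",               # downmix to mono
        "-acodec", "pcm_mulaw",   # PCMU / G.711 u-law
        "-f", "rtp",
        f"rtp://{ip}:{port}",     # ip/port from the session response
    ]

cmd = ffmpeg_rtp_command("35.243.129.143", 5102, "call.wav")
print(" ".join(cmd))
```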
