DeepSpeech over Websocket

A DeepSpeech web server with resampling capabilities that runs over websockets through Starlette.
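
For orientation, here is a minimal sketch of the idea, not the repository's actual code: a Starlette websocket endpoint that feeds incoming 16-bit PCM chunks into a DeepSpeech stream and sends back intermediate transcripts. It assumes the DeepSpeech 0.7+ Python API; the model path and the /stt route are placeholders.

    import numpy as np
    from deepspeech import Model
    from starlette.applications import Starlette
    from starlette.routing import WebSocketRoute
    from starlette.websockets import WebSocketDisconnect

    model = Model("models/deepspeech.pbmm")  # placeholder path; the real server reads appConfig

    async def stt_endpoint(websocket):
        await websocket.accept()
        stream = model.createStream()  # one streaming decoder per connection
        try:
            while True:
                chunk = await websocket.receive_bytes()  # raw 16-bit mono PCM
                stream.feedAudioContent(np.frombuffer(chunk, np.int16))
                await websocket.send_text(stream.intermediateDecode())  # partial transcript
        except WebSocketDisconnect:
            stream.finishStream()  # release the native stream when the client hangs up

    app = Starlette(routes=[WebSocketRoute("/stt", stt_endpoint)])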

Requirements

Installation

I'm using miniconda so:

$ conda create --name wsDSpeech python=3.6 # or 3.7
$ conda activate wsDSpeech
$ pip install -r requirements.txt

Run

First, set the correct INPUT_SAMPLE_RATE in appConfig according to the audio file that will be streamed to the server.
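
The resampling step converts audio arriving at INPUT_SAMPLE_RATE to the 16 kHz expected by the stock DeepSpeech models. As an illustration only (the repository may resample differently), the standard-library audioop module can do this per chunk:

    import audioop

    INPUT_SAMPLE_RATE = 44100  # example value; must match appConfig
    DS_SAMPLE_RATE = 16000     # rate the DeepSpeech models expect

    def resample(chunk, state=None):
        # Convert one chunk of 16-bit mono PCM from the input rate to 16 kHz.
        # `state` carries resampler state across consecutive chunks of a stream.
        converted, state = audioop.ratecv(chunk, 2, 1, INPUT_SAMPLE_RATE, DS_SAMPLE_RATE, state)
        return converted, state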

Then, if you want to use the Dockerfile, just run:

$ docker build -t deepspeech .
$ docker run -t deepspeech

or locally:

$ cp appConfig app/appConfig
$ gunicorn -c guConfig.py app:app

Now you can reach the server at:

http://127.0.0.1:5000

for a cute hello-world page.

Meanwhile, a background thread starts and downloads the DeepSpeech models if they are not already present.
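
A sketch of what that background download could look like; the directory, file name, and URL below are placeholders, the real values live in appConfig and guConfig:

    import os
    import threading
    import urllib.request

    DEEPSPEECH_ROOT_PATH = "models"                    # taken from appConfig
    MODEL_FILE = "deepspeech.pbmm"                     # placeholder file name
    MODEL_URL = "https://example.com/deepspeech.pbmm"  # placeholder URL

    def ensure_models():
        os.makedirs(DEEPSPEECH_ROOT_PATH, exist_ok=True)
        target = os.path.join(DEEPSPEECH_ROOT_PATH, MODEL_FILE)
        if not os.path.exists(target):
            urllib.request.urlretrieve(MODEL_URL, target)

    # Run the download in the background so the web server can start serving right away.
    threading.Thread(target=ensure_models, daemon=True).start()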

Client example

In the examples folder there are two simple clients.

Java

  • Set how many connections to open:
     nrClient=2

Python (work in progress...)

  • Just run it (a minimal client sketch follows the commands below):

     cd examples/python
     python simpleClient.py
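
This is not the repository's simpleClient.py, but a minimal sketch of a client along the same lines: it streams a WAV file over the websocket in chunks and prints whatever the server sends back. It assumes the third-party websockets package and a hypothetical ws://127.0.0.1:5000/stt endpoint.

    import asyncio
    import wave

    import websockets  # third-party package, not necessarily what simpleClient.py uses

    async def stream_file(path, uri="ws://127.0.0.1:5000/stt", chunk_frames=4096):
        async with websockets.connect(uri) as ws:
            with wave.open(path, "rb") as wav:  # expects 16-bit mono PCM WAV
                while True:
                    frames = wav.readframes(chunk_frames)
                    if not frames:
                        break
                    await ws.send(frames)   # one raw PCM chunk
                    print(await ws.recv())  # transcript (or partial result) from the server

    asyncio.get_event_loop().run_until_complete(stream_file("audio.wav"))  # placeholder file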

Settings

  • appConfig contains all the settings used.

  • On first run, DeepSpeech models will be downloaded under the DEEPSPEECH_ROOT_PATH.

  • If you want more verbosity, set the VERBOSE/DEBUG flags to True.

  • guConfig contains all the settings related to Gunicorn, including the DeepSpeech model download step (see the sketch after this list).
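
For illustration, a guConfig.py along these lines (not the repository's actual file) would bind Gunicorn to the address used above, run the Starlette app through uvicorn's ASGI worker, and use a server hook to kick off the model download before the workers boot:

    # Sketch of a Gunicorn config for an ASGI (Starlette) app; all values are examples.
    bind = "127.0.0.1:5000"
    workers = 1
    worker_class = "uvicorn.workers.UvicornWorker"  # ASGI worker needed for Starlette
    timeout = 120

    def on_starting(server):
        # Gunicorn server hook, called once in the master process before workers boot.
        # A good place to start the model download thread shown in the Run section.
        pass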

TODOs

  • Python client example with more connections
  • Manage memory leaks (avoid having to force worker restarts)
  • Handle async/future exceptions
  • Use a lighter Docker base image (Alpine?)
