I'm piping an hour-long audio stream to the Python client, and partial results come in via WebSockets. It works for quite some time, but eventually it crashes; worse, the server stays down afterwards. Here's the worker log from the moment the crash occurs:
2020-05-22 10:50:37 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Postprocessing (final=False) result..
2020-05-22 10:50:37 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Postprocessing done.
2020-05-22 10:50:37 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:38 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:39 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:40 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:41 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:42 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:43 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:44 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:45 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:46 - DEBUG: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Checking that decoder hasn't been silent for more than 10 seconds
2020-05-22 10:50:47 - WARNING: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: More than 10 seconds from last decoder hypothesis update, cancelling
2020-05-22 10:50:47 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Master disconnected before decoder reached EOS?
2020-05-22 10:50:47 - INFO: decoder2: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Sending EOS to pipeline in order to cancel processing
2020-05-22 10:50:47 - INFO: decoder2: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Cancelled pipeline
2020-05-22 10:50:47 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:48 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:49 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:50 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:51 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:52 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:53 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:54 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:55 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
LOG ([5.4.176~1-be967]:RebuildRepository():determinize-lattice-pruned.cc:283) Rebuilding repository.
WARNING ([5.4.176~1-be967]:CheckMemoryUsage():determinize-lattice-pruned.cc:316) Did not reach requested beam in determinize-lattice: size exceeds maximum 50000000 bytes; (repo,arcs,elems) = (15895776,3918176,30272592), after rebuilding, repo size was 11398464, effective beam was 2.7939 vs. requested beam 6
WARNING ([5.4.176~1-be967]:DeterminizeLatticePruned():determinize-lattice-pruned.cc:1281) Effective beam 2.7939 was less than beam 6 * cutoff 0.5, pruning raw lattice with new beam 4.09431 and retrying.
2020-05-22 10:50:56 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:57 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:58 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:50:59 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:00 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:01 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:02 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:03 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:04 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:05 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
2020-05-22 10:51:06 - INFO: __main__: 05f1d9b4-8bdb-4cc1-8134-5f6f2ef30cd4: Waiting for EOS from decoder
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Any help on what might be causing this, and where to make corrections to prevent it? Thanks.
You have to be careful when piping long audio files. You should pipe them at a data rate close to real time. Otherwise the whole file is sent to the server as fast as the network allows and has to be buffered by GStreamer; at some point the buffers can fill up, and then it crashes.
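A minimal sketch of the throttling idea, assuming 16 kHz 16-bit mono PCM (adjust `BYTES_PER_SECOND` to your actual format); the `throttled_chunks` helper is hypothetical, not part of the project's client:

```python
import io
import time

# Assumed audio format: 16 kHz, 16-bit mono PCM -> 32000 bytes per second.
BYTES_PER_SECOND = 16000 * 2
CHUNK_SECONDS = 0.25
CHUNK_SIZE = int(BYTES_PER_SECOND * CHUNK_SECONDS)


def throttled_chunks(stream, chunk_size=CHUNK_SIZE, interval=CHUNK_SECONDS):
    """Yield fixed-size chunks from `stream`, pacing them to roughly real time.

    Sleeping for the duration of audio contained in each chunk keeps the
    upload rate near 1x real time, so the server never has to buffer much.
    """
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        yield chunk
        time.sleep(interval)  # pace the upload instead of blasting the file


if __name__ == "__main__":
    # Example: one second of silence is sent as four quarter-second chunks,
    # taking about a second of wall-clock time instead of arriving at once.
    fake_audio = io.BytesIO(b"\x00" * BYTES_PER_SECOND)
    for chunk in throttled_chunks(fake_audio):
        print(len(chunk))  # send to the websocket here instead
```

In the real client you would replace the `print` with the WebSocket send call; with `sys.stdin.buffer` as the stream, the same loop throttles piped input.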