Possible to use reference speaker embeddings in Pyannote diarization pipeline? #1750
The diarization pipeline has a [referenced code: `pyannote/audio/pipelines/speaker_diarization.py`, lines 103 to 106 at commit `0ea4c02`]
@hbredin Omg, can't believe I got an answer from Mr. Pyannote himself. I will try your suggested approach and report back. Thanks a lot! :)
@hbredin Is there a way to load these returned embeddings back into the pipeline for the next inference? The main goal is to preserve the state of the pipeline across multiple audio files.
Hey everyone,
I am trying to use Pyannote with Whisper to transcribe meetings between my business partner and me, but the results haven't been great: about 50% of the time, the wrong speaker is assigned.
So I looked for ways to improve the diarization accuracy and found the Pyannote API docs for creating Voiceprints from reference audio and then using them in the diarization pipeline.
But since I want to do everything locally, I searched for the open-source Pyannote equivalent of the Voiceprint feature, which seems to be https://huggingface.co/pyannote/embedding
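For context, here is a minimal sketch of how such reference embeddings can be extracted locally. This assumes pyannote.audio is installed and the `pyannote/embedding` model is accessible; the `Inference` wrapper and `window="whole"` argument follow that model's card, but treat the exact API as an assumption:

```python
import numpy as np

def l2_normalize(vec):
    """L2-normalize an embedding so cosine similarity reduces to a dot product."""
    vec = np.asarray(vec, dtype=float)
    return vec / np.linalg.norm(vec)

def extract_reference_embedding(wav_path):
    """Extract a single speaker embedding from one reference recording.

    The pyannote import is kept local so the numeric helper above
    remains usable even without pyannote.audio installed.
    """
    # Assumed API, based on the pyannote/embedding model card:
    from pyannote.audio import Inference
    inference = Inference("pyannote/embedding", window="whole")
    return l2_normalize(inference(wav_path))
```

Normalizing here means later comparisons can use a plain dot product as cosine similarity.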
The problem: While I was able to extract embeddings from reference audios of my business partner and me, I have no idea how to use them in the diarization pipeline.
I didn't find any docs about this approach and was wondering if it's even possible, or whether it's only available in the Pyannote API.
I would greatly appreciate any kind of help/clarification :)
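One way to close the loop, sketched here as an assumption rather than a documented pyannote recipe: run the diarization pipeline, obtain one embedding per detected speaker (recent pyannote.audio versions expose a `return_embeddings=True` option, which seems related to the pipeline code referenced above), then rename each anonymous speaker to whichever reference embedding is closest by cosine similarity:

```python
import numpy as np

def match_to_references(speaker_embs, ref_embs, ref_names):
    """Map each diarized speaker embedding to the closest reference speaker.

    speaker_embs: (n_speakers, dim) embeddings from the diarization pipeline
    ref_embs:     (n_refs, dim) embeddings from reference recordings
    ref_names:    names corresponding to the rows of ref_embs
    Returns one name per diarized speaker (argmax of cosine similarity).
    """
    def unit(m):
        m = np.asarray(m, dtype=float)
        return m / np.linalg.norm(m, axis=-1, keepdims=True)

    sims = unit(speaker_embs) @ unit(ref_embs).T  # (n_speakers, n_refs) cosine matrix
    return [ref_names[i] for i in sims.argmax(axis=1)]

# Hypothetical usage with a pyannote pipeline (option name and the alignment
# between embedding rows and diarization labels are assumptions to verify):
#   diarization, embeddings = pipeline("meeting.wav", return_embeddings=True)
#   names = match_to_references(embeddings, ref_embs, ["me", "partner"])
#   diarization = diarization.rename_labels(
#       {old: new for old, new in zip(diarization.labels(), names)})
```

The matching itself is plain linear algebra, so it works regardless of which model produced the embeddings, as long as reference and pipeline embeddings come from the same embedding model.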