Omar Sayed
Same issue
> Depends on how much data you've got! I've gotten good results with Japanese with about 66 hours of labeled data. @cryptowooser Did you fine-tune the segmentation model or the...
> I just did the segmentation model, but if there's a guide to fine-tuning the pipeline or the embedding model somewhere I'd love to see it! I'd love to improve...
> Yes, results were a MASSIVE improvement.
> pyannote/pyannote-audio (github.com) I see you used the notebook `adapting_pretrained_pipeline.ipynb` and not `training_a_model.ipynb`. In `training_a_model.ipynb`, he evaluated the pretrained segmentation model with `DiscreteDiarizationErrorRate`. I thought you used the same...
> I think you may visit SpeechBrain, fine-tune the speaker embedding model, use it in the diarization pipeline, and see the improvement.
Yes, actually the current version of the diarization pipeline uses the model from SpeechBrain: https://huggingface.co/speechbrain/spkrec-ecapa-voxceleb. See their documentation, it's easy to follow.
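To illustrate the role that embedding model plays, here is a minimal, self-contained sketch of how a speaker-embedding model plugs into a diarization pipeline: embed each speech segment, then cluster the embeddings by similarity. `dummy_embed` below is a labeled stand-in for the real SpeechBrain ECAPA-TDNN model (which maps a waveform segment to a 192-dim embedding); the threshold and the greedy clustering are illustrative choices, not the pipeline's actual algorithm.

```python
import numpy as np

def dummy_embed(segment: np.ndarray, dim: int = 192) -> np.ndarray:
    """Stand-in for an ECAPA-style embedding model: project the waveform
    segment onto a fixed random basis and L2-normalize. (A real pipeline
    would call the pretrained SpeechBrain model here instead.)"""
    rng = np.random.default_rng(0)  # fixed seed -> deterministic basis
    basis = rng.standard_normal((dim, segment.size))
    emb = basis @ segment
    return emb / np.linalg.norm(emb)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embeddings."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_segments(segments, threshold: float = 0.5):
    """Greedy single-pass speaker clustering: assign each segment to the
    most similar existing speaker, or open a new speaker if nothing is
    similar enough. Returns one speaker label per segment."""
    labels, centroids = [], []
    for seg in segments:
        emb = dummy_embed(seg)
        sims = [cosine(emb, c) for c in centroids]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            centroids.append(emb)
            labels.append(len(centroids) - 1)
    return labels
```

To use the real embedding model instead of `dummy_embed`, you would load it through SpeechBrain's pretrained `EncoderClassifier` interface (see the model card linked above) and call its `encode_batch` method on each segment; a better fine-tuned embedding model directly improves this clustering step, which is why swapping it in can lift diarization accuracy.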
How can I generate a .bag file?