
These voices cannot be split correctly

Open lucasjinreal opened this issue 2 years ago • 4 comments

asr_res_240-243_1_audio.zip

the output time and label

start=0.0s stop=2.4s speaker_SPEAKER_00
start=0.4s stop=1.4s speaker_SPEAKER_01

but there are clearly two different speakers, one after the other. How can I precisely get the split time between them?

lucasjinreal avatar Nov 02 '23 08:11 lucasjinreal

Thank you for your issue. You might want to check the FAQ if you haven't done so already.

Feel free to close this issue if you found an answer in the FAQ.

If your issue is a feature request, please read this first and update your request accordingly, if needed.

If your issue is a bug report, please provide a minimum reproducible example as a link to a self-contained Google Colab notebook containing everything needed to reproduce the bug:

  • installation
  • data preparation
  • model download
  • etc.

Providing an MRE will increase your chance of getting an answer from the community (either maintainers or other power users).

Companies relying on pyannote.audio in production may contact me via email regarding:

  • paid scientific consulting around speaker diarization and speech processing in general;
  • custom models and tailored features (via the local tech transfer office).

This is an automated reply, generated by FAQtory

github-actions[bot] avatar Nov 02 '23 08:11 github-actions[bot]

Without details about the code you tried, it is difficult to tell. Here are my two cents applying the pretrained pyannote/segmentation-3.0 model: it does seem to manage the job...

from pyannote.audio import Audio, Inference
from matplotlib import pyplot as plt

AUDIO = "asr_res_240-243_1_audio.mp3"

# load the file as a mono, 16 kHz waveform
io = Audio(mono="downmix", sample_rate=16000)
waveform, sample_rate = io(AUDIO)
audio = {"waveform": waveform, "sample_rate": sample_rate}

# run the pretrained segmentation model on the whole file at once
inference = Inference("pyannote/segmentation-3.0", window="whole")
prediction = inference(audio)

# plot the per-frame speaker activation curves
plt.plot(prediction)
plt.legend(["speaker#1", "speaker#2", "speaker#3"])

[output: plot of the per-frame speaker activation curves]

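To turn activation curves like these into a single split time, one option is to locate the first frame where the dominant speaker changes. A minimal numpy sketch, assuming `prediction` is a `(num_frames, num_speakers)` array; the `frames_per_second` value is a placeholder and should be read off your model's actual output resolution:

```python
import numpy as np

def find_split_time(prediction: np.ndarray, frames_per_second: float = 62.5) -> float:
    """Estimate the time of the first dominant-speaker change.

    prediction: (num_frames, num_speakers) activation matrix
    frames_per_second: assumed frame rate of the model output
                       (placeholder value; check your model's resolution)
    """
    # which speaker is most active in each frame
    dominant = prediction.argmax(axis=1)
    # frame indices where the dominant speaker differs from the previous frame
    changes = np.nonzero(np.diff(dominant))[0] + 1
    if len(changes) == 0:
        return -1.0  # no speaker change detected
    return changes[0] / frames_per_second
```

This assumes speakers do not overlap at the boundary; with overlapping speech the argmax flips somewhere inside the overlap region rather than at a clean border.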
hbredin avatar Nov 02 '23 09:11 hbredin

@hbredin hi, the audio actually only has 2 people: the first part is person 1, and the rest is a man's voice.

The cliff in the speaker#3 curve seems to detect where the man's voice starts, but how can I tell that this cliff is exactly the split I want? (I actually just need to split 2 people.)

lucasjinreal avatar Nov 02 '23 11:11 lucasjinreal
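Since the model outputs three speaker slots but only two people are expected, one hedged way to decide is to keep only the two slots with the highest total activation and find the change point between them. A sketch under the same assumption that `prediction` is a `(num_frames, num_speakers)` array with a placeholder frame rate:

```python
import numpy as np

def split_between_two_speakers(prediction: np.ndarray,
                               frames_per_second: float = 62.5) -> float:
    """Keep the two most active speaker slots, then locate the first
    frame where dominance switches between them.

    frames_per_second is a placeholder; use your model's actual resolution.
    """
    # total activation per speaker slot over the whole file
    totals = prediction.sum(axis=0)
    # indices of the two most active slots
    top2 = np.argsort(totals)[-2:]
    reduced = prediction[:, top2]
    # first frame where the dominant slot changes
    dominant = reduced.argmax(axis=1)
    changes = np.nonzero(np.diff(dominant))[0] + 1
    if len(changes) == 0:
        return -1.0  # no change between the two main speakers
    return changes[0] / frames_per_second
</antml>```

Discarding the weakest slot first makes the result robust to a third, mostly-inactive speaker curve like the one in the plot.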

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale[bot] avatar May 02 '24 01:05 stale[bot]