
invalid str2bool value

Open ItsMe-TJ opened this issue 2 years ago • 13 comments

I'm getting "transcribe.py: error: argument --diarization: invalid str2bool value: 'true'".

How do I fix this?

Oh, and I have a question: how would I go about splitting the audio into individual files by speaker? Maybe a feature you could add?

Thanks!

ItsMe-TJ avatar Nov 14 '22 21:11 ItsMe-TJ

I'm getting "transcribe.py: error: argument --diarization: invalid str2bool value: 'true'".

How do I fix this?

`true` -> `True`. I fixed the README.
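For context, the error happens because the script's str2bool parser only accepts the exact spelling it expects. A more forgiving parser (a common argparse pattern, shown here as a sketch — this is not necessarily what transcribe.py actually does) accepts both spellings:

```python
import argparse

def str2bool(v):
    """Parse common truthy/falsy strings case-insensitively, so both
    --diarization true and --diarization True would work."""
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "1"):
        return True
    if v.lower() in ("no", "false", "f", "0"):
        return False
    raise argparse.ArgumentTypeError(f"invalid str2bool value: {v!r}")
```

Passed as `type=str2bool` to `add_argument`, this removes the case-sensitivity foot-gun entirely.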

Oh, and I have a question: how would I go about splitting the audio into individual files by speaker? Maybe a feature you could add?

See the README.

Thanks!

yinruiqing avatar Nov 15 '22 15:11 yinruiqing

Thank you so much! It would be cool if you could add a function to remove overlapped speech. The reason I'm asking is that I'm making datasets for training text-to-speech models, using audio from podcasts etc. where two people are talking. I know pyannote has this function, but I'm honestly not savvy enough to implement it myself. Removing people talking over each other before diarization could help in making really clean datasets. Either way, thank you so much.

ItsMe-TJ avatar Nov 15 '22 15:11 ItsMe-TJ

Thank you so much! It would be cool if you could add a function to remove overlapped speech

I will do it this weekend.

yinruiqing avatar Nov 15 '22 16:11 yinruiqing

Thank you so much! It would be cool if you could add a function to remove overlapped speech

I will do it this weekend.

Thank you!

ItsMe-TJ avatar Nov 15 '22 16:11 ItsMe-TJ

@ItsMe-TJ I think what you want is the following function:

from pyannote.core import Segment

# `to_overlap` (not shown here) is assumed to map the diarization
# annotation to an annotation of its overlapped-speech regions.
def remove_overlap_part(ann):
    overlap = to_overlap(ann).get_timeline()
    if len(overlap) == 0:
        return ann
    # Keep the gaps between overlapped regions, plus the non-overlapped
    # head and tail of the original annotation.
    overlap_start = overlap[0].start
    overlap_end = overlap[-1].end
    ann_start = ann.get_timeline()[0].start
    ann_end = ann.get_timeline()[-1].end
    non_overlap = overlap.gaps()
    if overlap_start > ann_start:
        non_overlap.add(Segment(ann_start, overlap_start))
    if ann_end > overlap_end:
        non_overlap.add(Segment(overlap_end, ann_end))
    return ann.crop(non_overlap)
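The same idea can be illustrated without pyannote at all, on plain `(start, end, speaker)` tuples — a self-contained sketch of the logic, not the library API:

```python
def find_overlaps(segments):
    """Return (start, end) regions where two or more speakers talk at once.
    `segments` is a list of (start, end, speaker) tuples."""
    overlaps = []
    segs = sorted(segments)
    for i, (s1, e1, _) in enumerate(segs):
        for s2, e2, _ in segs[i + 1:]:
            if s2 >= e1:  # sorted by start, so no later segment can overlap
                break
            overlaps.append((s2, min(e1, e2)))
    return overlaps

def remove_overlaps(segments):
    """Trim every segment so it excludes the overlapped regions."""
    overlaps = find_overlaps(segments)
    result = []
    for start, end, speaker in segments:
        cur = start
        for o_start, o_end in sorted(overlaps):
            if o_end <= cur or o_start >= end:
                continue  # this overlap does not touch the segment
            if o_start > cur:
                result.append((cur, o_start, speaker))
            cur = max(cur, o_end)
        if cur < end:
            result.append((cur, end, speaker))
    return result
```

For example, `remove_overlaps([(0.0, 5.0, "A"), (4.0, 8.0, "B")])` trims both turns so the crosstalk between 4.0 s and 5.0 s disappears.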

By the way, I also have an open-source tts project deepaudio-tts. I need someone else to work together with me on that. Are you interested in it?

yinruiqing avatar Nov 19 '22 03:11 yinruiqing

I am always interested in TTS stuff! So absolutely! Though when you say "Work together with me" I don't know how to code or anything lol, but I'm happy to help in any way I can!

ItsMe-TJ avatar Nov 19 '22 11:11 ItsMe-TJ

Okay so I have an idea, and since you're doing the whole TTS thing you might benefit from it as well!

So let's say I have a podcast episode where 2 people are talking. I want to specify the number of speakers, remove overlapped speech, and output the non-overlapped speech to a new audio file. Then take that audio file, do diarization, split the audio by speaker, and output those new audio files into folders: Speaker 1, Speaker 2, etc. This would, in my mind, make clean datasets for training TTS.

So that's my idea, hopefully you understand why I asked about removing overlapped speech and splitting the audio.
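The per-speaker folder layout described above could be sketched like this (`speaker_output_path` and the directory names are made up for illustration — this is not part of pyannote-whisper):

```python
import os

def speaker_output_path(out_dir, speaker, start, end):
    """Build a path like out_dir/SPEAKER_00/12.30s_15.80s.wav,
    creating the speaker's folder if it does not exist yet."""
    folder = os.path.join(out_dir, speaker)
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, f"{start:.2f}s_{end:.2f}s.wav")
```

Each diarized `(start, end, speaker)` turn could then be cropped from the source audio and written to `speaker_output_path("dataset", speaker, start, end)`, so every speaker's clips land in their own folder.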

Idk if you're interested in taking a crack at it! As I said, I don't code, though I really REALLY wish I knew how. I've tried learning many times but I think I'm just too dumb lol

ItsMe-TJ avatar Nov 19 '22 11:11 ItsMe-TJ

Good idea. I will implement it soon and put it in deepaudio-tts.

yinruiqing avatar Nov 20 '22 09:11 yinruiqing

Good idea. I will implement it soon and put it in deepaudio-tts.

Great! It will be really helpful, and I can't wait to try it!

ItsMe-TJ avatar Nov 21 '22 00:11 ItsMe-TJ

How's it going? I know it's probably not easy, just curious about the progress!

ItsMe-TJ avatar Dec 01 '22 19:12 ItsMe-TJ

How's it going? I know it's probably not easy, just curious about the progress!

I'm going to finish in one week. I need someone with frontend skills to help me with the interaction.

yinruiqing avatar Dec 03 '22 12:12 yinruiqing

@ItsMe-TJ You can use the following code.

import numpy as np
from scipy.io.wavfile import write

def save_wave_from_numpy(data, f, rate=16000):
    # Peak-normalize to the int16 range before writing.
    scaled = np.int16(data / np.max(np.abs(data)) * 32767)
    write(f, rate, scaled)

import whisper
from pyannote.audio import Audio, Pipeline
from pyannote_whisper.utils import diarize_text

pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization",
                                    use_auth_token="your/token")
model = whisper.load_model("tiny.en")

audio_file = "data/afjiv.wav"
diarization_result = pipeline(audio_file)
result_without_overlap = remove_overlap_part(diarization_result)

audio = Audio(sample_rate=16000, mono=True)
# Iterate over the overlap-free diarization, saving and transcribing
# one file per speaker turn.
for segment, _, speaker in result_without_overlap.itertracks(yield_label=True):
    waveform, sample_rate = audio.crop(audio_file, segment)
    filename = f"{segment.start:.2f}s_{segment.end:.2f}s_{speaker}.wav"
    save_wave_from_numpy(waveform.squeeze().numpy(), filename)
    text = model.transcribe(waveform.squeeze().numpy())["text"]
    print(f"{segment.start:.2f}s {segment.end:.2f}s {speaker}: {text}")

yinruiqing avatar Dec 03 '22 14:12 yinruiqing

@ItsMe-TJ I am working on audio-annotation. It will provide an easy way to export audio segments for a single speaker.

yinruiqing avatar Jan 07 '23 10:01 yinruiqing