makaveli
@zadamg We haven't tried to build this on Windows yet; we will do that and get back to you.
@zadamg We pushed an image for the 3090 as well, which should work on Windows: ``` docker run --gpus all --shm-size 64G -p 6006:6006 -p 8888:8888 -it ghcr.io/collabora/whisperfusion-3090:latest ```
@zadamg Great that you got the initial issue sorted out. We run the TTS model with the `torch.compile` optimisation to make inference faster. In order to do that...
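For reference, the `torch.compile` pattern looks roughly like this. This is a minimal sketch with a hypothetical stand-in module, not the actual WhisperFusion TTS model; the `eager` backend is used here only to keep the sketch portable (the real setup would use the default inductor backend):

```python
import torch

# Hypothetical stand-in for a TTS model; the real model is much larger.
class TinyTTS(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.proj(x))

model = TinyTTS().eval()
# torch.compile wraps the module; the forward pass is traced and
# optimised on the first call. backend="eager" skips codegen so this
# sketch runs anywhere; drop it for real speedups.
compiled = torch.compile(model, backend="eager")

with torch.no_grad():
    out = compiled(torch.randn(1, 16))
print(tuple(out.shape))
```

The first call pays a one-time compilation cost; subsequent calls with the same shapes reuse the compiled graph.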
Sure, I’ll dig it up and share tomorrow … it only transcribes for now, but with timestamps.
@nyadla-sys https://colab.research.google.com/drive/1qXcgILcA-HPEYqAYPrxQQ1TRwXerErDk?usp=sharing
@ArthurZucker I was testing whether I get timestamps with the TF model using your ```tf-timestamps-whisper``` branch on Colab, but I see this: ``` /content/transformers/src/transformers/models/whisper/tokenization_whisper.py in decode(self, token_ids, skip_special_tokens, clean_up_tokenization_spaces,...
@ArthurZucker Thanks for the response. I resolved the issue in the timestamp f-string by changing ```token - timestamp_begin``` to ```float(token - timestamp_begin)```.
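To illustrate the fix, here is a minimal sketch of how a Whisper-style timestamp token is decoded into a `<|seconds|>` string. The constants and the `decode_timestamp` helper are illustrative assumptions, not the tokenizer's actual code; the point is the `float(...)` cast on the integer offset before scaling:

```python
TIME_PRECISION = 0.02      # Whisper timestamp tokens advance in 0.02 s steps
TIMESTAMP_BEGIN = 50364    # assumed id of the first timestamp token

def decode_timestamp(token: int) -> str:
    # The fix: cast the integer offset to float before multiplying,
    # so the formatted value is a proper floating-point number of seconds.
    seconds = float(token - TIMESTAMP_BEGIN) * TIME_PRECISION
    return f"<|{seconds:.2f}|>"

print(decode_timestamp(50364))  # <|0.00|>
print(decode_timestamp(50464))  # <|2.00|>
```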
Look at [this](https://github.com/makaveli10/reinforcementLearning/blob/master/PolicyGradients/cliffwalk_reinforce.ipynb) if you want to see the high variance results of Vanilla reinforce
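The variance issue can be seen even in a toy two-armed bandit (this is a sketch, not the cliff-walk notebook itself): the score-function estimator `(reward - baseline) * d/dθ log π(a)` is unbiased with or without a baseline, but the per-sample spread differs dramatically.

```python
import math
import random
import statistics

random.seed(0)
theta = 0.0  # single logit for a Bernoulli policy over two arms

def reinforce_grad(baseline=0.0):
    """One-sample REINFORCE gradient estimate for a two-armed bandit."""
    p = 1.0 / (1.0 + math.exp(-theta))    # pi(a = 1)
    a = 1 if random.random() < p else 0   # sample an action
    reward = float(a)                     # arm 1 pays 1, arm 0 pays 0
    # d/dtheta log pi(a) = a - p for a Bernoulli(sigmoid(theta)) policy
    return (reward - baseline) * (a - p)

plain = [reinforce_grad() for _ in range(10_000)]
with_baseline = [reinforce_grad(baseline=0.5) for _ in range(10_000)]
print(statistics.variance(plain), statistics.variance(with_baseline))
```

Both estimators have the same expectation, but subtracting even a crude constant baseline collapses the variance, which is exactly why vanilla REINFORCE learns so noisily on problems like cliff-walk.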
@alpcansoydas We haven't tested it on a T4 yet. Try a smaller model and see whether the issue persists.
@alanturing-bluemirrors @mavihs7 Are you still working on this? It looks good to me. Thanks for putting this together.