pajowu
This happens when `self.tensor.size` is less than `self.batch_size * self.seq_length`. You can avoid it by reducing `seq_length` or `batch_size`, or by increasing the amount of input data.
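The failing condition described above can be sketched roughly like this. This is a minimal illustration, not code from the project: `check_batch_fits` is a hypothetical helper, and the attribute names mirror the `self.tensor` / `self.batch_size` / `self.seq_length` mentioned in the comment.

```python
import numpy as np

def check_batch_fits(tensor, batch_size, seq_length):
    """Hypothetical helper: verify the flat tensor holds at least
    batch_size * seq_length elements before reshaping into batches."""
    needed = batch_size * seq_length
    if tensor.size < needed:
        raise ValueError(
            f"tensor has {tensor.size} elements but batch_size * seq_length "
            f"= {needed}; reduce seq_length/batch_size or add more input data"
        )
    return True

# 10 elements fit into 2 batches of length 5:
check_batch_fits(np.zeros(10), batch_size=2, seq_length=5)

# 9 elements do not, so this raises ValueError:
# check_batch_fits(np.zeros(9), batch_size=2, seq_length=5)
```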
🫶 Thanks for digging into this and creating a PR. I'll try to test it as soon as possible, but since I'm at ccc-camp, this might not be until next...
This raises a good point: can we detect older files and use an old version? PS: I'm still following the discussion but didn't get to test yet.
Hey, thanks for opening an issue. To be honest, I'm not quite sure what the difference is right now. We will need to investigate this 🕵️‍♂️. If we find...
Thanks for the ping, I just downloaded the files. As a quick reply: the list of currently allowed extensions can be found here in the code https://github.com/audapolis/audapolis/blob/460285cd2afef74dc276b1809af1458e54312855/app/src/state/transcribe.ts#L50
Oh no /o\. Thank you for reporting this bug. I'm currently working on fixing another issue (#395). Once that is done, I can release a version with better debugging tools...
I just released [Version 0.2.2-pre3](https://github.com/audapolis/audapolis/releases/tag/v0.2.2-pre3), which includes the fixes I mentioned. Please try again with that version. If the issue persists, please send me the debug log. You can export...
Thank you for the debug log. From the log, it seems the backend successfully loaded the model and even started transcribing. Are you sure it was stuck at...
Okay, that might be another issue. I opened #404 for that as well. Seems like we should warn users if the free memory is less than what the models need...
This looks great. I wanted to do this initially but couldn't figure it out properly. I will test it in the next few days and hope we can merge it. If...