bitbyteboom
Getting these seg faults within a minute of running nearly every time. If the binary method works better, that's definitely the way to go.
Just got that up and running, and the espeak_binary branch is brilliant. Works! A few small issues to fix, when possible, for that branch and espeak to work right out of the box:...
Would love to become an official contributor, but despite a CS background, it's been years since I was in the loop, and frankly I'm still stumbling and grasping in the dark even figuring...
So the project ran locally at first, but whisper kept falling back to the CPU instead of the GPU, slowing everything down a lot. Trying to fix this with the usual NVIDIA CUDA/cuDNN...
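For anyone else debugging this, here's the quick sanity check I've been using (it only covers a PyTorch-backed whisper; I'm not certain that's the backend this repo actually pins):

```python
import torch

# If any of these come back False/None, whisper (or anything torch-based)
# will silently fall back to the CPU.
print(torch.cuda.is_available())            # can torch see the GPU at all?
print(torch.version.cuda)                   # CUDA version this torch wheel was built for
print(torch.backends.cudnn.is_available())  # is cuDNN visible to torch?
```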
Thank you for that, but although it is running now, locally or in Docker this line is the bane: /usr/local/lib/python3.11/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py:69: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available...
> @bitbyteboom looks like fix is to change onnxruntime to onnxruntime-gpu in requirements.txt

Yes, I saw that, and I already had onnxruntime-gpu in requirements.txt. Could there be an issue that for the...
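In case it helps anyone else chasing the same warning, this is how I've been checking what onnxruntime actually exposes at runtime (standard onnxruntime API, nothing specific to this repo):

```python
import onnxruntime as ort

# If 'CUDAExecutionProvider' is missing from this list, the CPU-only
# onnxruntime wheel is probably still installed and shadowing onnxruntime-gpu.
# The usual fix is to uninstall both wheels, reinstall only onnxruntime-gpu,
# and re-run this check.
print(ort.get_available_providers())
```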
I think I've mixed the two up. Just starting fresh and going with running locally on Linux; I'll try Docker in the morning for Windows.
"and she will usually sppoloyfir glitching! " haha :) Yep, totally understand there are much bigger fish to fry at this stage, was just curious if there was something obvious...
Adding to the cleanup code in glados.py got rid of the end tokens making it through to TTS, in case that helps someone. It may still be in the context,...
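Roughly what I mean, as a sketch; the token strings here are assumptions, so check what your model actually emits rather than copying them verbatim:

```python
# Hypothetical end-of-turn markers; adjust to whatever your model emits.
END_TOKENS = ("<|eot_id|>", "<|im_end|>", "</s>")

def clean_for_tts(text: str) -> str:
    """Strip end-of-turn tokens so they never reach the TTS engine."""
    for token in END_TOKENS:
        text = text.replace(token, "")
    return text.strip()
```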
8B, 8-bit quant. The patch above has improved the output quite a bit. I also added it to the context, but GlaDOS gets consistently weird after a dozen or so interactions....