l33tkr3w
I added "import whisper" into glados.py and it moved beyond the ImportError: Could not load whisper. This is with Python 3.9.19 Now im stuck at: (glados) PS E:\test\glados> python .\glados.py...
By default it's using llama.cpp as the LLM backend. You can adjust which model is called via **glados_config.yml**; at the bottom you can change the model. As long...
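If you're not sure where the model entry lives, a quick way to see what the config exposes before editing it (sketch only, assumes PyYAML is installed and you run it from the directory containing glados_config.yml):

```python
# Sketch only: dump the top-level entries of glados_config.yml so you can spot
# the model setting to change. Assumes PyYAML is installed, the file is in the
# current working directory, and it parses to a mapping.
import yaml

with open("glados_config.yml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

for key, value in config.items():
    print(f"{key}: {value}")
```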
I resolved this issue by adding `import whisper` to the top of glados.py. I also made sure whisper.py was in the GlaDOS/glados directory (the same folder as glados.py). **_The project is now running._**...
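For anyone else hitting this, a minimal sketch of the workaround, assuming whisper.py sits in the same folder as glados.py (your checkout layout may differ):

```python
# Top of glados.py -- make sure the local whisper.py wrapper is importable and
# gets loaded before anything else references it. Sketch only; the path logic
# assumes whisper.py lives next to glados.py.
import os
import sys

sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
import whisper  # noqa: F401
```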
@pjbaron I also used the same instructions and copied all the .dlls to the working directory. To get past the libc.so.6 issue I had to find Windows equivalents using...
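I can't say which equivalents will work on every setup, but as a rough probe (not project code) you can ask ctypes which C runtime names it can resolve on Windows:

```python
# Hypothetical probe, not from the repo: check which C runtime names ctypes can
# resolve on this machine, since libc.so.6 has no direct Windows counterpart.
import ctypes.util

for name in ("c", "msvcrt", "ucrtbase"):
    print(name, "->", ctypes.util.find_library(name))
```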
Voidmesmer posted a working video on the original subreddit. He used a subprocess and called espeak-ng directly.
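Something along these lines, I assume (I haven't seen his exact code, so the flags are a guess; espeak-ng does take the text as a plain argument):

```python
# Rough sketch of the subprocess approach: hand the text straight to the
# espeak-ng binary instead of going through a Python TTS wrapper.
# Assumes espeak-ng is installed and on PATH.
import subprocess

def speak(text: str) -> None:
    subprocess.run(["espeak-ng", text], check=True)

if __name__ == "__main__":
    speak("Oh, it's you.")
```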
I find that the main issue is with **ImportError: Could not load whisper.** Inside your whisper_cpp_wrapper.py you point to your whisper.py:

```
add_library_search_dirs(["D:\\GlaDOS"])

# Begin libraries
_libs["whisper"] = load_library("whisper")
```
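If your checkout isn't at D:\GlaDOS, one option (a sketch reusing the wrapper's own add_library_search_dirs/load_library helpers shown above) is to derive the search dir from the wrapper's location instead of hard-coding it:

```python
# Sketch of an edit inside whisper_cpp_wrapper.py (add_library_search_dirs and
# load_library are already defined in that generated file). Point the search
# path at the directory the wrapper lives in rather than a hard-coded D:\GlaDOS.
import os

add_library_search_dirs([os.path.dirname(os.path.abspath(__file__))])

# Begin libraries
_libs["whisper"] = load_library("whisper")
```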
Tried the Windows branch. Fired up CMD.exe as admin, executed the installer script, and was presented with a Python venv. Tried running `python glados.py`. NumPy was not installed. Appears pip install...
I'm still troubleshooting. It seems like everything is running on CUDA except for onnxruntime. When I check my device info with

```
>>> import onnxruntime as rt
>>> rt.get_device()
'GPU'
```
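Worth noting (my own check, not from the repo): get_device() only reports what the installed wheel was built for; the provider list is a better indicator of whether CUDA is actually usable:

```python
# get_device() reflects the build of the installed onnxruntime wheel; the
# provider list shows what a session could actually bind to.
# Assumes onnxruntime-gpu is installed.
import onnxruntime as rt

print(rt.get_device())               # e.g. 'GPU'
print(rt.get_available_providers())  # look for 'CUDAExecutionProvider' here
```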
> Make sure you have uninstalled onnxruntime before installing onnxruntime-gpu.

While in the venv I did:

```
pip uninstall onnxruntime
pip uninstall onnxruntime-gpu
pip cache purge
pip install onnxruntime-gpu
```

Still sits at...
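One more thing I'd try (sketch only, "model.onnx" is a placeholder and not a file from the repo): request the CUDA provider explicitly when building a session, so a silent CPU fallback shows up immediately:

```python
# Force the CUDA provider when creating a session; get_providers() then shows
# which providers were actually bound. "model.onnx" is a placeholder path.
import onnxruntime as rt

session = rt.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print(session.get_providers())
```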