ricperry
Yes, it exceeded my system RAM + swap, and then the process just quit. My VRAM was only at ~60%. I guess I'll have to expand the size of my swap...
A ROCm version? Why? Is it faster than ZLUDA? There have been numerous compatibility issues with pytorch-rocm, so if we could use the standard version, we might be able to...
If you don't want to, or can't, downgrade torch, you can edit venv/lib/python3.10/site-packages/wavmark/models/my_model.py. Change the `def istft()` function to read as follows:
```
def istft(self, signal_wmd_fft):
    window = torch.hann_window(self.n_fft).to(signal_wmd_fft.device)
    signal_wmd_fft = torch.view_as_complex(signal_wmd_fft)
    ...
```
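For reference, a complete version of that edit might look like the sketch below. The `n_fft` and `hop_length` attribute names come from the snippet above and are assumed to exist on the model; the closing `torch.istft` call is an assumption about how the truncated rest of the function looks, since newer torch releases require a complex spectrogram rather than a real tensor with a trailing (real, imag) dimension:

```python
import torch

def istft(self, signal_wmd_fft):
    # Build the analysis window on the same device as the input tensor
    window = torch.hann_window(self.n_fft).to(signal_wmd_fft.device)
    # Newer torch requires a complex input here; older wavmark code passed
    # a real tensor whose last dimension held (real, imag) pairs
    signal_wmd_fft = torch.view_as_complex(signal_wmd_fft.contiguous())
    return torch.istft(
        signal_wmd_fft,
        n_fft=self.n_fft,
        hop_length=self.hop_length,
        window=window,
        return_complex=False,
    )
```

With a Hann window at 50% overlap this round-trips a `torch.stft` spectrogram back to the original waveform, which is an easy way to sanity-check the edit before re-running wavmark.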
I got the same error, although before I got to that point I had to edit hashlib.cpp and add a missing include.
```
collect2: error: ld returned 1 exit status
make[2]: ***...
```
Downgrading to 4.36.2 isn't a good solution, as it means other programs/libraries in the venv break. Is there a tweak to the modeling_phi.py file that would fix this?
I get a similar error when running "sudo pipupgrade":
```
Do you wish to update 95 packages? [Y/n/q]: y
Updating 1 of 95 packages: breezy
Traceback (most recent call last):...
```
Same error:
```
Do you wish to update 95 packages? [Y/n/q]: y
Updating 1 of 95 packages: breezy
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/pipupgrade/commands/__init__.py", line 79, in command
    return...
```
> change the llama-cpp-agent version inside `cpp_agent_req.txt` to llama-cpp-agent==0.0.17

This doesn't work on Linux + ROCm. Is there a way you can hook into the Ollama API?
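For what it's worth, Ollama exposes a plain HTTP API, so a minimal hook could be sketched with only the standard library. The endpoint and field names below match Ollama's documented `/api/generate` route; the `llama3` model name is just a placeholder, and this is a sketch rather than a drop-in replacement for the llama-cpp-agent integration:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(prompt, model="llama3"):
    # Non-streaming request body for Ollama's /api/generate route
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_generate(prompt, model="llama3", url=OLLAMA_URL):
    """POST a prompt to a local Ollama server and return the generated text."""
    data = json.dumps(build_payload(prompt, model)).encode()
    req = request.Request(url, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Since Ollama handles the ROCm/HIP backend itself, routing generation through its HTTP API sidesteps the llama-cpp-python build problems on Linux + ROCm entirely.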