Andrzej Janik
Usually this happens because you have an old version of the Adrenalin driver. Closing because it no longer applies.
@alosslessdev some of the issues are because you don't have ROCm installed; others come from a newer GCC. We use an older version of LLVM that does not build on...
Try the most recent build ([Version 6-preview.4](https://github.com/vosen/ZLUDA/releases/tag/v6-preview.4)); this should resolve the CUDA 13 failures. As for libcudart.so.13, you should just use the NVIDIA binary; it's provided by `cuda-cudart-13-0` on Ubuntu, I...
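For reference, a minimal sketch of pulling in just the runtime library on Ubuntu. The package name comes from the comment above; the assumption here is that NVIDIA's CUDA apt repository is already configured on the system:

```shell
# Install only the CUDA 13 runtime library (provides libcudart.so.13),
# not the full toolkit. Assumes NVIDIA's CUDA apt repo is set up.
sudo apt-get update
sudo apt-get install -y cuda-cudart-13-0

# Check that the dynamic loader can now find the library
ldconfig -p | grep libcudart.so.13
```
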
Thanks, the problem is 100% on the ZLUDA side. We do not implement the `mma.sync.aligned.m16n8k32.row.col.s32.s8.s8.s32` instruction. The good news is that support for the `mma.` family of instructions is what we...
As of #571 this should work correctly. With a caveat: I recommend building your llama.cpp with `GGML_CUDA_FORCE_CUBLAS=1`. `GGML_CUDA_FORCE_CUBLAS=0` works just fine, but the cuBLAS path is much faster. I did not...
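A sketch of the build invocation, assuming a CMake-based llama.cpp checkout; the `GGML_CUDA_FORCE_CUBLAS` flag is the one recommended above, the rest are standard llama.cpp CMake options:

```shell
# Build llama.cpp with CUDA enabled, forcing the cuBLAS path
# instead of the custom mma kernels (recommended above for ZLUDA).
cmake -B build -DGGML_CUDA=ON -DGGML_CUDA_FORCE_CUBLAS=1
cmake --build build --config Release -j
```
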
It doesn't support Intel GPUs right now. It could, but it'd require someone to contribute. We are focusing on AMD GPU support.
PyTorch support is still a work in progress. Once we have PyTorch support, you will be able to run various PyTorch-based software.
An example. From:

```
.version 6.5
.target sm_30
.address_size 32

.visible .entry add(
    .param .u32 input,
    .param .u32 output
)
{
    .reg .u32 in_addr;
    .reg .u32 out_addr;
    .reg .u32 temp;
...
```
> So, is it possible that after the 32-bit (x86) program is converted to 64-bit (x64), I'll be able to run it correctly on my NVIDIA GPU? I'd like to...
Hi, ZLUDA will not work for your setup because ZLUDA does not support GPUs older than RDNA. I'm leaving this open because I want to eventually look into whisper anyway