chiragkrishna

Results 24 comments of chiragkrishna

I am using Linux Mint with a 6750 XT; PyTorch always defaults to ROCm 5.4.2. Is this a good way to detect AMD GPUs? ```bash # Check if lspci command is available if !...

This way the ROCm version can be chosen by the user: ```bash # Check if lspci command is available if ! command -v lspci &>/dev/null; then echo "lspci command not...
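A minimal sketch of what such a detection script could look like, assuming the truncated snippets above: the function name and the `ROCM_VERSION` variable are illustrative, while the wheel index URL pattern is PyTorch's official one.

```shell
#!/usr/bin/env bash
# Hypothetical sketch: detect an AMD GPU from lspci output and let the
# user pick which ROCm build of the PyTorch wheel to install.

# Takes lspci output as $1 so the function is easy to test;
# in real use, call it as: detect_gpu_vendor "$(lspci)"
detect_gpu_vendor() {
  # Look only at display-class devices, then match AMD/ATI as whole
  # words (-w avoids false hits like the "ati" in "Corporation").
  if echo "$1" | grep -iE 'VGA|3D|Display' | grep -qwiE 'AMD|ATI'; then
    echo amd
  else
    echo other
  fi
}

# Real-use tail (commented out so the sketch is self-contained):
# command -v lspci >/dev/null 2>&1 || { echo "lspci not found" >&2; exit 1; }
# ROCM_VERSION="${ROCM_VERSION:-5.7}"   # user-chosen, default illustrative
# pip install torch --index-url "https://download.pytorch.org/whl/rocm${ROCM_VERSION}"
```

Reading the device list from an argument instead of calling `lspci` directly keeps the vendor check separate from the install step, so the same function works whether the user exports `ROCM_VERSION` or takes the default.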

I am using a 6750 XT; it works about the same with the latest PyTorch (ROCm 5.7) and the preview (ROCm 6.0) builds.

For the initial-generation problem, do this: ```bash wget https://raw.githubusercontent.com/wiki/ROCmSoftwarePlatform/pytorch/files/install_kdb_files_for_pytorch_wheels.sh ``` activate your venv ```bash # Optional; replace 'gfx90a' with your architecture and 5.6 with your preferred ROCm version export GFX_ARCH=gfx1030...
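Pieced together, the sequence above would look roughly like this. The venv path and the `ROCM_VERSION` variable name are assumptions here; check the downloaded script to confirm which variables it actually reads before running it.

```shell
# Download the helper script from the ROCm PyTorch wiki
wget https://raw.githubusercontent.com/wiki/ROCmSoftwarePlatform/pytorch/files/install_kdb_files_for_pytorch_wheels.sh

# Activate your virtual environment (path is an example)
source venv/bin/activate

# RX 6750 XT is RDNA2, so gfx1030; ROCM_VERSION is assumed from the
# script's usage notes -- verify against the script itself
export GFX_ARCH=gfx1030
export ROCM_VERSION=5.7

bash install_kdb_files_for_pytorch_wheels.sh
```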

I did a quick test with ROCm 5.4.2, ROCm 5.7 and ROCm 6.0. GPU: 6750 XT. OS: Linux Mint 21.3. ROCm driver version: 6.0.2. Here are the results: # torch2.0.1 rocm5.4.2 ![rocm5 4...

It is slow in both cases. 1) Try installing the HWE kernel ``` sudo apt install linux-generic-hwe-22.04 ``` 2) Use only the ROCm part of the AMD stack; don't install the graphics drivers ```...
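For point 2, a sketch of a compute-only install on Ubuntu 22.04 via AMD's `amdgpu-install` helper. The repository setup steps are omitted, and you should check AMD's install docs for your ROCm release before running this.

```shell
# Install the HWE kernel first (then reboot into it)
sudo apt install linux-generic-hwe-22.04

# With AMD's amdgpu-install helper set up from their repo package,
# install only the ROCm compute stack, skipping the graphics driver
sudo amdgpu-install --usecase=rocm --no-dkms

# Add yourself to the groups ROCm needs for GPU access
sudo usermod -aG render,video "$USER"
```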

I have 2 systems. The Ryzen 5500U system always gets stuck here. I've allotted 4 GB of VRAM for it in the BIOS; that's the max. export HSA_OVERRIDE_GFX_VERSION=9.0.0 export HCC_AMDGPU_TARGETS=gfx900 ``` llm_load_tensors: offloading...

I added "-DLLAMA_HIP_UMA=ON" to "ollama/llm/generate/gen_linux.sh": ``` CMAKE_DEFS="${COMMON_CMAKE_DEFS} ${CMAKE_DEFS} -DLLAMA_HIPBLAS=on -DLLAMA_HIP_UMA=ON -DCMAKE_C_COMPILER=$ROCM_PATH/llvm/bin/clang -DCMAKE_CXX_COMPILER=$ROCM_PATH/llvm/bin/clang++ -DAMDGPU_TARGETS=$(amdGPUs) -DGPU_TARGETS=$(amdGPUs)" ``` Now it's stuck here ``` llm_load_tensors: offloading 22 repeating layers to GPU llm_load_tensors: offloading...

llama.cpp supports it; that's what I was trying to do in my previous post. [Support AMD Ryzen Unified Memory Architecture (UMA)](https://github.com/pytorch/pytorch/issues/107605)

Definitely a +1 from my side. I moved from pfSense to OPNsense and am missing the Telegram notifications.