juliomap

2 comments by juliomap

load_models.py, line 58: return LlamaCpp(**kwargs) raises an exception, even though **kwargs appears correct, with all of its parameters set to acceptable values. I have tried this both on Ubuntu 22.04 and in...
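The exception at that line often hides the underlying cause (for example, a llama.cpp build without CUDA support). A minimal debugging sketch, wrapping the constructor so the full error surfaces — the LangChain LlamaCpp wrapper is assumed here, and the model_path and parameter values are hypothetical examples, not the original kwargs:

```python
# Hypothetical example kwargs -- substitute your own model path and values.
kwargs = {
    "model_path": "/models/llama-2-7b.Q4_K_M.gguf",  # hypothetical path
    "n_ctx": 2048,
    "n_gpu_layers": 32,
}

def build_llm(kwargs):
    """Construct LlamaCpp, re-raising with the full underlying error message."""
    try:
        # Requires both langchain and llama-cpp-python to be installed.
        from langchain.llms import LlamaCpp
    except Exception as e:
        raise RuntimeError(f"missing dependency: {e}") from e
    try:
        return LlamaCpp(**kwargs)
    except Exception as e:
        # Re-raise so the real cause is visible instead of a bare failure.
        raise RuntimeError(f"LlamaCpp({kwargs}) failed: {e}") from e
```

Running build_llm(kwargs) and reading the re-raised message usually points at the actual problem (missing model file, bad parameter, or a CPU-only build of llama-cpp-python).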

In my case, I solved it by installing the latest version of llama-cpp-python with CUDA support enabled:

export CMAKE_ARGS="-DLLAMA_CUBLAS=on -DCMAKE_CUDA_ARCHITECTURES=native"
export FORCE_CMAKE=1
export PATH=$PATH:/usr/local/cuda/bin
pip install llama-cpp-python