Nicolas Pereira

Results 87 comments of Nicolas Pereira

> @hqnicolas Thanks, it'll have to wait as my drive died and I'm waiting on a replacement.

No way! ASRock?

@PYU224 @TerryDigitalSafari @bibucar Just run the old CasaOS:

```shell
wget -qO- https://get.casaos.io/v0.4.7 | sudo bash
```

@start-life @ARajgor I have a Devika for Ollama in that [repo](https://github.com/hqnicolas/devika). If it works, don't forget to star it.

@Wladastic can you try deepseek-coder:33b-instruct-q5_K_M or deepseek-coder:6.7b-instruct?

- gemma:7b-text-q8_0 (5% stuck on JSON problems) (best experience)
- gemma:7b-instruct-q8_0 (50% stuck on JSON problems)
- codellama:7b-instruct-fp16 (40% stuck on JSON problems)
- codellama:7b-instruct-q5_K_M (skip)
- neural-chat:7b-v3.3-q5_K_M (30% stuck on JSON problems)

My...

@Wladastic I'm using **nous-hermes2:10.7b-solar-q6_K** with OpenDevin; it works fine!

@Wladastic my model was running on AMD ROCm with an RX 7800 XT; it's a 16GB card. I will try to emulate these values on my Ollama and will use your parameters based on this...

Hello! I'm using an RX 7800 XT with 32GB of system RAM, and when I load a 24GB model, Ollama uses 16GB of VRAM + 20GB of RAM, so you will need to allocate the first 80% of the model...

@dhiltgen

> It sounds like we're mistakenly trying to load too many layers.

**num_gpu (60)**: the number of layers to send to the GPU(s). On macOS it defaults to...
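A rough back-of-the-envelope for picking `num_gpu` by hand — this is my own sketch, not Ollama's internal offloading logic, and the layer count here is a hypothetical number:

```shell
# Sketch only: estimate how many layers fit in VRAM (assumed numbers).
MODEL_GB=24        # model size once loaded
VRAM_GB=16         # RX 7800 XT VRAM
TOTAL_LAYERS=60    # hypothetical total layer count for the model
# target ~80% of the model on the GPU; integer math avoids rounding surprises:
NUM_GPU=$(( TOTAL_LAYERS * VRAM_GB * 8 / (MODEL_GB * 10) ))
echo "num_gpu=$NUM_GPU"
```

If that estimate looks right, you can pin it in a Modelfile with `PARAMETER num_gpu 32` instead of letting Ollama guess the layer split.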

> can't load a 14G model into 16G VRAM

@oemsysadm Buy an Apple 192GB M3, bro