Igor Schlumberger

Results: 126 comments of Igor Schlumberger

Hi @thinkverse, @sonnyjlewis, I think the loading speed of LLMs (Large Language Models) matters, since the files are very large. When you do this, do you see a difference when...

@jsrcode, is the issue solved on your side with the latest version of Ollama and the VPN settings explained by @sunnysisbaster?

macOS allocates at most about 2/3 of the memory to the GPU, so an 80GB LLM cannot be loaded into the GPU on a 96GB Mac unless you raise that limit, as explained here: https://techobsessed.net/2023/12/increasing-ram-available-to-gpu-on-apple-silicon-macs-for-running-large-language-models/
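For what it's worth, here is a minimal sketch of that arithmetic, assuming the ~2/3 default described in the linked article; the function and names (fits_in_gpu, gpu_fraction) are only illustrative, not anything from Ollama:

```python
# Sketch of the memory math above (assumption: default macOS GPU allocation
# is roughly 2/3 of unified memory, per the linked article).
def fits_in_gpu(model_gb: float, ram_gb: float, gpu_fraction: float = 2 / 3) -> bool:
    """Return True if a model of model_gb fits in the default GPU-addressable memory."""
    return model_gb <= ram_gb * gpu_fraction

print(fits_in_gpu(80, 96))   # False: 96 * 2/3 = 64 GB, too small for an 80GB model
print(fits_in_gpu(80, 128))  # True: 128 * 2/3 ≈ 85 GB
```

If I remember correctly, the article raises the limit with a sysctl key (iogpu.wired_limit_mb on recent macOS), but please check the article for the exact key for your macOS version.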

@KangInKoo did you try with a newer version of Ollama? We are at version 0.3.9 now.

@pdevine any news on this issue?

@wszgrcy good point. I checked different tags for the yi models on ollama.com and the params are not consistent. All quantizations have missing params. Even the Q4_0, which should be the same...

Are you sure? Here the params are still missing: https://ollama.com/library/yi:9b-v1.5-q8_0

Hi @Guest-615695028, if you're having trouble downloading from GitHub, it's likely a network issue. GitHub is generally quite reliable. Are you trying to download from a university network or from...

Hi @jorgetrejo36, I would like to run your code to see if I can replicate the issue on macOS, but some pieces are missing. Can you provide them?

Is it possible for someone who has done it to upload it to ollama under their account and share the link? Best