Justin Rackliffe

Results: 81 comments by Justin Rackliffe

Got it. So for round 1 of iGPU support it's 14g with the Xe-LPG GPU that should offload correctly. I know the initial SYCL threads on llama.cpp were thinking 11g...

Yup, I've been watching all the SYCL work in llama.cpp and testing some of the releases as they come out. It can't do magic, but hopefully it can beat the CPU-based...

> Hi all, 2 questions for ya:
>
> * With this push, would it still be possible to force iGPU use, even if using the CPU would be faster?...

Tried to give 0.0.2 a run on a 13g using `OLLAMA_FORCE_ENABLE_INTEL_IGPU` and `OLLAMA_INTEL_GPU`, which got oneAPI loaded with 50% of addressable memory (31.8 GiB noted). Tried to use llama3:8b...
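For reference, a minimal sketch of how that run was launched: both toggles exported before starting the server. The variable names are exactly as reported above, but their precise semantics in the Intel/oneAPI build are my assumption, not confirmed behavior.

```shell
# Env var names as used in the comment above; exact effect in the
# Intel/oneAPI ollama build is an assumption, not documented behavior.
export OLLAMA_FORCE_ENABLE_INTEL_IGPU=1   # opt in to the iGPU even if the CPU might be faster
export OLLAMA_INTEL_GPU=1                 # enable the oneAPI/SYCL GPU backend
# ollama serve   # then start the server in the same shell so it inherits these
```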

> > Unable to load oneAPI management library
>
> what's the platform? windows? linux? Both platforms need the GPU driver installed; the Linux setup is quite complex, you can...

> > zesInit err: 78000001
>
> looks like zesInit fail, could you please upgrade the [Intel GPU driver](https://www.intel.com/content/www/us/en/download/785597/intel-arc-iris-xe-graphics-windows.html)?

Installed the latest and rebooted. No more random error code, but...

Figured out a couple more things. And just as a baseline, the out-of-box output for us is:

```
[2025-03-07T16:51:46.501Z] [info] [GitHubCopilot] [92223] window/logMessage: { "type" : 1, "message" :...
```

Yeah, I followed the guidance on using the remote's bin files, staged them to %LOCALAPPDATA%\Programs\podman, and added a User PATH entry. %LOCALAPPDATA%\Microsoft\WindowsApps as a /bin option on Windows still feels a bit...

Yeah, the binaries are fine, but they claim that viewing discussions or issues also infringes. Now I have some opinions there, but clearly the project is going in a shared...

Make sure to update /etc/rc.conf:

```
rc_env_allow="*"  # so that k3s and other daemons can get your proxy configs
```
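The tip above can be sketched for an OpenRC host (e.g. Alpine): `rc_env_allow` is OpenRC's whitelist of environment variables passed through to services, and the proxy hostnames below are made-up placeholders, not values from the comment.

```shell
# /etc/rc.conf — let OpenRC-managed services (k3s, etc.) inherit any env variable
rc_env_allow="*"

# Proxy settings the daemons should pick up (illustrative values; these would
# typically live in /etc/environment or the service's conf.d file)
export http_proxy="http://proxy.example.com:3128"
export https_proxy="http://proxy.example.com:3128"
export no_proxy="localhost,127.0.0.1,.svc,.cluster.local"
```

After editing, restarting the service (e.g. `rc-service k3s restart`) should make it re-read the environment.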