Cromefire_

Results: 217 comments of Cromefire_

> The other PR didn't add any docs, but I tried the command in the documentation of this PR and couldn't get it to work. Is there a trick to it...

You'll probably need to remove your iGPU from the container; ROCm sometimes has problems with that (alternatively, I think the llama.cpp docs also call out an environment...
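
For reference, a minimal sketch of the two usual ways to do that with a plain `docker run` (the image name, render node and device index below are placeholders, not the actual setup from this PR):

```sh
# Option 1: only pass the dGPU's render node into the container instead of all
# of /dev/dri, so ROCm never enumerates the iGPU (renderD129 is just an example;
# check /dev/dri/by-path or rocminfo on the host to find the right node).
docker run --rm \
  --device=/dev/kfd \
  --device=/dev/dri/renderD129 \
  your-rocm-image

# Option 2: pass everything through but hide the iGPU from the ROCm runtime via
# an environment variable; adjust the index so only the dGPU stays visible
# (HIP_VISIBLE_DEVICES works the same way at the HIP level).
docker run --rm \
  --device=/dev/kfd --device=/dev/dri \
  -e ROCR_VISIBLE_DEVICES=0 \
  your-rocm-image
```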

So updating this from main is starting to get complicated now, because there are conflicts in a lot of places, so it would be great to get this in some time...

Also, the llama.cpp fork should probably be updated sooner rather than later, as there are some fixes for AMD GPU performance in there.

Sure. I'll extract that. The changes to support more accelerators are the hardest to maintain though. PS: Also keep in mind that it won't only be ROCm device info soon,...

Just removed the conflicts. Maybe another friendly reminder that these changes suck up a lot of time the longer they lie around, as I constantly have to resolve the...

I guess the most promising methods are 2, 3 and 4 in a helper method, because "../../some.proto" does not seem to work with relative imports.
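
For anyone following along: protoc resolves imports against the include roots passed via `-I`/`--proto_path`, not relative to the importing file, which is why the `"../../some.proto"` form falls over. A rough sketch with made-up paths:

```sh
# Imports are looked up relative to the -I roots, not the importing file,
# so add the shared directory as an extra include root...
protoc -I proto -I ../shared/proto \
  -o /dev/null proto/service.proto
# ...and inside service.proto import the file by its path under that root:
#   import "some.proto";
# instead of climbing directories with "../../some.proto".
```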

That should work: https://gist.github.com/cromefire/1677b4944458c83cbe566653232eaac9

Method 4 (and 3) would work out of the box, with no additional work required.

Is there any way to override it manually as a stopgap?