fix: gpu fetch device info
Description
This PR fixes #2401
It seems the PCI device info from the ghw library relies on the local filesystem by default, and I am not sure why that isn't working properly inside a container. This PR allows fetching the device info from the network instead. https://github.com/jaypipes/pcidb/blob/d9773c605ac44c478e0ee7e322f31eaa32615010/README.md?plain=1#L21-L26
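For context, here is a minimal sketch of what the network fetch looks like when using pcidb directly (assuming the `WithEnableNetworkFetch` option described in the linked README; the actual change here goes through ghw, so the wiring may differ):

```go
package main

import (
	"fmt"
	"log"

	"github.com/jaypipes/pcidb"
)

func main() {
	// By default pcidb reads pci.ids from the local filesystem
	// (e.g. /usr/share/misc/pci.ids). Enabling network fetch lets it
	// download the database when no local copy is found.
	db, err := pcidb.New(pcidb.WithEnableNetworkFetch())
	if err != nil {
		log.Fatalf("loading PCI database: %v", err)
	}

	// Quick sanity check: resolve a vendor name by PCI vendor ID (10de = NVIDIA).
	if vendor, ok := db.Vendors["10de"]; ok {
		fmt.Println("vendor:", vendor.Name)
	}
}
```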
Notes for Reviewers
I have not tested this with the LocalAI container, but it works for AIKit: https://github.com/sozercan/aikit/actions/runs/9231349393/job/25401086746#step:12:11
Signed commits
- [x] Yes, I signed my commits.
I wonder if we really need to detect the GPU in containers with CUDA, as we build those containers directly with BUILD_TYPE=cublas: all the binaries produced should already be able to offload to the GPU, and the llama-cpp-cuda binary should actually even be missing.
Besides, that would leave air-gapped environments out in the cold: what would happen if there is no network?
Found a better way to handle this with the pciutils package.
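Roughly, that just means making sure the pci.ids database ships in the image so ghw/pcidb can resolve device names from the local file, with no network access required. Something along these lines in the Dockerfile (illustrative only, not the exact change):

```dockerfile
# Install pciutils, which pulls in the pci.ids database
# (/usr/share/misc/pci.ids on Debian/Ubuntu), so device names resolve
# locally and air-gapped environments keep working.
RUN apt-get update && \
    apt-get install -y --no-install-recommends pciutils && \
    rm -rf /var/lib/apt/lists/*
```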
Nice, good catch @sozercan!