
feat: unify and propagate CMAKE_ARGS to GGML-based backends

Open · mudler opened this pull request 11 months ago • 1 comment

Description

This pull request centralizes CMAKE_ARGS composition, since those args are shared between the ggml-based backends. The cmake args used for llama.cpp can, for instance, also be applied to bark.cpp and stable-diffusion.cpp (both ggml-based). The goal is to enable CUDA and hipblas support on bark.cpp and stablediffusion.cpp (ggml variant).

For now this doesn't aim to be smart and share the logic in a common way (maybe via cmake, or a makefile called by both backends to generate the cmake args). The intent of this PR is to surface any changes that might be required when enabling the flags for the respective backends; I'm not sure the current linking process for bark and stablediffusion is correct in terms of GPU support. A rough sketch of the direction follows below.
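
For illustration, here is a minimal sketch of what the centralized composition could look like. The `BUILD_TYPE` switch, target names, and `sources/` paths are assumptions for the sketch, not the actual LocalAI Makefile layout, and the `GGML_CUDA`/`GGML_HIPBLAS` flag names should be checked against the ggml revision each backend pins:

```makefile
# Sketch only: compose CMAKE_ARGS once, then pass the same args to every
# ggml-based backend. BUILD_TYPE, target names, and source paths are
# illustrative assumptions, not the exact LocalAI Makefile layout.
CMAKE_ARGS ?=

ifeq ($(BUILD_TYPE),cublas)
CMAKE_ARGS += -DGGML_CUDA=ON
endif
ifeq ($(BUILD_TYPE),hipblas)
CMAKE_ARGS += -DGGML_HIPBLAS=ON
endif

# Each backend build receives the identical, centrally composed flags.
backend-llama-cpp:
	cmake -S sources/llama.cpp -B sources/llama.cpp/build $(CMAKE_ARGS)
	cmake --build sources/llama.cpp/build

backend-bark-cpp:
	cmake -S sources/bark.cpp -B sources/bark.cpp/build $(CMAKE_ARGS)
	cmake --build sources/bark.cpp/build

backend-stablediffusion-ggml:
	cmake -S sources/stablediffusion-ggml -B sources/stablediffusion-ggml/build $(CMAKE_ARGS)
	cmake --build sources/stablediffusion-ggml/build
```

Invoked as e.g. `BUILD_TYPE=hipblas make backend-bark-cpp`, every backend would then see the same acceleration flags, which is what this PR currently propagates by hand.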

Notes for Reviewers

Signed commits

  • [ ] Yes, I signed my commits.

mudler avatar Dec 11 '24 21:12 mudler

Deploy Preview for localai ready!

| Name | Link |
| --- | --- |
| Latest commit | 894a30296a14c1c14549db05a0177f0b2a448c65 |
| Latest deploy log | https://app.netlify.com/sites/localai/deploys/6759fe760a6baa000818c441 |
| Deploy Preview | https://deploy-preview-4367--localai.netlify.app |

To edit notification comments on pull requests, go to your Netlify site configuration.

netlify[bot] avatar Dec 11 '24 21:12 netlify[bot]

This PR is stale because it has been open 90 days with no activity. Remove stale label or comment or this will be closed in 10 days.

github-actions[bot] avatar Aug 26 '25 02:08 github-actions[bot]

This PR was closed because it has been stalled for 10 days with no activity.

github-actions[bot] avatar Sep 09 '25 02:09 github-actions[bot]