
[vllm backend] Problem loading a model from a local weights cached dir

mrceresa opened this issue 1 month ago • 0 comments

LocalAI version: 2.24.2

Environment, CPU architecture, OS, and Version: Windows 11, WSL (Ubuntu)

Describe the bug

Thanks a lot for the wonderful product that is LocalAI!

I have a small problem and could not find any clear documentation on how to solve it, so I ran some tests and am sharing the results here in case they are useful to somebody else.

The problem: for a big model that I don't want to re-download, I copied the weights into the LocalAI container at /build/models/mymodel, but I am not able to load the model from there. The weights are in HF format with safetensors, and the model loads fine from plain Python and vLLM outside of LocalAI.
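For reference, this is roughly how I verified the weights outside of LocalAI (a minimal sketch; the path is where I copied the weights inside the container):

    # Minimal sketch: load the HF/safetensors directory directly with vLLM.
    from vllm import LLM, SamplingParams

    llm = LLM(model="/build/models/mymodel")  # local dir, nothing is downloaded
    outputs = llm.generate(["Hello"], SamplingParams(max_tokens=16))
    print(outputs[0].outputs[0].text)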

I tried to load the model from the folder using this config:

backend: vllm
name: mymodel
parameters:
  model: /build/models/mymodel

but when I re-launched local-ai run, I got the error:

10:55PM ERR config is not valid

which comes from the c.Validate() call at line 173 in core/config/backend_config_loader.go.

Looking at the Validate function at line 422 of core/config/backend_config.go, it seems that this check:

		if strings.HasPrefix(n, string(os.PathSeparator)) ||
			strings.Contains(n, "..") {
			return false
		}

rejects the config if the model name starts with a path separator or contains "..", and the whole config is then ignored.
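To make that concrete, here is a small Python sketch of my reading of that Go check (the function name is mine):

    import os

    def model_name_passes_validation(n: str) -> bool:
        # Mirrors the Go check: reject names that start with a path
        # separator or contain "..". On Linux, os.sep is "/".
        return not (n.startswith(os.sep) or ".." in n)

    print(model_name_passes_validation("/build/models/mymodel"))  # False -> config rejected
    print(model_name_passes_validation("mymodel"))                # True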

But then I have no way to load from my folder, because if I use this configuration instead:

backend: vllm
name: mymodel
parameters:
  model: mymodel

the vllm backend cannot find the cached mymodel in /build/models/mymodel and falls back to treating it as a model to fetch from HF, which fails as well with:

11:23PM DBG GRPC(llama-3.2-11B-Vision-Instruct-127.0.0.1:41395): stderr Unexpected err=ValueError('No supported config format found in mymodel'), type(err)=<class 'ValueError'>
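My understanding of why this fails (a sketch of the failing call path, assuming the backend process does not run with /build/models as its working directory):

    from vllm.engine.arg_utils import AsyncEngineArgs
    from vllm.engine.async_llm_engine import AsyncLLMEngine

    # "mymodel" is not an existing directory relative to the backend's working
    # directory, so vLLM treats it as a model id, finds no config for it, and
    # raises: ValueError('No supported config format found in mymodel')
    engine_args = AsyncEngineArgs(model="mymodel")
    engine = AsyncLLMEngine.from_engine_args(engine_args)  # raises here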

For me, the fix is to modify the LoadModel function in backend/python/backend.py to check whether the combination of ModelPath and Model points to a real directory, and pass that to the engine args:

    async def LoadModel(self, request, context):
        [...]
        # If ModelPath + Model resolves to a real directory, load the weights
        # from disk; otherwise keep the current behaviour and let vLLM
        # resolve request.Model on its own (e.g. as a HF model id).
        model_local_dir = os.path.join(str(request.ModelPath), str(request.Model))

        if os.path.isdir(model_local_dir):
            engine_args = AsyncEngineArgs(
                model=model_local_dir,
            )
        else:
            engine_args = AsyncEngineArgs(
                model=request.Model,
            )
        [...]

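With this change, the second config above (model: mymodel) works unchanged: the backend resolves it to /build/models/mymodel when that directory exists, and falls back to the current download behaviour otherwise.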
That said, it would probably be nicer to be able to set a model path directly in the LocalAI model config, as that is more intuitive and customizable; something like the hypothetical sketch below.
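For illustration only (model_path is not an existing option, just what I have in mind):

backend: vllm
name: mymodel
parameters:
  model: mymodel
  # hypothetical option, does not exist today; joined with model as in the patch above
  model_path: /build/models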

Any thoughts on that? Maybe there is another simple config option that I missed? I also tried download_dir, but it did not help.

mrceresa · Jan 04 '25 23:01