
LLMEval not loading Qwen1.5-0.5B model into memory

Open mobile-appz opened this issue 4 months ago • 17 comments

When trying to load Qwen1.5, the model downloads fully but doesn't appear to load into memory on macOS or iOS. After typing a prompt, the error output is: Failed: unhandledKeys(base: "Embedding", keys: ["biases", "scales"])

Using MLX 0.11.0

Other linked models work as per the repo code, but this is the smallest one, which looks like the best fit for older devices with less RAM, so it would be great to get it working.

mobile-appz · Apr 23 '24

Right, we changed quantization in MLX core so now the embedding layer is quantized. We'll need to update Swift to do the same.
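
For context, the failure is mechanical: a plain Embedding only declares a "weight" parameter, so the "scales" and "biases" arrays written by the new quantized converter have nowhere to land. A minimal sketch of the idea, with stand-in types (not the actual mlx-swift loader):

```swift
// Illustrative sketch only; the types here stand in for the real loader.
struct UnhandledKeys: Error {
    let base: String
    let keys: [String]
}

struct PlainEmbedding {
    // A non-quantized embedding knows about exactly one parameter.
    let parameterKeys: Set<String> = ["weight"]
}

func verify(_ module: PlainEmbedding, checkpointKeys: Set<String>) throws {
    let unhandled = checkpointKeys.subtracting(module.parameterKeys)
    guard unhandled.isEmpty else {
        // Mirrors the reported failure:
        // unhandledKeys(base: "Embedding", keys: ["biases", "scales"])
        throw UnhandledKeys(base: "Embedding", keys: unhandled.sorted())
    }
}

// try verify(PlainEmbedding(), checkpointKeys: ["weight", "scales", "biases"])
// throws UnhandledKeys(base: "Embedding", keys: ["biases", "scales"])
```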

awni · Apr 23 '24

> Right, we changed quantization in MLX core so now the embedding layer is quantized. We'll need to update Swift to do the same.

Thanks for the info. I was unsure of the cause of this error message. As an update, I tried to load this model with the LLM tool in mlx-swift-examples and that failed with the same error. I then ran the Python code in mlx-examples and the model did load and process a prompt. However, the output wasn't really useful, probably because the model is so small.

mobile-appz · Apr 23 '24

I think these are the commits in question:

  • https://github.com/ml-explore/mlx/pull/994/files
  • https://github.com/ml-explore/mlx-examples/pull/680/files

davidkoski · Apr 25 '24

Those are the commits. Sorry, that broke more things than I was expecting. Basically, embeddings are quantized by default now, so when we quantize a model for MLX in Python it isn't usable in Swift, which doesn't support quantized embeddings.

The medium-term solution is to update Swift to quantize embeddings (this is a Swift-only change; nothing is needed from core). As a temporary patch, we could also upload models without quantized embedding layers.

awni · Apr 25 '24

At least one small model that can run on an older, iOS 17-compatible iPhone, with unquantized embedding layers, would be really useful for experimentation purposes. Thanks.

mobile-appz · Apr 25 '24

> Those are the commits. Sorry, that broke more things than I was expecting. Basically, embeddings are quantized by default now, so when we quantize a model for MLX in Python it isn't usable in Swift, which doesn't support quantized embeddings.
>
> The medium-term solution is to update Swift to quantize embeddings (this is a Swift-only change; nothing is needed from core). As a temporary patch, we could also upload models without quantized embedding layers.

If we make this change, will it break other models that don't have quantized embeddings (i.e. all the models we have been using to date)? I wonder if we need some way to detect and switch between these modes?

davidkoski · Apr 25 '24

Right, so this is what solves that problem in MLX: https://github.com/ml-explore/mlx-examples/blob/main/llms/mlx_lm/utils.py#L336-L346

It's actually really useful because it handles heterogeneously quantized models very cleanly, which is a problem we've had in the past (e.g. old models with unquantized gate matrices, or unquantized LM heads from before we supported more sizes).
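
For the Swift port, the relevant part of that Python boils down to a per-module predicate. A rough rendering with stand-in types (not the mlx-swift-examples API; the real port would walk MLXNN modules and the loaded safetensors dictionary):

```swift
// Rough Swift rendering of the predicate in the linked utils.py.
enum ModuleKind { case linear, embedding, other }

/// Decide whether the module at `path` (e.g. "model.layers.0.mlp.gate_proj")
/// should be built in quantized form.
func shouldQuantize(path: String, kind: ModuleKind, weightKeys: Set<String>) -> Bool {
    // Only Linear and Embedding modules are candidates for quantization...
    guard kind == .linear || kind == .embedding else { return false }
    // ...and of those, only modules whose checkpoint entry actually carries
    // quantized parameters. This is what makes heterogeneous checkpoints
    // (e.g. an unquantized lm_head next to quantized layers) load cleanly.
    return weightKeys.contains("\(path).scales")
}
```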

awni · Apr 25 '24

Aha, I didn't implement that -- we have just been using the load safetensors function and the update parameters method.

  • [ ] implement load_model with quantization support (here)
  • [ ] implement embedding quantization (here), sketched below
  • [ ] adopt in mlx-swift-examples (here)
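
For the embedding piece, something along these lines is presumably what's needed. This is only a sketch under the usual affine scheme (w ≈ scale * q + bias per group), with plain Swift arrays standing in for packed MLXArrays:

```swift
// Sketch only: what a quantized embedding might store and how a row is
// dequantized on lookup. A real implementation would keep 4-bit codes
// packed into UInt32 words and do this on the GPU with MLXArray ops.
struct QuantizedEmbeddingSketch {
    let groupSize: Int       // e.g. 64 consecutive values share one scale/bias
    let qweight: [[UInt8]]   // [vocab][dims] 4-bit codes, one per byte here
    let scales: [[Float]]    // [vocab][dims / groupSize]
    let biases: [[Float]]    // [vocab][dims / groupSize]

    // Dequantize one embedding row; "scales" and "biases" are exactly the
    // checkpoint parameters the unhandledKeys error was complaining about.
    func row(_ index: Int) -> [Float] {
        qweight[index].enumerated().map { (i, q) -> Float in
            let group = i / groupSize
            return scales[index][group] * Float(q) + biases[index][group]
        }
    }
}
```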

davidkoski · Apr 25 '24

> we have just been using the load safetensors function and the update parameters method.

But how do you know if it's a quantized model or not? Presumably there are some lines of code somewhere that quantize the model based on the config (prior to loading the safetensors)?

awni · Apr 25 '24

> we have just been using the load safetensors function and the update parameters method.
>
> But how do you know if it's a quantized model or not? Presumably there are some lines of code somewhere that quantize the model based on the config (prior to loading the safetensors)?

The config file indicates it -- I am pretty sure this is how the mlx_lm code (or maybe its predecessor) worked and I just copied that, but perhaps it has moved forward since.
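
Concretely, the signal is the optional quantization block that mlx_lm writes into config.json. A rough Swift rendering of that detection (the struct names are illustrative, not the actual mlx-swift-examples configuration types):

```swift
import Foundation

// Sketch: telling a quantized checkpoint apart, assuming the config.json
// layout mlx_lm writes (a top-level "quantization" object).
struct QuantizationConfig: Codable {
    let groupSize: Int
    let bits: Int

    enum CodingKeys: String, CodingKey {
        case groupSize = "group_size"
        case bits
    }
}

struct ModelConfig: Codable {
    let modelType: String
    // Present only for quantized checkpoints; nil means build plain layers.
    let quantization: QuantizationConfig?

    enum CodingKeys: String, CodingKey {
        case modelType = "model_type"
        case quantization
    }
}

let json = #"{"model_type": "qwen2", "quantization": {"group_size": 64, "bits": 4}}"#
let config = try! JSONDecoder().decode(ModelConfig.self, from: Data(json.utf8))

if let q = config.quantization {
    // Build QuantizedLinear (and, once supported, a quantized embedding)
    // with q.groupSize and q.bits before loading the safetensors weights.
    print("quantized model: groupSize=\(q.groupSize), bits=\(q.bits)")
}
```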

davidkoski · Apr 25 '24

This is what I'm referring to:

https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L58-L60

MLX LM has always had something like that: it builds the quantized model based on the config. The premise hasn't changed much. Only two things really:

  1. Quantize all Linear and Embedding modules by default
  2. Of those, only quantize modules which have a "scales" parameter in their weights

awni · Apr 25 '24

It looks like you already added some edge-case handling in there (e.g. https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L97-L108). The update to MLX LM simplified that kind of thing a bit.

awni · Apr 25 '24

> It looks like you already added some edge-case handling in there (e.g. https://github.com/ml-explore/mlx-swift-examples/blob/main/Libraries/LLM/Load.swift#L97-L108). The update to MLX LM simplified that kind of thing a bit.

Yeah, that is actually a port of the Python code, so I must have picked it up somewhere in the middle.

The load_model method probably should have been implemented from the start, but I never used it and it just got lost.

Now I think we have a good idea of what needs to be done here.

davidkoski · Apr 25 '24