Qwen2.5 14B models are ... sometimes? ... having their token vocabulary truncated down to the 'actual' tokenizer size?

ann-brown opened this issue • 6 comments

Actual example of a merge that produced this issue:

models:
  - model: Qwen/Qwen2.5-14B-Instruct
    parameters:
      weight: 0.3
      density: 0.4
merge_method: della
base_model: <base model path>
parameters:
  epsilon: 0.05
  lambda: 1
dtype: bfloat16
tokenizer_source: base

Additional relevant information: if I get the tokenizer vocab size with tokenizer_vocab_size = len(tokenizer) from ... any Qwen 2.5 14B model, I get 151665 rather than the 152064 that's in the config.json.
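
For anyone who wants to reproduce that check, this is roughly the comparison (a minimal Python sketch; the model id is just the Instruct checkpoint from the config above):

from transformers import AutoConfig, AutoTokenizer

model_id = "Qwen/Qwen2.5-14B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
config = AutoConfig.from_pretrained(model_id)

tokenizer_vocab_size = len(tokenizer)            # tokens the tokenizer actually defines: 151665
print(tokenizer_vocab_size, config.vocab_size)   # config.vocab_size is the padded 152064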

I don't fully understand why this merge method trims the vocabulary size and embedding layer down when none of the others do, but it's annoying for compatibility, and specifying tokenizer_source doesn't seem to address the issue (presumably because the tokenizer doesn't actually have 152064 tokens' worth of vocabulary).

ann-brown avatar Sep 27 '24 14:09 ann-brown

When using tokenizer_source/tokenizer, new tensors are created for the embeddings and LM head that exactly match the output vocabulary size.

I can look at adding an option for padding the size up to the nearest multiple of 32 if that's causing an issue.
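
As a rough illustration of that exact-match behavior (a sketch only -- ./merged is a hypothetical mergekit output directory):

from transformers import AutoModelForCausalLM, AutoTokenizer

merged_path = "./merged"  # hypothetical output directory
model = AutoModelForCausalLM.from_pretrained(merged_path)
tokenizer = AutoTokenizer.from_pretrained(merged_path)

embedding_rows = model.get_input_embeddings().weight.shape[0]
print(embedding_rows, len(tokenizer))  # with tokenizer_source set, these match (151665), not the padded 152064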

cg123 avatar Oct 26 '24 10:10 cg123

That would be a helpful option -- the truncation is causing some downstream effects in other toolchains (like triggering unsloth patching that isn't fully calibrated to the model type, for some reason) and preventing merges with other Qwen 2.5 models.

ann-brown avatar Nov 06 '24 16:11 ann-brown

I've added this option in #465 -- for Qwen2.5 models, setting pad_to_multiple_of: 512 will output a model of the exact same size. Hopefully this helps - do let me know!
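
For reference, a sketch of how I'd expect this to look in the config from this issue (the placement is an assumption on my part -- see #465 for the exact schema):

merge_method: della
base_model: <base model path>
dtype: bfloat16
tokenizer:
  source: base             # assumed to replace the bare tokenizer_source: base above
  pad_to_multiple_of: 512  # pads the embedding/LM head rows up to a multiple of 512, i.e. 152064 for Qwen2.5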

cg123 avatar Dec 01 '24 00:12 cg123

Early indications are that it's working! Merging two models that were at the truncated size brought the vocabulary back up to 152064, and the result evaluates well. If those extra rows were just padding in the first place, it should be fine.

ann-brown avatar Dec 01 '24 20:12 ann-brown

Has anyone merged glm4 models?

chenchen333-dev avatar Dec 18 '24 05:12 chenchen333-dev

I have not tried merging any glm4 models. It looks like they have a padded_vocab_size rather than a vocab_size?
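
A quick way to check which field their config exposes (a sketch; the model id is illustrative, and glm4 repos need trust_remote_code for their custom config class):

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("THUDM/glm-4-9b-chat", trust_remote_code=True)
print(getattr(cfg, "vocab_size", None), getattr(cfg, "padded_vocab_size", None))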

ann-brown avatar Dec 18 '24 11:12 ann-brown