
[CI] Add mteb testing to test the accuracy of the embedding model

Open noooop opened this issue 7 months ago • 8 comments

Summary:

  1. Add mteb STS12 testing to test the accuracy of the embedding model (tests/entrypoints/openai/correctness/test_mteb.py).
  2. Test snowflake_arctic_embed using mteb STS12 (tests/models/embedding/language/test_snowflake_arctic_embed.py).
  3. Snowflake/snowflake-arctic-embed-m-long:

If seqlen > max_trained_positions is detected during inference, the model automatically switches from NomicBertRotaryEmbedding to NomicBertDynamicNTKRotaryEmbedding:

https://huggingface.co/Snowflake/snowflake-arctic-embed-m-long/blob/main/modeling_hf_nomic_bert.py#L639
https://huggingface.co/nomic-ai/nomic-bert-2048/blob/main/modeling_hf_nomic_bert.py#L1413

        # NTK-aware rescaling: once the sequence exceeds the trained context,
        # grow the RoPE base and recompute the inverse frequencies in place.
        if seqlen > self.max_position_embeddings:
            base = self.base * (
                (self.rotary_scaling_factor * seqlen / self.max_position_embeddings)
                - (self.rotary_scaling_factor - 1)
            ) ** (self.dim / (self.dim - 2))
            inv_freq = self._compute_inv_freq(base=base, device=device)
            self.register_buffer("inv_freq", inv_freq, persistent=False)
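
As a rough worked example of the rescaling above (all values are illustrative, not read from the actual model config), doubling the context roughly doubles the base:

    # Worked example of the dynamic-NTK base rescaling (illustrative values).
    base, dim = 10000.0, 64
    max_position_embeddings, rotary_scaling_factor = 2048, 1.0
    seqlen = 4096  # twice the trained context

    new_base = base * (
        (rotary_scaling_factor * seqlen / max_position_embeddings)
        - (rotary_scaling_factor - 1)
    ) ** (dim / (dim - 2))
    print(new_base)  # ~20452: about 2x the original base of 10000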

This implicit switch might lead to hard-to-detect bugs.

We ignore config.rotary_scaling_factor so that, for datasets shorter than max_trained_positions (2048), the results are consistent with SentenceTransformer.

Context extension instead uses vLLM-style rope_theta and rope_scaling.
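
For illustration, a hedged sketch of passing these overrides; rope_theta and rope_scaling follow vLLM's engine-argument names, the values are illustrative, and on some vLLM versions they may need to go through hf_overrides instead:

    from vllm import LLM

    # Context extension via vLLM-style RoPE overrides (values illustrative).
    llm = LLM(
        model="Snowflake/snowflake-arctic-embed-m-long",
        task="embed",
        rope_theta=10000,
        rope_scaling={"rope_type": "dynamic", "factor": 2.0},
    )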

Task selection

Although mteb v2 significantly speeds up the testing process, it still requires several hours to complete all tests.

Here we choose a small mteb task: STS12.

  1. Running time on a 4090 is approximately 26.95 s.

  2. The score on this test set is strongly numerically stable (<1e-4) with respect to minor variations in the model implementation and tensor data types.

  3. The score differences between different models are quite noticeable (>1e-3).

This makes mteb STS12 a great embedding model test set.
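
For reference, a minimal sketch of what the STS12 check can look like, assuming mteb's 1.x API; the VllmMtebEncoder wrapper here is hypothetical, and the PR's actual test lives in tests/entrypoints/openai/correctness/test_mteb.py:

    import mteb
    import numpy as np
    from vllm import LLM

    class VllmMtebEncoder:
        """Adapts vllm.LLM to the encode() interface mteb expects."""

        def __init__(self, model_name: str):
            self.llm = LLM(model=model_name, task="embed")

        def encode(self, sentences, **kwargs):
            outputs = self.llm.embed(sentences)
            return np.array([o.outputs.embedding for o in outputs])

    tasks = mteb.get_tasks(tasks=["STS12"])
    results = mteb.MTEB(tasks=tasks).run(VllmMtebEncoder("BAAI/bge-base-en-v1.5"))
    main_score = results[0].scores["test"][0]["main_score"]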

Numerical stability

| dtype | VLLM main score | SentenceTransformer main score | Difference |
| --- | --- | --- | --- |
| float16 | 0.787317856459396 | 0.7873427091972599 | 2.4852737863900742e-05 |
| bfloat16 | 0.7873663350672234 | 0.7873427091972599 | -2.3625869963517232e-05 |

The difference is very subtle (<1e-4), at least on this test set.
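
This is what makes a CI tolerance check feasible; a sketch follows (the 1e-4 threshold is taken from the observation above, and the PR's actual tolerance may differ):

    # SentenceTransformer reference score for the model under test, and the
    # tolerance suggested by the measurements above (both are assumptions).
    ST_MAIN_SCORE = 0.7873427091972599
    MTEB_TOL = 1e-4

    def check_main_score(vllm_main_score: float) -> None:
        assert abs(vllm_main_score - ST_MAIN_SCORE) < MTEB_TOL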

Ten rounds:

| dtype | Difference (mean) | std |
| --- | --- | --- |
| float32 | 2.2034500413159464e-07 | 2.2674275951409703e-06 |
| float16 | -1.2960828871366736e-05 | 6.329514177900761e-06 |
| bfloat16 | -0.0001093438180336248 | 3.559915712887334e-05 |

The ten-round results suggest that casting float32 weights to float16 gives better agreement than bfloat16 (vLLM defaults to casting float32 checkpoints to float16).
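
For clarity, the two columns above can be computed like this (a sketch; the sign convention is assumed to be SentenceTransformer minus vLLM):

    import statistics

    def summarize(st_score: float, vllm_scores: list[float]) -> tuple[float, float]:
        """Mean and std of (SentenceTransformer - vLLM) main-score differences."""
        diffs = [st_score - s for s in vllm_scores]
        return statistics.mean(diffs), statistics.stdev(diffs)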

More about numerical stability

Most models exhibit excellent numerical stability:

| model name | st_main_score | Difference | std |
| --- | --- | --- | --- |
| BAAI/bge-m3 | 0.7873424632849964 | -4.014647728589615e-06 | 1.266416587059263e-05 |
| BAAI/bge-base-en-v1.5 | 0.7802846624612514 | 1.1294266950234721e-05 | 6.865350381034025e-06 |
| Snowflake/snowflake-arctic-embed-xs | 0.7149276890717804 | 1.5002530987628937e-05 | 5.132361246049283e-06 |
| Snowflake/snowflake-arctic-embed-s | 0.7408120447186094 | 1.2957674633273797e-05 | 5.364178900440517e-06 |
| Snowflake/snowflake-arctic-embed-m | 0.6467522411844727 | -3.727433978584216e-06 | 8.904071772230203e-06 |
| Snowflake/snowflake-arctic-embed-l | 0.6362746289758823 | 9.515755331335196e-05 | 2.023830079795977e-05 |
| Snowflake/snowflake-arctic-embed-m-long | 0.6811445157066163 | 3.396798716037708e-05 | 1.224356222837439e-05 |
| Snowflake/snowflake-arctic-embed-m-v1.5 | 0.6490882209298032 | 1.8871733633019083e-05 | 6.591107037250243e-06 |
| Snowflake/snowflake-arctic-embed-l-v2.0 | 0.7122583106737259 | 1.074976228776503e-05 | 1.3400689624215418e-05 |
| Snowflake/snowflake-arctic-embed-m-v2.0 | 0.7066229164460937 | 1.5418442692483048e-05 | 9.792523972420118e-06 |
| Alibaba-NLP/gte-Qwen2-1.5B-instruct | 0.7280529229028553 | 5.124313459714536e-05 | 1.6385524234026275e-05 |

Slightly numerically unstable models

  • intfloat/multilingual-e5-small shows a significant drop when using fp16, so fp32 needs to be used.

| precision | st_main_score | Difference | std |
| --- | --- | --- | --- |
| fp16 | 0.7805425596252846 | -0.2749311085815237 | 0.006216913108536066 |
| fp32 | 0.7805425596252846 | -1.6403316041024851e-06 | 7.53539269543218e-06 |

  • intfloat/multilingual-e5-large-instruct shows a significant drop when using fp16, so fp32 needs to be used.

| configuration | st_main_score | Difference | std |
| --- | --- | --- | --- |
| pooling_type="MEAN" + fp16 (default) | 0.8224491209469045 | -0.28623335791513993 | 0.007169234312147499 |
| pooling_type="MEAN" + fp32 | 0.8224491209469045 | -2.3497119421289625e-06 | 7.898194995699927e-06 |

  • jinaai/jina-embeddings-v3 shows a slight drop when using fp16.

| precision | main score | Difference | std |
| --- | --- | --- | --- |
| fp16 | 0.7834129787836271 | -0.0709833671361465 | 0.004834963031278825 |
| fp32 | 0.8243646209061513 | -3.119267999662778e-05 | 6.651161140301139e-06 |
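
For the models above that degrade at fp16, the fix is to force full precision; dtype is a standard vLLM engine argument (the model below is just one of the affected ones):

    from vllm import LLM

    # Force full precision for models that degrade at fp16.
    llm = LLM(
        model="intfloat/multilingual-e5-small",
        task="embed",
        dtype="float32",
    )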

noooop avatar Apr 25 '25 09:04 noooop

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

github-actions[bot] avatar Apr 25 '25 09:04 github-actions[bot]

Thanks for adding this, can you fix pre-commit?

DarkLight1337 avatar Apr 25 '25 09:04 DarkLight1337

@DarkLight1337

Should we delete benchmarks/eval/test_mteb.py in the end, or change it into a more general testing script?

noooop avatar Apr 25 '25 10:04 noooop

cc @mgoin @comaniac do you think we can incorporate this eval script into our existing scripts? Or would it be better to keep them separate?

DarkLight1337 avatar Apr 25 '25 10:04 DarkLight1337

Hmm, I'm not sure we want to have benchmark/evals. For correctness checking in the CI, we should be able to just test 2-3 cases to keep things stable.

comaniac avatar Apr 26 '25 07:04 comaniac

@DarkLight1337

I tested more models, and most of them showed strong numerical stability (<1e-4), even better than I imagined.

The score differences between different models are also quite noticeable (>1e-3).

This makes mteb STS12 a great embedding model test set.

noooop avatar Apr 28 '25 07:04 noooop

Thanks for your patience!

DarkLight1337 avatar Apr 30 '25 06:04 DarkLight1337

Thanks for reviewing.

By carefully studying the code, I indeed learned a lot of weird stuff.

noooop avatar Apr 30 '25 06:04 noooop

@DarkLight1337

What should I do to make the merge go more smoothly?

noooop avatar May 01 '25 03:05 noooop

The failing doc build seems related to this PR, maybe because you changed the dependencies.

DarkLight1337 avatar May 01 '25 04:05 DarkLight1337

We need to manually merge #16859 first.

noooop avatar May 01 '25 04:05 noooop

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @noooop.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar May 01 '25 06:05 mergify[bot]

Doc build still fails

DarkLight1337 avatar May 01 '25 08:05 DarkLight1337

It seems that awscli isn't used anywhere in vLLM. Let me open another PR to remove it first.

DarkLight1337 avatar May 01 '25 12:05 DarkLight1337

Opened #17532

DarkLight1337 avatar May 01 '25 12:05 DarkLight1337

Is pulling a model from S3 dependent on awscli?

noooop avatar May 01 '25 12:05 noooop

I think the issue is not with awscli; I need to set up a local environment for the docs and check it out.

noooop avatar May 01 '25 12:05 noooop

Is pulling a model from S3 dependent on awscli?

No, we use boto3 for that

DarkLight1337 avatar May 01 '25 12:05 DarkLight1337

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @noooop.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar May 01 '25 12:05 mergify[bot]

Mainly, mteb depends on "datasets>=2.19.0,<3.0.0", which is too old.

noooop avatar May 01 '25 12:05 noooop

To avoid dependency conflicts, maybe you can just pip install mteb for the entrypoints tests only.
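
For example, a common pytest pattern for optional test dependencies (an assumption; the PR may handle it differently):

    import pytest

    # Skip the whole test module when mteb isn't installed.
    mteb = pytest.importorskip("mteb")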

DarkLight1337 avatar May 01 '25 12:05 DarkLight1337

wtf

Uninstalling mteb-1.38.3:
  Would remove:
    /share/anaconda3/bin/mteb
    /share/anaconda3/lib/python3.12/site-packages/docs/*
    /share/anaconda3/lib/python3.12/site-packages/mteb-1.38.3.dist-info/*
    /share/anaconda3/lib/python3.12/site-packages/mteb/*

mteb installed a top-level Python package named docs, which caused:

Traceback:

      File "/share/PycharmProjects/noooop_vllm/docs/source/conf.py", line 137, in setup
        from docs.source.generate_examples import generate_examples
    ModuleNotFoundError: No module named 'docs.source'
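
In other words (a sketch of the failure mode): a third-party top-level docs package shadows the repo's docs/ directory on sys.path, so docs.source can no longer be resolved.

    # With a site-packages ``docs`` installed, this resolves to the wrong
    # package, and the repo-relative import above fails.
    import docs
    print(docs.__file__)  # .../site-packages/docs/__init__.py, not the checkout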

noooop avatar May 01 '25 13:05 noooop

Wouldn't this also remove the "correct" docs installation?

DarkLight1337 avatar May 01 '25 14:05 DarkLight1337

Normally there shouldn't be a package called docs; otherwise it would cause many problems.

I first wanted to see if the tests could pass. Very sorry for using a lot of CI resources.

noooop avatar May 01 '25 14:05 noooop

Let's try the CI again

DarkLight1337 avatar May 02 '25 06:05 DarkLight1337

Let's try the CI again

QvQ

noooop avatar May 02 '25 06:05 noooop

Need to install mteb for model tests as well.

DarkLight1337 avatar May 02 '25 08:05 DarkLight1337

@DarkLight1337

https://buildkite.com/vllm/ci/builds/19198

One last check: are the V1 Test and Speculative decoding test failures unrelated to this PR?

noooop avatar May 02 '25 09:05 noooop

Yeah, they are unrelated.

DarkLight1337 avatar May 02 '25 09:05 DarkLight1337

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @noooop.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

mergify[bot] avatar May 19 '25 08:05 mergify[bot]