[1/n][CI] Load models in CI from S3 instead of HF
- Load some models from an S3 path with `runai-model-streamer` instead of HF by default (only a few test jobs so far, listed below)
- Add `runai-model-streamer` and `...-s3` to the CI dependencies
- Allow pulling more files from S3 than just `*config.json`
- Strip the leading `/` from the file path when determining the destination file path, so it doesn't default to `/...`, which the machine doesn't have write access to (see the sketch after this list)
- Append a `/` to the model's S3 path if there isn't one at the end, mainly to prevent cases where two models in the S3 bucket match the same prefix and confuse the model loader
- Don't look for files in the HF repo if the model path starts with `/`, which means it is an S3 path and the model name has already been converted to `S3Model().dir`, which looks like `/tmp/tmp3123..`
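For illustration, here is a minimal sketch of the path handling described above; the helper names and the `S3_PREFIX` constant are hypothetical, not the actual loader code:

```python
import os
import tempfile

S3_PREFIX = "s3://"  # hypothetical constant; the real loader may name this differently


def normalize_s3_model_path(model: str) -> str:
    """Append a trailing '/' so a prefix like 's3://bucket/llama-7b' cannot
    also match 's3://bucket/llama-7b-chat' when listing objects."""
    if model.startswith(S3_PREFIX) and not model.endswith("/"):
        model += "/"
    return model


def destination_path(local_dir: str, s3_key: str) -> str:
    """Strip the leading '/' from the object key before joining; otherwise
    os.path.join discards local_dir and tries to write under '/'."""
    return os.path.join(local_dir, s3_key.lstrip("/"))


def is_local_s3_download(model: str) -> bool:
    """Paths starting with '/' are already-downloaded S3 models (e.g. a
    temporary directory like /tmp/tmp3123..), so skip the HF repo lookup."""
    return model.startswith("/")


# Example:
local_dir = tempfile.mkdtemp()
print(normalize_s3_model_path("s3://vllm-ci-model-weights/llama-7b"))
print(destination_path(local_dir, "/llama-7b/config.json"))
print(is_local_s3_download(local_dir))
```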
Test jobs with models loaded from S3 (not all test files, just as many as I could):
- Entrypoints `llm/` (I haven't done so for the `openai/` ones yet since they are set up with the remote server and the S3 model path somehow messed things up)
- Basic correctness
- Basic models
- Metrics & Tracing
- Async Engine, Inputs, Utils, Worker Test
👋 Hi! Thank you for contributing to the vLLM project.
💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.
Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.
🚀
This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @khluu.
https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork
Instead of hard-coding S3 paths, what if we used an environment variable (`VLLM_CI` or something) which, if set, will prepend `s3://vllm-ci-model-weights/` to the `model` and set `load_format="runai_streamer"`?
hmm, we can probably do it inside the `LLM` class?
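For illustration, a minimal sketch of the environment-variable approach discussed above; the `resolve_ci_model` helper and the placeholder model name are hypothetical, while the `VLLM_CI` variable, the bucket, and `load_format="runai_streamer"` come from the comment:

```python
import os

from vllm import LLM

CI_BUCKET = "s3://vllm-ci-model-weights/"


def resolve_ci_model(model: str) -> dict:
    """If VLLM_CI is set, point the model at the CI S3 bucket and stream the
    weights with the Run:ai streamer instead of downloading from HF."""
    if os.environ.get("VLLM_CI"):
        return {"model": CI_BUCKET + model, "load_format": "runai_streamer"}
    return {"model": model}


# Usage, e.g. from a test helper (the same logic could instead live in the
# LLM constructor, as suggested in the reply above):
llm = LLM(**resolve_ci_model("facebook/opt-125m"), enforce_eager=True)
```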