Clay
Hi @qgallouedec, I'd love to contribute to this enhancement! I noticed that this issue has been open for three months, and I'd like to help bring it forward. Is there...
I apologize if I made a mistake. I am using vLLM 0.5.4 and I set the environment variable `VLLM_ALLOW_LONG_MAX_MODEL_LEN=1`, and now my vLLM is working.
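For anyone hitting the same max-model-len check, a minimal sketch of the workaround described above (the flag name is from vLLM 0.5.4; the actual server launch command is omitted here):

```shell
# Set the override before starting vLLM, so the engine skips the
# strict max_model_len validation (per the comment above).
export VLLM_ALLOW_LONG_MAX_MODEL_LEN=1
echo "VLLM_ALLOW_LONG_MAX_MODEL_LEN=$VLLM_ALLOW_LONG_MAX_MODEL_LEN"
```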
I encountered the same problem when loading gemma-2; my GPU is a V100.
Thanks, you are correct. I reviewed my deployment environment and found that I had not allocated enough memory.
Hi, I’d like to help with this issue if it's still open! I’m happy to contribute and will do my best to handle it smoothly. Thanks!
Hi @qgallouedec, thank you so much for taking the time to review my PR. I really appreciate your suggestions. I'll replace `pytest.raises(...)` with `self.assertRaises(...)` as you recommended, and will also...
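For context, the swap from the pytest-style to the unittest-style exception check looks like this (the test case below is a hypothetical illustration, not from the PR):

```python
import unittest


class TestExample(unittest.TestCase):
    def test_raises(self):
        # unittest-style replacement for `with pytest.raises(ValueError): ...`
        with self.assertRaises(ValueError):
            int("not a number")


# Run the case programmatically to confirm the assertion style works.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestExample)
)
```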
Hi @qgallouedec, I've noticed that the `tests (3.11, windows-latest)` job failed with the following errors:

```
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_0_standard_prompt_only - ValueError: Cannot find pytorch_model.bin or model.safetensors in C:\Users\runneradmin\.cache\huggingface\hub\llm-blender\PairRM
FAILED tests/test_nash_md_trainer.py::TestNashMDTrainer::test_nash_md_trainer_judge_training_1_conversational_prompt_only -...
```