Question regarding custom inference backend support (non-vLLM/SGLang)

Open yukiumi13 opened this issue 1 month ago • 0 comments

Thanks to the community for building such a mature RL framework.

I noticed in the documentation that VeRL currently relies on vLLM or SGLang as the inference backend. However, I am working on custom LLM architectures (e.g., dLLM-like models or variants with custom attention mechanisms) that are incompatible with standard vLLM/SGLang implementations.

Is it possible to decouple the framework from these specific inference backends? I wonder whether I could run rollouts with a plain `generate()` call or custom inference logic, so that I can still take advantage of verl's other features?
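
To illustrate what I mean by "simple `generate()`", here is a minimal sketch using plain Hugging Face transformers, with no vLLM/SGLang involved. The `NaiveRollout` class name and its `generate_sequences()` method are just my own placeholders, not verl's actual rollout interface:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

class NaiveRollout:
    """Hypothetical rollout backend built on model.generate() only."""

    def __init__(self, model_name: str, device: str = "cuda"):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        # Left padding so generated tokens directly follow each prompt.
        self.tokenizer.padding_side = "left"
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name, torch_dtype=torch.bfloat16
        ).to(device)
        self.device = device

    @torch.no_grad()
    def generate_sequences(self, prompts: list[str], max_new_tokens: int = 256):
        # Batch-tokenize the prompts and call the model's built-in
        # generate(). A custom attention variant only has to work here,
        # in eager PyTorch, rather than in a vLLM/SGLang kernel.
        inputs = self.tokenizer(
            prompts, return_tensors="pt", padding=True
        ).to(self.device)
        outputs = self.model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=1.0,
        )
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

If there is a supported extension point where something like this could be plugged in as the rollout worker, a pointer to it (or to any plans for one) would be much appreciated.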

yukiumi13 · Nov 26 '25