jupyter-ai
Add vLLM support?
Problem/Solution
It has been great to see Ollama added as a first-class option in https://github.com/jupyterlab/jupyter-ai/pull/646; this has made it easy to access a huge variety of models and has been working very well for us.
I increasingly see groups and university providers using vLLM for this as well. I'm out of my depth here, but my understanding is that vLLM is better suited when a group serves a local model to multiple users (e.g. from a shared GPU cluster, rather than everyone running an independent Ollama instance). It gets passing mention in a few other threads here too. I think supporting more providers is all to the good, and I would love to see support for vLLM as a backend similar to the existing Ollama support. Then again, maybe I'm missing some details and that is unnecessary: since vLLM exposes an OpenAI-compatible API, it looks like it might be possible to simply point the existing OpenAI configuration at an alternative endpoint to reach a vLLM server, roughly as in the sketch below.
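For illustration only, here is a minimal sketch of what I mean, using LangChain (which jupyter-ai providers are built on) rather than jupyter-ai itself. The model name, URL, and launch command are placeholders/assumptions, not something confirmed to work inside Jupyter AI's provider config:

```python
# Sketch: reaching a vLLM server through its OpenAI-compatible endpoint.
# Assumes vLLM was started with something like:
#   vllm serve mistralai/Mistral-7B-Instruct-v0.3
# which serves an OpenAI-compatible API at http://localhost:8000/v1 by default.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="mistralai/Mistral-7B-Instruct-v0.3",  # must match the model vLLM is serving
    base_url="http://localhost:8000/v1",         # vLLM's OpenAI-compatible endpoint
    api_key="EMPTY",                             # vLLM ignores the key unless --api-key is set
)

print(llm.invoke("Say hello from a vLLM-backed model.").content)
```

If something equivalent already works by overriding the base URL in the existing OpenAI provider settings, then perhaps "vLLM support" is mostly a matter of documentation (or a named provider entry) rather than a new backend, but I'd appreciate confirmation from someone who knows the internals better.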