Updated docs for new features.
Hello everyone.
Looks like new features have been added to this lib, like using a separate LLM server backend for generation (correct me if I'm wrong). However, I can't find any documentation about this.
Are there going to be any documentation updates?
Currently, the documentation is the notebook:
https://github.com/guidance-ai/guidance/blob/main/notebooks/server_anachronism.ipynb
(and also test_server.py). This code demonstrates what is possible, but it is probably not what you would use directly in production.
@riedgar-ms
I've looked at this code and it raises even more questions. Is it only possible to make a server using this library's built-in functionality? Is it possible to use different remote backends like vLLM, Ollama, or Text Generation Web UI?
You can hook the library into whichever server you want.
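To illustrate the idea of "hooking into whichever server you want": many of the backends mentioned above (vLLM, Text Generation Web UI in OpenAI mode) expose an OpenAI-compatible HTTP API, so any client that can POST to `/v1/completions` can drive them. The sketch below is not guidance's own API; it is a minimal stdlib-only illustration of talking to such a server, with the URL, model name, and port being assumptions about a locally running backend.

```python
import json
from urllib import request

def completion_request(base_url, model, prompt, max_tokens=64):
    """Build an HTTP request for an OpenAI-compatible /v1/completions
    endpoint (e.g. a local vLLM server; base_url is an assumption)."""
    payload = {"model": model, "prompt": prompt, "max_tokens": max_tokens}
    return request.Request(
        base_url.rstrip("/") + "/v1/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def complete(base_url, model, prompt, **kwargs):
    """Send the request and return the first completion's text."""
    with request.urlopen(completion_request(base_url, model, prompt, **kwargs)) as resp:
        return json.load(resp)["choices"][0]["text"]

# Example (requires a server actually listening on this port):
# text = complete("http://localhost:8000", "my-model", "Hello, ")
```

A library-side integration would wrap something like `complete` behind the library's model interface; the transport itself is just this kind of JSON-over-HTTP call.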