Create documentation of the guidance features supported by various models / APIs
**Is your feature request related to a problem? Please describe.**
This library looks amazing, but I'm having trouble understanding which features I can expect to benefit from depending on the LLM (or model provider) I use. I've looked through the issues but it's still not clear to me.
**Describe the solution you'd like**
An addition to the README: a table listing guidance features as rows, models / model providers as columns, and ✅ or ❌ as values. For example:
| Feature | OpenAI chat models (gpt-3.5-turbo, gpt-4) | OpenAI other models (text-davinci-003) | Hugging Face models |
|---|---|---|---|
| Partial completions in assistant role | ❌ | N/A | ✅ |
Or some other way of documenting what is & isn't supported depending on the model used 🙂
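For context, a minimal sketch of what "partial completions in the assistant role" means in guidance's handlebars-style templates: the assistant's reply is seeded with a prefix and the model only fills in the rest. The backend class usage and model id below are illustrative assumptions, not a statement of what each provider supports.

```python
import guidance

# Assumption for illustration: a chat-capable Hugging Face model loaded via the
# Transformers backend; the model id is a placeholder. Backends that expose raw
# completion can continue a pre-seeded assistant message, whereas the OpenAI
# chat API cannot (the ❌ in the example table above).
guidance.llm = guidance.llms.Transformers("some/chat-model")  # placeholder id

program = guidance("""
{{#user~}}
Name a primary color.
{{~/user}}
{{#assistant~}}
The color I pick is {{gen 'color' max_tokens=3}}
{{~/assistant}}
""")

out = program()
print(out["color"])  # only the generated continuation is captured
```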
**Describe alternatives you've considered**
lmql / no alternative
Thank you for your great work!
Thanks, this is a good idea. We'll do something like this when we revamp the README.