Rémi Louf
> * **Important:** In the resolved issues, the incorrect outputs are often caused by not applying a chat template. Let's help users get great completions. Examples should include chat templates,...
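One way to apply a chat template with `transformers` before handing the prompt to the generator (a hedged sketch; the model name is only an example):

```python
# Sketch: build the prompt with the model's chat template instead of
# passing a raw string to an instruct-tuned model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "user", "content": "Is regex-guided generation useful? Answer Yes or No."},
]

# Render the conversation with the special tokens the model was trained on.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
# `prompt` can then be passed to the generator as usual.
```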
What do you mean by full response?
Couldn't you just implement a [custom solver](https://inspect.ai-safety-institute.org.uk/solvers.html) for InspectAI?
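A rough sketch of what that could look like, based on my reading of the Inspect docs (the exact `TaskState` fields and hooks may differ):

```python
# Hedged sketch of a custom Inspect solver.
from inspect_ai.solver import Generate, TaskState, solver


@solver
def custom_solver():
    async def solve(state: TaskState, generate: Generate) -> TaskState:
        # Inspect or rewrite the task state here (e.g. adjust the prompt,
        # or plug in a structured-generation backend), then call generate.
        state = await generate(state)
        return state

    return solve
```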
Is this good to review?
First thoughts on how this could be achieved. Let's consider the prompt `This is a prompt `.

1. We loop over the entire vocabulary and use partial matching to determine...
Here is a quick outline of a solution that uses `outlines.text.parsing.find_partial_matches` and only loops through the vocabulary once:

```python
from outlines.text.parsing import find_partial_matches
from transformers import AutoTokenizer
import interegular
import ...
```
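For reference, a minimal self-contained sketch of that single-pass idea. Hedged: the exact signature and return value of `find_partial_matches` may differ between versions; I assume here that it returns an empty set when a string cannot start or extend a match of the pattern.

```python
# Sketch: keep only the vocabulary tokens that could continue a regex match.
from outlines.text.parsing import find_partial_matches
from transformers import AutoTokenizer
import interegular

tokenizer = AutoTokenizer.from_pretrained("gpt2")
fsm = interegular.parse_pattern(r"(Yes|No)").to_fsm()

# Single loop over the vocabulary: a token is allowed if it can start
# (or extend) a match of the regex.
allowed_token_ids = [
    token_id
    for token, token_id in tokenizer.get_vocab().items()
    if find_partial_matches(fsm, tokenizer.convert_tokens_to_string([token]))
]
```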
Another fun one:

```python
import outlines.models as models
import outlines.text.generate as generate

model = models.transformers("gpt2")

prompt = "Is regex-guided generation useful? "
unguided = generate.continuation(model, max_tokens=30)(prompt)
guided = generate.regex(model, r"(Yes|No)", ...
```
I don't think that Pydantic exposes the kind of conditional logic that JSON Schema supports. However, we can try a different approach by unrolling the conditional dependency using the [Regex...
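To make the unrolling concrete, a hedged sketch with a made-up two-branch conditional (the schema and field names are placeholders; it reuses the `generate.regex` interface from the example above): each branch of the conditional becomes one alternative of a single regex.

```python
# Sketch: unroll "if kind == circle then require radius, else require width"
# into one regex alternation and guide generation with it.
import outlines.models as models
import outlines.text.generate as generate

model = models.transformers("gpt2")

circle = r'\{"kind": "circle", "radius": [0-9]+(\.[0-9]+)?\}'
rectangle = r'\{"kind": "rectangle", "width": [0-9]+(\.[0-9]+)?\}'
unrolled = f"({circle}|{rectangle})"

prompt = "Describe the shape as JSON: "
guided = generate.regex(model, unrolled, max_tokens=50)(prompt)
```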
1. Yes, sorry for not specifying this.
2. Ideally you would add one or two examples in the prompt, along the lines of the sketch below.
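A hedged sketch of what those in-prompt examples could look like (the posts and categories are placeholders):

```python
# Sketch: prepend one or two worked examples so the model sees the
# expected format before the real input.
examples = """Post: "The new update broke my workflow."
Category: complaint

Post: "Thanks, the fix works perfectly!"
Category: praise
"""

post = "Where can I find the documentation?"
prompt = examples + f'Post: "{post}"\nCategory:'
```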
With the latest release you should be able to write:

```python
from outlines.types import either

information_query = "Enough information to categorize the post? "
conclusion = "Final coding of post...
```