Nicholas Chammas
> Actually, the example is not even assigning a grammar to match to the tool. This surprises me: how is Guidance matching the `add` tool to occurrences of "add(x,...
I don't know if infix notation is supported by Guidance. For example, I fixed your original code by adding an explicit prefix call to `number()` to show the model how...
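For context, here is a rough sketch (not the exact code from that thread) of what an explicit prefix-style call built from `number()` can look like, using the grammar-composition style of guidance 0.1.x (`@guidance(stateless=True)`, `select`, `one_or_more`). The `add_call` helper and the model path are placeholders I'm introducing for illustration, not names from the original issue:

```python
import guidance
from guidance import models, select, one_or_more

@guidance(stateless=True)
def number(lm):
    # One or more digits, optionally preceded by a minus sign.
    digits = one_or_more(select(["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]))
    return lm + select([digits, "-" + digits])

@guidance(stateless=True)
def add_call(lm):
    # Constrain generation to the literal prefix form add(x, y),
    # rather than hoping the model produces a matching infix expression.
    return lm + "add(" + number() + ", " + number() + ")"

# Placeholder model path; exact guidance/llama-cpp-python versions matter here,
# as discussed elsewhere in this thread.
lm = models.LlamaCpp("path/to/model.gguf")
lm += "Compute 2 plus 3 as a tool call: " + add_call()
```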
Could we update the relevant docs for this feature as part of this PR? Or are the existing docs valid as-is?
@grundprinzip - Is this documentation no longer relevant?
Downgrading `llama-cpp-python` from 0.3.7 to 0.3.6 avoids the segmentation fault but results in another error:

```sh
$ python example.py
llama_new_context_with_model: n_ctx_per_seq (4096) < n_ctx_train (32768) -- the full capacity of...
```
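That `n_ctx_per_seq` line is a capacity notice rather than necessarily the error itself (the rest of the output is truncated above), but if the context size is the issue, it can be raised to match the model's training context. A minimal sketch, assuming guidance's `LlamaCpp` wrapper forwards extra keyword arguments such as `n_ctx` to `llama_cpp.Llama`; the model path is a placeholder:

```python
from guidance import models

lm = models.LlamaCpp(
    "path/to/model.gguf",  # placeholder path
    n_ctx=32768,           # match n_ctx_train from the log above instead of the 4096 used here
)
```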
> for me 0.2.90 worked

@adityaprakash-work Can you clarify what you mean by "worked"? Did you run exactly [the repro script I posted][1] with guidance 0.2.0 and llama-cpp-python 0.2.90?

[1]: ...
OK, I got this to work by downgrading both `guidance` _and_ `llama-cpp-python`. If I use the latest version of either library, there is a problem. Works:

```
guidance         0.1.16
llama_cpp_python ...
```
The fact that the 0.2.0 release is so broken suggests that there is some notable gap in the continuous integration tests for this project, as I assume the team behind...
> The error message you reported indicates that this is due to (1) not running this code in an `if __name__ == '__main__':` guard and (2) the default behavior of...
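To illustrate point (1) from that quote, here is a minimal sketch of the `if __name__ == '__main__':` guard: top-level work moves into a function so that any worker processes spawned by the libraries can re-import the file without re-running the model setup. The model path is a placeholder, and the `gen()` usage follows the guidance-style API assumed elsewhere in this thread:

```python
from guidance import models, gen

def main():
    lm = models.LlamaCpp("path/to/model.gguf")  # placeholder path
    lm += "Q: What is 2 + 3?\nA: " + gen(max_tokens=16)
    print(str(lm))

if __name__ == "__main__":
    # Without this guard, spawned subprocesses re-execute the module's
    # top-level code on import, which is what the quoted error points at.
    main()
```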
> Do you have ideas on how to either 1. use this model more efficiently (for inference only) or 2. have recommendations on smaller models to use instead?

There are...