M0eJay
> Is there an easy way to test all of the functionality in one go, or are there quite a lot of chained merges below?

Not that I can think of...
Looks like the same change was merged a bit later [here](https://github.com/abetlen/llama-cpp-python/commit/cdf59768f52cbf3e54bfe2877d0e5cd3049c04a6#diff-ed675dab7bf6bd468dd59d2b195d999021715f673252e51122d25edd8b9ade1e), so this should now work with the latest llama-cpp-python bindings.
I'm having the same issue; it works with the GPT-3.5 model.