Jati H

7 comments by Jati H

Hi @marcklingen, reporting in that I see the same duplication of traces when using the `observe` decorator together with the OpenAI SDK wrapper. Is this expected behaviour? Anyhow, simple enough...
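One plausible cause of duplicated traces is two instrumentation layers each recording the same call: the decorator wraps the function, and the SDK wrapper independently logs the request it makes. A minimal stand-alone sketch of that failure mode (all names here are hypothetical stand-ins, not Langfuse internals):

```python
# Illustrative only: two instrumentation layers each log the same call,
# so one logical request yields two trace entries.
traces = []

def observe(fn):
    """Decorator layer: records a trace around the wrapped call."""
    def wrapper(*args, **kwargs):
        traces.append(("decorator", fn.__name__))
        return fn(*args, **kwargs)
    return wrapper

class WrappedClient:
    """SDK-wrapper layer: also records a trace for every request."""
    def complete(self, prompt):
        traces.append(("sdk_wrapper", "complete"))
        return f"echo: {prompt}"

client = WrappedClient()

@observe
def generate(prompt):
    return client.complete(prompt)

generate("hello")
# Both layers fired: traces now holds two entries for one call.
```

If the duplication is expected, the usual remedy is to instrument at only one layer, or have the inner layer attach its span to the outer trace instead of opening a new one.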

@ibehnam you know, this is inspiring me to try collaborative output from multiple models. My thinking goes beyond maker-checker interactions: for a multi-step generation we can have llm1 and...
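The maker-checker interaction mentioned above can be sketched with two stand-in functions playing llm1 and llm2; the function names and loop structure are my own assumptions, not a reference to any particular library:

```python
# Hedged sketch of a maker-checker loop between two "models".
# maker() and checker() are stubs standing in for real LLM calls.

def maker(prompt: str) -> str:
    # Stand-in for llm1: produces a candidate answer.
    return prompt.upper()

def checker(draft: str) -> bool:
    # Stand-in for llm2: approves drafts that meet some criterion.
    return draft.isupper()

def collaborate(prompt: str, max_rounds: int = 3) -> str:
    """Ask the maker for drafts until the checker approves one."""
    for _ in range(max_rounds):
        draft = maker(prompt)
        if checker(draft):
            return draft
    raise RuntimeError("no draft passed the check")

result = collaborate("hello")  # "HELLO"
```

A multi-step variant would chain more stages, each model refining or vetoing the previous model's output before the final answer is emitted.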

@wjn0 have you figured out how to do this yet? Not sure if this is the same thing as you're asking for, but I found a hacky way that works...

I'm seeing the same error after reinstalling my llama-cpp-python library; there were no issues before.

@Harsha-Nori guidance 0.1.10, llama_cpp_python 0.2.52

@Fahmie23 @Harsha-Nori I finally got this to work. I had to make a few changes to `_llama_cpp.py`:

```python
class _LlamaBatchContext:
    def __init__(self, n_batch, n_ctx):
        self._llama_batch_free = llama_cpp.llama_batch_free
        self.batch = llama_cpp.llama_batch_init(n_batch, 0, ...
```
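The snippet above follows a common pattern for wrapping C-style allocators: allocate in `__init__`, and capture the matching free function as an attribute so it is still reachable at teardown. A pure-Python sketch of that pattern (the names here are hypothetical, not the llama_cpp API):

```python
# Illustrative pattern only: pair a C-style init with its matching free
# so the batch is released even if the caller forgets to clean up.
freed = []

def fake_batch_init(n_batch):
    # Stand-in for a C allocator returning an opaque batch handle.
    return {"capacity": n_batch}

def fake_batch_free(batch):
    # Stand-in for the matching C deallocator.
    freed.append(batch)

class BatchContext:
    def __init__(self, n_batch):
        # Capture the free function at construction time, mirroring
        # self._llama_batch_free above, so it is reachable even during
        # interpreter shutdown when module globals may be gone.
        self._free = fake_batch_free
        self.batch = fake_batch_init(n_batch)

    def __del__(self):
        if self.batch is not None:
            self._free(self.batch)
            self.batch = None

ctx = BatchContext(512)
del ctx  # in CPython, the refcount drop triggers __del__ and the free
```

Storing the free function on `self` is the detail that matters: looking it up via the module global inside `__del__` can fail late in interpreter shutdown.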

Good call on identifying the compatible llama-cpp version, @danielvarab; you were correct. I eventually ran into issues when loading the chat model (it seems the tokenizer was not set correctly) and ended up reinstalling...