Giorgio Robino
Please open the issue.
> It seems like there are a number of issues with many of the DX7II / TX802 files:
>
> **alienazi.syx** is a bundle of sysexs. I extracted the two...
To count tokens, maybe you could add a `size` method to the LLM. BTW, the token count depends on the LLM implementation; for OpenAI models maybe you could use their...
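Just to illustrate the idea, here is a hedged sketch of what such a `size` helper could look like. The naive whitespace tokenizer below is purely a stand-in (real counts need the model's own tokenizer, e.g. tiktoken for OpenAI models or the HF tokenizer for Hub models), which is exactly why the count depends on the LLM implementation:

```python
# Hypothetical sketch of a token-counting helper for an LLM wrapper.
# The default tokenizer here (whitespace split) is only a crude
# approximation; a real implementation would plug in the model's own
# tokenizer (tiktoken for OpenAI, the HF tokenizer for Hub models).
from typing import Callable, List


def make_size(tokenize: Callable[[str], List[str]]) -> Callable[[str], int]:
    """Build a `size(text) -> token count` function from any tokenizer."""
    return lambda text: len(tokenize(text))


# Naive default: split on whitespace (rough stand-in for a real tokenizer).
size = make_size(str.split)
print(size("How many tokens is this prompt?"))  # → 6
```

Swapping in a model-specific tokenizer would only change the `tokenize` callable, not the interface.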
BTW: https://langchain.readthedocs.io/en/latest/modules/llms/integrations/huggingface_hub.html
@dosubot your feedback has been useful. Good news! The solution was to just call the `invoke` method explicitly for the LLM completion:

```python
res = llm.invoke(messages)
```

I solved it by rereading...
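For context, a minimal sketch of the `invoke` pattern with a stub in place of a real model (the `StubLLM` class and its echo behavior are hypothetical; a real app would use an actual chat model instance):

```python
# Stub standing in for a real LLM, just to show the invoke call shape.
# StubLLM is hypothetical; a real app would call invoke on a chat model.
class StubLLM:
    def invoke(self, messages):
        # Echo the last user message, standing in for a real completion.
        return f"echo: {messages[-1]['content']}"


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
llm = StubLLM()
res = llm.invoke(messages)
print(res)  # → echo: Hello!
```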
I looked at the tracing/debug system documentation; nevertheless, a minimal requirement could be to just have some "very verbose" flag for LLMs and/or chains, to print out the...
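A minimal sketch of the kind of "very verbose" behavior I mean: a hypothetical wrapper that prints every prompt and completion around an LLM call (the names here are illustrative, not actual LangChain API):

```python
# Hypothetical "very verbose" wrapper: print every prompt/completion pair
# around an LLM call. Names are illustrative, not actual LangChain API.
def very_verbose(llm_call):
    def wrapped(prompt):
        print(f">>> PROMPT:\n{prompt}")
        completion = llm_call(prompt)
        print(f"<<< COMPLETION:\n{completion}")
        return completion
    return wrapped


# Stub standing in for a real model call.
@very_verbose
def fake_llm(prompt):
    return prompt.upper()


fake_llm("hello")  # prints the prompt, then the completion "HELLO"
```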
Thanks Harrison,

- I set the LLM verbose flag to true, but I don't see the LLM prompt+completion printed out.
- Also, the LLMChain verbose flag seems to do nothing?!...
Thanks. The workaround works, but yes I think it's a bug.
Thanks! I'd add a note on the functional meaning of "verbose": when applied to an LLM instance, the expected behavior (in my mind) is to show ALL interactions (prompt+completion)...
Well, it could be a way, but currently, when you set `verbose=True`, I see these different cases:

- `llm` => does nothing (a bug?)
- `chain` => does nothing...