Wingston SHaron

Results: 19 comments by Wingston SHaron

Did you manage to get this working? I'm having an error while installing the CUDA kernel on that branch.

I'd be interested to see if this https://huggingface.co/chavinlo/toolpaca works well with the agent tooling prompts in langchain. I'm not sure though; I was trying to read up on the Toolformer papers last...

So I did manage to do some testing with `python server.py --model chavinlo_toolpaca --listen --load-in-8bit --agent`. Here are the logs - https://pastebin.com/nQC1ptxB. 1 - it did manage to end up...

Yea hmm, I see. Though the langchain agent kinda expects the ReAct format.. I think the parsing and injection of content is more complicated in Toolformer.. I'm going to...
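For reference, this is roughly the trace format the default langchain ReAct agent expects the model to produce; the tool name "Search" and the wording below are placeholders, not taken from the actual toolpaca runs:

```python
# a rough sketch of the ReAct-style trace langchain's default agent parser expects;
# "Search" is a placeholder tool name, not one from the actual setup
REACT_TRACE_EXAMPLE = """\
Question: the user's question
Thought: I should look this up with a tool
Action: Search
Action Input: the query to send to the tool
Observation: the tool's result (injected back by langchain, not generated)
Thought: I now know the final answer
Final Answer: the answer to the user's question
"""
```

Toolformer-style models instead inline the tool call inside the generated text (an API-call token mid-sentence), which is why the parsing and injection story is different there.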

Will def try tmrw; I was investigating the looping thing - can you try passing `max_iterations=2, early_stopping_method="generate"` to `initialize_agent`? Regarding the looping behaviour, it's a thing in the langchain agent code.....
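A minimal sketch of wiring those two kwargs into `initialize_agent`, assuming `llm` and `tools` are already set up; the zero-shot ReAct agent type here is an assumption, the actual agent setup lives in the fork:

```python
# minimal sketch, assuming `llm` and `tools` are already defined for langchain;
# the agent type is an assumption, not necessarily what the fork uses
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=2,                  # stop the Thought/Action loop after two steps
    early_stopping_method="generate",  # ask the LLM for a final answer instead of bailing out
    verbose=True,
)
```

Both kwargs are forwarded to the underlying `AgentExecutor`, which is where the loop limit and the early-stopping behaviour actually live.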

Yea, I was thinking about it too, but there are so many projects that do something similar. The agent framework is really powerful for being able to do more complex...

I'm not so concerned about the LLM breaking out, to be honest. Well, maybe the llama.cpp ones.. But the larger ones that need a GPU will, I think, try their...

More testing results! The vicuna agent works quite well actually (even when not using Vicuna), and it's very easy to customize the prompts - I trained a small LoRA with the...
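On prompt customization: the fork's vicuna agent has its own prompt handling, but as a generic illustration with stock langchain, the ReAct prompt wrapper can be swapped out through `agent_kwargs`; the prefix/suffix strings below are placeholders:

```python
# a rough sketch of customizing the agent prompt in stock langchain; the fork's
# vicuna agent handles prompts its own way, so treat these kwargs as illustrative only
from langchain.agents import initialize_agent, AgentType

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_kwargs={
        # prefix/suffix replace the default ReAct prompt wrapper;
        # the suffix must keep the {input} and {agent_scratchpad} placeholders
        "prefix": "You are a helpful assistant with access to the following tools:",
        "suffix": "Begin!\n\nQuestion: {input}\n{agent_scratchpad}",
    },
)
```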

Yea, https://huggingface.co/Wingie/lora_tbyw_v6/blob/main/datasets/react.txt is what I trained the LoRA with, using the training tab here.
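For loading that trained adapter outside the webui, a minimal sketch with transformers + peft; the base model id below is an assumption, since it isn't stated here:

```python
# minimal sketch of loading the LoRA adapter with peft outside the webui;
# the base model id is an assumption -- swap in whatever the LoRA was trained on
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "decapoda-research/llama-7b-hf"  # assumption: LLaMA-7B base
tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    load_in_8bit=True,   # matches --load-in-8bit; needs bitsandbytes
    device_map="auto",   # needs accelerate
)
model = PeftModel.from_pretrained(base_model, "Wingie/lora_tbyw_v6")  # applies the adapter weights
```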

We have some basic langchain integration here: https://github.com/seijihariki/text-generation-webui/blob/langchain/modules/lcagent/vicunaagent.py. The problem I'm facing is that the 13B/7B models find it really hard to follow the prompt format for langchain.. I keep getting...
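One possible mitigation when small models drift off the expected format (not the approach in vicunaagent.py, just an illustration) is a lenient output parser that falls back instead of raising when the Action/Action Input lines aren't clean:

```python
# a rough sketch of a lenient ReAct output parser for langchain; this is NOT the
# parser from vicunaagent.py, just one way to cope with models that don't follow
# the Action/Action Input format exactly
import re
from typing import Union

from langchain.agents import AgentOutputParser
from langchain.schema import AgentAction, AgentFinish


class LenientReActParser(AgentOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        # finished: return whatever comes after "Final Answer:"
        if "Final Answer:" in text:
            return AgentFinish({"output": text.split("Final Answer:")[-1].strip()}, text)
        # well-formed tool call: extract tool name and input
        match = re.search(r"Action\s*:\s*(.*?)\nAction\s*Input\s*:\s*(.*)", text, re.DOTALL)
        if match:
            return AgentAction(match.group(1).strip(), match.group(2).strip(), text)
        # otherwise fall back to treating the raw text as the answer
        # instead of raising an OutputParserException
        return AgentFinish({"output": text.strip()}, text)
```

With stock langchain this can usually be handed to the agent via `agent_kwargs={"output_parser": LenientReActParser()}` in `initialize_agent`.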