woheller69

417 comments by woheller69

Try with TotalCommander.

Then maybe it is not possible. Maybe someday someone will have an idea.

I don't think I will implement this anytime soon, but we can keep it as an idea for a future enhancement... Is there a possibility to extend the scheduling for...

You get the current day if you switch on debug, so you can compare at the end of the day.

https://github.com/Maximilian-Winter/llama-cpp-agent/issues/54 That is probably related to my finding that llama-cpp-python with llama-cpp-agent is slower than gpt4all on follow-up prompts. The first prompt is fast.

I also notice that the second prompt is slower than, e.g., in gpt4all with an otherwise identical setup.

Maybe the \n(s) are stripped off. See https://github.com/Maximilian-Winter/llama-cpp-agent/pull/73

> +1, I have the same issue for version 0.2.82

Check your prompt template.

Why not add it now and improve it later if a better solution comes up? For now this would work in most cases.