yuisheaven
I also consider time tracking a very important feature and would like to contribute if possible. I am quite sad, as I feel the Nextcloud team does not really...
As far as I can see, it is also the only reason why Ollama cannot be used as the local AI provider for Nextcloud... I really hope this is soon...
I did not have any problems at first, but when I re-initialized my devices because I was switching to a multi-account environment, the Tapo and Kasa Hubs also updated their firmware...
Same here, although if I spawn a PowerShell window and type 'claude', it works out of the box...
I think the better option would be to add an api_base param to the openai provider, so you could point it at LM Studio's local inference endpoint but could also use...
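To illustrate the idea: a minimal sketch of how an api_base parameter could switch an OpenAI-style chat request between LM Studio's local server and the hosted API. This is not any project's actual provider code; the function names are hypothetical, and the default URL assumes LM Studio's usual local server address (http://localhost:1234/v1).

```python
import json
import urllib.request


def endpoint(api_base: str) -> str:
    # Normalize the base URL so both ".../v1" and ".../v1/" work.
    return api_base.rstrip("/") + "/chat/completions"


def chat_completion(prompt: str,
                    api_base: str = "http://localhost:1234/v1",  # assumed LM Studio default
                    model: str = "local-model",
                    api_key: str = "not-needed") -> str:
    # The same OpenAI-compatible request shape works against
    # https://api.openai.com/v1 when api_base points there and a
    # real API key is supplied.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        endpoint(api_base),
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Since LM Studio exposes an OpenAI-compatible API, only the base URL (and a dummy key) changes between the local and hosted case.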