Srikanth Srungarapu
I'll try debugging this on my end and post the solution here if I solve the issue. Thanks for the quick reply!
Currently, benchllama only supports evaluating FIM (fill-in-the-middle) models. Will try to add eval support for instruct fine-tuned models soon. Hope this clears up the confusion.
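For context, a FIM model completes code given both the text before and after the cursor, rather than following a chat-style instruction. Below is a minimal sketch of how a FIM prompt is assembled; the sentinel token names here are placeholders for illustration, as each model family (deepseek-coder, codellama, etc.) defines its own exact tokens, so check your model's documentation.

```python
def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-the-middle prompt.

    The model is expected to generate the code that belongs between
    `prefix` and `suffix`. Sentinel tokens below are placeholders;
    real models use their own documented tokens.
    """
    return f"<|fim_begin|>{prefix}<|fim_hole|>{suffix}<|fim_end|>"


# Example: ask the model to fill in the body of a function.
prompt = build_fim_prompt("def add(a, b):\n    return ", "\n")
print(prompt)
```

An instruct-tuned model, by contrast, expects a natural-language request, which is why the two kinds of models need different evaluation harnesses.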
Hey, could you please check the logs via the 'Privy: Show Logs' option in the command palette? Check whether the requests are being sent to the Ollama...
For auto-completion, could you please try with the base models shown in the image?
I wasn't able to reproduce this issue. A few things to check: 1. Are there any other installed extensions that also provide autocompletion? If yes, please try disabling them....
Hi, Thank you for trying the extension. You can troubleshoot this by using the manual mode available for the autocomplete feature. Once you enable the manual mode by choosing "manual"...
One of our users faced the same issue. It happened because the wrong model variant was picked for autocompletion. Please use `deepseek-coder:{1.3b or 6.7b or 33b}-base` or `codellama:{7b or...
Thanks a lot for the detailed write-up. This helps us assist other users too! As for autocomplete suggestions, please use only `base` models like `deepseek-coder:1.3b-base`, `deepseek-coder:6.7b-base` etc. The model `deepseek-coder:latest`...
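As a quick sketch, pulling one of the recommended base variants looks like this (model sizes are examples; pick whichever fits your hardware):

```shell
# Pull a base (non-instruct) variant for autocompletion.
# deepseek-coder:latest resolves to an instruct variant, which is
# why it doesn't work well for fill-in-the-middle completion.
ollama pull deepseek-coder:1.3b-base
```

The `-base` suffix is the important part: tags without it generally point at instruct-tuned variants.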
> I guess these LLM models can't really digest huge Perl files, and also, I guess, Privy hangs while waiting for the LLM, thus blocking VSCodium's autocomplete suggestions box. This...
We ran benchmarks for the `gemma-7b` model against the existing ones, but the results weren't encouraging :( For autocompletion, the recommended model is `deepseek-coder:{1.3b or 6.7b or 33b}-base`. I'm currently (16GB...