
Replace Copilot local AI

Results: 35 llama-coder issues

I'm running Arch Linux and spent a bit trying to install the extension, but it would never appear in the extensions list nor could I get the command `ext install...

Implement CodeGemma and FIM using the definition on [https://ollama.com/library/codegemma:latest](https://ollama.com/library/codegemma:latest)
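For context, CodeGemma's fill-in-middle (FIM) support works by wrapping the text before and after the cursor in special tokens, as described on the linked model page. A minimal sketch of assembling such a prompt (the token names are taken from the CodeGemma model definition; the exact request shape for sending it to Ollama is not shown here):

```python
def codegemma_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a fill-in-middle (FIM) prompt for CodeGemma.

    The model is expected to generate the code that belongs between
    `prefix` (text before the cursor) and `suffix` (text after it).
    """
    return (
        f"<|fim_prefix|>{prefix}"
        f"<|fim_suffix|>{suffix}"
        f"<|fim_middle|>"
    )

# Example: ask the model to complete the body of a function.
prompt = codegemma_fim_prompt("def add(a, b):\n    return ", "\n")
```

The resulting string would then be sent to the model as a raw prompt, with generation stopping at the model's end-of-turn/file-separator tokens.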

It would be nice to have a shortcut that actively tells the plugin "I want an autocompletion for the current line", instead of it doing autocomplete all the time....

Can you help me run llama3.1 with Ollama? How can I do this? ![image](https://github.com/user-attachments/assets/655a8e75-7bee-4a48-b123-08eba5b9bab4)

When I run a model like `codellama:34b-code-q6_K` it does seem to spin up my GPUs but then I end up with unusable output. Running latest ollama and extension version on...

I saw that Ollama does not support Windows yet.

Hi! Do you plan to create a version of llama-coder for the JetBrains IDE?

Hello, I hope this message finds you well. I am the maintainer of [llama-github](https://github.com/JetXu-LLM/llama-github), an open-source Python library designed to empower LLM Chatbots, AI Agents, and Auto-dev Solutions by providing...

Please create a Visual Studio version of the extension. Thanks.

I have downloaded the base model, but it's not working.