Jan Badertscher
> same for me on Ubuntu WSL2: I reinstalled the webui and forgot to [build GPTQ-for-LLaMa](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model):
> ```
> sudo apt install build-essential
> conda activate textgen
> conda install -c conda-forge cudatoolkit-dev
> mkdir...
> ```
I have the exact same problem just running `docker compose up -d` without any config changes. The default redis port in all config files `conf/service_conf.yaml`, `docker/.env`, `docker/docker-compose.yml`, `docker/docker-compose-base.yml`, `docker/service_conf.yaml` is...
The only maintained fork I found that implements useful fixes and improvements: https://github.com/drudilorenzo/generative_agents
@lramos15 Thanks for answering. 1. `We are getting the capabilities from the endpoint.` Does this mean GitHub Copilot currently fetches capabilities and only exposes models with tool-calling capabilities to...
@lramos15 Thanks for answering my previous questions—really helpful! 1. I installed the latest Windows Insiders Build, checked for updates, added one Ollama model and one OpenRouter model. Neither model shows...
@lramos15 Sorry, I ninja edited my previous answer: > 1. I installed the latest Windows Insiders Build, checked for updates, added one Ollama model and one OpenRouter model. Neither model...
Oh, I get it now; you said it at the beginning: GitHub Copilot actually respects the capabilities and only allows you to use models with properly set capabilities. If we...
In summary, the experience could be improved. I can think of the following improvements:

- When adding custom LLM Providers / Models, ask the user if the model supports Tool...
Relevant: https://github.com/microsoft/vscode-copilot-release/issues/7289
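The capability-based filtering discussed above can be sketched in a few lines. This is illustrative only: the model list and the `capabilities` field are assumptions loosely modeled on what model endpoints like Ollama's `/api/show` report, not a documented Copilot API.

```python
# Sketch: only expose models that declare tool-calling support.
# The model entries and "capabilities" field are illustrative assumptions,
# not a real Copilot or provider API response.
models = [
    {"name": "llama3.2", "capabilities": ["completion"]},
    {"name": "qwen2.5-coder", "capabilities": ["completion", "tools"]},
]

def supports_tools(model: dict) -> bool:
    """Return True if the model entry declares tool-calling capability."""
    return "tools" in model.get("capabilities", [])

tool_models = [m["name"] for m in models if supports_tools(m)]
print(tool_models)  # only models advertising "tools" remain
```

If Copilot filters this way, a model missing the capability flag would silently never appear, which is why surfacing the check to the user (as suggested above) would help.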
myconfig.toml

```toml
[completion]
provider = "litellm"
concurrent_request_limit = 16

[completion.generation_config]
model = "openai/llama3.2" # add your model name here
temperature = 0.1
top_p = 1
max_tokens_to_sample = 1_024
stream = true
```
...