refact
AI Agent that handles engineering tasks end-to-end: integrates with developers’ tools, plans, executes, and iterates until it achieves a successful result.
Llama3 models are gated, so we need a few additions to run them on this server:
1. HF token in the UI and backend: #427
2. refact-lsp support
Done:
* added the HF token to the credentials page
* for each model we check its status and return repo_status in model_info (see /tab-host-models-get): it can be open, gated, not_found or...
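The repo_status values above could be derived from the HTTP status code the hub returns for a repo. A minimal sketch, assuming a hypothetical helper (`classify_repo_status` and its exact mapping are illustrative, not the actual refact implementation):

```python
def classify_repo_status(status_code: int) -> str:
    """Map an HTTP status code from the hub API to a repo_status value."""
    if status_code == 200:
        return "open"       # public repo, no token needed
    if status_code in (401, 403):
        return "gated"      # access requires an accepted license / HF token
    if status_code == 404:
        return "not_found"  # repo id does not exist (or is private)
    return "error"          # anything else: rate limit, server error, ...
```

With a valid HF token supplied from the credentials page, a gated repo would respond with 200 instead of 401/403 and classify as open.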
As of now, all passthrough models are capped at 16,000 tokens of context size. We could add a slider in the GUI to adjust this value.
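The slider logic could be a simple clamp against the current cap. A minimal sketch (constant and function names are assumptions for illustration):

```python
# Current hard cap on passthrough model context size.
PASSTHROUGH_CONTEXT_CAP = 16_000

def effective_context_size(slider_value: int) -> int:
    """Clamp the GUI slider value into the valid [1, cap] range."""
    return max(1, min(slider_value, PASSTHROUGH_CONTEXT_CAP))
```

Any value the user picks above the cap silently falls back to 16,000 tokens until the cap itself becomes configurable.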
Press "Run Filter" on the Finetune tab. **Expected result:** filtering works. **Actual result:** filtering does not work. In the logs: `REJECTED FILTER refact/code_contrast/format_2023q2/el_file.py Boolean value of Tensor with more than one...`
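The truncated log line looks like the classic ambiguous-truth-value error that both PyTorch and NumPy raise when a multi-element tensor/array is used directly as a boolean condition. A minimal NumPy reproduction of the same failure mode (illustrative only, not the refact filter code):

```python
import numpy as np

scores = np.array([0.2, 0.9, 0.4])

# Using a multi-element array directly as a condition raises ValueError:
# "The truth value of an array with more than one element is ambiguous."
try:
    if scores > 0.5:  # ambiguous: which element should decide the branch?
        pass
except ValueError:
    pass

# The fix is to reduce the elementwise comparison explicitly:
any_pass = bool((scores > 0.5).any())  # at least one element passes
all_pass = bool((scores > 0.5).all())  # every element passes
```

In PyTorch the wording is "Boolean value of Tensor with more than one element is ambiguous", matching the log, and the fix is the same: reduce with `.any()`, `.all()`, or index a single element before branching.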
StableLM models have been converted to use a standard tokenizer and now have tokenizer.json in their directories on Hugging Face, so we can use them: https://huggingface.co/stabilityai/stablelm-2-12b-chat