SWE-agent
Local LLM inference support
I want to ask whether SWE-agent supports a local LLM inference backend. GPT-4 gets expensive in multi-turn conversations, so maybe we could use a local LLM instead of calling the online LLM.
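To illustrate what I mean, here is a minimal sketch of pointing the standard `openai` Python client at a locally served, OpenAI-compatible endpoint (e.g. one exposed by vLLM or Ollama). This is not SWE-agent's actual API or configuration, just an assumption about the kind of backend swap I have in mind; the URL and model name are placeholders:

```python
# Hypothetical sketch: talk to a locally served model through an
# OpenAI-compatible endpoint instead of the hosted GPT-4 API.
# Assumes a server such as vLLM or Ollama is already running at
# http://localhost:8000/v1 and serving the model named below.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # local inference server, not api.openai.com
    api_key="not-needed-for-local",       # most local servers ignore the key
)

response = client.chat.completions.create(
    model="local-model",  # placeholder name exposed by the local server
    messages=[
        {"role": "system", "content": "You are a software engineering agent."},
        {"role": "user", "content": "Fix the failing test in utils.py."},
    ],
)
print(response.choices[0].message.content)
```

If the agent's model calls could be routed through something like this, the rest of the multi-turn loop would not need to change.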