JasonHonKL
Same issue here. Gonna take a look at the source code later and see if I can solve it.
@webbeef yep, that's what I meant: pressing F12 and the dev tools pop up. It would be really useful if we had that.
@LaurentBerder I don't think the repo is being maintained. Do you want to collaborate, fork it, and produce one with local model support like llama.cpp and Ollama?...
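(If we do fork it, the wiring might be light: Ollama already serves an OpenAI-compatible API, so a sketch like the one below could be a first pass. The model name and port are just Ollama's defaults, not anything from this repo.)

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible endpoint on localhost:11434 by default.
# The api_key is required by the client library but ignored by Ollama.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

resp = client.chat.completions.create(
    model="llama3",  # any model pulled locally, e.g. `ollama pull llama3`
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(resp.choices[0].message.content)
```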
I guess you can add the _score function inside llm/openai.py. I will try to make a pull request later. The repo idea is really good to be honest :(...
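(Rough sketch of what I have in mind below; I haven't read the repo closely yet, so the class name, constructor, and what `_score` should actually return are all my guesses, not the project's real API.)

```python
from openai import OpenAI


class OpenAIClient:
    """Hypothetical wrapper of the kind that might live in llm/openai.py."""

    def __init__(self, model: str = "gpt-4o-mini"):
        self.client = OpenAI()  # reads OPENAI_API_KEY from the environment
        self.model = model

    def _score(self, prompt: str) -> float:
        """Score a prompt by the mean log-probability of the model's answer."""
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            logprobs=True,
        )
        tokens = resp.choices[0].logprobs.content
        return sum(t.logprob for t in tokens) / len(tokens)
```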
I think Ollama is a backend focused on letting users host their LLMs, not on how developers use them. So I don't think this feature should be...
@BruceMacD I also want this feature. I noticed you've self-assigned it; mind if I ask whether there's anything I could help with?
@ultmaster PTAL, thanks
@genji970 I think the issue is the `sys_platform == 'linux'` line.
I see. However, from my understanding, there won't be a huge difference between macOS and Linux here, and the uv lock isn't important as long as the TOML file...
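(If it helps debugging, you can evaluate that marker directly on each machine with the standard `packaging` library; nothing repo-specific here:)

```python
from packaging.markers import Marker

# PEP 508 environment marker, evaluated against the current interpreter.
marker = Marker("sys_platform == 'linux'")

# Prints True on Linux and False on macOS (where sys_platform is 'darwin'),
# which is why a dependency guarded by this marker can resolve differently
# across the two systems.
print(marker.evaluate())
```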
@KellySit Great work, but it seems you have to resolve the conflict before CI can run!