mmike87
Is this post for real? If you don't like it, write your own LLM and provide your own example. My God, the things people complain about ... especially when...
I see those in some of my training files, too. I just ignore them for now, and the model still seems to answer inquiries.
I watched my GPU usage and it was not touched.
I am having the same issue. I am not sure what the cause is, but I have put several hours into this with no progress.
I am not using a Docker container. I am running Ollama natively on Windows, and Verba in WSL. Maybe that is the issue? I will check if I can hit Ollama...
I think my situation with WSL is similar: it runs behind Hyper-V, and by default port forwarding is not enabled from WSL -> Windows, but it IS the other...
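A quick way to test the WSL -> Windows direction is the sketch below. It assumes WSL2's default networking, where the Windows host is reachable at the IP listed as the DNS nameserver in `/etc/resolv.conf`, and it probes Ollama's default port (11434). The port and endpoint are Ollama defaults, not anything confirmed in this thread:

```shell
# From inside WSL: extract the Windows host IP (WSL2's default DNS
# server points back at the Windows side of the virtual network).
WIN_HOST=$(grep -m1 nameserver /etc/resolv.conf | awk '{print $2}')
echo "Windows host appears to be: ${WIN_HOST}"

# Probe Ollama's default port on the Windows host. If this times out,
# the WSL -> Windows path (or Ollama's bind address) is the problem.
curl -s --max-time 5 "http://${WIN_HOST}:11434/api/version" \
  || echo "Ollama not reachable from WSL"
```

Note that Ollama binds to 127.0.0.1 by default, so even when the network path works, you may need to set `OLLAMA_HOST=0.0.0.0` on the Windows side before it will accept connections from WSL.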