alex siu
It seems like LlamaIndex only supports Ollama as an open-source multimodal option. But when I encounter an error after running the objective, like a navigation error, are there any clues to...
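For reference, a minimal sketch of the Ollama path that LlamaIndex does support, assuming the `llama-index-multi-modal-llms-ollama` package is installed and a `llava` model has been pulled in Ollama; the `./images` folder is just a placeholder:

```python
# Minimal sketch: multimodal completion through LlamaIndex's Ollama integration.
# Assumes `pip install llama-index-multi-modal-llms-ollama` and `ollama pull llava`.
from llama_index.core import SimpleDirectoryReader
from llama_index.multi_modal_llms.ollama import OllamaMultiModal

mm_llm = OllamaMultiModal(model="llava")  # local llava model served by Ollama
image_documents = SimpleDirectoryReader("./images").load_data()  # placeholder folder

response = mm_llm.complete(
    prompt="Describe what you see in these images.",
    image_documents=image_documents,
)
print(response)
```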
Is there any way to test a multi-modal LLM with a Hugging Face model? The choice of multi-modal LLMs on Ollama is limited. It would be great to have multi-modal LLM support with Hugging Face...
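In the meantime, one way to smoke-test a Hugging Face multimodal model directly with `transformers` (outside of LlamaIndex's multi-modal interface) is sketched below; the checkpoint id and image path are placeholders, not specific recommendations:

```python
# Rough sketch: run a Hugging Face vision-language model (e.g. a LLaVA checkpoint)
# directly with transformers. Requires transformers >= 4.36 and accelerate.
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"  # example checkpoint, swap in any compatible model
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")

image = Image.open("screenshot.png")  # placeholder path to a local image
prompt = "USER: <image>\nDescribe this screenshot. ASSISTANT:"

inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```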
> Thank you so much, it worked now.
>
> "And you can just set CHROME_USER_DATA empty if you want to use own browser"

This actually means this doesn't have...
I pulled main again and am using Python 3.11.8. The Docker image is running via `docker run -it --name opendevin ghcr.io/opendevin/sandbox:latest`. The frontend shows "Initializing agent" but it still cannot...
> did you rerun `make build`?

Yes. I pulled main, activated the Python environment, and reinstalled using `make build`.
> Can you look in the ./logs directory, there should be a backend log. Can you paste the latest errors in that file?
>
> Alternatively, you can run separately...
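For anyone else digging through this, a quick sketch to dump the tail of the newest file under `./logs` (the directory name comes from the comment above; the rest is just a convenience and assumes plain-text log files):

```python
# Print the last lines of the most recently modified file in ./logs.
from pathlib import Path

log_files = [p for p in Path("./logs").iterdir() if p.is_file()]
if not log_files:
    print("No files found under ./logs")
else:
    latest = max(log_files, key=lambda p: p.stat().st_mtime)
    lines = latest.read_text(errors="replace").splitlines()
    print(f"--- {latest} (last 50 lines) ---")
    print("\n".join(lines[-50:]))
```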
> > Please check Docker is running using docker ps.
>
> What is the output of _docker ps_?

    CONTAINER ID   IMAGE                              COMMAND       CREATED   STATUS   PORTS   NAMES
    e44b248a1e1f   ghcr.io/opendevin/sandbox:latest   "/bin/bash"...
Same issue found on Windows; the Ubuntu machine was working fine. A proxy may be the cause of the issue.