Hasan Uddin


I am using these settings in my `config.toml`:

```toml
LLM_API_KEY = "11111111111111111111"
LLM_BASE_URL = "http://localhost:11434"
LLM_MODEL = "ollama/llama2"
LLM_EMBEDDING_MODEL = "llama2"
WORKSPACE_DIR = "./workspace"
```

I also selected llama2 as the Model and MonologueAgent as the Agent on the front end....