Alok Saboo

Results 243 comments of Alok Saboo

@assafelovic Can you confirm your config? I tried with the following config (note that `OPENAI_API_KEY` is not optional):

```env
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://192.168.2.162:11434  # also tried with http://host.docker.internal:11434
OPENAI_API_KEY=OLLAMA
EMBEDDING_PROVIDER=ollama
FAST_LLM_MODEL=llama3:8b-instruct-q5_0
SMART_LLM_MODEL=llama3-chatqa:70b
OLLAMA_EMBEDDING_MODEL=snowflake-arctic-embed:l...
```
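Since `OPENAI_API_KEY` tripped me up, here is a minimal sketch of a sanity check that the required variables are present in a dotenv-style config before launching. The `REQUIRED` set and the parser are illustrative, not part of GPT Researcher itself:

```python
# Sketch: validate a dotenv-style config string before starting the app.
# The REQUIRED set below is an assumption based on the variables used above;
# adjust it for your setup.
REQUIRED = {"LLM_PROVIDER", "OPENAI_API_KEY", "OLLAMA_BASE_URL"}

def parse_env(text):
    """Parse KEY=VALUE lines, ignoring blank lines and # comments."""
    config = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "=" in line:
            key, value = line.split("=", 1)
            config[key.strip()] = value.strip()
    return config

def missing_keys(config):
    """Return the required keys that are absent, sorted for stable output."""
    return sorted(REQUIRED - config.keys())

sample = """
LLM_PROVIDER=ollama
OLLAMA_BASE_URL=http://192.168.2.162:11434
OPENAI_API_KEY=OLLAMA
"""
print(missing_keys(parse_env(sample)))  # an empty list means all required keys are set
```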

I'm using the Docker container on macOS. Are any changes required to the Docker Compose file? I'm using the default one in the repo.

Here's the docker compose:

```yaml
version: '3'
services:
  gpt-researcher:
    image: kramer1346/gpt-researcher
    build: ./
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      TAVILY_API_KEY: ${TAVILY_API_KEY}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
      OPENAI_BASE_URL: ${OPENAI_BASE_URL}
    ports:
      - 8001:8000
    restart: always
    extra_hosts:
      - "host.docker.internal:host-gateway"
...
```

@gschmutz I updated my docker compose file to include the additional env variables:

```yaml
services:
  gpt-researcher:
    image: kramer1346/gpt-researcher
    build: ./
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      TAVILY_API_KEY: ${TAVILY_API_KEY}
      LANGCHAIN_API_KEY: ${LANGCHAIN_API_KEY}
      OPENAI_BASE_URL: ${OPENAI_BASE_URL}
      LLM_PROVIDER:...
```
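For reference, an `environment` block that passes the Ollama-related settings through to the container might look like the fragment below. This is a sketch only: the variable names are taken from the env config earlier in this thread, and the exact set your build reads is an assumption:

```yaml
# Sketch: pass-through of the Ollama-related variables (names assumed from
# the env config discussed earlier in this thread).
services:
  gpt-researcher:
    environment:
      LLM_PROVIDER: ${LLM_PROVIDER}
      EMBEDDING_PROVIDER: ${EMBEDDING_PROVIDER}
      OLLAMA_BASE_URL: ${OLLAMA_BASE_URL}
      FAST_LLM_MODEL: ${FAST_LLM_MODEL}
      SMART_LLM_MODEL: ${SMART_LLM_MODEL}
      OLLAMA_EMBEDDING_MODEL: ${OLLAMA_EMBEDDING_MODEL}
```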

Yes, I'm building the images myself with the `docker compose up` command. That's how I've used it in the past.

@gschmutz Just so you know, I have Ollama running on the Mac as a regular app, and GPT Researcher running on the same Mac in a Docker container.

Instead of printing the output (which just scrolls by), how would you modify the code to, say, get the updated quote for `'key': 'SPXW 240617C05440000'`?
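One way to do this, sketched below under the assumption that the streaming callback receives dict-shaped messages with a `content` list of per-contract updates (the names `process_message` and `get_quote` are illustrative, not from the original code), is to cache the latest quote per contract key instead of printing each message:

```python
# Sketch: keep the most recent quote per contract key instead of printing
# every streamed message. The message shape here (a dict with a "content"
# list whose items carry a "key" field) is an assumption; adapt it to what
# your streaming callback actually receives.

latest_quotes = {}  # maps contract key -> most recent merged quote fields

def process_message(message):
    """Update the cache from one streamed message instead of printing it."""
    for item in message.get("content", []):
        key = item.get("key")
        if key:
            # Merge, so fields absent from a partial update are preserved.
            latest_quotes.setdefault(key, {}).update(item)

def get_quote(key):
    """Return the latest cached quote for a contract, or None if unseen."""
    return latest_quotes.get(key)

# Example with fabricated messages (real field names will differ):
process_message({"content": [{"key": "SPXW 240617C05440000", "bid": 1.25}]})
process_message({"content": [{"key": "SPXW 240617C05440000", "ask": 1.40}]})
print(get_quote("SPXW 240617C05440000"))
```

With this in place, the stream handler stays quiet and you query `get_quote(...)` whenever you need the current value for one contract.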

I don't think this is a Morphic problem. The underlying search engine may not have scraped the latest content.

It looks like the language is not set for deep search. A simple search returns the response in English.