Jobs_Applier_AI_Agent
[BUG]: Uses GPT-4o when set to run Ollama with Llama 3.2 in config.yaml
Describe the bug
The program appears to ignore the model selected in the config. I've tried Llama 3.2 via Ollama, and also Gemini, but with either setting AI Hawk insists on using GPT-4o. You cannot omit an API key, or the program will not run; this applies even to Ollama, which requires no API key, only a URL. Entering a Gemini key doesn't help either, since the script reports every Gemini key as invalid. I'd like to run Ollama where possible, and I'd like to find a fix so it stops falling back to GPT-4o.
Steps to reproduce
- Modify config.yaml file with the following:
  ```yaml
  llm_model_type: ollama
  llm_model: 'llama3.2'
  llm_api_url: http://127.0.0.1:11434/
  ```
- Save and Run AI Hawk
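For reference, the behavior I'd expect is a dispatch on `llm_model_type` before any OpenAI default kicks in. A minimal sketch of that logic (function and key names here are hypothetical, not AI Hawk's actual code):

```python
def select_model(config: dict) -> str:
    """Return an identifier for the backend the config asks for.

    Hypothetical helper: falls back to OpenAI only when no
    llm_model_type is given, instead of unconditionally.
    """
    model_type = config.get("llm_model_type", "openai").lower()
    if model_type == "ollama":
        # Ollama needs a local URL, not an API key.
        if not config.get("llm_api_url"):
            raise ValueError("llm_api_url is required for ollama")
        return f"ollama:{config['llm_model']}"
    if model_type == "gemini":
        return f"gemini:{config['llm_model']}"
    return f"openai:{config.get('llm_model', 'gpt-4o')}"

# The config.yaml values from the steps above:
config = {
    "llm_model_type": "ollama",
    "llm_model": "llama3.2",
    "llm_api_url": "http://127.0.0.1:11434/",
}
print(select_model(config))  # → ollama:llama3.2
```

With a dispatch like this, an API key would only be demanded for backends that actually need one.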
Expected behavior
Running Llama 3.2 with Ollama
Actual behavior
Running GPT-4o mini
Branch
None
Branch name
No response
Python version
No response
LLM Used
No response
Model used
No response
Additional context
No response