Has anyone tried this with an open-source LLM or Gemini's free tier?
Is there any way we can use a Hugging Face model, an open-source LLM (including quantized ones), or the Gemini free usage quota for this purpose?
I've tried it many times; for the moment it only seems to work properly with GPT-4o. I assume fine-tuning the local LLMs is the way to go from here.
I'm trying, but calls to my locally deployed model keep failing.
Try the Llama 3.1 API from Groq; it's free. Support for Groq has been added.
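For anyone who wants to try this, here is a minimal sketch of how a call against Groq's OpenAI-compatible chat-completions endpoint can be assembled with only the standard library. The model ID `llama-3.1-70b-versatile` is an assumption (Groq's available model list changes over time), and `GROQ_API_KEY` is assumed to hold your key; the request is only sent if a key is configured.

```python
import json
import os
import urllib.request

# Groq exposes an OpenAI-compatible chat-completions endpoint.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"
# Assumed model ID; check Groq's current model list before using.
MODEL = "llama-3.1-70b-versatile"

def build_request(prompt: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a chat-completion request for Groq."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        GROQ_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

if __name__ == "__main__":
    key = os.environ.get("GROQ_API_KEY", "")
    req = build_request("Say hello.", key)
    # Only hit the network if a real key is configured.
    if key:
        with urllib.request.urlopen(req) as resp:
            reply = json.loads(resp.read())
            print(reply["choices"][0]["message"]["content"])
```

In practice you would point the repository's OpenAI client at Groq's base URL instead of hand-building requests, but the payload shape above is the same either way.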
@Isaadahmed2 @LMJOK Big thanks to the authors of this repository! I've just created a repository that adapts the AI Scientist to run on local OSS models. Please check it out! URL: https://github.com/Masao-Taketani/AI-Scientist-with-Local-LLMs
It currently supports the Hugging Face and Ollama backends, including quantized models, and it even lets you use the recently announced DeepSeek R1 models!
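For reference, here is a minimal sketch of talking to a local Ollama server, which serves an HTTP API on port 11434 by default. The model tag `deepseek-r1:7b` is an assumption for illustration (pull whichever model you actually want with `ollama pull` first); the request is only sent when the function is called against a running server.

```python
import json
import urllib.request

# Ollama's default local endpoint for single-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"
# Assumed model tag for illustration; pull it first with `ollama pull deepseek-r1:7b`.
MODEL = "deepseek-r1:7b"

def build_request(prompt: str) -> urllib.request.Request:
    """Build (but do not send) a generation request for a local Ollama server."""
    payload = {
        "model": MODEL,
        "prompt": prompt,
        "stream": False,  # ask for one complete JSON response instead of a stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(prompt: str) -> str:
    """Send the request and return the model's text (requires `ollama serve`)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

Since everything runs locally, no API key is needed, which is part of what makes the quantized-model route attractive for backing the AI Scientist.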
May the AI Scientist Locally be with You!