Cong Lu
Hey! This isn't necessary for performance; you could convert these to full-precision floats! :)
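A minimal sketch of what that conversion might look like, assuming PyTorch (the tensor here is purely illustrative):

```python
# Sketch: cast half-precision values back to full precision (assumes PyTorch).
import torch

x = torch.randn(4).half()  # illustrative half-precision tensor
x_full = x.float()         # same values, stored as float32

# For a whole model, the equivalent call would be `model.float()`.
print(x_full.dtype)  # torch.float32
```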
Why not! You might find it more useful to edit the prompts in the writeup script.
Hey! The LLM example is nanoGPT, so I would definitely recommend checking out the original repo for the full requirements: https://github.com/karpathy/nanoGPT
If you use Ollama, you can modify the existing OpenAI API code like so: https://ollama.com/blog/openai-compatibility
You should comment out the try/except so that the real errors show up. Also, we strongly advise against using Llama 2 models; any model weaker than the original...
Of course, only needs an edit here: https://github.com/SakanaAI/AI-Scientist/blob/main/ai_scientist/llm.py
This should be routable to the OpenAI API, so any model name and OpenAI client setup that supports that model should work.
I would recommend debugging which part fails with your integration! E.g. print debug logs from each LLM call.
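One way to do that is a small wrapper that logs the inputs and outputs of every LLM call, so the first failing call is easy to spot (a sketch; `debug_llm_call` is a hypothetical helper, not part of the repo):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
logger = logging.getLogger("llm_debug")

def debug_llm_call(call_fn, *args, **kwargs):
    # Log arguments before the call, and the result (or full traceback) after.
    logger.debug("calling %s args=%r kwargs=%r",
                 getattr(call_fn, "__name__", call_fn), args, kwargs)
    try:
        result = call_fn(*args, **kwargs)
    except Exception:
        logger.exception("LLM call failed")
        raise
    logger.debug("returned %r", result)
    return result

# Usage: wrap each LLM call site, e.g. debug_llm_call(your_llm_fn, prompt)
```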
Thanks so much! We made adding new models much easier with the llm.py file; could you adopt the new standard? Cheers!
Please see the community templates for lots of new examples, including new fields!