Nicholas Tindle
Can’t repro, no new details
Sorry for the long wait; this needs to be updated to the component-based architecture. Most of it should move over pretty easily. @kcze can answer any questions you have.
Do you have a quick start on this? I haven't looked into core ML at all.
We don't package any models with our code. Is it possible to use tools like Llamafile to do this?
Doesn't seem to work for me when using `./scripts/llamafile/serve.py`:

```powershell
PS C:\Users\nicka\code\AutoGPTNew\autogpt> python3 .\scripts\llamafile\serve.py
Downloading mistral-7b-instruct-v0.2.Q5_K_M.llamafile.exe...
Downloading: [########################################] 100% - 5166.9/5166.9 MB
Traceback (most recent call last):
...
```
https://github.com/Mozilla-Ocho/llamafile/issues/257#issuecomment-1953146662

TL;DR: you need to download `llamafile.exe` separately and execute it with some parameters, because Windows won't run executables over its 4 GB size limit.
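For reference, a rough Python sketch of that workaround in the spirit of `serve.py`. The runner release URL and version are illustrative, and whether `-m` accepts the `.llamafile` directly or needs the extracted `.gguf` weights should be verified against the linked comment:

```python
"""Hypothetical sketch of the Windows workaround from the linked llamafile issue."""
import subprocess
import urllib.request
from pathlib import Path

# Small runner binary; it stays under Windows' 4 GB executable size limit.
# The URL/version below are assumptions; pin whichever release you actually want.
RUNNER_URL = "https://github.com/Mozilla-Ocho/llamafile/releases/download/0.8.6/llamafile-0.8.6"
RUNNER = Path("llamafile.exe")

# The oversized model file that serve.py already downloaded.
MODEL = Path("mistral-7b-instruct-v0.2.Q5_K_M.llamafile")

if not RUNNER.exists():
    urllib.request.urlretrieve(RUNNER_URL, str(RUNNER))

# Run the small executable and pass the big model as data, instead of trying to
# execute the >4 GB file directly (which Windows refuses to do).
subprocess.run(
    [str(RUNNER.resolve()), "-m", str(MODEL), "--server", "--nobrowser"],
    check=True,
)
```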
I also get this after using the workaround above:

```
2024-06-10 15:40:50,164 ERROR Please set your OpenAI API key in .env or as an environment variable.
2024-06-10 15:40:51,339 INFO You...
```
Can't get it to run without an OpenAI key set:

```
(agpt-py3.11) C:\Users\nicka\code\AutoGPTNew\autogpt>python -m autogpt
2024-06-15 19:18:02,550 WARNING You don't have access to mistral-7b-instruct-v0.2. Setting fast_llm to OpenAIModelName.GPT3_ROLLING.
2024-06-15 19:18:02,552...
```
Also, this should try to match model names better. For example, just `mistral` should work if it's the only one, or `mistral-7b` if there are two. Adding more parts of the name should only rarely be required.
My counter to that is that we don't require gpt-4-0611; we just require gpt-4 and match as best we can to a rolling release.
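To illustrate the kind of resolution being discussed above, here is a minimal, hypothetical sketch (not AutoGPT's actual matching code): an exact name wins, a unique prefix resolves, and only an ambiguous prefix forces the user to spell out more of the name.

```python
def resolve_model(query: str, available: list[str]) -> str:
    """Resolve a partial model name (e.g. 'mistral' or 'gpt-4') to a full one.

    Hypothetical helper: exact match wins, a unique prefix match is accepted,
    and an unknown or ambiguous prefix raises an error.
    """
    if query in available:
        return query
    matches = [name for name in available if name.startswith(query)]
    if len(matches) == 1:
        return matches[0]
    if not matches:
        raise ValueError(f"No model matches '{query}'")
    raise ValueError(f"'{query}' is ambiguous: {', '.join(sorted(matches))}")


# Example: 'mistral' is enough when only one mistral model is served,
# and 'gpt-4' picks the rolling alias rather than requiring a dated snapshot.
models = ["mistral-7b-instruct-v0.2", "gpt-4", "gpt-3.5-turbo"]
assert resolve_model("mistral", models) == "mistral-7b-instruct-v0.2"
assert resolve_model("gpt-4", models) == "gpt-4"
```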