amygbAI
Hi @uzairlatif90, I did exactly what you are/were thinking of doing. I just deployed Llama 3 (Instruct-8B). Now I'm working on quantizing it via llama.cpp, but Llama 3 is still...
@uzairlatif90 the llama.cpp part is a WIP :) Like I mentioned in my comment, I am still working on it. Once I'm able to get it up and...
@uzairlatif90 it was really simple in the end: install Ollama (https://github.com/ollama/ollama/tree/main), run `ollama pull llama3:instruct`, and start the server with `ollama serve` (ideally it starts by default). Then you...
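To round out the steps above, here is a minimal sketch of querying the locally running server over Ollama's HTTP API (`/api/generate` on the default port 11434). The prompt text is just an illustration; it assumes you have already run `ollama pull llama3:instruct` and that `ollama serve` is up.

```python
import json
import urllib.request

# JSON body for Ollama's /api/generate endpoint.
# "stream": False asks for a single JSON object instead of a token stream.
payload = json.dumps({
    "model": "llama3:instruct",
    "prompt": "Say hello in one word.",
    "stream": False,
}).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:11434/api/generate",  # default ollama serve address
    data=payload,
    headers={"Content-Type": "application/json"},
)

try:
    with urllib.request.urlopen(req, timeout=60) as resp:
        # The generated text lives under the "response" key.
        print(json.loads(resp.read())["response"])
except OSError as exc:
    # Server not running, or the model has not been pulled yet.
    print(f"Could not reach Ollama: {exc}")
```

Equivalently, a quick smoke test from the shell is `curl http://localhost:11434/api/generate -d '{"model": "llama3:instruct", "prompt": "hi"}'`.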