llama2.rs
How to run baby llama?
Hi. Thanks for this port.
I was trying to run inference with the Baby Llama model, but it seems this port no longer supports it, i.e. the stories*.bin models compatible with the original llama2.c.
How can I do this?