Kava
I downloaded the Llama 3 model to the hf-files directory and am trying to use AutoModelForCausalLM to load the model and then convert the transformer portion to MLIR. > huggingface-cli download meta-llama/Meta-Llama-3-8B --local-dir...
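For reference, a minimal sketch of the loading step, assuming the download above landed in ./hf-files; the names `transformer` and `example_ids` are just placeholders for whatever the later export step needs:

```python
import torch
from transformers import AutoModelForCausalLM

# Load from the local directory created by `huggingface-cli download ... --local-dir hf-files`.
model = AutoModelForCausalLM.from_pretrained("hf-files", torch_dtype=torch.bfloat16)
model.eval()

# For Llama-style checkpoints the decoder stack (the "transformer portion")
# is `model.model`; `model.lm_head` is the final output projection.
transformer = model.model

# Example token ids to drive tracing/export.
example_ids = torch.randint(0, model.config.vocab_size, (1, 16))
```

The MLIR export step itself depends on the torch-mlir version in use (for example, the FX importer entry point `torch_mlir.fx.export_and_import` in recent builds), so treat that part as an assumption rather than a fixed recipe.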
I am running the Torch MLIR to TOSA MLIR conversion pipeline for Llama 3's attention layer (command below), but torch.aten.scaled_dot_product_attention is reported as an illegal op. Can someone help me figure out a pass for...
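As background for the question: torch.aten.scaled_dot_product_attention has no direct TOSA lowering, so it usually has to be decomposed into matmul/softmax ops before `convert-torch-to-tosa` runs (in torch-mlir that decomposition lives in the `torch-decompose-complex-ops` pass applied while lowering to the backend contract; those pass names come from a recent torch-mlir checkout and may differ in other versions). A small sketch of what the op decomposes into, useful for checking numerics; the helper name `sdpa_decomposed` and the shapes are purely illustrative:

```python
# Eager-mode equivalent of scaled_dot_product_attention: matmul + softmax + matmul.
import math
import torch

def sdpa_decomposed(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq, head_dim)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores + mask  # additive mask, -inf on blocked positions
    return torch.softmax(scores, dim=-1) @ v

q = k = v = torch.randn(1, 8, 16, 64)
ref = torch.nn.functional.scaled_dot_product_attention(q, k, v)
assert torch.allclose(sdpa_decomposed(q, k, v), ref, atol=1e-4)
```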
I am running the Torch MLIR to TOSA MLIR conversion pipeline (command below), but keep seeing torch.aten.mm reported as an illegal op. Can someone help me figure out a pass for the TOSA conversion...
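If a reduced test case helps narrow down which pass is missing, here is a sketch that exports a module whose only op is torch.mm, so the torch.aten.mm legalization can be looked at in isolation from the rest of the Llama graph; `torch_mlir.fx.export_and_import` is assumed from a recent torch-mlir build, and the class name `MatMul` is just for the example:

```python
# Minimal reproducer sketch: a module whose only op is torch.mm (-> torch.aten.mm).
import torch
from torch_mlir import fx

class MatMul(torch.nn.Module):
    def forward(self, a, b):
        return torch.mm(a, b)

module = fx.export_and_import(MatMul(), torch.randn(4, 8), torch.randn(8, 16))
print(module)  # torch-dialect IR containing torch.aten.mm
```

The printed torch-dialect IR can then be fed to the same torch-mlir-opt pipeline as in the post, to check whether the failure reproduces outside the full model.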
When generating Torch MLIR, is there a way to write the weights and the model to separate files? My model weights are huge for the Llama 3 layers, and this is leading to giant...
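One approach I'm aware of (an assumption on my part, not something stated in the post) is parameter externalization, where the generated MLIR keeps only named references to the weights and the tensors themselves go into a separate archive. The sketch below uses iree-turbine's AOT API on a toy module; the names `externalize_module_parameters`, `save_module_parameters`, and `export` come from a recent iree-turbine release and may differ in other versions:

```python
# Sketch only: parameter externalization via iree-turbine (formerly SHARK-Turbine).
# API names are assumptions and may differ between releases.
import torch
from iree.turbine import aot  # assumption: `pip install iree-turbine`

class Toy(torch.nn.Module):  # stand-in for a Llama layer, to keep the sketch small
    def __init__(self):
        super().__init__()
        self.proj = torch.nn.Linear(4096, 4096, bias=False)

    def forward(self, x):
        return self.proj(x)

m = Toy()
aot.externalize_module_parameters(m)            # weights become named external references
exported = aot.export(m, torch.randn(1, 4096))
exported.save_mlir("toy.mlir")                  # IR without inlined weight data
aot.save_module_parameters("toy.irpa", m)       # weights written to a separate file
```

The same idea applied to the Llama 3 layers would keep the .mlir small while the bulk of the data lives in the separate parameter file.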