llama.cpp
How does convert-pth-to-ggml.py handle torch.view_as_complex?
The llama code uses view_as_real / view_as_complex here: https://github.com/facebookresearch/llama/blob/main/llama/model.py#L68
How does convert-pth-to-ggml.py handle this part of the weights?
Please improve your question with more text and examples so it is easier to understand what you are asking.
If you are asking about applying rotary embeddings, that is done in llama.cpp itself, not during conversion.
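To expand on why conversion doesn't need to touch this: the complex view in llama's rotary embedding is just a convenient way to rotate adjacent pairs of dimensions, and the identical rotation can be written with plain real arithmetic. Below is a hedged numpy sketch (not the actual llama.cpp or conversion code; function names are made up for illustration) showing the two formulations agree, which is why the checkpoint weights can be stored as ordinary floats and RoPE applied at inference time:

```python
import numpy as np

def rope_complex(x, theta):
    """Rotate adjacent dim pairs of x by theta via complex multiply,
    mirroring torch.view_as_complex -> * e^{i*theta} -> view_as_real."""
    xc = x[..., 0::2] + 1j * x[..., 1::2]   # pair up dims as complex numbers
    rotated = xc * np.exp(1j * theta)       # multiply by e^{i*theta}
    out = np.empty_like(x)
    out[..., 0::2] = rotated.real
    out[..., 1::2] = rotated.imag
    return out

def rope_real(x, theta):
    """The same rotation written with real-only ops (the style a C/C++
    inference loop like llama.cpp's can use directly)."""
    cos, sin = np.cos(theta), np.sin(theta)
    x0, x1 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x0 * cos - x1 * sin
    out[..., 1::2] = x0 * sin + x1 * cos
    return out

x = np.random.default_rng(0).standard_normal((4, 8))
theta = np.linspace(0.0, 1.0, 4)            # one angle per dim pair
assert np.allclose(rope_complex(x, theta), rope_real(x, theta))
```

Since the rotation is applied to activations at runtime rather than baked into the weights, the converter can copy the attention weights through unchanged.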
@nullhook thanks for the info