llama.cpp
gguf-py: handle numpy 2.0 byte-ordering changes
This commit fixes the following error:
Traceback (most recent call last):
  File "/home/poweruser/python-goddamn-venv/bin/gguf-dump", line 8, in <module>
    sys.exit(gguf_dump_entrypoint())
             ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/poweruser/python-goddamn-venv/lib/python3.12/site-packages/gguf/scripts/gguf_dump.py", line 450, in main
    dump_metadata(reader, args)
  File "/home/poweruser/python-goddamn-venv/lib/python3.12/site-packages/gguf/scripts/gguf_dump.py", line 35, in dump_metadata
    host_endian, file_endian = get_file_host_endian(reader)
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/poweruser/python-goddamn-venv/lib/python3.12/site-packages/gguf/scripts/gguf_dump.py", line 24, in get_file_host_endian
    host_endian = 'LITTLE' if np.uint32(1) == np.uint32(1).newbyteorder("<") else 'BIG'
                                              ^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: newbyteorder was removed from scalar types in NumPy 2.0. Use sc.view(sc.dtype.newbyteorder(order)) instead.
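For reference, the migration the error message suggests can be sketched like this. This is a minimal illustration (the `get_host_endian` helper name is hypothetical, not the actual gguf-py code): instead of calling `.newbyteorder()` on a NumPy scalar, build a byte-swapped dtype and reinterpret the scalar's bytes through it, which works on both NumPy 1.x and 2.x.

```python
import sys
import numpy as np

def get_host_endian() -> str:
    # NumPy 2.0 removed .newbyteorder() on scalar types; the error message
    # recommends sc.view(sc.dtype.newbyteorder(order)) instead, which
    # reinterprets the scalar's bytes with an explicit byte order.
    sc = np.uint32(1)
    little = sc == sc.view(sc.dtype.newbyteorder("<"))
    return 'LITTLE' if little else 'BIG'

# Sanity check against the stdlib's report of the host byte order.
assert get_host_endian() == ('LITTLE' if sys.byteorder == 'little' else 'BIG')
```

On a little-endian host, viewing `np.uint32(1)` through a little-endian dtype leaves the value unchanged, so the comparison holds; on a big-endian host it would byte-swap and the comparison would fail, yielding `'BIG'`.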
AFAIK the same code is also present in gguf_new_metadata and other Python scripts, so it's probably better to do a full pass and change it in the other places as well.
And btw these scripts are not actually part of gguf-py, they are more like "examples"
Hey, sorry, this PR reminded me I had a lot of unsubmitted code locally; made #11909, which supersedes this...
I have a bunch of GGUF models downloaded with Ollama whose files don't carry the name of the model; instead they are named like: sha256-170370233dd5c5415250a2ecd5c71586352850729062ccef1496385647293868 sha256-22a849aafe3ded20e9b6551b02684d8fa911537c35895dd2a1bf9eb70da8f69e
I was looking for a tool that I could give a .gguf file as input and get info about the model, something like the UNIX file or ffprobe command, and I stumbled upon this.
Closing the issue in favor of #11909