endomorphosis
I don't know what hashing method is running under the hood, but these were the questions I had when contemplating it. Normally the entire file is passed through...
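For reference, my understanding (not authoritative) is that `ipfs add` does not hash the file in a single pass: it splits it into chunks (256 KiB fixed-size blocks by default), hashes each chunk with sha2-256, and links the chunks into a Merkle DAG whose root CID is what gets printed. Assuming a stock kubo install, the effect can be observed like this:

```
# Hash-only run: chunks and hashes the file without writing blocks to the datastore.
ipfs add --only-hash mixtral-8x7b-v0.1.Q8_0.gguf

# The same bytes with a different chunker produce a different root CID.
ipfs add --only-hash --chunker=size-1048576 mixtral-8x7b-v0.1.Q8_0.gguf

# Count the blocks in the resulting DAG.
ipfs dag stat QmXVg3Ae6wRwbvkVqMwySyx6qdcVdjEy1iu8xHnwT9dAoB
```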
It looks like: mostly large language models, between 7 GB and 70 GB, from Hugging Face repositories, stored with badgerds. The IPFS daemon is running online, and I use -r to archive the entire...
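For anyone reproducing this setup, it is roughly the following (a sketch; the repo path is illustrative, and the badgerds profile has to be applied before the repo is populated):

```
# Initialize the repo with the badger datastore (run once, before adding data).
ipfs init --profile=badgerds

# With the daemon running, archive an entire model repository recursively.
ipfs add -r /storage/cloudkit-models/Mixtral-8x7B-v0.1-GGUF-Q8_0
```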
```
ipfs@workstation:/storage/cloudkit-models/Mixtral-8x7B-v0.1-GGUF-Q8_0$ time ipfs add mixtral-8x7b-v0.1.Q8_0.gguf
added QmXVg3Ae6wRwbvkVqMwySyx6qdcVdjEy1iu8xHnwT9dAoB mixtral-8x7b-v0.1.Q8_0.gguf
 46.22 GiB / 46.22 GiB [================================] 100.00%

real    15m32.782s
user    0m22.393s
sys     1m54.226s
```

(copy from zfs to ssd, then zfs to zfs)...
```
devel@workstation:/tmp$ time ipfs add mixtral-8x7b-v0.1.Q8_0.gguf
added QmXVg3Ae6wRwbvkVqMwySyx6qdcVdjEy1iu8xHnwT9dAoB mixtral-8x7b-v0.1.Q8_0.gguf
 46.22 GiB / 46.22 GiB [================================] 100.00%

real    14m8.824s
user    12m19.404s
sys     1m31.919s
```

Running offline:

```
fregg@workstation:/tmp$ time ipfs add mixtral-8x7b-v0.1.Q8_0.gguf
added QmXVg3Ae6wRwbvkVqMwySyx6qdcVdjEy1iu8xHnwT9dAoB...
```
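To separate raw chunking/hashing cost from datastore writes and networking when comparing runs like these, something along these lines should work (--only-hash skips all writes, and kubo's global --offline flag disables networking for a single invocation):

```
# Chunking + hashing only: no blocks are written to the datastore.
time ipfs add --only-hash mixtral-8x7b-v0.1.Q8_0.gguf

# Full import with networking disabled, so no provider
# announcements happen during the add.
time ipfs --offline add mixtral-8x7b-v0.1.Q8_0.gguf
```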
I do want to mention that I am also writing a wrapper to import datasets into IPFS, where the number of files will be on the order of 7... A rough sketch of the wrapper is below.
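A minimal sketch, assuming the wrapper just shells out to kubo (DATASET_DIR and cids.tsv are hypothetical names for illustration; --quieter prints only the final root CID per file):

```
#!/usr/bin/env bash
# Walk a dataset directory, add every file, and record path -> CID.
DATASET_DIR=/storage/datasets/example   # hypothetical path
find "$DATASET_DIR" -type f | while read -r f; do
  cid=$(ipfs add --quieter "$f")
  printf '%s\t%s\n' "$f" "$cid" >> cids.tsv
done
```

For millions of small files it would likely be faster to add the whole directory tree with a single `ipfs add -r` than to invoke the binary once per file.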
> Per #9678 I have an old branch with a bunch of optimizations for data import that I could rescue/cleanup. (edit: not for merging though as it hardcodes some stuff,...
I was unable to run this headless, so I made some changes to fix the dependency issues on Ubuntu 20.04 and Ubuntu 22.04: https://github.com/hallucinate-ai/RealtimeTTS/tree/master @KoljaB