safetensors
Simple, safe way to store and distribute tensors
Opened the temporary file in binary mode: `NamedTemporaryFile` is now opened in binary mode (`"wb"`) to ensure compatibility when writing binary data. Used `time.time()` for time comparison: `time.time()` is used...
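A minimal stdlib-only illustration of the two changes described above (the names here are illustrative, not the PR's actual code):

```python
import os
import tempfile
import time

def write_temp_binary(payload: bytes) -> str:
    """Write payload to a NamedTemporaryFile opened in binary mode
    ("wb") and return its path; the caller cleans it up."""
    with tempfile.NamedTemporaryFile(mode="wb", delete=False) as f:
        f.write(payload)  # raw bytes; text mode "w" would raise TypeError here
        return f.name

path = write_temp_binary(b"\x00\x01\x02\x03")
with open(path, "rb") as f:
    round_trip = f.read()
# time.time() returns epoch seconds as a float, so it compares
# directly with the file's modification time:
recent = time.time() - os.path.getmtime(path) < 60.0
os.remove(path)
```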
# What does this PR do? This PR fixes an incorrect data swap for BF16 tensors on big-endian machines. It uses PyTorch's `Storage.byteswap(datatype)` instead of `numpy.byteswap()`. This is...
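For illustration, an endianness conversion of BF16 data amounts to swapping the two bytes of each 16-bit element. This can be sketched with the standard-library `array` module; it is a stand-in for the idea, not the PR's actual `Storage.byteswap` call:

```python
from array import array

def byteswap16(raw: bytes) -> bytes:
    """Swap the two bytes of every 16-bit element in raw.

    The 'h' typecode only means "2-byte elements" here; the swap
    operates on raw bytes, independent of whether they are later
    interpreted as BF16, FP16, or int16.
    """
    a = array("h")
    a.frombytes(raw)
    a.byteswap()
    return a.tobytes()
```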
# What does this PR do? Reinstates the checks "temporarily reverted" in https://github.com/huggingface/safetensors/pull/336 (which doesn't describe what the mysterious "big endian breakage" fixed by that change actually was 🤷)....
### Feature request I'm interested in streaming the tensors in a model key by key, without having to hold all of them in memory at the same time. Something like this:...
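One way such streaming could work, sketched against the documented safetensors file layout (an 8-byte little-endian header length, a JSON header mapping names to dtype/shape/offsets, then contiguous tensor data). `iter_tensors` is a hypothetical name, not part of the library:

```python
import json
import struct

def iter_tensors(path):
    """Yield (name, info, raw_bytes) one tensor at a time from a
    .safetensors file, without materializing all tensors at once."""
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
        data_start = 8 + header_len  # tensor offsets are relative to here
        for name, info in header.items():
            if name == "__metadata__":  # optional free-form metadata entry
                continue
            begin, end = info["data_offsets"]
            f.seek(data_start + begin)
            yield name, info, f.read(end - begin)
```

Because each tensor is read only when yielded, peak memory is bounded by the largest single tensor rather than the whole file.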
# What does this PR do? This PR adds support for saving tensors of type `torch.complex64`. Fixes #450
### System Info safetensors v0.4.2 huggingface_hub v0.22.0.dev0 ### Information - [ ] The official example scripts - [X] My own modified scripts ### Reproduction We recently switched to leveraging Safetensors...
`safetensors.torch.load_file` has a `device` parameter to load the tensors directly onto the correct device. This PR adds support for the same parameter in `safetensors.torch.load_model`. (It also fixes a typo in a docstring.)
### System Info ``` >>> safetensors.__version__ '0.4.2' ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Reproduction When I executed transformer models...
### Feature request Are there still plans to support saving individual tensors to a safetensors file? Specifically, for my use case I would want to update a tensor in-place, meaning...
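Because the safetensors header records fixed byte offsets for each tensor, a same-size in-place update is possible by seeking to the tensor's slot and overwriting its bytes. A stdlib-only sketch of the idea (`update_tensor_inplace` is hypothetical, not an existing API):

```python
import json
import struct

def update_tensor_inplace(path, name, new_bytes):
    """Overwrite one tensor's raw bytes in a .safetensors file
    without rewriting the rest of the file.

    Assumes the documented layout: u64-LE header length, JSON
    header, then contiguous tensor data. Only valid when new_bytes
    matches the tensor's original byte size, so no offsets move.
    """
    with open(path, "r+b") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
        begin, end = header[name]["data_offsets"]
        if end - begin != len(new_bytes):
            raise ValueError("in-place update requires the same byte size")
        f.seek(8 + header_len + begin)
        f.write(new_bytes)
```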
The code in `get_tensor` assumes that `torch.asarray` will always load the tensor on the CPU, and then the tensor is copied to `self.device` in the `.to` call at the end of...