pytorch_block_sparse
Fast Block Sparse Matrices for Pytorch
When I set the block shape to (64, 64) or (128, 128), the result is quite wrong
No dtype other than `torch.float32` seems to be supported.
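If that is the case, a minimal workaround sketch, assuming the restriction above is accurate: cast inputs to `torch.float32` before the block-sparse layer and cast the output back afterwards. The sizes and density here are arbitrary.
```
import torch
from pytorch_block_sparse import BlockSparseLinear

# Workaround sketch: assumes the float32-only restriction reported above.
fc = BlockSparseLinear(1024, 256, density=0.1).to('cuda')
x_half = torch.randn(8, 1024, dtype=torch.float16).to('cuda')

out = fc(x_half.float())  # run the block-sparse layer in float32
out = out.half()          # optionally cast back to the original dtype
print(out.dtype)
```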
Hi, First of all, thanks for setting up this package :) It's super helpful. I'm wondering, is there a way to use a smaller block size? I tried...
If so, could you provide an example? Thanks.
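Regarding the block-size questions above, a hedged sketch of what configuring the block shape might look like. Whether `BlockSparseLinear` exposes a `block_shape` keyword, and its exact name and accepted values, is an assumption drawn from the reports above; check block_sparse_linear.py for the real signature. The default blocks are 32x32 as far as I understand.
```
from pytorch_block_sparse import BlockSparseLinear

# Assumption: the constructor accepts a block_shape keyword; verify the
# actual argument name and supported shapes in block_sparse_linear.py.
fc_default = BlockSparseLinear(1024, 256, density=0.1)                       # (32, 32) blocks by default
fc_larger = BlockSparseLinear(1024, 256, density=0.1, block_shape=(64, 64))  # hypothetical larger blocks
```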
Hi! I am learning SGEMM and found that dispatch_policies.h has a "Custom" and a "CustomBack" policy. I'm not sure what these mean. Thank you!!!
https://github.com/huggingface/pytorch_block_sparse/blob/0985083851a5708cfb3adf50da19860f467e51ae/pytorch_block_sparse/block_sparse_linear.py#L141
Hi, I tried to install the package using the following command, but I got an OSError.
```shell
python -m pip install pytorch_block_sparse
```
The error is the following:
```shell
Collecting pytorch_block_sparse...
```
Hi, First, thanks for this code! ;) From my understanding, the sparsity pattern for the blocks is fully random. This is concerning since it leads to non-full-rank matrices when...
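To make the rank concern concrete, here is a small self-contained sketch in plain PyTorch (not the library's internals): it builds a random block mask and checks the rank of the resulting weight matrix. Whenever an entire row of blocks ends up empty, the matrix is necessarily rank-deficient.
```
import torch

# Plain-PyTorch illustration of the rank issue; this mimics a random block
# mask conceptually and is not how the library stores its weights.
torch.manual_seed(0)
out_features, in_features, block = 256, 256, 32
rows, cols = out_features // block, in_features // block
density = 0.1

block_mask = torch.rand(rows, cols) < density                # keep ~10% of the 32x32 blocks
dense_mask = block_mask.repeat_interleave(block, 0).repeat_interleave(block, 1)
weight = torch.randn(out_features, in_features) * dense_mask

# With this density, whole block-rows are often empty, so the rank drops below 256.
print(torch.linalg.matrix_rank(weight))
```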
Hi, I'm experimenting with the following code:
```
import torch
from pytorch_block_sparse import BlockSparseLinear
import time
import sys

iter = int(sys.argv[1])
dsty = float(sys.argv[2])
fc = BlockSparseLinear(1024, 256, density=dsty)
fc_dense...
```
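For timing comparisons like the one above, a hedged benchmarking sketch: CUDA kernels launch asynchronously, so without `torch.cuda.synchronize()` the measured wall-clock time mostly reflects launch overhead rather than kernel execution. Sizes, density and iteration counts below are arbitrary placeholders.
```
import time
import torch
from pytorch_block_sparse import BlockSparseLinear

x = torch.randn(64, 1024).to('cuda')
fc_sparse = BlockSparseLinear(1024, 256, density=0.25).to('cuda')
fc_dense = torch.nn.Linear(1024, 256).to('cuda')

def bench(module, iters=1000):
    for _ in range(10):            # warm-up iterations
        module(x)
    torch.cuda.synchronize()       # wait for queued kernels before timing
    start = time.time()
    for _ in range(iters):
        module(x)
    torch.cuda.synchronize()       # wait again before reading the clock
    return (time.time() - start) / iters

print("block-sparse:", bench(fc_sparse))
print("dense       :", bench(fc_dense))
```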
This is the code:
```
import torch
from pytorch_block_sparse import BlockSparseLinear

x = torch.randn(32, 128).to('cuda')
y = torch.randn(32, 64).to('cuda')
model = torch.nn.Sequential(
    BlockSparseLinear(128, 64)
).to('cuda')
y_pred = model(x)
loss =...
```
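Since the snippet is cut off at the loss line, here is a minimal self-contained sketch of one full forward/backward step, assuming a plain MSE loss and SGD; the actual loss and optimizer in the original issue may differ.
```
import torch
from pytorch_block_sparse import BlockSparseLinear

x = torch.randn(32, 128).to('cuda')
y = torch.randn(32, 64).to('cuda')
model = torch.nn.Sequential(BlockSparseLinear(128, 64)).to('cuda')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)  # assumption: plain SGD

y_pred = model(x)
loss = torch.nn.functional.mse_loss(y_pred, y)  # assumption: MSE loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```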