Pytorch-CapsuleNet
Changed the matrix multiplication in line 60 for better performance
Instead of creating W as one large tiled matrix, I compute the matrix multiplication iteratively. For batch size 100, peak memory allocation drops from 562 MiB to 141 MiB.
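For context, here is a minimal sketch of the two approaches, assuming typical DigitCaps shapes (`W` of shape `(1, num_routes, num_capsules, out_dim, in_dim)` and an expanded input `x` of shape `(batch, num_routes, num_capsules, in_dim, 1)`). The function names and exact shapes are illustrative, not taken verbatim from the repository:

```python
import torch

# Illustrative shapes only (a typical DigitCaps layer):
#   W: (1, num_routes, num_capsules, out_dim, in_dim)  -- learned weights
#   x: (batch, num_routes, num_capsules, in_dim, 1)    -- expanded capsule outputs

def u_hat_tiled(W, x):
    # Tiled style: replicate W across the batch, then one big matmul.
    # The tiled tensor holds batch_size copies of W, which dominates peak memory.
    W_big = torch.cat([W] * x.size(0), dim=0)
    return torch.matmul(W_big, x)

def u_hat_iterative(W, x):
    # Iterative style: one matmul per sample, never materializing the tiled W.
    return torch.stack(
        [torch.matmul(W[0], x[b]) for b in range(x.size(0))], dim=0
    )
```

Both functions return a tensor of shape `(batch, num_routes, num_capsules, out_dim, 1)`; the difference is only in how much intermediate memory is allocated along the way.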
There is a trade-off between computational efficiency and memory usage here. Performing the matrix multiplication as a single batched (parallel) operation is often significantly faster than applying it to each sample sequentially. So I would make your approach optional, not forced.
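One way to make it optional, as a rough sketch: gate the per-sample loop behind a flag (the `low_memory` parameter below is hypothetical) and keep the single batched matmul as the default fast path:

```python
import torch

def u_hat(W, x, low_memory: bool = False):
    # low_memory=False: one broadcast/batched matmul -- fastest, largest peak memory.
    # low_memory=True:  loop over the batch, one matmul per sample -- slower, smaller peak.
    if low_memory:
        return torch.stack(
            [torch.matmul(W[0], x[b]) for b in range(x.size(0))], dim=0
        )
    # torch.matmul broadcasts W's leading batch dim of 1 against x's batch dim,
    # so tiling W with torch.cat is not strictly required for the fast path either.
    return torch.matmul(W, x)
```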