Sil van de Leemput

21 comments by Sil van de Leemput

@djiajunustc Hi, thanks for your interest in MemCNN. The `implementation_fwd` and `implementation_bwd` settings have been around since the very beginning of the library, but they have been deprecated for...

Hi @cubicgate, thanks for your interest in MemCNN. Your question is very broad: do you want it for a specific application? Are you interested in inference, training, or both? You...

https://discuss.pytorch.org/t/measuring-peak-memory-usage-tracemalloc-for-pytorch/34067

@ClashLuke Thanks for the suggestion! However, my experience with psutil-based solutions is that they aren't sufficiently fine-grained for the memory tasks at hand, unless you use large memory...
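For GPU workloads, a finer-grained alternative is PyTorch's own allocator counters (`torch.cuda.reset_peak_memory_stats` and `torch.cuda.max_memory_allocated`). A minimal sketch; the model and sizes are illustrative and it requires a CUDA device:

```python
import torch

device = torch.device("cuda")
torch.cuda.reset_peak_memory_stats(device)  # clear the peak counter

# run the workload whose peak memory we want to measure
model = torch.nn.Linear(1024, 1024).to(device)
loss = model(torch.randn(64, 1024, device=device)).sum()
loss.backward()

# peak bytes held in tensors since the reset, at allocator granularity
peak = torch.cuda.max_memory_allocated(device)
print(f"peak allocated: {peak / 1024 ** 2:.1f} MiB")
```

Unlike process-level psutil readings, this counts only tensor allocations on the device, so small differences between models remain visible.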

@xuedue Hi, thanks for using MemCNN. Whereas `memcnn.AdditiveCoupling` expects `Fm` and `Gm` to take a single input `x` and produce a single output `y` of the same shape, `memcnn.AffineCoupling` expects...
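A minimal sketch of the shape contract; the submodule, channel counts, and the `adapter=` keyword follow the pattern from the MemCNN examples, but the concrete numbers here are illustrative:

```python
import torch
import torch.nn as nn
import memcnn

channels = 10  # both couplings split the channel dimension in half

def submodule(c):
    # Fm/Gm see half the channels and must return the same shape they receive
    return nn.Sequential(nn.Conv2d(c, c, kernel_size=3, padding=1),
                         nn.BatchNorm2d(c), nn.ReLU())

# AdditiveCoupling: Fm and Gm each map (N, channels // 2, H, W) -> same shape
additive = memcnn.AdditiveCoupling(Fm=submodule(channels // 2),
                                   Gm=submodule(channels // 2))

# AffineCoupling needs a scale and a translation; AffineAdapterNaive derives
# both from a single-output Fm/Gm
affine = memcnn.AffineCoupling(Fm=submodule(channels // 2),
                               Gm=submodule(channels // 2),
                               adapter=memcnn.AffineAdapterNaive)

x = torch.randn(2, channels, 8, 8)
assert additive(x).shape == x.shape
assert affine(x).shape == x.shape
```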

> Fm and Gm need to have input x and output y of the same shape. If I want to implement a reversible MLP with different input and output channels,...

> If I change the output to 100 dimensions and only take two of them, it doesn't make sense.

Why? Could you elaborate? Doesn't this work for your use case? Alternatively,...
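To illustrate the suggestion (the wrapper and names here are hypothetical; the invertible core would be whatever memcnn module you already built): keep the invertible part at the full input width and select the features you need afterwards. The slice itself is not invertible, but the core still runs with memcnn's activation-memory savings.

```python
import torch.nn as nn

class SliceHead(nn.Module):
    """Hypothetical wrapper: invertible core at full width, then keep k features."""
    def __init__(self, invertible_core, keep=2):
        super().__init__()
        self.core = invertible_core  # e.g. maps (N, 100) -> (N, 100), invertible
        self.keep = keep

    def forward(self, x):
        y = self.core(x)           # invertible, memory-efficient part
        return y[:, :self.keep]    # (N, 100) -> (N, keep); this step is lossy
```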

> Thank you for your reply, I modified it according to your description, but it brought another error.
> ![image](https://user-images.githubusercontent.com/37701943/127422192-99ab1a94-fc61-4e61-9642-d2ce4104add2.png)
> ![image](https://user-images.githubusercontent.com/37701943/127422213-73e6240e-0499-4444-a4bd-a37b0258e3b9.png)
>
> If I use AffineAdapterNaive instead of...

Ok, thanks for clarifying your question. First, I would suggest making layers 1-6 invertible. This should be simple (the `in_features`/`out_features` ratio is 1:1, which is what memcnn supports very well)...
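A minimal sketch of making such an equal-width layer invertible with memcnn; the width and submodules are illustrative:

```python
import torch
import torch.nn as nn
import memcnn

features = 512  # in_features == out_features, the 1:1 case described above

class HalfMLP(nn.Module):
    # Fm/Gm each transform half of the feature vector, preserving its size
    def __init__(self, half):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(half, half), nn.ReLU())

    def forward(self, x):
        return self.net(x)

coupling = memcnn.AdditiveCoupling(Fm=HalfMLP(features // 2),
                                   Gm=HalfMLP(features // 2))
layer = memcnn.InvertibleModuleWrapper(fn=coupling, keep_input=True,
                                       keep_input_inverse=True)

x = torch.randn(4, features)
y = layer(x)
x_rec = layer.inverse(y)  # reconstructs x up to numerical precision
```

Stacking several such wrapped couplings gives an invertible MLP trunk; setting `keep_input=False` during training lets memcnn discard and recompute the input activations to save memory.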

> Can the network be reversible if the input must be a 9216-dimensional vector and the output is a 1024-dimensional vector?

As far as I know this can't be...
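As a reasoning sketch (my addition, not part of the truncated reply above): an invertible map must be injective, and no linear layer can be injective when it reduces dimension, by rank-nullity:

```latex
% For a linear map A : \mathbb{R}^{9216} \to \mathbb{R}^{1024}:
\dim \ker A = 9216 - \operatorname{rank} A \ge 9216 - 1024 = 8192 > 0
```

So distinct inputs necessarily collide, and the mapping cannot be inverted without extra stored information.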