Effect of flattening and reshaping

Open lweitkamp opened this issue 3 years ago • 2 comments

My question is regarding the following code snippet:

https://github.com/Project-MONAI/research-contributions/blob/1b93b892f31390e98c4faa68e5660c7fcfb22083/SwinUNETR/Pretrain/models/ssl_head.py#L96-L97

What is the purpose of these two lines? When running the code, I observe the following shapes:

x_out: torch.Size([1, 768, 3, 3, 3])
x_rec_flatten: torch.Size([1, 768, 27])
x_rec_view: torch.Size([1, 768, 3, 3, 3])

It does not seem to change anything, but I might be mistaken.
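To illustrate what I mean, here is a minimal standalone sketch (plain PyTorch with the shapes printed above, not the repository code): flattening the spatial dimensions and immediately viewing back to the original shape appears to be a round trip on a contiguous tensor.

```python
import torch

# Shapes as observed above: batch 1, 768 channels, 3x3x3 feature map.
x_out = torch.randn(1, 768, 3, 3, 3)

# Flatten the three spatial dims into one: (1, 768, 27).
x_rec_flatten = x_out.flatten(start_dim=2, end_dim=4)

# View back to the original 5D shape: (1, 768, 3, 3, 3).
_, c, h, w, d = x_out.shape
x_rec_view = x_rec_flatten.view(-1, c, h, w, d)

# For a contiguous input, the round trip reproduces x_out exactly.
assert torch.equal(x_out, x_rec_view)
```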

As an additional question: for the contrastive loss and rotation heads, do you only use one of the 3*3*3 channels (i.e. one of the 27 spatial positions) for each? That would make sense; I just want to confirm.

lweitkamp avatar Aug 01 '22 14:08 lweitkamp

Hi @lweitkamp, thanks for reporting this issue; let me double-check the original implementation regarding the flatten/reshape. For the contrastive and rotation heads, yes: using one of the channels is a simple choice here. Thanks again for the comment.
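For illustration only (this is a sketch, not the exact repository code; the head definitions, output sizes, and token indices here are assumptions), the idea is that each auxiliary head consumes a single token from the flattened feature map rather than the full 3*3*3 grid:

```python
import torch
import torch.nn as nn

# Illustrative only: encoder output with 768 channels on a 3x3x3 grid.
x_out = torch.randn(1, 768, 3, 3, 3)

# Flatten the spatial dims to tokens and move them to dim 1: (1, 27, 768).
tokens = x_out.flatten(start_dim=2).transpose(1, 2)

# Hypothetical heads: the rotation head predicts one of 4 rotations,
# the contrastive head projects to a 512-dim embedding.
rotation_head = nn.Linear(768, 4)
contrastive_head = nn.Linear(768, 512)

# Each head takes a single token (here token 0 and token 1, chosen
# arbitrarily for this sketch) instead of all 27 tokens.
x_rot = rotation_head(tokens[:, 0])             # (1, 4)
x_contrastive = contrastive_head(tokens[:, 1])  # (1, 512)
```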

tangy5 avatar Aug 23 '22 00:08 tangy5

I stumbled over the same question. Do you have any answer for the flattening and reshaping?

marvnmtz avatar Dec 13 '22 10:12 marvnmtz