Samit
At line 29 in Original_CapsNet.py, FCCapsuleLayer is missing.
Hi Hongyang, thanks for sharing the code. Since the code for the transductive tasks is not available, could you please share the hyper-parameters, such as the number of hidden layers in the...
Although you briefly mentioned the pre-processing steps used to extract patches from WSIs, many details remain unclear. Sharing the generation code would make the results more convincing!
val_acc is always zero because "correct" is an integer tensor. You should convert "correct" to a float before computing "correct / len(test_loader.dataset)". For example, add this in line...
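A minimal sketch of the proposed fix (variable names taken from the issue; the surrounding evaluation loop is hypothetical). In older PyTorch releases, dividing an integer tensor by an integer floors the result to zero, so the correct count must be cast to float first:

```python
import torch

correct = torch.tensor(42)               # accumulated count of correct predictions (int tensor)
num_samples = 100                         # stand-in for len(test_loader.dataset)

# Cast to float before dividing, so the accuracy is not floored to zero
val_acc = correct.float() / num_samples
print(val_acc.item())
```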
Thanks for the awesome work! Will the code for the Camelyon 17 dataset be released? Looking forward to it.
There is a bug when reshaping the spatial-temporal input for computing spatial attention, [here](https://github.com/PKU-YuanGroup/Open-Sora-Plan/blob/main/opensora/models/ae/videobase/modules/attention.py#L59) ``` python b, c, t, h, w = q.shape q = q.reshape(b * t, c, h...
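To illustrate the reshape issue (shapes and names here are hypothetical, not the repo's actual tensors): flattening a `(b, c, t, h, w)` tensor straight to `(b*t, c, h, w)` interleaves the channel and time axes; the time axis must first be permuted next to the batch axis.

```python
import torch

b, c, t, h, w = 2, 4, 3, 8, 8
q = torch.randn(b, c, t, h, w)

# Buggy: reshaping directly mixes the c and t dimensions
q_bad = q.reshape(b * t, c, h, w)

# Correct: move time next to batch first, then flatten
q_good = q.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)

# The two tensors differ, confirming the plain reshape is wrong
print(torch.equal(q_bad, q_good))
```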
Because the first frame is excluded from interpolation in TimeUpsamle2x, as shown in the code below, ```python x, x_ = x[:, :, :1], x[:, :, 1:] x_ = F.interpolate(x_, scale_factor=(2,1,1), mode='trilinear') x = torch.concat([x, x_], dim=2) ``` the decoder output...
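A small sketch of the frame-count arithmetic (input shape hypothetical): splitting off the first frame and interpolating only the rest yields 1 + 2*(t-1) frames rather than 2*t.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 5, 8, 8)            # (b, c, t, h, w), t = 5
first, rest = x[:, :, :1], x[:, :, 1:]     # first frame is kept as-is
rest = F.interpolate(rest, scale_factor=(2, 1, 1), mode='trilinear')
out = torch.cat([first, rest], dim=2)
print(out.shape[2])                        # 1 + 2*(5-1) = 9, not 10
```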
Thanks for the nice work. Could you please release the code for MDNSal? "The readout architecture consists of a convolutional layer to reduce the number of channels followed by...
If this is your first time, please read our contributor guidelines: https://github.com/mindspore-lab/mindcv/blob/main/CONTRIBUTING.md **Describe the bug (Mandatory)** https://github.com/mindspore-lab/mindcv/blob/main/mindcv/utils/train_step.py#L81 Currently, gradient clipping is performed before gradient reduction (averaging across devices). In principle, gradient reduction should be done first, so that the gradient's direction and magnitude are more stable and accurate (this alone may already eliminate the gradient explosion), and gradient clipping should then be applied to guard against exploding gradients. - **Hardware Environment (`Ascend`/`GPU`/`CPU`)**:...
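The suggested ordering can be sketched as follows. This is a PyTorch-style analog, not MindSpore's actual `train_step` code; `reduce_then_clip` and all names are hypothetical, and a simple mean stands in for the all-reduce:

```python
import torch

def reduce_then_clip(grads_per_worker, max_norm=1.0):
    """Average per-parameter gradients across workers first, then clip by global norm."""
    # Average corresponding gradients across workers (stand-in for all-reduce)
    avg = [torch.stack(gs).mean(dim=0) for gs in zip(*grads_per_worker)]
    # Clip the *averaged* gradients so the final update norm stays bounded
    total = torch.norm(torch.stack([g.norm() for g in avg]))
    scale = min(1.0, max_norm / (total + 1e-6))
    return [g * scale for g in avg]
```

Clipping after reduction bounds the norm of the gradient that is actually applied, whereas clipping per-worker before averaging can distort both the direction and the magnitude of the averaged gradient.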