
[arXiv] The official code for "UltraLight VM-UNet: Parallel Vision Mamba Significantly Reduces Parameters for Skin Lesion Segmentation".

50 UltraLight-VM-UNet issues, sorted by most recently updated.

Hello author, while testing on the ISIC2017 dataset I loaded a previously saved model checkpoint, but the results no longer match what I achieved before. I noticed you load the weights onto the CPU; after removing that, the results improved, but they are still worse than before. I first checked whether the model parameter names match: odict_keys(['module.encoder1.0.weight', 'module.encoder1.0.bias', 'module.encoder2.0.weight', 'module.encoder2.0.bias', 'module.encoder3.0.weight', 'module.encoder3.0.bias', 'module.encoder4.0.skip_scale', 'module.encoder4.0.norm.weight', 'module.encoder4.0.norm.bias', 'module.encoder4.0.mamba.A_log', 'module.encoder4.0.mamba.D', 'module.encoder4.0.mamba.in_proj.weight', 'module.encoder4.0.mamba.conv1d.weight', 'module.encoder4.0.mamba.conv1d.bias', 'module.encoder4.0.mamba.x_proj.weight', 'module.encoder4.0.mamba.dt_proj.weight', 'module.encoder4.0.mamba.dt_proj.bias', 'module.encoder4.0.mamba.out_proj.weight', 'module.encoder4.0.proj.weight', 'module.encoder4.0.proj.bias', 'module.encoder5.0.skip_scale', 'module.encoder5.0.norm.weight', 'module.encoder5.0.norm.bias', 'module.encoder5.0.mamba.A_log', 'module.encoder5.0.mamba.D', 'module.encoder5.0.mamba.in_proj.weight', 'module.encoder5.0.mamba.conv1d.weight', 'module.encoder5.0.mamba.conv1d.bias',...
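The `module.` prefix on every key above indicates the checkpoint was saved from a model wrapped in `torch.nn.DataParallel`. Below is a minimal sketch of loading such a checkpoint into an unwrapped model; the checkpoint path and constructor call are placeholders, not the repo's actual test script, and `map_location` controls which device the tensors land on.

```python
import torch
from collections import OrderedDict

# Placeholder path; map_location decides where the tensors are loaded.
checkpoint = torch.load("best.pth", map_location="cpu")

# Strip the DataParallel prefix so the keys match an unwrapped model.
state_dict = OrderedDict(
    (k.replace("module.", "", 1), v) for k, v in checkpoint.items()
)

# Hypothetical: construct the model with the exact config used in training.
# model = UltraLight_VM_UNet(...)
# model.load_state_dict(state_dict, strict=True)  # raises on any key mismatch
# model.eval()  # forgetting eval() (BatchNorm/Dropout) is a common cause of metric drift
```

If the metrics still differ after this, it is also worth confirming that the test-time preprocessing (resize, normalization) matches what was used when the checkpoint was produced.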

Hello! I hope this message finds you well. I'm encountering some challenges while training my own dataset, and the results are not as satisfactory as I hoped. I was wondering...

Download the ISIC 2017 train dataset from [this](https://challenge.isic-archive.com/data) link and extract both training dataset and ground truth folders inside the /data/dataset_isic17/. Could you explain in more detail how to carry out this step?
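For what it's worth, here is a small sketch that checks the layout after extraction; the sub-folder names below are assumptions based on the official ISIC 2017 archive names, so verify them against what the repo's preparation script actually reads:

```python
import os

# Assumed layout; confirm the exact folder names against the repo's
# ISIC 2017 preparation script before running it.
root = "./data/dataset_isic17"
expected = [
    "ISIC-2017_Training_Data",               # images (assumed name)
    "ISIC-2017_Training_Part1_GroundTruth",  # segmentation masks (assumed name)
]

for name in expected:
    path = os.path.join(root, name)
    print(f"{path}: {'found' if os.path.isdir(path) else 'MISSING'}")
```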

How should this be resolved? Does it require changing the code, and if so, what exactly should be changed?

I set up my own dataset following the method you shared, but training is very slow: 9 epochs took about 40 minutes, even though my dataset is only about a quarter the size of ISIC. Why is that? Any help would be appreciated! ![image](https://github.com/wurenkai/UltraLight-VM-UNet/assets/67263797/46a315fb-ab4f-483a-bbe6-eaf4cf69e544)

TypeError: causal_conv1d_fwd(): incompatible function arguments. The following argument types are supported: 1. (arg0: at::Tensor, arg1: at::Tensor, arg2: Optional[at::Tensor], arg3: Optional[at::Tensor], arg4: bool) -> at::Tensor. What is causing this?
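This signature mismatch usually means the installed `causal-conv1d` build does not match what the installed `mamba-ssm` expects; the compiled extension's argument list changed between releases. A small diagnostic sketch to print the versions so they can be compared against whichever pairing the repo's environment instructions pin (the correct pairing itself is something to check there, not asserted here):

```python
from importlib.metadata import version, PackageNotFoundError

# Print the versions of the packages whose compiled extensions must agree.
for pkg in ("mamba-ssm", "causal-conv1d", "torch"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")
```

Reinstalling the two packages as a matched pair, rather than upgrading one in isolation, typically resolves this error.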

Hello author, I see this project builds on VM-UNet, and the earlier multi-class questions also used VM-UNet. Have you been able to train multi-class segmentation successfully with VM-UNet? When I trained with that project, the loss did not decrease and the final results were very poor. Also, regarding #54: is the idea to replace the model in train_synapse.py with UltraLight_VM_UNet and then make the corresponding series of modifications? Thanks!

I want to ask how to adapt it for 2D multi-class classification tasks. What do I need to modify?
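As a generic recipe (not a procedure confirmed by the authors), the usual single-class to multi-class changes are: give the final layer one output channel per class, feed raw logits to `CrossEntropyLoss` instead of a sigmoid plus BCE/Dice, and take `argmax` rather than thresholding. Whether `UltraLight_VM_UNet` exposes a `num_classes` constructor argument is an assumption to verify; the tensors below stand in for the model so the sketch runs on its own:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4  # hypothetical; include the background class

# Assumed usage -- check the model's constructor, and remove any final
# sigmoid so raw logits reach the loss:
# model = UltraLight_VM_UNet(num_classes=NUM_CLASSES)
# logits = model(images)                        # expected (B, NUM_CLASSES, H, W)

logits = torch.randn(2, NUM_CLASSES, 256, 256)           # stand-in for model output
targets = torch.randint(0, NUM_CLASSES, (2, 256, 256))   # integer class-index masks

criterion = nn.CrossEntropyLoss()
loss = criterion(logits, targets)
pred = logits.argmax(dim=1)  # per-pixel class map for metrics
```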

Hello, thank you so much for your outstanding work! I encountered some problems during training: I could not reach the metrics described in the paper. In ISIC...

I have downloaded the PH2 dataset and changed the .bmp files inside it to .jpg. How should I modify the parameters and paths in Prepare_PH2.py, and how do I then run it? ![image](https://github.com/user-attachments/assets/ee26b20e-bb07-4a62-a971-b2b0efd813b5)
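One caution before editing the script: renaming `.bmp` files to `.jpg` only changes the extension, not the encoding, so an actual conversion is safer. A minimal sketch using Pillow, with placeholder paths; point the paths in `Prepare_PH2.py` at wherever the converted images end up:

```python
import os
from PIL import Image

src_dir = "./data/PH2/images_bmp"  # placeholder: original .bmp files
dst_dir = "./data/PH2/images"      # placeholder: folder Prepare_PH2.py will read

os.makedirs(dst_dir, exist_ok=True)
for name in os.listdir(src_dir):
    if name.lower().endswith(".bmp"):
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        out = os.path.splitext(name)[0] + ".jpg"
        img.save(os.path.join(dst_dir, out), quality=95)  # real JPEG re-encode
```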