
The TensorFlow/Keras implementation of U-net, V-net, U-net++, UNET 3+, Attention U-net, R2U-net, ResUnet-a, U^2-Net, TransUNET, and Swin-UNET, with optional ImageNet-trained backbones.

29 keras-unet-collection issues, sorted by recently updated

The figure below shows that the value of a decoder neuron is based on 5 other values: ![image](https://user-images.githubusercontent.com/9789042/182028198-ae7494db-5d7c-4d13-a6d7-c72487e951da.png) When trying to reproduce this in the code, I get the following:...

Hello, I tried using your model; it works perfectly fine on a GPU, but when I try to use it with a TPU in Colab, and also on a GCP VM with a TPU, the same...

I am trying to use the unet_2d model from the collection. However, the learning curve shows that the data is overfitting. I want to add regularizers and dropout to the layers...

How can I add an activation function like tanh for the "output_activation" parameter? @yingkaisha Thanks!
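One possible approach for the question above, sketched in plain Keras: if the builder is called with `output_activation=None` (so the final layer stays linear), a custom activation such as tanh can be appended as an ordinary `Activation` layer on top of the returned model. The tiny functional model below is only a stand-in for a keras-unet-collection builder; the wrapping pattern is the part being shown.

```python
# Hedged sketch: append a tanh output activation to a model whose final
# layer is linear. The two Conv2D layers stand in for a builder such as
# models.unet_2d(..., output_activation=None); only the wrapping pattern
# is the point here.
import tensorflow as tf

inputs = tf.keras.Input(shape=(8, 8, 3))
x = tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu")(inputs)
linear_out = tf.keras.layers.Conv2D(1, 1, activation=None)(x)  # linear head

# Append the desired activation as a separate layer.
outputs = tf.keras.layers.Activation("tanh")(linear_out)
model = tf.keras.Model(inputs, outputs)

# tanh squashes the linear logits into (-1, 1).
preds = model(tf.random.normal((2, 8, 8, 3))).numpy()
```

The same wrapping works for any Keras activation name, since the builder's output tensor is a regular Keras tensor once the built-in output activation is disabled.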

I tested all of the models and they all worked fine, except vnet_2d: `model.vnet_2d((128, 128, 3), filter_num=[64, 128, 256, 512], n_labels=7, res_num_ini=1, res_num_max=3, activation='ReLU', output_activation="Softmax", batch_norm=True, pool=True, unpool=True, name='vnet')`...

Hi, please, I want the closest configuration to the original **Swin Unet** paper:
```
outputs = base.swin_unet_2d_base(input_img, filter_num_begin=64, depth=4, stack_num_down=2, stack_num_up=2, patch_size=(2, 2), num_heads=[4, 8, 8, 8], window_size=[4, 2, 2,...
```

Hi, I want to test UNET 3+ with the same configuration as in the original paper. Is this right? `base.unet_3plus_2d_base(input_img, filter_num_down=[64, 128, 256, 512, 1024], filter_num_skip=[64, 64, 64, 64], filter_num_aggregate=16, weights=None)` Also, I want to know what `filter_num_aggregate` is and...

I got the error below when I tried to use transunet_2d: `model = models.transunet_2d((128, 128, 3), filter_num=[64, 128, 256, 512], n_labels=3, stack_num_down=2, stack_num_up=2, embed_dim=768, num_mlp=3072, num_heads=12, num_transformer=12, activation='ReLU', mlp_activation='GELU', output_activation='Softmax', batch_norm=True, pool=True,`...

`base_model = models.att_unet_2d((128, 128, 3), filter_num=[64, 128, 256, 512], n_labels=1, stack_num_down=2, stack_num_up=2, activation='ReLU', atten_activation='ReLU', attention='add', output_activation='Sigmoid', batch_norm=True, pool=False, unpool=False, backbone='VGG16', weights='imagenet', freeze_backbone=True, freeze_batch_norm=True, name='attunet')` then `print(base_model.summary())` reports params = 10M, but...

Hi, is there any reason you didn't use "model.fit_generator" in your example? I tried to use "model.fit_generator" with "transunet_2d" and got the error below:
```
Epoch 1/10
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call...
```