3DUnet-Tensorflow-Brats18
3D Unet biomedical segmentation model powered by tensorpack with fast io speed
How can I show the segmentation result (different colours) overlaid on the original image, as in your README? I tried findContours, but it does not work. Thank you!
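One way to get the coloured overlay without contour extraction is to alpha-blend a colour into the image wherever a class is predicted. The sketch below is a minimal numpy-only illustration (the `overlay_mask` helper and its arguments are mine, not part of the repo); apply it per 2D slice of the volume, once per class with a different colour:

```python
import numpy as np

def overlay_mask(image, mask, color=(255, 0, 0), alpha=0.5):
    """Blend `color` into `image` wherever `mask` is non-zero.

    image: HxWx3 uint8 array (e.g. one slice of the MRI volume).
    mask:  HxW boolean/int array, non-zero where the class is predicted.
    """
    out = image.astype(np.float32).copy()
    color = np.asarray(color, dtype=np.float32)
    idx = mask > 0
    # Weighted blend only on masked pixels; the rest stays untouched.
    out[idx] = (1.0 - alpha) * out[idx] + alpha * color
    return out.astype(np.uint8)

# Example: paint one label of a multi-class segmentation onto a slice.
img = np.zeros((4, 4, 3), dtype=np.uint8)
seg = np.zeros((4, 4), dtype=np.uint8)
seg[1:3, 1:3] = 1
blended = overlay_mask(img, seg == 1, color=(255, 0, 0), alpha=0.5)
```

If you do want crisp outlines instead of filled regions, cv2.findContours needs a single-label binary uint8 mask per slice (e.g. `(seg == 1).astype(np.uint8)`), not the raw multi-class label map.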
Traceback (most recent call last):
  File "D:\1_Deep_learning\codes\Segmentation Deep learing\3DUnet-Tensorflow-Brats18-master\train.py", line 211, in
    data=QueueInput(get_train_dataflow()),
  File "D:\1_Deep_learning\codes\Segmentation Deep learing\3DUnet-Tensorflow-Brats18-master\data_sampler.py", line 240, in get_train_dataflow
    ds = BatchData(MapData(ds, preprocess), config.BATCH_SIZE)
  File "D:\1_Deep_learning\codes\Segmentation Deep learing\3DUnet-Tensorflow-Brats18-master\data_sampler.py", ...
Thanks for your open-source code. First, I have a bug: your code gets killed or hangs when epoch 1 finishes, instead of continuing training with epoch 2... Would you like...
I ran the code with patch size 128x128x128 and got CUDA out of memory. So I changed the patch size to 20x128x128, and it says: ValueError: Dimension 0 in both shapes must be...
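This shape mismatch is typical of U-Net-style encoders/decoders: each pooling level halves every spatial dimension, so each patch dimension must be divisible by 2^depth or the upsampled feature maps cannot be concatenated with their skip connections (128 is divisible by 16; 20 is not). A small sanity check, assuming a hypothetical depth of 4 downsamplings (adjust to the network's actual depth):

```python
def check_patch_size(patch_size, num_downsamplings=4):
    """Each dimension must survive `num_downsamplings` exact halvings,
    or the decoder's upsampled maps will not align with the encoder's."""
    factor = 2 ** num_downsamplings
    bad = [d for d in patch_size if d % factor != 0]
    if bad:
        raise ValueError(
            "Patch dims %s are not divisible by %d; "
            "skip connections cannot be concatenated." % (bad, factor))
    return True

check_patch_size([128, 128, 128])    # OK: 128 = 16 * 8
# check_patch_size([20, 128, 128])   # raises ValueError: 20 % 16 != 0
```

To reduce memory, shrink all three dimensions to a valid size (e.g. 64x64x64) rather than shrinking one axis below the divisibility constraint.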
Hello! I am trying to save the model generated once _train.py_ has completed successfully. I am not familiar with TensorFlow or Keras, so I really don't know where...
Traceback (most recent call last):
  File "train.py", line 211, in
    data=QueueInput(get_train_dataflow()),
  File "C:\Users\idir.hired\Desktop\algoSeg\3DUnet-Tensorflow-Brats18-master\data_sampler.py", line 242, in get_train_dataflow
    ds = PrefetchDataZMQ(ds, 6)
  File "C:\Users\idir.hired\AppData\Local\Continuum\anaconda3\envs\table1\lib\site-packages\tensorpack\dataflow\parallel.py", line 274, in __init__
    super(PrefetchDataZMQ, self).__init__()
  File ...
To set up the environment, kindly provide a requirements.txt.
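The repo does not ship a requirements.txt. As a hypothetical starting point only (these packages are inferred from the imports seen in the tracebacks above and from BraTS data being NIfTI; the pins are guesses, not the author's tested environment):

```text
tensorflow-gpu>=1.4,<2.0
tensorpack
numpy
opencv-python
nibabel
```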
I train on my own dataset, without 5-fold cross validation. I set PATCH_SIZE = [24, 24, 128] and BATCH_SIZE = 1, but it always reports errors such as: Could...
I'm running this model on a single 1080 Ti, and nvidia-smi shows that 8 GB is already allocated on the GPU while the GPU utilization rate remains 0% most of the time....
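High memory use with near-zero utilization usually means the GPU is starved: the allocated memory is the graph and buffers, while the training loop sits waiting on slow IO or preprocessing. The fix is to overlap data loading with compute, which is what the repo's tensorpack dataflow is meant to do. As a library-independent illustration of the idea (all names here are mine), a background thread plus a bounded queue:

```python
import queue
import threading
import time

def prefetch(generator, buffer_size=4):
    """Run `generator` in a background thread so the consumer
    (e.g. a training loop) rarely has to wait on IO."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks the end of the stream

    def producer():
        for item in generator:
            q.put(item)          # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

def slow_loader(n):
    """Stand-in for disk IO / preprocessing of one batch."""
    for i in range(n):
        time.sleep(0.01)
        yield i

batches = list(prefetch(slow_loader(5)))
```

If utilization stays at 0% even with parallel prefetching, profile the dataflow itself (e.g. with tensorpack's TestDataSpeed) to find the slow stage.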