can't test "High Quality Segmentation for Ultra High-resolution Images"
I ran test.py but hit this error: the "_seg.png" files are missing.
python test.py --dir ./data/DUTS-TE --model ./model_10000 --output ./output --clear
/home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torchvision/io/image.py:11: UserWarning: Failed to load image Python extension: /home/hoang/anaconda3/envs/tranSOD/lib/python3.6/site-packages/torchvision/image.so: undefined symbol: _ZNK2at10TensorBase21__dispatch_contiguousEN3c1012MemoryFormatE
  warn(f"Failed to load image Python extension: {e}")
before_Parser_time: 1659253874.6776164
Hyperparameters: {'dir': './data/DUTS-TE', 'model': './model_10000', 'output': './output', 'global_only': False, 'L': 900, 'stride': 450, 'clear': True, 'ade': False}
ASPP_4level
12 images found
before_for_time: 1659253881.0989463 ; before_for_time - before_Parser_time: 6.421329975128174
Traceback (most recent call last):
File "test.py", line 106, in
I suggest you insert pdb or a print before line 138 of dataset/offline_dataset_crm_pad32.py to see whether you can load the image manually. Thanks.
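A minimal sketch of that suggestion, assuming the load around line 138 looks like the line quoted later in this thread (the `im` variable and the `_seg.png` suffix are inferred, not confirmed):

```python
from PIL import Image

# dataset/offline_dataset_crm_pad32.py, just before the seg load (~line 138)
import pdb; pdb.set_trace()              # or: print(im, im[:-4] + '_seg.png')
seg = Image.open(im[:-4] + '_seg.png').convert('L')  # raises if no *_seg.png exists
```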
I can load the img variable from the image .png file, but for the seg variable the dataset is missing the "_seg.png" files. I used --dir ./data/DUTS-TE, but there is no "*_seg.png" file there.
So, if you don't want to modify the dataset code, please change your dataset format. Or keep your dataset as-is and modify the code that loads seg.png. Thanks.
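If you go the first route, one way to change the dataset format is to copy each coarse mask next to its image under the name the loader expects. A hypothetical helper; the `_seg.png` convention and the mask folder are assumptions from this thread, not from the repo's docs:

```python
import os
import shutil

image_dir = './data/DUTS-TE'   # images, e.g. xxx.jpg
mask_dir = './coarse_masks'    # coarse masks from another model, e.g. xxx.png

for name in os.listdir(image_dir):
    stem, ext = os.path.splitext(name)
    if ext.lower() not in ('.jpg', '.jpeg', '.png'):
        continue
    mask = os.path.join(mask_dir, stem + '.png')
    if os.path.exists(mask):
        # give the mask the name the loader appears to expect: <image stem>_seg.png
        shutil.copy(mask, os.path.join(image_dir, stem + '_seg.png'))
```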
Thank you for your answer. What is the meaning of the seg variable and the "_seg.png" image? I want to modify the code that generates the seg variable, which is used to produce the "coord" and "cell" variables.
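For context on those names: in LIIF-style implicit-function models, which this paper's continuous refinement follows, coord is the grid of normalized pixel-center coordinates and cell is the normalized per-pixel size. A generic sketch of that idea; whether it matches this repo's exact coord/cell construction is an assumption:

```python
import torch

def make_coord(h, w):
    # normalized centers of an h x w grid in [-1, 1], shape (h*w, 2)
    ys = -1 + (2 * torch.arange(h).float() + 1) / h
    xs = -1 + (2 * torch.arange(w).float() + 1) / w
    return torch.stack(torch.meshgrid(ys, xs), dim=-1).view(-1, 2)

coord = make_coord(256, 256)
cell = torch.ones_like(coord)
cell[:, 0] *= 2 / 256  # normalized cell height
cell[:, 1] *= 2 / 256  # normalized cell width
```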
I tried to test with model_10000, taking seg from a grayscale image, but the result is very weird:

# changed: load a same-named .png as the mask instead of *_seg.png
seg = Image.open(im[:-4]+'.png').convert('L')
# changed: crop and bilinearly resize the input image itself, read as grayscale
seg = self.resize_bi(crop_lambda(Image.open(im).convert('L')))
Hi, have you gotten the ENTITY coarse partition network to run? I am getting the following error when running this coarse partition network on the instances2017 dataset:
ERROR [08/05 15:18:06 d2.engine.train_loop]: Exception during training:
Traceback (most recent call last):
  File "/home/hndx/detectron2-main/detectron2/engine/train_loop.py", line 149, in train
    self.run_step()
  File "/home/hndx/detectron2-main/detectron2/engine/defaults.py", line 494, in run_step
    self._trainer.run_step()
  File "/home/hndx/detectron2-main/detectron2/engine/train_loop.py", line 268, in run_step
    data = next(self._data_loader_iter)
  File "/home/hndx/detectron2-main/detectron2/data/common.py", line 234, in __iter__
    for d in self.dataset:
  File "/home/hndx/anaconda3/envs/llz0/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
    data = self._next_data()
  File "/home/hndx/anaconda3/envs/llz0/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
    return self._process_data(data)
  File "/home/hndx/anaconda3/envs/llz0/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
    data.reraise()
  File "/home/hndx/anaconda3/envs/llz0/lib/python3.7/site-packages/torch/_utils.py", line 434, in reraise
    raise exception
AssertionError: Caught AssertionError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/home/hndx/anaconda3/envs/llz0/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/hndx/anaconda3/envs/llz0/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 32, in fetch
    data.append(next(self.dataset_iter))
  File "/home/hndx/detectron2-main/detectron2/data/common.py", line 201, in __iter__
    yield self.dataset[idx]
  File "/home/hndx/detectron2-main/detectron2/data/common.py", line 90, in __getitem__
    data = self._map_func(self._dataset[cur_idx])
  File "/home/hndx/detectron2-main/detectron2/utils/serialize.py", line 26, in __call__
    return self._obj(*args, **kwargs)
  File "/home/hndx/detectron2-main/detectron2/projects/EntitySeg/entityseg/data/dataset_mapper.py", line 197, in __call__
    instances.instanceid = instance_id_list
  File "/home/hndx/detectron2-main/detectron2/structures/instances.py", line 66, in __setattr__
    self.set(name, val)
  File "/home/hndx/detectron2-main/detectron2/structures/instances.py", line 84, in set
    ), "Adding a field of length {} to a Instances of length {}".format(data_len, len(self))  ##lizhi long
AssertionError: Adding a field of length 0 to a Instances of length 2
> detectron2-main

I've seen it a lot in some issues, but I don't know what detectron2-main means. What does it do?
It is the detectron2 code downloaded according to the README of the entity network; detectron2-main is just the folder name of the main-branch download from GitHub.
[screenshot: the entity network's README, showing the detectron2 download step]
I think this repo has two projects: one is entity, the other is High-Quality-Segmentation. Which one are you working with? I'm working with High-Quality-Segmentation.
I'm working with entity. The first picture shows the README file of the entity network. Which one have you worked with?
“High Quality Segmentation for Ultra High-resolution Images” doesn't need detectron2. Thanks.
I got it, Thanks
> I tried to test with model_10000, taking seg from a grayscale image, but the result is very weird:
> seg = Image.open(im[:-4]+'.png').convert('L')
> seg = self.resize_bi(crop_lambda(Image.open(im).convert('L')))
So, what's your problem? The coarse mask from a segmentation model is needed. Thanks.
I used the coarse mask from pspnet
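For anyone reproducing this, a minimal sketch of producing a coarse mask with an off-the-shelf segmentation model and saving it under the expected name. It uses torchvision's DeepLabV3 as a stand-in for PSPNet, and the binarization and `_seg.png` naming are assumptions from this thread:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

# stand-in for PSPNet: any model that yields a per-pixel label map works
model = deeplabv3_resnet50(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

im_path = './data/DUTS-TE/example.jpg'  # hypothetical input image
img = Image.open(im_path).convert('RGB')
with torch.no_grad():
    out = model(preprocess(img).unsqueeze(0))['out'][0]

# binarize: treat every non-background class as foreground, save as grayscale
mask = (out.argmax(0) != 0).to(torch.uint8) * 255
Image.fromarray(mask.numpy(), mode='L').save(im_path[:-4] + '_seg.png')
```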
> coarse mask

I understood seg.png as the raw mask from another segmentation neural network. So "High Quality Segmentation for Ultra High-resolution Images" is a post-processing step, right?
yeah!