
evaluation

Open terasakisatoshi opened this issue 5 years ago • 8 comments

terasakisatoshi avatar Dec 19 '18 03:12 terasakisatoshi

@lxy5513

Your latest metrics.txt attached in #6 works on my PC. It was helpful for me.

the total pck is: 0.66
INFO:__main__:head: 0.85	shoulder: 0.52	ankle: 0.51	elbow: 0.66	wrist: 0.53	hip: 0.73

What's more, I added several lines of code to get_example in dataset.py to fit metrics.py

Did you mean adding an item of is_visible so it can be accessed like below?

is_visible = test_set.get_example(idx)['is_visible'] 
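For reference, the change being discussed presumably amounts to returning the visibility flags alongside the other items from get_example. A minimal self-contained sketch of that idea, with a toy in-memory label store standing in for the real dataset.py (the names `ToyPoseDataset` and `_labels` here are hypothetical, not the repository's actual code):

```python
# Sketch: return an 'is_visible' item from get_example so metrics.py
# can skip occluded keypoints. The toy in-memory label store below is
# hypothetical; the real dataset.py loads annotations from disk.

class ToyPoseDataset:
    def __init__(self, labels):
        # labels: list of dicts with 'keypoints' and 'is_visible' per example
        self._labels = labels

    def get_example(self, idx):
        entry = self._labels[idx]
        return {
            'keypoints': entry['keypoints'],
            'is_visible': entry['is_visible'],  # the added item
        }

labels = [{'keypoints': [(10, 20), (30, 40)], 'is_visible': [True, False]}]
dataset = ToyPoseDataset(labels)
visible = dataset.get_example(0)['is_visible']  # [True, False]
```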

terasakisatoshi avatar Dec 19 '18 05:12 terasakisatoshi

@terasakisatoshi

Did you mean adding an item of is_visible so it can be accessed like below?

Yes, it is.

With my model, the total PCK reaches 0.84, as follows:

(screenshot of the evaluation results)

lxy5513 avatar Dec 19 '18 06:12 lxy5513

I should have said that the result I mentioned above is for MobileNetV2 with insize 224x224. On the other hand, the result for ResNet18 is as follows.

# ResNet18, insize 224x224
INFO:__main__:the total pck is: 0.76
INFO:__main__:head: 0.89	shoulder: 0.59	ankle: 0.6	elbow: 0.78	wrist: 0.7	hip: 0.78
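For anyone reproducing these numbers: PCK counts a predicted keypoint as correct when its distance to the ground truth falls below a fraction of a reference length (for PCKh, the head-segment length). A minimal numpy sketch of the metric itself; the threshold convention (`alpha=0.5`) and the reference length are assumptions here, so check metrics.py for the exact definition this repository uses:

```python
import numpy as np

def pck(pred, gt, is_visible, ref_length, alpha=0.5):
    """Percentage of Correct Keypoints.

    pred, gt: arrays of shape (n_keypoints, 2) with (y, x) coordinates.
    is_visible: boolean mask; only visible keypoints are scored.
    ref_length: reference scale (e.g. head-segment length for PCKh).
    A keypoint counts as correct if its error < alpha * ref_length.
    """
    pred = np.asarray(pred, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    visible = np.asarray(is_visible, dtype=bool)
    dist = np.linalg.norm(pred - gt, axis=1)
    correct = dist[visible] < alpha * ref_length
    return correct.mean() if correct.size else 0.0

# Errors are 2, 10, and 3 pixels against a 5-pixel threshold,
# so 2 of 3 visible keypoints are correct.
pred = [(10, 10), (30, 20), (33, 30)]
gt = [(10, 12), (20, 20), (30, 30)]
score = pck(pred, gt, [True, True, True], ref_length=10)
```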

terasakisatoshi avatar Dec 19 '18 07:12 terasakisatoshi

@terasakisatoshi @lxy5513 Have you evaluated on the COCO dataset?

FoolishMao avatar Jan 04 '19 02:01 FoolishMao

@FoolishMao

Well...I'm trying, but still WIP. :sob:

I will be on a long business trip, so I can't spend much time on this repository.

If you have any relevant experience, your help would be welcome. Thank you.

terasakisatoshi avatar Jan 04 '19 02:01 terasakisatoshi

@terasakisatoshi Thanks for your work! I will try too.

FoolishMao avatar Jan 04 '19 05:01 FoolishMao

I will share a notebook for evaluating a model trained on COCO. It is still WIP, so it may contain some bugs.

evalationcoco.zip

terasakisatoshi avatar Jan 04 '19 10:01 terasakisatoshi

I have evaluated on the COCO dataset, but the results are quite bad, as below:

```
CUDA_VISIBLE_DEVICES=1 python3 val_resnet34.py result/resnet34_224x224_coco/
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.063
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.178
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.034
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.040
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.103
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.156
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.330
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.129
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.052
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.298
```

```
CUDA_VISIBLE_DEVICES=1 python3 val.py result/resnet18_384x384_coco/
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.053
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.174
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.021
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.061
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.070
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.140
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.326
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.102
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.073
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.232
```

```
CUDA_VISIBLE_DEVICES=1 python3 val.py result/resnet18_224x224_coco/
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.048
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.152
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.020
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.031
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.082
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.133
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.300
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.099
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.040
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.258
```

```
CUDA_VISIBLE_DEVICES=1 python3 val_mobilenetv2.py result/mobilenetv2_224x224_1.0_coco/
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.034
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.112
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.014
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.023
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.062
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.103
 Average Recall     (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.252
 Average Recall     (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.073
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.028
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.206
```

val.zip
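In case it helps debug the low AP: pycocotools scores keypoints from a results list in which each detection carries a flat `[x1, y1, v1, x2, y2, v2, ...]` keypoints array plus a score, and a malformed layout (swapped x/y, missing visibility slots) can silently produce numbers as low as the ones above. A minimal sketch of packing one such entry (the helper name and the coordinate values are made up for illustration; category id 1 is COCO's 'person'):

```python
import json

def to_coco_keypoint_result(image_id, keypoints_xy, score):
    """Pack (x, y) keypoint predictions into the COCO results format.

    keypoints_xy: list of (x, y) tuples, one per COCO keypoint.
    The visibility slot for predictions is conventionally set to 1.
    """
    flat = []
    for x, y in keypoints_xy:
        flat.extend([float(x), float(y), 1])
    return {
        'image_id': image_id,
        'category_id': 1,  # 'person' in COCO
        'keypoints': flat,
        'score': float(score),
    }

results = [to_coco_keypoint_result(42, [(100.0, 50.0), (110.0, 55.0)], 0.9)]
payload = json.dumps(results)  # the JSON shape COCO.loadRes() expects
```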

Also, I added a new class, CocoValDataset:

```python
import json
import os

import numpy as np
from chainer.dataset import DatasetMixin
from chainercv import transforms, utils


class CocoValDataset(DatasetMixin):

    def __init__(self, insize, labels, images_folder):
        with open(labels, 'r') as f:
            self._labels = json.load(f)
        self._images_folder = images_folder
        self.insize = insize

    def get_example(self, idx):
        w, h = self.insize

        file_name = self._labels['images'][idx]['file_name']
        path = os.path.join(self._images_folder, file_name)
        image = utils.read_image(path, dtype=np.float32, color=True)
        image = image.copy()
        _, image_height, image_width = image.shape
        image = transforms.resize(image, (h, w))

        return {
            'img': image,
            'file_name': file_name,
            'image_height': image_height,
            'image_width': image_width
        }

    def __len__(self):
        return len(self._labels['images'])
```

lunalulu avatar Sep 12 '19 07:09 lunalulu