Rabeea Jaffari

Results: 22 comments by Rabeea Jaffari

> The in_channels of the keypoint_head is not correct.
>
> ```
> keypoint_head=dict(
>     type='TopDownSimpleHead',
>     in_channels=3,  # this is not correct.
>     out_channels=3,
>     num_deconv_layers=0,
>     extra=dict(final_conv_kernel=1),
> ...
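For reference, a minimal sketch of a consistent head config, assuming a ResNet-50 backbone whose final feature map has 2048 channels (the backbone and its channel count are assumptions, not the thread's actual setup):

```python
# Hedged sketch: in_channels must equal the backbone's output channel count
# (e.g. 2048 for a ResNet-50 backbone); the 3 in the quoted config is the number
# of keypoints and belongs in out_channels, not in_channels.
keypoint_head = dict(
    type='TopDownSimpleHead',
    in_channels=2048,                    # assumption: ResNet-50 backbone output channels
    out_channels=3,                      # number of keypoints, as in the quoted config
    num_deconv_layers=0,
    extra=dict(final_conv_kernel=1),
)
```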

> @rubeea You can refer to `tools/dataset/parse_macaquepose_dataset.py` to prepare your dataset.

@jin-s13 I just realized that deepposekit only supports single-object keypoint annotations and does not facilitate multiple objects in...

> @rubeea Please check the official [homepage](http://www.pri.kyoto-u.ac.jp/datasets/macaquepose/index.html).

@jin-s13 Thanks, I have downloaded the dataset and the annotations.csv file. According to the .csv, MacaquePose supports multiple object annotations per image. However,...
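A quick, purely illustrative way to check the per-image instance counts; this sketch assumes (hypothetically) that annotations.csv stores one row per annotated instance and uses an 'image file name' column, which may not match the real CSV layout:

```python
# Hedged sketch with assumed column name and row layout: count how many annotated
# macaque instances (rows) each image has in annotations.csv.
import pandas as pd

df = pd.read_csv('annotations.csv')
instances_per_image = df.groupby('image file name').size()  # adjust to the real header
print(instances_per_image.describe())
print((instances_per_image > 1).sum(), 'images with more than one annotated macaque')
```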

> Yes, it is!
>
> We will enter only once in the `for` loop:
>
> https://github.com/pvigier/perlin-numpy/blob/6f077f811f5708e504732c26dee8f2015b95da0c/perlin_numpy/perlin2d.py#L87-L96

Hi, noted with thanks. I have one more question regarding other parameters...
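For context, a hedged usage sketch of the perlin_numpy API (`generate_perlin_noise_2d` / `generate_fractal_noise_2d`): with `octaves=1` the loop at the linked lines runs exactly once, so fractal noise should reduce to plain Perlin noise.

```python
# Hedged sketch: with octaves=1 the octave loop in generate_fractal_noise_2d runs
# once, so (given the same random gradients) it matches generate_perlin_noise_2d.
import numpy as np
from perlin_numpy import generate_perlin_noise_2d, generate_fractal_noise_2d

np.random.seed(0)
plain = generate_perlin_noise_2d((256, 256), (8, 8))

np.random.seed(0)
fractal = generate_fractal_noise_2d((256, 256), (8, 8), octaves=1)

print(np.allclose(plain, fractal))  # expected True under these assumptions
```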

> Hey, run your code inside [this](https://hub.docker.com/r/bvlc/caffe/dockerfile) Docker container. Worked for me.

Hi, I am new to Docker. The container you suggested is for running Caffe, I suppose? How...

Use FLAGS.eval_crop_size instead, because there is no parameter named train_crop_size in eval.py; that parameter is defined in train.py.
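A minimal sketch of the flag situation, using absl-style flags as in the TensorFlow DeepLab code; the definition below is illustrative, not DeepLab's exact source:

```python
# Hedged sketch: eval.py defines eval_crop_size (train_crop_size lives only in
# train.py), so evaluation code must read FLAGS.eval_crop_size.
from absl import flags

FLAGS = flags.FLAGS
flags.DEFINE_list('eval_crop_size', '513,513',
                  'Image crop size [height, width] for evaluation (illustrative default).')

def get_eval_crop_size():
    # Accessing FLAGS.train_crop_size here would fail: that flag is defined in train.py only.
    return [int(v) for v in FLAGS.eval_crop_size]

if __name__ == '__main__':
    FLAGS(['eval_sketch'])        # parse defaults so the flag becomes accessible
    print(get_eval_crop_size())   # -> [513, 513]
```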

Try printing the error out explicitly by changing the exception handler to: `except Exception as e: print("ERROR:: " + str(e))`
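As a hedged illustration of that suggestion (`risky_operation` is a hypothetical stand-in for whatever the original try block runs):

```python
# Print the caught exception instead of swallowing it, so the underlying error is visible.
def risky_operation():
    # Hypothetical stand-in for the code inside the original try block.
    raise ValueError("example failure")

try:
    risky_operation()
except Exception as e:
    print("ERROR:: " + str(e))  # prints: ERROR:: example failure
```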

The ground-truth is already included in the PASCAL VOC dataset. Go to the VOC2012 > SegmentationClass folder; all the ground-truth masks are there.
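A small sketch to locate the masks, assuming the standard VOCdevkit directory layout (the root path is an assumption; adjust it to wherever the dataset is extracted):

```python
# Hedged sketch: list the ground-truth segmentation masks that ship with PASCAL VOC 2012.
from pathlib import Path

voc_root = Path('VOCdevkit/VOC2012')        # assumption: standard extraction path
mask_dir = voc_root / 'SegmentationClass'   # per-image ground-truth class masks (.png)
masks = sorted(mask_dir.glob('*.png'))
print(len(masks), 'ground-truth masks found in', mask_dir)
```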

Why is batch normalization applied after each conv layer of the encoder and decoder when the training is based on independent images and not batches?

Yeah you are right. Any leads on solving the matter so far? I didn't run create_labels.py to generate enhanced labels for training. Instead I trained with the ground-truth labels of...