Pramesh Gautam
I want to compute where a point (x, y) in the image is mapped after applying the transformation. How can this be done in airlab?
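A generic sketch of the idea (not airlab's actual API, whose names I won't guess here): for a dense displacement-field transform, the mapped location of a point (x, y) is (x, y) plus the displacement field evaluated at (x, y), bilinearly interpolated when the point falls between grid nodes.

```python
# Hypothetical sketch: a dense displacement field stored per pixel;
# the transformed point is (x, y) + d(x, y), with d interpolated off-grid.

def bilinear(field, x, y):
    """Bilinearly interpolate a 2D scalar field (list of rows) at (x, y)."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, len(field[0]) - 1), min(y0 + 1, len(field) - 1)
    fx, fy = x - x0, y - y0
    top = field[y0][x0] * (1 - fx) + field[y0][x1] * fx
    bot = field[y1][x0] * (1 - fx) + field[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def transform_point(dx, dy, x, y):
    """Map (x, y) through the displacement field with components (dx, dy)."""
    return x + bilinear(dx, x, y), y + bilinear(dy, x, y)

# Toy example: a constant displacement of (+2, -1) pixels everywhere.
dx = [[2.0] * 4 for _ in range(4)]
dy = [[-1.0] * 4 for _ in range(4)]
print(transform_point(dx, dy, 1.5, 2.0))  # (3.5, 1.0)
```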
I was going through the code of BSplineTransformation. I was trying to relate it to this [paper](https://ieeexplore.ieee.org/abstract/document/796284/), as stated in airlab's paper. However, I am not able to relate how...
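For context, the free-form deformation in Rueckert et al. (1999) defines the displacement at (x, y) as a cubic-B-spline-weighted sum over a 4x4 neighbourhood of control points. A sketch of that formula (the control grid `phi` and spacing `delta` below are made up for illustration, not taken from airlab's code):

```python
def bspline_basis(u):
    """The four cubic B-spline basis values B0..B3 at fractional offset u."""
    return [(1 - u) ** 3 / 6,
            (3 * u ** 3 - 6 * u ** 2 + 4) / 6,
            (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6,
            u ** 3 / 6]

def ffd_displacement(phi, x, y, delta):
    """Displacement at (x, y) from a uniform control grid.
    phi: 2D grid of control-point displacements, delta: grid spacing."""
    i, u = int(x / delta) - 1, x / delta - int(x / delta)
    j, v = int(y / delta) - 1, y / delta - int(y / delta)
    bu, bv = bspline_basis(u), bspline_basis(v)
    return sum(bv[m] * bu[l] * phi[j + m][i + l]
               for l in range(4) for m in range(4))

# The basis values sum to 1, so a constant control grid yields
# that constant displacement everywhere.
phi = [[2.0] * 8 for _ in range(8)]
print(ffd_displacement(phi, 10.0, 12.5, delta=4.0))
```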
In the paper, Figure 5b (Analysis for in-between class samples) is explained as **the probability to predict neither two classes by varying the combination ratio lambda**. Does it mean that...
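One plausible reading (my assumption, not confirmed against the paper): for mixed inputs lambda * x_A + (1 - lambda) * x_B, the curve reports the fraction of samples whose argmax prediction is neither class A nor class B, as a function of lambda. As a sketch:

```python
def neither_class_rate(pred_labels, class_a, class_b):
    """Fraction of mixed samples predicted as neither source class.
    pred_labels: argmax predictions on inputs lam*x_a + (1-lam)*x_b."""
    misses = sum(1 for p in pred_labels if p not in (class_a, class_b))
    return misses / len(pred_labels)

# Hypothetical predictions on five mixed A/B samples (classes 0 and 1):
print(neither_class_rate([0, 1, 2, 1, 0], class_a=0, class_b=1))  # 0.2
```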
There are 75 images that have 5 lanes, but the architecture is designed to detect only 5 categories (background + 4 lanes). I have removed those images from training and used the...
Hi, in the standard ResNet18 from [Pytorch](https://pytorch.org/vision/main/generated/torchvision.models.resnet18.html), when a `3,224,224` input is fed, `layer4` should output a feature map of size `512x7x7`. However, in your repo, the final layer's feature size is `512x28x28`....
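For reference, the 7x7 expectation follows from stock ResNet18's stride schedule: conv1 and maxpool each halve the spatial size, and the first block of layer2, layer3 and layer4 each halve it again, for an overall stride of 32. A quick bookkeeping check:

```python
# Spatial-size bookkeeping for stock torchvision ResNet18 on a 3x224x224 input.
def resnet18_map_size(side=224):
    sizes = {}
    for stage, stride in [("conv1", 2), ("maxpool", 2),
                          ("layer1", 1), ("layer2", 2),
                          ("layer3", 2), ("layer4", 2)]:
        side = side // stride  # each stride-s stage divides the side by s
        sizes[stage] = side
    return sizes

print(resnet18_map_size(224))
# layer4 comes out at 224 / 32 = 7, i.e. a 512x7x7 feature map
```

A `512x28x28` output corresponds to an overall stride of 8 rather than 32, which would happen if, for example, the repo changes the stride-2 blocks of `layer3` and `layer4` to stride 1 (a common modification for dense prediction) — I haven't verified that this is what the repo actually does.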
I'm not able to understand how the masked classification score is computed. In the paper it says **input images are masked by the corresponding segmentations**. How can the images be...
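One common construction (an assumption about the wording, not a verified reading of this paper): multiply the image elementwise by the binary mask of each segment, zeroing every pixel outside it, and then feed the masked image to the classifier to get the masked score.

```python
def mask_image(image, seg, class_id):
    """Zero out pixels outside the segmentation of `class_id`.
    image: HxW list of lists (single channel for brevity),
    seg:   HxW list of lists of integer class labels."""
    return [[px if lbl == class_id else 0
             for px, lbl in zip(img_row, seg_row)]
            for img_row, seg_row in zip(image, seg)]

image = [[10, 20], [30, 40]]
seg   = [[1, 0], [1, 1]]          # class 1 covers three of four pixels
print(mask_image(image, seg, 1))  # [[10, 0], [30, 40]]
```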
I tried passing a 2D image of a plane from the PASCAL VOC dataset to replicate the results shown in Fig. 4a. The image attached below was passed. However, while running...
In the codebase, rotary embeddings are applied only to queries and keys, but not to values. Can someone point me to the reasoning/papers behind this design? Thank you in advance!
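On why rotary embeddings skip the values (my understanding of the standard argument, not this codebase's documentation): rotating q and k by position-dependent angles makes the attention score q·k depend only on the relative offset between the two positions, because the two rotations compose into a single relative rotation inside the dot product; rotating v would instead rotate the attention output itself, with nothing downstream to undo it. A minimal sketch of the relative-position property:

```python
import math

def rope(vec, pos, base=10000.0):
    """Rotate consecutive pairs of dims by position-dependent angles
    (the standard rotary embedding; applied to q and k only)."""
    out = []
    for i in range(0, len(vec), 2):
        theta = pos * base ** (-i / len(vec))
        c, s = math.cos(theta), math.sin(theta)
        x, y = vec[i], vec[i + 1]
        out += [x * c - y * s, x * s + y * c]
    return out

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

q, k = [1.0, 0.5, -0.3, 0.8], [0.2, -0.7, 0.4, 0.1]
# The score depends only on the relative offset between positions:
s1 = dot(rope(q, 5), rope(k, 3))   # offset 2
s2 = dot(rope(q, 9), rope(k, 7))   # offset 2, shifted absolute positions
print(abs(s1 - s2) < 1e-9)  # True
```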