chowkamlee81
Could you share the pretrained model along with inference code for one sample video? Are you planning to release the pretrained model? Thanks.
Any updates on the code release?
**Describe** Model I am using (UniLM, MiniLM, LayoutLM ...): I am using LayoutXLM to train the Relation Extraction module on the XFUND dataset, with a separate model per language. Now I want to train on multiple languages combined (see the sketch below)...
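Not the official layoutlmft recipe, but a minimal sketch of one way to combine per-language training data, assuming each XFUND language split is loaded as a HuggingFace `datasets` Dataset; the loader path and config names (`xfun.de`, etc.) are placeholders for whatever your setup actually uses.

```python
# A minimal sketch: concatenate per-language XFUND training splits into one
# multilingual training set before fine-tuning LayoutXLM.
from datasets import load_dataset, concatenate_datasets

# Hypothetical loader script and config names; replace with the XFUND data
# script used in your setup (e.g. the layoutlmft data scripts).
langs = ["de", "fr", "it"]
train_sets = [
    load_dataset("path/to/xfun.py", name=f"xfun.{lang}", split="train")
    for lang in langs
]

# Combine and shuffle so batches mix languages.
multilingual_train = concatenate_datasets(train_sets).shuffle(seed=42)
print(f"Combined training examples: {len(multilingual_train)}")
```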
Currently I am using YOLOv5 detection and following https://jacobgil.github.io/pytorch-gradcam-book/EigenCAM%20for%20YOLO5.html. Now I would like to see why different objects end up with the same regions of activation colors and how Grad-CAM is...
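For context, a minimal sketch along the lines of the linked tutorial, assuming the `pytorch-grad-cam` package, a hub-loaded YOLOv5s model, and a local `sample.jpg`; the target layer choice and the `EigenCAM` constructor arguments vary across library versions, so treat them as assumptions.

```python
# A minimal EigenCAM-over-YOLOv5 sketch (roughly following the linked tutorial).
import cv2
import numpy as np
import torch
from pytorch_grad_cam import EigenCAM
from pytorch_grad_cam.utils.image import show_cam_on_image

# Load a pretrained YOLOv5s model from torch.hub.
model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
model.eval()

# Prepare the input image (RGB, float in [0, 1]).
img = cv2.resize(cv2.imread('sample.jpg'), (640, 640))
rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
rgb_float = np.float32(rgb) / 255.0
tensor = torch.from_numpy(rgb_float).permute(2, 0, 1).unsqueeze(0)

# Second-to-last block of the hub model's backbone, as in the tutorial;
# the exact indexing depends on your YOLOv5 version.
target_layers = [model.model.model.model[-2]]

cam = EigenCAM(model, target_layers)
grayscale_cam = cam(tensor)[0, :, :]        # one heatmap for the first image
overlay = show_cam_on_image(rgb_float, grayscale_cam, use_rgb=True)
cv2.imwrite('eigencam_overlay.jpg', cv2.cvtColor(overlay, cv2.COLOR_RGB2BGR))
```

Since EigenCAM is class-agnostic (it visualizes the principal component of the activations rather than a class-specific gradient), different objects sharing the same highlighted regions is expected behavior with this method.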
**Description:** Hi. Tried to build on a fresh Ubuntu 18.04 machine with a fresh Docker install.

**Command:**
```
sudo docker build --rm -t kimera_vio -f ./scripts/docker/Dockerfile .
```

**Console output:** ...
@SubMishMar @amirx96 How can I verify that the estimated calibration parameters are correct? Reprojection error metrics or visualization code would be really helpful for the community.
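Not from the calibration repo itself, but a minimal sanity-check sketch using OpenCV's `cv2.projectPoints`: project known 3D points through the estimated intrinsics/extrinsics and compare against their measured 2D detections; all arrays below are placeholders for your own calibration output and correspondences.

```python
# A minimal reprojection-error check for estimated calibration parameters.
import cv2
import numpy as np

# Estimated calibration (placeholders; use your calibration results).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)                   # distortion coefficients
rvec = np.zeros(3)                   # rotation as a Rodrigues vector
tvec = np.array([0.0, 0.0, 1.0])     # translation

# Placeholder correspondences; replace with real 3D points (e.g. checkerboard
# corners or lidar points) and their detected 2D image locations.
pts_3d = np.random.rand(20, 3).astype(np.float32)
pts_2d_measured = np.random.rand(20, 2).astype(np.float32) * 480

# Project the 3D points with the estimated parameters.
pts_2d_proj, _ = cv2.projectPoints(pts_3d, rvec, tvec, K, dist)
pts_2d_proj = pts_2d_proj.reshape(-1, 2)

# RMS reprojection error in pixels; a few pixels or less is usually a good sign.
rms = np.sqrt(np.mean(np.sum((pts_2d_proj - pts_2d_measured) ** 2, axis=1)))
print(f"RMS reprojection error: {rms:.2f} px")
```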
Can you please help us find the **camera_models** package?
Do you have any sample code to display 2D bounding boxes on lidar bird's-eye-view data? Kindly help.
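Not from this repo, but a minimal sketch of one common approach: rasterize the point cloud into a BEV occupancy image with NumPy and draw axis-aligned boxes with OpenCV. The ranges, resolution, and the example box/cloud are assumptions to replace with your own data.

```python
# A minimal lidar BEV + 2D box visualization sketch.
import cv2
import numpy as np

def lidar_to_bev(points, x_range=(0, 70), y_range=(-40, 40), res=0.1):
    """points: (N, 3+) lidar x, y, z in metres. Returns a BGR occupancy image."""
    mask = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
            (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[mask]
    h = int((x_range[1] - x_range[0]) / res)
    w = int((y_range[1] - y_range[0]) / res)
    # Map metric coordinates to pixels (x forward -> rows, y left -> columns).
    rows = ((x_range[1] - pts[:, 0]) / res).astype(np.int32).clip(0, h - 1)
    cols = ((pts[:, 1] - y_range[0]) / res).astype(np.int32).clip(0, w - 1)
    bev = np.zeros((h, w), dtype=np.uint8)
    bev[rows, cols] = 255
    return cv2.cvtColor(bev, cv2.COLOR_GRAY2BGR)

def draw_bev_box(bev, box, x_range=(0, 70), y_range=(-40, 40), res=0.1):
    """box: (x_min, y_min, x_max, y_max) in lidar metres; drawn in green."""
    x_min, y_min, x_max, y_max = box
    r1 = int((x_range[1] - x_max) / res)
    r2 = int((x_range[1] - x_min) / res)
    c1 = int((y_min - y_range[0]) / res)
    c2 = int((y_max - y_range[0]) / res)
    cv2.rectangle(bev, (c1, r1), (c2, r2), (0, 255, 0), 2)
    return bev

if __name__ == "__main__":
    # Placeholder cloud and box; replace with your lidar frame and detections.
    cloud = np.random.uniform([0, -40, -2], [70, 40, 1], size=(5000, 3))
    bev = lidar_to_bev(cloud)
    bev = draw_bev_box(bev, (10.0, -2.0, 14.0, 2.0))
    cv2.imwrite("bev_with_box.png", bev)
```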
@erikbohnsack Is the evaluation dataset different from the training set? For training you use the KITTI tracking data; what dataset did you use for evaluation? Kindly reply.
@erikbohnsack How should the dataset be organized? Kindly help.