Dong-Jin Kim
Hello. Thank you for your interest in our work. After running `preprocess.py`, you will get VG-regions-dicts_R2longv3.json and VG-regions_R2longv3.h5 that can be used for training. Best regards, Dong-Jin.
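For anyone who wants to sanity-check the preprocessing output before starting training, here is a minimal Python sketch. It only inspects the two generated files; the paths below are assumptions (adjust them to wherever `preprocess.py` wrote its output), and it requires the h5py package.

```python
import json
import h5py

# Hypothetical paths; adjust to your own data directory.
dicts_path = 'data/VG-regions-dicts_R2longv3.json'
h5_path = 'data/VG-regions_R2longv3.h5'

# The JSON file is expected to hold the vocabulary / index dictionaries.
with open(dicts_path) as f:
    dicts = json.load(f)
print('JSON keys:', list(dicts.keys()))

# The HDF5 file is expected to hold the packed region / caption arrays.
with h5py.File(h5_path, 'r') as h5:
    for name, dset in h5.items():
        print(name, dset.shape, dset.dtype)
```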
Hello. Thank you for your interest in our work. I have just added the code "run_model.lua" to our repository. The instructions can be found in the updated README.md file. Best regards, Dong-Jin.
First, I would like to note that the relational captioning task itself is challenging, which is reflected in the mAP score. The good news is that we updated our code with our newest...
Hello. Thank you for your interest in our code. During training, GPU memory consumption was about 12GB, and training took about 3~4 days. I think you would need...
Hello. The VG website doesn't contain our "relational_caption.json" file. This file can only be downloaded from our link: https://drive.google.com/file/d/1cCN36poslxe7cCMkLnhYK0a-Y3vO4Rfn/view
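If you prefer to fetch the file from a script, here is a small sketch that assumes the third-party gdown package (pip install gdown); the file id is taken from the Drive link above, and the output filename is just a convention.

```python
import gdown

# File id copied from the Google Drive sharing link in this thread.
file_id = '1cCN36poslxe7cCMkLnhYK0a-Y3vO4Rfn'
url = f'https://drive.google.com/uc?id={file_id}'

# Downloads the JSON file into the current directory.
gdown.download(url, 'relational_caption.json', quiet=False)
```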
I am not sure, but I found that our code runs properly on a Tesla K40 as well. I guess this might be a problem with the CUDA version or...
Hello. Thank you for your interest in our research. Unfortunately, we don't have plans to make our code runnable in a Win10 environment. However, we are currently working on modifying our...
Hello. We don't have a Python version of the code. We have conducted all the experiments only with this Lua-based code. If a Python version is ready, I will let you know.
For now, in order to run the modified baseline, you have to change the code in exp/hoi_classifier/models/hoi_classifier_model.py. These are the 2 lines of code you have to change. [L27] self.classifier...
I somehow had a problem with TensorBoard, so I removed the TensorBoard code for now. You can use TensorBoard by replacing lines L147-157 of exp/hoi_classifier/train.py with the...
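As a stopgap, here is a generic sketch of the kind of scalar logging that could go back in around that spot. It is not the original removed code: it uses torch.utils.tensorboard, and the log directory, losses dict, and step counter are placeholders for whatever the training loop in exp/hoi_classifier/train.py actually provides.

```python
from torch.utils.tensorboard import SummaryWriter

# Placeholder log directory; point it wherever you keep experiment logs.
writer = SummaryWriter(log_dir='tb_logs/hoi_classifier')

def log_losses(losses, step):
    """Write each scalar loss in `losses` (name -> float) at `step`."""
    for name, value in losses.items():
        writer.add_scalar(f'train/{name}', value, step)
    writer.flush()
```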