fhaghighi

18 comments by fhaghighi

Hi, thanks for your interest in our work. Any type of data can be passed as the input to our network. Our self-discovery process can extract anatomical visual words from...

You do not need prev_coordinates the first time you run the code (its default value is None). This argument lets you generate coordinates at different...

You just need to run pattern_generator.py with the number of coordinates that you want. If you do not have prev_coordinates, that is fine.
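
To illustrate the coordinate reuse described above, here is a minimal sketch of the general idea, assuming a hypothetical `generate_coordinates` helper; the actual arguments and internals of pattern_generator.py may differ.

```python
import numpy as np

def generate_coordinates(num_coords, vol_shape, crop_shape, prev_coordinates=None, seed=0):
    """Illustrative sketch: sample corner positions for `num_coords` crops inside a
    volume of shape `vol_shape`, optionally appending to previously generated
    coordinates. Hypothetical helper, not the repo's actual pattern_generator.py."""
    rng = np.random.default_rng(seed)
    coords = [] if prev_coordinates is None else list(prev_coordinates)
    for _ in range(num_coords):
        # sample a corner so the crop stays fully inside the volume
        corner = tuple(int(rng.integers(0, v - c)) for v, c in zip(vol_shape, crop_shape))
        coords.append(corner)
    return coords

# First run: prev_coordinates is not needed (defaults to None).
first = generate_coordinates(num_coords=50, vol_shape=(512, 512, 256), crop_shape=(64, 64, 32))
# Later run: pass the earlier coordinates to extend them with new ones.
more = generate_coordinates(num_coords=20, vol_shape=(512, 512, 256), crop_shape=(64, 64, 32),
                            prev_coordinates=first)
```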

Hi, please use the updated link to download our PyTorch model. Also, here is the download [link](https://zenodo.org/record/4625321/files/TransVW_chest_ct.pt?download=1) for your convenience.
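
For reference, a minimal sketch of loading the downloaded checkpoint in PyTorch; the checkpoint layout and the model class are assumptions, so adapt them to the repo's actual code.

```python
import torch

# Load the downloaded TransVW checkpoint onto the CPU.
checkpoint = torch.load("TransVW_chest_ct.pt", map_location="cpu")

# Depending on how the checkpoint was saved, it may be a plain state_dict or a
# dict wrapping one under a "state_dict" key (the key name is an assumption).
state_dict = checkpoint.get("state_dict", checkpoint)

# `model` stands for the 3D network class defined in the repo (not shown here);
# strict=False tolerates layers (e.g. a task-specific head) that are not transferred.
# model.load_state_dict(state_dict, strict=False)
```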

Hi, the previous work is related to our MICCAI 2020 paper. This repo is related to our IEEE TMI paper, which is a journal extension of our conference paper....

Hi, thanks for your interest in our work. The link to the self-discovered data has been updated, so you should be able to download it now from the Keras and...

In general, any pre-trained network can be used as the feature extractor. train_autoencoder.py is parametric; for the autoencoder, the --arch Vnet option should be used. The network can be trained...
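
To illustrate the point that any pre-trained network can serve as the feature extractor, below is a minimal sketch assuming a toy stand-in encoder; it is not the repo's train_autoencoder.py, and the weights file name is hypothetical.

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for any pre-trained 3D encoder (e.g. the autoencoder trained with
    --arch Vnet); the real architecture comes from the repo, not this sketch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse spatial dims to one feature vector per volume
        )

    def forward(self, x):
        return self.features(x).flatten(1)

encoder = TinyEncoder()
# encoder.load_state_dict(torch.load("pretrained_encoder.pt"), strict=False)  # hypothetical weights file
encoder.eval()

with torch.no_grad():
    volume = torch.randn(1, 1, 64, 64, 32)  # (batch, channel, D, H, W) dummy CT sub-volume
    feats = encoder(volume)                 # feature vector used to compare volumes/patients
print(feats.shape)  # torch.Size([1, 16])
```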