christianpayer

28 comments by christianpayer

The values themselves should be close. Both our values and theirs are averages, but with different weighting factors. However, individual landmark outliers should have a larger influence in our calculation...
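A minimal sketch of how the weighting changes the influence of outliers, assuming per-landmark point errors in millimeters; the error values and weights below are purely illustrative and not taken from either evaluation:

```python
import numpy as np

# Hypothetical per-landmark point errors (mm); the last value is an outlier.
errors = np.array([1.2, 0.9, 1.5, 1.1, 14.0])

# Plain average: every landmark contributes equally, so the outlier
# shifts the mean noticeably.
unweighted_mean = errors.mean()

# Weighted average with illustrative weights (e.g. proportional to how
# often each landmark occurs in the test set); a different weighting
# changes how strongly the outlier is felt.
weights = np.array([0.3, 0.3, 0.2, 0.1, 0.1])
weighted_mean = np.average(errors, weights=weights)

print(f"unweighted mean error: {unweighted_mean:.2f} mm")
print(f"weighted mean error:   {weighted_mean:.2f} mm")
```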

@GoodMan0 sorry for the delayed response. Regarding your experiments for the spine localization: the numbers you see with the training script (main_spine_localization.py) are not the reported ones. As we are...

@zengchan The id_rate tells you how many of the ground-truth vertebrae are correctly identified. If you do not predict a landmark that is annotated in the ground truth, it won't be...
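A minimal sketch of how such an identification rate can be computed, assuming predicted and ground-truth vertebra centroids keyed by label; the 20 mm tolerance and the matching rule are assumptions for illustration, not the repository's actual evaluation code:

```python
import numpy as np

def id_rate(groundtruth, predictions, tol_mm=20.0):
    """Fraction of ground-truth vertebrae that are correctly identified.

    groundtruth, predictions: dicts mapping vertebra label -> 3D centroid (mm).
    A ground-truth vertebra counts as identified if a prediction with the same
    label exists within tol_mm. Vertebrae without a prediction count as misses.
    """
    identified = 0
    for label, gt_point in groundtruth.items():
        pred_point = predictions.get(label)
        if pred_point is None:
            continue  # landmark not predicted -> not identified
        if np.linalg.norm(np.asarray(gt_point) - np.asarray(pred_point)) <= tol_mm:
            identified += 1
    return identified / len(groundtruth)

# Toy example: L3 is annotated but not predicted, L2 is predicted too far away.
gt = {'L1': (10.0, 20.0, 30.0), 'L2': (10.0, 20.0, 60.0), 'L3': (10.0, 20.0, 90.0)}
pred = {'L1': (11.0, 21.0, 29.0), 'L2': (10.0, 20.0, 140.0)}
print(id_rate(gt, pred))  # 1 of 3 identified
```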

@zhuo-CHENG Sorry for the delayed response. It seems that there is a problem with the `SpinePostprocessing` class. Its code is not the cleanest and can cause some problems...

Hi, for our papers, we created our own landmark annotations and cross validation setup. You can find it under [bin/experiments/localization/hand_xray/hand_xray_dataset/setup](https://github.com/christianpayer/MedicalDataAugmentationTool/tree/master/bin/experiments/localization/hand_xray/hand_xray_dataset/setup). The file all.csv contains the annotations of the 37 landmarks....
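A minimal sketch of reading such a landmark CSV, assuming one row per image with an image id followed by x/y coordinates for each of the 37 landmarks; the actual column layout of all.csv may differ, so treat this as illustrative only:

```python
import csv

def load_landmarks(csv_path, num_landmarks=37):
    """Read landmark annotations from a CSV file.

    Assumes each row has the form: image_id, x0, y0, x1, y1, ... for
    num_landmarks landmarks (the real all.csv layout may differ).
    Returns a dict mapping image_id -> list of (x, y) tuples.
    """
    landmarks = {}
    with open(csv_path, newline='') as f:
        for row in csv.reader(f):
            image_id = row[0]
            coords = list(map(float, row[1:1 + 2 * num_landmarks]))
            landmarks[image_id] = list(zip(coords[0::2], coords[1::2]))
    return landmarks

# Example usage (path is illustrative):
# annotations = load_landmarks('hand_xray_dataset/setup/all.csv')
```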

For training and evaluating on other datasets, you would need to update the dataset.py files for the specific training and inference scripts. However, this could be quite difficult depending on...

Thanks for your interest in our papers and the framework! Regarding the error message you observed when running the code on the CPU: many parts of the framework are not tested...

Hi, we used the function as it is in the code for our final submission to the VerSe challenge. If I remember correctly, we used this factor of 3 to give...