Harsha
This line, https://github.com/neuronets/nobrainer/blob/ed0d609333f9aea724c5fd45962c36dfd8a13d88/nobrainer/dataset.py#L122, is a huge bottleneck in the code: after filtering, TensorFlow reports the dataset cardinality as unknown, so counting elements requires a full pass over the dataset. For context, please refer to https://stackoverflow.com/questions/70992022/how-to-get-the-correct-cardinality-of-a-tensorflow-dataset-after-filtering
This is resolved by calculating the number of files within each shard. This doesn't account for the situation where the last shard may have fewer files, but that shouldn't...
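A minimal sketch of that idea, assuming equally sized TFRecord shards; the file pattern, `examples_per_shard` value, and variable names are illustrative, not the actual nobrainer code:

```python
import glob
import tensorflow as tf

# Assumed inputs (illustrative values, not the real nobrainer configuration).
file_pattern = "data/data-train_shard-*.tfrec"
examples_per_shard = 100

shards = sorted(glob.glob(file_pattern))
# Total size computed from the shard count instead of iterating the dataset.
# If the last shard is only partially filled, this slightly overestimates.
n_examples = len(shards) * examples_per_shard

dataset = tf.data.TFRecordDataset(shards)
# Declare the cardinality up front so downstream code does not have to consume
# the whole dataset just to count elements. Note that assert_cardinality raises
# an error if a full iteration yields a different number of elements.
dataset = dataset.apply(tf.data.experimental.assert_cardinality(n_examples))

print(dataset.cardinality().numpy())  # n_examples instead of UNKNOWN_CARDINALITY
```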
- Code for the paper: https://github.com/ozan-oktay/Attention-Gated-Networks (in PyTorch)
- The model as proposed in the paper (with gating attention) is at https://github.com/ozan-oktay/Attention-Gated-Networks/blob/master/models/networks/unet_grid_attention_3D.py
- Print a summary as in the sketch below this list, e.g. `from torchinfo import summary...`
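A minimal sketch of what that summary call might look like. The import path, constructor arguments, and input size are assumptions and should be checked against `models/networks/unet_grid_attention_3D.py` in the repo:

```python
from torchinfo import summary

# Hypothetical import -- run from the repo root; verify the class name and
# constructor signature in models/networks/unet_grid_attention_3D.py.
from models.networks.unet_grid_attention_3D import unet_grid_attention_3D

# Constructor arguments are assumptions (single-channel input, 2 classes).
model = unet_grid_attention_3D(n_classes=2, in_channels=1, is_batchnorm=True)

# Input size is an assumed example: batch of 1, one channel, a 64^3 volume.
summary(model, input_size=(1, 1, 64, 64, 64), device="cpu")
```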
- [ ] add tests and update `__init__.py`
Everything coinstac-ssr-* can go. If @spanta28 and @rssk are okay, I will go ahead and archive (not delete) them myself.
@satra TopoFit uses FreeSurfer for preprocessing the inputs before inference. Should we include FreeSurfer as a requirement to enable preprocessing, or should we warn the user that the data has...
Built the Docker image successfully.
Worked fine with a Tesla V100 (CUDA 11.2) but not an A100 (CUDA 11.7).
Makes sense. Thank you.