Vladislav Kim
The easiest solution would be to save before closing and load upon start, as you suggest. This definitely has to be configurable, since, as you mention, .RData can be quite...
Perhaps somewhat related to this: skimage (v0.14.0) throws a loss-of-precision warning when `equalize_adapthist` is applied to a float image, since the first step in `equalize_adapthist` is conversion to `uint16` data...
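For anyone who just wants the warning gone, here is a minimal sketch (the array and `clip_limit` value are placeholders) of silencing it explicitly around the call, given that the precision loss is expected behaviour here:

```python
import warnings

import numpy as np
from skimage import exposure

# Placeholder float image in [0, 1]; replace with the real data.
img = np.random.rand(256, 256)

# equalize_adapthist internally converts float input to uint16 (skimage 0.14),
# which is what triggers the precision-loss UserWarning. If the small loss of
# precision is acceptable, the warning can be silenced explicitly:
with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    eq = exposure.equalize_adapthist(img, clip_limit=0.03)  # float64 output in [0, 1]
```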
Same problem here. How bad is this bug, i.e. can I simply ignore the "Triton Error"?
Hi @rbareja25, I am experiencing similar issues, i.e. I am struggling to reproduce DINO results using DINOv2. In our case we used exactly the same hyperparameters as in DINO....
Hi @alexaatm, thank you for pointing this out. @qasfb @patricklabatut I believe this is an implementation issue. We set KoLeo and iBOT loss weights to zero and we could not...
@qasfb Sure! We are using NVIDIA A100 GPUs (80 GB GPU memory). For the ViT-S backbone, we only need 1 GPU, but we also tried FSDP with 2 GPUs. We...
@qasfb The teacher momentum is from the original DINO and we used the value that produced the best results. Indeed, layer scale was not part of the original DINO and...
@usryokousha Thanks! I managed to get it running without the `--use-env` flag:

```
export CUDA_VISIBLE_DEVICES=0,1 && python -m torch.distributed.launch --nproc_per_node=2 dinov2/train/train.py --config-file=myconfig.yaml --output-dir=my_outputdir
```

In fact, feeding `--use-env` resulted in...
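Side note, untested against the DINOv2 entry point: recent PyTorch releases deprecate `torch.distributed.launch` in favour of `torchrun`, which always passes the rank through environment variables (the old `--use-env` behaviour), so the equivalent launch should presumably look roughly like this:

```
export CUDA_VISIBLE_DEVICES=0,1 && torchrun --nproc_per_node=2 dinov2/train/train.py --config-file=myconfig.yaml --output-dir=my_outputdir
```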