Aradhye Agarwal
@Zhenyu001225 any idea why this happens? An extreme case is transformers 4.40.0, which gave me gibberish output, as mentioned in [this](https://github.com/AGI-Edgerunners/LLM-Adapters/issues/63) issue. Thanks
Hi @yongmayer, thanks for appreciating our work. We used a per-device batch size of 4, resulting in a total batch size of 32 across 8 GPUs. The speed...
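A quick sketch of the batch-size arithmetic above (the gradient-accumulation factor is an assumption for illustration; none was mentioned in the reply):

```python
# Effective (total) batch size under data parallelism: each GPU
# processes its own per-device batch every optimizer step.
per_device_batch_size = 4   # value used in the EcoDepth experiments
num_gpus = 8
grad_accum_steps = 1        # assumption: no gradient accumulation

total_batch_size = per_device_batch_size * num_gpus * grad_accum_steps
print(total_batch_size)  # 32
```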
Hi @yongmayer. While we do use the stable diffusion backbone, we do not use it as a diffusion model per se. Instead we use the UNet (or backbone) as a...
Stay tuned!
Please take a look at https://github.com/Aradhye2002/EcoDepth/issues/25.
Hi @PreyumKr, might I ask what per-device batch size you used for training? Also, how much memory do your GPU(s) have? As per our experiments, for the smallest...
Hi, can you try visualizing the images? My suspicion is that the scale factor is incorrect, since the a{1,2,3} values are coming out as 0. If this is indeed the case, then...
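As a minimal sketch of the kind of sanity check meant above, assuming the standard delta-threshold definition of the a1/a2/a3 accuracies used in depth evaluation (the helper names here are hypothetical, not part of the EcoDepth code):

```python
import numpy as np
import matplotlib.pyplot as plt

def inspect_depth(depth, name="pred"):
    """Print basic statistics and show a depth map for a quick eyeball check."""
    print(f"{name}: min={depth.min():.4f} max={depth.max():.4f} mean={depth.mean():.4f}")
    plt.imshow(depth, cmap="plasma")
    plt.colorbar()
    plt.title(name)
    plt.show()

def delta_accuracy(pred, gt, thresh=1.25):
    """a1/a2/a3 ("delta") accuracy: fraction of pixels where
    max(pred/gt, gt/pred) is below thresh (1.25, 1.25**2, 1.25**3).
    A wrong scale factor makes this ratio explode, driving a{1,2,3} to 0."""
    ratio = np.maximum(pred / gt, gt / pred)
    return (ratio < thresh).mean()
```

If `delta_accuracy(pred, gt)` is near 0 while the visualized depth map still looks structurally plausible, a global scale mismatch between prediction and ground truth is the likely culprit.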
Hi @wanshishuns. We're delighted that you found our work helpful! The VIT_MODEL we define in EcoDepth/model.py, `google/vit-base-patch16-224`, is built for the ImageNet dataset. The ImageNet dataset has 1000...
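To illustrate the point about the classifier head, here is a sketch that instantiates the ViT-Base architecture with an ImageNet-1k (1000-class) head from a config, without downloading the checkpoint; loading the actual pretrained weights would instead use `from_pretrained("google/vit-base-patch16-224")`:

```python
from transformers import ViTConfig, ViTForImageClassification

# ViTConfig defaults match the ViT-Base architecture used by
# google/vit-base-patch16-224 (16x16 patches, 224x224 input);
# the released checkpoint's head outputs one logit per ImageNet-1k class.
config = ViTConfig(num_labels=1000)
model = ViTForImageClassification(config)
print(model.classifier.out_features)  # 1000
```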
I think this should be possible now that we have integrated EcoDepth with pytorch-lightning. We'll get back to you on this.