When finetuning, len(train_loader) == 0 causes "ZeroDivisionError: integer division or modulo by zero" in tasks/pretrain.py
I want to finetune InternVideo2-Stage2_1B-224p-f4 on ActivityNet. I adjusted the dataset entries in data.py as follows:
```python
available_corpus["anet_ret_val"] = dict(
    anno_path=".../ActivityNet/anno_downstream/anet_ret_val.json",
    data_root=".../ActivityNet",
    media_type="video",
    is_paragraph_retrieval=True,
    max_txt_l=150,
)
available_corpus["anet_ret_train"] = dict(
    anno_path=".../ActivityNet/anno_downstream/anet_ret_train.json",
    data_root=".../ActivityNet",
    media_type="video",
    is_paragraph_retrieval=True,
    max_txt_l=150,
)
```
An example entry from anet_ret_train.json:

```json
[
    {
        "video": ".../ActivityNet/videos_images/v_QOlSCBRmfWY.mp4",
        "caption": "A young woman is seen standing in a room and leads into her dancing."
    },
    ...
]
```
In pretrain.py I get len(train_loaders) == 1, but len(train_loader) == 0, so I hit ZeroDivisionError: integer division or modulo by zero. I carefully checked the frame extraction and found that the frames and videos are processed correctly, yet MetaLoader reports 1 dataloader with 0 batches in total. Is this because I set some parameter incorrectly?
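One possible cause (an assumption, not confirmed from the log alone): the per-GPU batch size is 1024, and if the train split resolves to fewer samples than that while the loader drops incomplete batches, floor division yields zero batches. A minimal sketch of that arithmetic, with a hypothetical sample count:

```python
# Hedged sketch: how a DataLoader's length is computed when drop_last=True.
# num_samples is a hypothetical count; check your actual filtered dataset size.
num_samples = 500    # e.g. videos remaining after path/annotation filtering
batch_size = 1024    # per-GPU batch size shown in the MetaLoader log

# With drop_last=True every partial batch is discarded, so:
num_batches = num_samples // batch_size
print(num_batches)   # 0 -> "length(#batches)=0" and len(train_loader) == 0
```

Printing `len(your_dataset)` before the loader is built would confirm whether the dataset itself is empty (e.g. annotation paths not matching real files) or merely smaller than one batch.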
The error in the command line is:

```
Ceph is not working!!!
2025-03-07T18:08:22 | main: train_loaders!: [<torch.utils.data.dataloader.DataLoader object at 0x7f2ad6ed1810>], len: 1
2025-03-07T18:08:22 | dataset.dataloader: MetaLoader has 1 dataloaders, 0 batches in total
dataloader index=0 name=video, batch-size=1024 length(#batches)=0
....
[rank0]: begin_step = global_step % len(train_loader)
[rank0]: ZeroDivisionError: integer division or modulo by zero
```
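Independently of the root cause, the crash site could be guarded so an empty loader fails with a readable message instead of a raw ZeroDivisionError. A hedged sketch (the names `global_step` and `train_loader` are taken from the traceback; the helper itself is hypothetical, not part of pretrain.py):

```python
# Hypothetical guard around the failing line in pretrain.py:
#   begin_step = global_step % len(train_loader)
def resume_begin_step(global_step: int, num_batches: int) -> int:
    """Compute the resume offset, failing loudly if the loader is empty."""
    if num_batches == 0:
        raise ValueError(
            "train_loader has 0 batches: check that the dataset is non-empty "
            "and that batch_size does not exceed the number of samples"
        )
    return global_step % num_batches

print(resume_begin_step(10, 3))  # 1
```

This does not fix the empty loader, but it turns the obscure modulo error into a pointer at the actual configuration problem.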