Rohit Gupta
Hi guys, as I have mentioned previously, I do not recommend using this code for any of your projects.
This feature would be useful
Hi @Shrinidhi1928 @sojhal, could you give some more details about this error? When exactly do you get it?
Hi! The final step in the process will have generated captions for the MSVD dataset inside the `language_model/coco-caption/results/` folder. If you want to generate captions on different videos, you...
@chikiuso I am working on implementing this, will get back to you in a couple of days
Hey @chikiuso, sorry for the delay, but I have finally checked in the feature you wanted. Check it out!

```
bash fetch-pretrained-model.sh
bash fetch-from-localpath.sh /home/ubuntu/vid1.mp4
bash process-youtube-video.sh
```
Hi @saharannaveen, I recommend that you use a more modern implementation of video captioning instead of this repository, which is only provided for research purposes and is no longer maintained.
So I think this is a problem with the path you are using; you might need to use an absolute path instead of a relative one, something like "/home/yu239/datasets/youtubeclips.zip". Though the exact...
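A minimal sketch of what I mean, just resolving the relative path to an absolute one before handing it to whatever function in the repo actually opens the archive (`load_dataset_archive` below is only a placeholder, not something from this codebase):

```python
import os

def load_dataset_archive(path):
    # Placeholder: stands in for whatever loader the script uses.
    with open(path, "rb") as f:
        return f.read(1)  # just proves the file is reachable

relative_path = "datasets/youtubeclips.zip"

# Resolve to an absolute path so the script works regardless of the
# directory it is launched from.
absolute_path = os.path.abspath(os.path.expanduser(relative_path))

if not os.path.exists(absolute_path):
    raise FileNotFoundError(f"Archive not found at {absolute_path}")

load_dataset_archive(absolute_path)
```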
Has anyone resolved this issue? It seems like a waste of memory to do this, and it pointlessly requires us to filter the text file afterwards.
@mobicham Minor documentation issue, but the transformers documentation page for quantization has a large feature matrix that still says serialization of HQQ models is not supported: https://huggingface.co/docs/transformers/main/quantization/overview
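For context, a small sketch of the save/reload round trip that the matrix should reflect (the model id is arbitrary, and this assumes the `hqq` package is installed alongside a recent transformers):

```python
# Sketch: quantize a model with HQQ via transformers, save it, and reload it.
# Assumes `pip install hqq` and a recent transformers; the model id is arbitrary.
from transformers import AutoModelForCausalLM, HqqConfig

quant_config = HqqConfig(nbits=4, group_size=64)

model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",               # any causal LM works here
    quantization_config=quant_config,
    device_map="auto",
)

# Serialize the HQQ-quantized weights to disk...
model.save_pretrained("opt-125m-hqq-4bit")

# ...and load them back without re-quantizing.
reloaded = AutoModelForCausalLM.from_pretrained("opt-125m-hqq-4bit")
```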