vokenization

PyTorch code for EMNLP 2020 Paper "Vokenization: Improving Language Understanding with Visual Supervision"

8 vokenization issues, sorted by recently updated

Hi, thanks for your interesting work. I ran into a problem when I tried to fine-tune the model. I loaded the released pretrained BERT_base model and fine-tuned it on GLUE...

![image](https://github.com/airsplay/vokenization/assets/61137732/eba8f244-4b41-4182-96d3-693bde72e45a)
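For anyone hitting a similar loading problem, below is a minimal sketch of pulling released pretrained weights into a GLUE-style classifier with Hugging Face `transformers`. The checkpoint filename is hypothetical, and it assumes the released weights are a plain PyTorch state dict with BERT-style parameter names; key remapping may still be needed for the actual release.

```python
import torch
from transformers import BertConfig, BertForSequenceClassification

# Hypothetical checkpoint path; assumes a plain PyTorch state dict with
# BERT-style parameter names.
config = BertConfig.from_pretrained("bert-base-uncased", num_labels=2)
model = BertForSequenceClassification(config)

state_dict = torch.load("vlm_bert_base.pth", map_location="cpu")
# strict=False tolerates the newly initialized GLUE classifier head and any
# pretraining-only heads (e.g. the voken head) left in the checkpoint.
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)        # expect only the new classifier weights
print("unexpected keys:", unexpected)  # expect only pretraining-head weights
```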

Thanks for your great work! I notice that you use a non-linear layer with GELU, a LayerNorm operation, and a linear layer called decoder as the voken classification...
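Based on that description, here is a minimal sketch of such a head (dense layer with GELU, LayerNorm, then a linear decoder over the voken vocabulary). The class name, hidden size, and voken vocabulary size are illustrative placeholders; the authoritative definition is in `vlm/model.py`.

```python
import torch.nn as nn

class VokenClassificationHead(nn.Module):
    """Sketch of the head described above: dense + GELU, LayerNorm,
    then a linear "decoder" over the voken vocabulary."""

    def __init__(self, hidden_size=768, voken_vocab_size=50000):
        super().__init__()
        self.dense = nn.Linear(hidden_size, hidden_size)
        self.act = nn.GELU()
        self.layer_norm = nn.LayerNorm(hidden_size)
        self.decoder = nn.Linear(hidden_size, voken_vocab_size)

    def forward(self, hidden_states):
        # hidden_states: (batch, seq_len, hidden_size) from the LM encoder
        x = self.layer_norm(self.act(self.dense(hidden_states)))
        return self.decoder(x)  # (batch, seq_len, voken_vocab_size) logits
```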

I have two questions. (1) I notice that in your code https://github.com/airsplay/vokenization/blob/5601b799184ed54414872565f233e22c76f5f6f0/vlm/model.py#L238 , you design three loss functions: voken classification, voken regression, and voken contrastive. But you only report "voken...
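For context on question (1), here is a rough sketch of what the three objectives could look like. All names (`voken_losses`, `token_feats`, `voken_ids`, `voken_feats`, `voken_bank`, `tau`) are hypothetical, and the regression and contrastive forms shown (Smooth L1 and in-batch InfoNCE) are assumptions for illustration rather than the repo's implementation; the paper's reported results correspond to the voken classification objective.

```python
import torch
import torch.nn.functional as F

def voken_losses(token_feats, voken_ids, voken_feats, voken_bank, tau=0.07):
    """token_feats: (B, L, D) projected token features; voken_ids: (B, L)
    gold voken indices; voken_feats: (B, L, D) gold voken features;
    voken_bank: (V, D) features of all candidate vokens."""
    B, L, D = token_feats.shape

    # (1) Voken classification: cross-entropy over the voken vocabulary.
    cls_logits = token_feats @ voken_bank.t()                      # (B, L, V)
    cls_loss = F.cross_entropy(cls_logits.view(-1, voken_bank.size(0)),
                               voken_ids.view(-1))

    # (2) Voken regression: regress the gold voken feature directly.
    reg_loss = F.smooth_l1_loss(token_feats, voken_feats)

    # (3) Voken contrastive: in-batch InfoNCE against the other gold vokens.
    t = F.normalize(token_feats.view(-1, D), dim=-1)
    v = F.normalize(voken_feats.view(-1, D), dim=-1)
    con_logits = (t @ v.t()) / tau                                 # (B*L, B*L)
    targets = torch.arange(con_logits.size(0), device=con_logits.device)
    con_loss = F.cross_entropy(con_logits, targets)

    return cls_loss, reg_loss, con_loss

# Toy shapes: batch 2, length 4, feature dim 16, voken vocabulary 30.
losses = voken_losses(torch.randn(2, 4, 16),
                      torch.randint(0, 30, (2, 4)),
                      torch.randn(2, 4, 16),
                      torch.randn(30, 16))
```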

Hi authors, thanks for sharing this nice work! I'm a big fan of it. I notice the paper reports results on the SQuAD dataset, but I did not find the relevant code in...

Hi, thank you for your great work. I'm trying to train a RoBERTa-based VLM on my own dataset. I plan to use your pre-trained vokenizer provided [here](https://github.com/airsplay/vokenization#models). But,...
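One thing worth checking before swapping in RoBERTa: vokens are assigned per token, so if the released vokenizer is built around BERT's WordPiece tokenizer (as the BERT-based setup in this repo suggests), RoBERTa's BPE tokenization of the same text will generally not line up token-for-token. A quick sanity check with Hugging Face `transformers`:

```python
from transformers import AutoTokenizer

bert_tok = AutoTokenizer.from_pretrained("bert-base-uncased")
roberta_tok = AutoTokenizer.from_pretrained("roberta-base")

sentence = "Vokenization grounds language tokens in images."
print(bert_tok.tokenize(sentence))     # WordPiece tokens
print(roberta_tok.tokenize(sentence))  # BPE tokens

# If the two token sequences differ, per-token voken labels produced for
# BERT wordpieces need to be re-extracted or re-aligned for RoBERTa.
```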

Minor typos and grammatical fixes

Training of Epoch 0: GPU 0 will process 591616 data in 2311 iterations. 0%| | 0/2311 [00:31