encoder-agnostic-adaptation
Encoder-Agnostic Adaptation for Conditional Language Generation
Issues with the current version:
- Broken code due to an inappropriate `requirements.txt`
- Version conflicts between different packages
- Some unclear steps in the README on the data used and how...
Hi, thanks for your great work. Would it be possible for you to release the hyper-parameters for the image paragraph captioning task? I notice that they have been missing for...
In the story generation task, the original code fetches the wrong index of `log_probs`, leading to incorrect perplexity evaluation. Originally:

```
tgt_in = tgt[:-1]
log_probs, attn = ...
...
gold = tgt_in
```

Edited:...
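The reported bug matters because, under teacher forcing, the log-probabilities at step t are predictions for token t+1, so the gold labels must be the shifted sequence `tgt[1:]`, not the model input `tgt[:-1]`. A minimal sketch of the corrected alignment (hypothetical function and shapes, not the repo's actual code; assumes a `[seq_len]` target tensor and `[seq_len - 1, vocab]` logits):

```python
import torch
import torch.nn.functional as F

def perplexity(logits: torch.Tensor, tgt: torch.Tensor) -> float:
    # logits: [seq_len - 1, vocab], produced by feeding tgt[:-1]
    # (teacher forcing), so row t scores the token at position t + 1.
    log_probs = F.log_softmax(logits, dim=-1)
    gold = tgt[1:]  # the *next* tokens -- using tgt[:-1] here is the bug
    nll = -log_probs.gather(1, gold.unsqueeze(1)).squeeze(1)
    return torch.exp(nll.mean()).item()

# Tiny example: vocab of 4, sequence of 3 tokens.
tgt = torch.tensor([0, 2, 3])
logits = torch.randn(2, 4)
ppl = perplexity(logits, tgt)
```

When the model is nearly certain of every gold token, the perplexity approaches 1, which is a quick sanity check on the indexing.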
Hi, thanks for the great work. I was wondering if you could share pointers for the copy mechanism? Also, I was not aware of encouraging copying when using BPE...
Are you planning on using the biggest GPT-2 for the image captioning model you are working on?
Switching the variable type from `torch.uint8` to `torch.bool` makes it PyTorch 1.2 compatible.
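For context, PyTorch 1.2 deprecated indexing and masking with `torch.uint8` tensors in favor of `torch.bool`. A minimal sketch of the change (hypothetical mask and scores, not the repo's actual code):

```python
import torch

scores = torch.tensor([[0.1, 0.9], [0.7, 0.3]])

# Old style, which warns on PyTorch >= 1.2:
# mask = torch.tensor([[1, 0], [0, 1]], dtype=torch.uint8)

# PyTorch 1.2+ compatible boolean mask:
mask = torch.tensor([[True, False], [False, True]])

# Positions where the mask is False are filled with -inf
# (e.g. before a softmax over attention scores).
masked = scores.masked_fill(~mask, float("-inf"))
```

Existing `uint8` masks can also be converted in place with `mask.bool()` rather than rebuilt.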
Hi, according to the `preprocess.py` file, you choose the special tokens as follows:

```
tgt_bos = ''
tgt_eos = '\u0120GDDR'
tgt_pad = '\u0120SHALL'
tgt_unk = '\u0120RELE'
src_pad = '\u0120SHALL'
src_unk...
```
Hi, according to the README file, the following truncation setup is recommended for the summarization (CNN/DM) task: `-src_seq_length_trunc 400`. However, on the training data, the average/median length of the source is...
Bumps [torch](https://github.com/pytorch/pytorch) from 1.0.1 to 2.2.0. Release notes (sourced from torch's releases): PyTorch 2.2: FlashAttention-v2, AOTInductor. PyTorch 2.2 Release Notes sections: Highlights, Backwards Incompatible Changes, Deprecations, New Features, Improvements, Bug fixes...