Mehwish FATIMA
> Hi @MehwishFatimah Thank you for your feedback. This can be done by modifying the `multitask_data_collator` and adding parallel batches for all tasks. Then in the forward function of the...
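The quoted comment is cut off in the preview, but as an illustration of the suggestion it describes, here is a minimal sketch (all names hypothetical, not the issue's actual code) of a collator that packs one parallel batch per task; the model's forward pass can then loop over the tasks and sum their losses:

```python
import torch

def multitask_data_collator(features):
    """Group examples by their `task` tag and build one batch per task (sketch)."""
    batches = {}
    for task in {f["task"] for f in features}:
        task_feats = [f for f in features if f["task"] == task]
        batches[task] = {
            "input_ids": torch.stack([torch.as_tensor(f["input_ids"]) for f in task_feats]),
            "labels": torch.stack([torch.as_tensor(f["labels"]) for f in task_feats]),
        }
    return batches

# Inside the model's forward, the parallel batches could be consumed like this
# (hypothetical `task_heads` mapping, sketch only):
#   total_loss = sum(self.task_heads[task](**batch).loss for task, batch in batches.items())
```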
Hi, I guess MBart large is not going to fit in 16 GB of GPU memory. There are a couple of options you might try:
- Try MBart base rather than the...
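The rest of the list is truncated in the preview. As a general illustration (not necessarily the options the comment went on to list), the usual memory-saving knobs with a recent Hugging Face `transformers` Trainer look like this; the exact values depend on your setup:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out",
    per_device_train_batch_size=1,   # small per-device batch to fit in 16 GB
    gradient_accumulation_steps=16,  # keep the effective batch size reasonable
    fp16=True,                       # half precision roughly halves activation memory
    gradient_checkpointing=True,     # trade extra compute for lower memory
)
```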
```python
np_mask = np_mask.cuda()       # move the no-peek mask to the GPU
trg_mask = trg_mask.cuda()     # move the target padding mask to the GPU
trg_mask = trg_mask & np_mask  # both operands are now on the same device
```
Hopefully, it will resolve the issue.
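For context, a minimal sketch (assuming a tutorial-style Transformer mask setup, not necessarily this repository's exact code) of building the target mask so that both operands of the `&` end up on the same device:

```python
import numpy as np
import torch

def nopeak_mask(size, device):
    # Upper-triangular mask that hides future positions from the decoder.
    np_mask = np.triu(np.ones((1, size, size)), k=1).astype("uint8")
    return (torch.from_numpy(np_mask) == 0).to(device)

def create_masks(src, trg, src_pad, trg_pad, device):
    src_mask = (src != src_pad).unsqueeze(-2)
    # Padding mask and no-peek mask must live on the same device before the `&`.
    trg_mask = (trg != trg_pad).unsqueeze(-2).to(device)
    np_mask = nopeak_mask(trg.size(1), device)
    trg_mask = trg_mask & np_mask
    return src_mask, trg_mask
```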
> Traceback (most recent call last): ] 0% loss = ...
> File "train.py", line 183, in <module>
> main()
> File "train.py", line 111, in main
> train_model(model, opt)
> ...
I guess this error is due to `max_seq_len` in the Positional Encoder, not due to `max_strlen`.
> Could you resolve this error? I am getting the same.

Yes; however, I couldn't produce the correct output. Change the `max_seq_len` according to your input size for the encoder and...
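For reference, a minimal sketch of a tutorial-style positional encoder (not necessarily this repository's exact code): `max_seq_len` fixes the size of the precomputed table, so it must be at least as large as your longest input sequence.

```python
import math
import torch
import torch.nn as nn

class PositionalEncoder(nn.Module):
    def __init__(self, d_model, max_seq_len=512):  # raise max_seq_len to cover your inputs
        super().__init__()
        pe = torch.zeros(max_seq_len, d_model)
        position = torch.arange(0, max_seq_len, dtype=torch.float).unsqueeze(1)
        div_term = torch.exp(torch.arange(0, d_model, 2).float() * (-math.log(10000.0) / d_model))
        pe[:, 0::2] = torch.sin(position * div_term)
        pe[:, 1::2] = torch.cos(position * div_term)
        self.register_buffer("pe", pe.unsqueeze(0))

    def forward(self, x):
        # Indexing fails if x.size(1) > max_seq_len, hence the advice above.
        return x + self.pe[:, :x.size(1)]
```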