Wee Tee, Soh

Results: 20 comments by Wee Tee, Soh

I have never encountered this issue. After `resize_token_embeddings()`, the trained model weights are loaded with `load_state`, which loads the trained embeddings, so there is no reason for them to...
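A minimal sketch (not the repo's exact code) of the order of operations described above: resize the embedding matrix first, then load the trained checkpoint so the trained embedding weights overwrite the freshly resized rows. The special-token names, checkpoint path, and checkpoint key below are illustrative assumptions.

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Add the entity-marker tokens used during training (assumed names).
tokenizer.add_tokens(["[E1]", "[/E1]", "[E2]", "[/E2]"])
model.resize_token_embeddings(len(tokenizer))  # embedding matrix now matches the new vocab size

# Loading the saved state afterwards restores the trained embeddings,
# so the newly added (randomly initialised) rows are overwritten.
checkpoint = torch.load("./data/model_checkpoint.pth.tar", map_location="cpu")  # assumed path
model.load_state_dict(checkpoint["state_dict"], strict=False)                   # assumed key
```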

Hi, yes, the training loss curve for SemEval training is in the results folder. Please note that the MTB model has been updated, and hence the old loss curve for...

Yup, will do so once I have the GPU compute available to satisfactorily pre-train it on suitable data.

Looks good, I also got something like this with the CNN dataset. But note that the loss consists of lm_loss + MTB_loss. From what I can see, lm_loss seems to decrease...
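A rough sketch of how the combined pre-training loss mentioned above is typically put together: a masked-LM term plus a matching-the-blanks (binary) term. The tensor names and the BCE formulation are illustrative assumptions, not the repo's exact code.

```python
import torch
import torch.nn as nn

lm_criterion = nn.CrossEntropyLoss(ignore_index=-1)  # masked-LM loss over the vocabulary
mtb_criterion = nn.BCEWithLogitsLoss()               # binary "same relation statement or not" loss

def combined_loss(lm_logits, masked_labels, mtb_logits, mtb_labels):
    # lm_logits: (batch, seq_len, vocab); masked_labels: (batch, seq_len), -1 on unmasked positions
    lm_loss = lm_criterion(lm_logits.view(-1, lm_logits.size(-1)), masked_labels.view(-1))
    # mtb_logits / mtb_labels: (batch,) scores for whether two relation statements match
    mtb_loss = mtb_criterion(mtb_logits.view(-1), mtb_labels.float().view(-1))
    # Total loss is the simple sum; monitoring the two terms separately shows
    # which component is actually decreasing during pre-training.
    return lm_loss + mtb_loss, lm_loss.detach(), mtb_loss.detach()
```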

Yeah, no good results pre-training MTB on the CNN dataset so far. It's best to directly fine-tune using pre-trained BERT.

I don't yet have a satisfactorily trained EM-MTB model; I will add that in once I have one.

It probably has to do with your test_loader outputs.

If you want to run the task on your own data, do note that in preprocessing_funcs.py (line 70), `rm = Relations_Mapper(df_train['relations'])` maps the relation classes using the train set...
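A minimal sketch of the kind of train-set relation-to-ID mapping referred to above (illustrative, not the actual Relations_Mapper implementation): because IDs are assigned only from the train set, a relation class that appears solely in your own test data has no ID to map to.

```python
class SimpleRelationsMapper:
    """Assigns integer IDs to relation classes seen in the training data."""

    def __init__(self, relations):
        self.rel2idx, self.idx2rel = {}, {}
        for rel in relations:  # e.g. df_train['relations']
            if rel not in self.rel2idx:
                idx = len(self.rel2idx)
                self.rel2idx[rel] = idx
                self.idx2rel[idx] = rel

    def encode(self, rel):
        # Raises KeyError for a relation never seen during training --
        # so make sure your own test set only contains train-set relation classes.
        return self.rel2idx[rel]
```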

Can you post the full error traceback, please? Meanwhile, try updating numpy and see if the problem persists.

Thanks for your support. I saw MuZero quite some time ago. The authors published pseudo-code which one can modify, but it takes quite a bit of work +...