MyungHa Kwon

Results 35 comments of MyungHa Kwon

I've fixed this problem. Please check if it works. Thank you.

Well, I haven't trained BERT many times with different vocab types. This is the only vocab I tried that has the same format as the official Google Research BERT. So...

Hi, sorry for the late reply. It's been 4 months... First, it seems you need to check whether each subword, like 'ein' and 'tausend', is in your vocab. And if they are, the...
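A quick way to do that check is to load the vocab file (one token per line, as in `vocab.txt` from the official BERT release) into a set and look the subwords up. This is a minimal sketch; the file name and the subword list are just placeholders for illustration.

```python
def load_vocab(path):
    """Read a BERT-style vocab file: one token per line."""
    with open(path, encoding="utf-8") as f:
        return {line.rstrip("\n") for line in f}

def missing_subwords(subwords, vocab):
    """Return the subwords that are NOT present in the vocab."""
    return [sw for sw in subwords if sw not in vocab]

# Toy vocab standing in for a real vocab.txt; note WordPiece marks
# word-internal pieces with a "##" prefix, so check both forms.
vocab = {"[UNK]", "[CLS]", "[SEP]", "ein", "tausend", "##tausend"}

print(missing_subwords(["ein", "tausend", "##zig"], vocab))
```

If any subword comes back as missing, the tokenizer will fall back to `[UNK]` for it, which is usually the first thing to rule out.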

Hi, anidiatm41, thank you. For 3, the Q&A model: visit the [official BERT GitHub](https://github.com/google-research/bert). There are instructions on how to do tasks like QA (SQuAD). Predicting missing words and next sentence...

Hello, @xiaoqtcd. I'm trying to reproduce the SAINT model with the KT1 dataset and got a worse AUC compared to other papers like LPKT, SAINT+, and SAINT. As my code worked fine with the Kaggle...

Hi, @xiaoqtcd. LPKT is the model proposed in "Learning Process-consistent Knowledge Tracing" (KDD '21). I used SAINT models implemented by participants in the Kaggle Riiid competition, and modified the code to deal...

@xiaoqtcd As the SAINT model doesn't take prior_question_had_explanation or prior_question_elapsed_time as input, I didn't try to reconstruct KT1 into the Kaggle format. I didn't use them while testing the Kaggle dataset with SAINT and...

It would be great for me to have the entire code for reproducing the results in the paper as well, because I failed to reach that performance with my implementation. Mine was...

Hi, @Nino-SEGALA. Thanks for the information. In my case, I think the problem lies in the data processing or the data itself, not in the modeling, because my model works fine with the EdNet...

@Nino-SEGALA Here's the link to the dataset I mentioned: https://github.com/riiid/ednet. It's KT-1, and you'll also need to download the content data.