16 comments of Xiang Gao

hi @tsutsen , the simplest way is to use DialogRPT to rank the hypotheses generated by DialoGPT; [this](https://github.com/golsun/DialogRPT/blob/master/src/generation.py#L142) is one example implementation, and you can try [this](https://github.com/golsun/DialogRPT#use-generator--ranker) to play...
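A minimal sketch of that ranking step, using `get_model` and `predict` from this repo's `score.py` (the context, the hypotheses, and the exact `predict` signature below are assumptions for illustration):

```python
from score import get_model, predict  # helper functions from this repo's score.py

cxt = 'Can we restart 2020?'  # made-up context
hyps = [                      # made-up DialoGPT hypotheses to be ranked
    "No, we can't.",
    'I wish we could.',
    'What is 2020?',
]

ranker = get_model('restore/updown.pth')  # any DialogRPT checkpoint works here
scores = predict(ranker, cxt, hyps)       # one score per hypothesis (signature assumed)
for score, hyp in sorted(zip(scores, hyps), reverse=True):
    print(f'{score:.3f}\t{hyp}')
```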

hi @vivek22122014 , DialogRPT is installed independently: it doesn't need the DialoGPT script, but it does need a DialoGPT checkpoint if you want to integrate them. if you already downloaded a DialogRPT...

hi @pablogranolabar , I can think of several potential reasons for OOM:
* `torch.no_grad`, which avoids gradients taking memory (see the sketch below): it was already applied in the [scorer](https://github.com/golsun/DialogRPT/blob/e25ef21a7961e2045405fad4249c15b99876e7a8/src/model.py#L50), but not in `generation.py` --...
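A minimal sketch of that `torch.no_grad` fix; the `Linear` module is just a stand-in, since in the repo this would be the DialoGPT forward pass inside `generation.py`:

```python
import torch

model = torch.nn.Linear(1024, 1024)  # stand-in for the generator/ranker network
x = torch.randn(8, 1024)

# Inside torch.no_grad(), PyTorch skips building the autograd graph, so
# intermediate activations are not retained; retaining them during pure
# inference is a common cause of OOM.
with torch.no_grad():
    y = model(x)

assert not y.requires_grad  # no gradient bookkeeping happened
```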

yes, `human_vs_rand` (together with `updown`) should help in that case. if memory is a concern, a low-memory alternative that doesn't use `human_vs_rand` is to decode responses with a small `top_k` or `top_p`; this...
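A sketch of that low-memory decoding, assuming the Hugging Face `microsoft/DialoGPT-medium` checkpoint (the sampling values are illustrative, not tuned):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('microsoft/DialoGPT-medium')
model = AutoModelForCausalLM.from_pretrained('microsoft/DialoGPT-medium')

cxt = 'Can we restart 2020?' + tokenizer.eos_token
ids = tokenizer.encode(cxt, return_tensors='pt')

with torch.no_grad():  # no gradients needed for decoding
    out = model.generate(
        ids,
        max_new_tokens=30,
        do_sample=True,
        top_k=10,    # a small top_k keeps the candidate pool narrow...
        top_p=0.7,   # ...as does a small top_p (nucleus) cutoff
        pad_token_id=tokenizer.eos_token_id,
    )
print(tokenizer.decode(out[0, ids.shape[-1]:], skip_special_tokens=True))
```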

hi, in this case, I guess a simple way without dealing with `ensemble.yml` is:

```python
# `get_model` and `predict` are functions from score.py
hvm = get_model('restore/human_vs_machine.pth')
updown = get_model('restore/updown.pth')
score_hvm = predict(hvm, cxt, hyps)        # score each hypothesis in `hyps` given context `cxt`
score_updown = predict(updown, cxt, hyps)  # (the predict calls here are assumed from score.py)
```
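The two scores can then be combined by hand; a hypothetical sketch (in practice `ensemble.yml` defines the weighting, so the 0.5/0.5 split here is only illustrative):

```python
# Illustrative 0.5/0.5 weighting; not the weights from ensemble.yml.
final = [0.5 * h + 0.5 * u for h, u in zip(score_hvm, score_updown)]
best_hyp = hyps[max(range(len(final)), key=final.__getitem__)]
```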

Hi @LeiShenVictoria , sorry for the late reply! The training is based on this repo: https://github.com/golsun/utterance_classifier I'll add more details in the readme later. Xiang