Results: 16 comments of Xiang Gao

hi @tsutsen thanks for your interest in our work! I'm going to post replies in [this issue](https://github.com/golsun/DialogRPT/issues/8)

well... I have to admit this process doesn't look very elegant -- basically what I wanted to do is to retain the special tokens `__url__` and `__mention__`, but the line `txt = re.sub(r"[^A-Za-z0-9()\[\]:,.!?'“”...
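The idea behind that preprocessing can be sketched as follows: swap each special token for a placeholder that survives the character filter, apply `re.sub`, then swap it back. This is only an illustrative sketch, not the actual DialogRPT code; the placeholder scheme and the simplified character set are assumptions.

```python
import re

# Special tokens we want to survive the cleanup (from the discussion above)
SPECIAL = ["__url__", "__mention__"]

def clean_keep_special(txt):
    # Protect each special token with an alphanumeric placeholder
    # that the character filter below will not touch.
    for i, tok in enumerate(SPECIAL):
        txt = txt.replace(tok, f"SPECIALTOKEN{i}")
    # Strip everything outside an allowed character set
    # (simplified from the original regex, which is truncated above).
    txt = re.sub(r"[^A-Za-z0-9()\[\]:,.!?' ]", "", txt)
    # Restore the special tokens.
    for i, tok in enumerate(SPECIAL):
        txt = txt.replace(f"SPECIALTOKEN{i}", tok)
    return txt
```

Because the placeholder is purely alphanumeric, it passes through the filter unchanged, so the special tokens reappear intact in the output while other disallowed characters are removed.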

@ArEnSc @chiranshu14 maybe you can try [this script](https://github.com/golsun/DialogRPT/blob/master/src/generation.py) with `python src/generation.py play -pg=restore/medium_ft.pkl --sampling`. It's from [DialogRPT](https://github.com/golsun/DialogRPT), a dialog response ranking model from our group, as a follow-up work of...

No, it’s not related to Tay.

> On Mon, Feb 22, 2021 at 7:43 AM Michael Chung wrote: @golsun is this the same technique that was used to train...

> @ArEnSc @golsun
>
> `python src/generation.py play -pg=restore/medium_ft.pkl -pr=restore/updown.pth --sampling`
>
> This command worked perfectly, got awesome results. I'm going to play around with this some more. But...

> > No, it’s not related to Tay.
> > On Mon, Feb 22, 2021 at 7:43 AM Michael Chung wrote: @golsun (https://github.com/golsun) is this the same technique that...

Sorry for the late reply. Could you please try a smaller batch size? e.g. `python src/main.py mtask train --data_name=toy --batch_size=32` (you may need to `git pull` first)

hi @pablogranolabar, in the CPU case, instead of `weights = torch.load(path)` as in `model.py`, you can use `weights = torch.load(path, map_location=torch.device('cpu'))`

hi @vivek22122014, actually you only need steps 1 and 4; the Hugging Face links are mainly for the online demo.

When you run step 4, these models will be downloaded automatically by [this function](https://github.com/golsun/DialogRPT/blob/e25ef21a7961e2045405fad4249c15b99876e7a8/src/shared.py#L8) if you haven't downloaded them before.