easy_seq2seq
[unmaintained] go to https://github.com/suriyadeepan/practical_seq2seq
After training long enough, I ran it with "test" and input something, but got an error. > hello Traceback (most recent call last): File "E:/A Files/chatbot-master/execute.py", line 324, in decode() File "E:/A Files/chatbot-master/execute.py",...
# Only allocate 2/3 of the gpu memory to allow for running gpu-based predictions while training: What is the logic behind this decision?
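The comment quoted above refers to capping TensorFlow's per-process GPU memory so a second process (e.g. a prediction run) can share the card. A minimal sketch of how such a cap is typically set in the TF 1.x API this project uses (the 2/3 fraction is the repo's choice, not a requirement):

```python
import tensorflow as tf

# Cap this training process at roughly 2/3 of GPU memory, leaving the
# remainder free for a concurrent GPU-based prediction/serving process.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.666)
config = tf.ConfigProto(gpu_options=gpu_options)
sess = tf.Session(config=config)
```

Without a cap, TF 1.x grabs nearly all GPU memory at session creation, so a second process on the same GPU would fail with an out-of-memory error.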
Is it possible to serve/test while training? Thanks.
Just curious, what model architectures have you experimented with? I tried a 3-layer LSTM with 256 nodes, and noticed that as the global perplexity fell below 5, the individual bucket...
After training the model for around 108300 training cycles, I interrupted the training and started the testing process. As per the instructions provided, I set mode = test in the seq2seq.ini file...
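For reference, the switch from training to interactive decoding described above is a one-line change to the config file; a sketch of the relevant fragment (section name per the repo's seq2seq.ini; other keys omitted):

```ini
; seq2seq.ini — change mode from train to test for interactive decoding
[strings]
mode = test
```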
Is there somewhere I could access your pretrained model?
Much of the time when I talk to the model, it simply replies with _UNK, especially for quite short queries. When I train with my own larger corpus it...