
Results 115 comments of Conchylicultor

You are right: the models are incompatible between datasets. It is a limitation of the program. A more serious problem is that the models are even incompatible within the...

As both chatbots would use different sessions, I think they should be on different graphs. If you simply changed the scope, since the scope is global, the second session would...

This is indeed a missing feature, and I'll try to add TF dynamic padding. It should definitely improve performance and speed up training. However, I don't...

It may be true if you add the padding to the right of your original sequence (input - padding - decoder), but I don't see why that would be the...
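To illustrate the idea behind dynamic padding, here is a minimal sketch that pads each batch only up to its own longest sequence rather than a global maximum (`pad_batch` and `PAD_ID` are hypothetical names, not the actual DeepQA code):

```python
PAD_ID = 0  # hypothetical id of the padding token

def pad_batch(sequences):
    """Pad every sequence in the batch to the length of the longest one,
    instead of padding everything to a fixed global maximum length."""
    max_len = max(len(seq) for seq in sequences)
    return [seq + [PAD_ID] * (max_len - len(seq)) for seq in sequences]

batch = pad_batch([[4, 8, 15], [16, 23], [42]])
# every row in `batch` now has the same length (3, the longest input)
```

Because short batches stay short, the RNN unrolls over fewer steps on average, which is where the speed-up comes from.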

That's definitely a required feature. I'll try to add it together with the input queue and bucketing mechanism.
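A rough sketch of what the length-bucketing step could look like (purely illustrative; `bucket_by_length` is a hypothetical helper, not the repo's implementation):

```python
def bucket_by_length(sequences, buckets):
    """Group sequences into the smallest bucket that can hold them.
    `buckets` is a sorted list of maximum lengths, e.g. [5, 10, 20].
    Sequences longer than every bucket are dropped here (they could
    also be truncated instead)."""
    grouped = {b: [] for b in buckets}
    for seq in sequences:
        for b in buckets:
            if len(seq) <= b:
                grouped[b].append(seq)
                break
    return grouped
```

Batches are then drawn from within a single bucket, so sequences of similar length are padded together and little computation is wasted on padding tokens.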

Unfortunately I'm not familiar enough with Bazel, but if someone sends me a pull request, I'll be happy to accept it.

The program is killed by the Linux kernel when you run out of RAM. Without a GPU, you also consume more RAM because of the network. I also [had planned](https://github.com/Conchylicultor/DeepQA/blob/b624a24272d891bada95b788fd88655c445984f7/chatbot/textdata.py#L105) to...

Glad you succeeded. I had a quick look at the code, and it seems you didn't use C++ as you originally planned. Was it because of a technical limitation of the...

Indeed, the interactive and testing modes only generate one sentence at a time. It should be possible to add minibatch support by modifying the `sentence2enco` and `deco2sentence` functions in `textdata.py`. For...
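As a sketch, a batched variant of `sentence2enco` could tokenize several sentences at once and pad them to a common length (`sentences2batch` and its arguments are invented names, not the actual DeepQA signatures):

```python
def sentences2batch(sentences, word2id, pad_id=0):
    """Hypothetical batched counterpart of sentence2enco: map each
    sentence to token ids (unknown words skipped for simplicity),
    then pad the whole list to the longest sentence."""
    token_ids = [
        [word2id[w] for w in s.split() if w in word2id]
        for s in sentences
    ]
    max_len = max(len(t) for t in token_ids)
    return [t + [pad_id] * (max_len - len(t)) for t in token_ids]
```

`deco2sentence` would need a symmetric change: iterate over the batch dimension of the decoder outputs and convert each row back to words.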

Another way to get diverse answers that I had thought of is to sample the answer from the softmax outputs, as done in https://github.com/karpathy/char-rnn . In theory, it should be...
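A sketch of that sampling idea, assuming per-token logits are available at each decoder step (the function name and signature are hypothetical):

```python
import math
import random

def sample_from_logits(logits, temperature=1.0, rng=random):
    """Sample a token id from the softmax of `logits` instead of
    taking the argmax. Lower temperatures sharpen the distribution
    (closer to argmax); higher temperatures give more diverse,
    but riskier, answers."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # draw from the categorical distribution via the CDF
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1
```

With a very low temperature this degenerates to the usual argmax decoding, so the temperature knob lets you trade coherence for diversity.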