
Missing Models

Open jaseweston opened this issue 7 years ago • 14 comments

This is a list of models not yet in ParlAI that would be great to have. Feel free to add more to the list also! We will remove individual items when they are done.

  • BiDaf model for QA: https://allenai.github.io/bi-att-flow/
  • models from decaNLP that are missing: https://github.com/salesforce/decaNLP
  • Hierarchical Encoder Decoder for Dialog Modelling https://github.com/julianser/hed-dlg
  • A general-purpose encoder-decoder framework for Tensorflow https://github.com/google/seq2seq
  • Seq2Seq from http://opennmt.net
  • Utilities from AI2 toolkit?
  • ELMo word embeddings: https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md

jaseweston avatar Jan 10 '18 02:01 jaseweston

@jaseweston for HRED there's already changes made in Julian's fork of ParlAI (https://github.com/julianser/ParlAI).

jsedoc avatar May 14 '18 05:05 jsedoc

If nobody is doing bidaf I can add it. It will take me some time though.

theSage21 avatar Oct 09 '18 10:10 theSage21

> If nobody is doing bidaf I can add it. It will take me some time though.

@theSage21 sure that would be great!

jaseweston avatar Oct 12 '18 14:10 jaseweston

Should I lay out the code like the drqa system does?

theSage21 avatar Oct 12 '18 14:10 theSage21

@alexholdenmiller can give advice, perhaps

jaseweston avatar Oct 12 '18 14:10 jaseweston

@theSage21, I guess right now the best thing is to use TorchAgent as a base class, check seq2seq agent for example

uralik avatar Oct 12 '18 15:10 uralik

Yes, we definitely prefer using the TorchAgent parent class, e.g. how seq2seq, memnn, or example_seq2seq are set up. It eliminates a lot of copy-pasta from the model.

alexholdenmiller avatar Oct 12 '18 18:10 alexholdenmiller

I should caveat my recommendation: if you're not using PyTorch then there will be a few inefficiencies (e.g. casting the torch tensors into another format), but it will still likely simplify the code. You're certainly welcome to roll it from scratch; the TorchAgent (parlai/core/torch_agent) just includes a lot of basic functionality like remembering the conversation history, vectorizing the text, and putting examples into batches to feed into the model.
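To make the above concrete, here is a minimal, framework-free sketch of the kind of bookkeeping that base class handles. The class and method names below are illustrative only, not ParlAI's actual `TorchAgent` API:

```python
class SketchAgent:
    """Toy illustration of the bookkeeping a TorchAgent-style base class
    provides: remembering conversation history, vectorizing text, and
    padding examples into batches."""

    def __init__(self):
        self.history = []   # past utterances in the current episode
        self.vocab = {}     # word -> integer id, built on the fly

    def observe(self, text):
        # Remember the conversation so the model sees the full context.
        self.history.append(text)
        return " ".join(self.history)

    def vectorize(self, text):
        # Map each token to a stable integer id.
        return [self.vocab.setdefault(tok, len(self.vocab))
                for tok in text.lower().split()]

    def batchify(self, texts):
        # Pad vectorized examples to equal length so they can be stacked
        # into a single tensor (0 doubles as the pad id in this toy).
        vecs = [self.vectorize(t) for t in texts]
        maxlen = max(len(v) for v in vecs)
        return [v + [0] * (maxlen - len(v)) for v in vecs]
```

A real agent would additionally implement the model forward/backward pass; the point is just that this plumbing is shared, so subclasses don't reimplement it.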

alexholdenmiller avatar Oct 12 '18 20:10 alexholdenmiller

Commenting here to let interested people know, that I have a somewhat working integration of the VHCR model on this fork: https://github.com/Mrpatekful/ParlAI/tree/dialogwae. VHCR is a state-of-the-art dialog model, and I used the official implementation (https://github.com/ctr4si/A-Hierarchical-Latent-Structure-for-Variational-Conversation-Modeling).

The model is far from done, however; I haven't really tested it yet (the loss does at least seem to go down). I am still working on the generation function at test time, and I haven't thought yet about how to integrate beam search. I plan to send a PR when I finish these tasks, and I am happy to collaborate if anyone is up for it.

ricsinaruto avatar Nov 30 '18 12:11 ricsinaruto

Thanks for the updates @ricsinaruto!

I wanted to quickly note that @stephenroller landed #1260 two weeks ago, which provides a lot of the wrapping around typical generator code. This makes the seq2seq code at parlai/agents/seq2seq/seq2seq.py remarkably short in the current version, and includes functionality for doing beam search for you. You might find it quite a bit easier to rebase and subclass this new TorchGeneratorAgent (parlai/core/torch_generator_agent.py).
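The beam search that TorchGeneratorAgent handles for you is, at its core, the standard algorithm: keep the top-k scoring partial hypotheses at each decoding step. A framework-free sketch (the `step_fn` interface here is hypothetical, not ParlAI's API):

```python
def beam_search(step_fn, start, beam_size=2, max_len=4, end_token="</s>"):
    """Minimal beam search: at each step, expand every unfinished
    hypothesis and keep only the `beam_size` best by cumulative
    log-probability. `step_fn(seq)` returns a dict mapping candidate
    next tokens to their log-probabilities."""
    beams = [([start], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq[-1] == end_token:
                # Finished hypotheses carry over unchanged.
                candidates.append((seq, score))
                continue
            for tok, logp in step_fn(seq).items():
                candidates.append((seq + [tok], score + logp))
        # Prune to the top `beam_size` hypotheses.
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:beam_size]
    return beams[0][0]  # best finished (or longest) hypothesis
```

Subclassing the generator agent means you implement only the model's single-step scoring; the search loop above (plus batching and length normalization in the real thing) comes for free.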

alexholdenmiller avatar Nov 30 '18 20:11 alexholdenmiller

Yeah, I knew about that, but thanks for bringing it to my attention. So far I've actually subclassed the seq2seq agent because it has a lot of functionality, but I will switch to this new generator agent, as it would be cleaner.

ricsinaruto avatar Dec 01 '18 11:12 ricsinaruto

Yes in the master branch nearly all of the functionality you were using in your fork has been moved to the TorchGeneratorAgent, actually!

alexholdenmiller avatar Dec 03 '18 15:12 alexholdenmiller

This issue has not had activity in 30 days. Marking as stale.

github-actions[bot] avatar Jun 03 '20 00:06 github-actions[bot]

In my experiments with BlenderBot 1.0, the 1B was nearly as fast as the 400M model but showed much better conversational performance. The 1B was also much faster than the 3B model.

Therefore, may I ask the BlenderBot 2.0 team @stephenroller @alexholdenmiller et al.: any chance you'd consider releasing a 1B model for BlenderBot 2.0 as well soon?

I guess this might benefit many other people as well :)

agilebean avatar Sep 19 '21 03:09 agilebean