
Multi-model inference

Open budbuddy opened this issue 1 year ago • 2 comments

Hello,

One of the main use cases when deploying real-world models is to train several "expert models" on different subtasks of the main inference task, then use some linear combination of them at inference time.

Example:

  • Model1 learns to predict clicks on an item
  • Model2 learns to predict purchases of an item

Main model = a·model1 + b·model2

Here the task is separated into two subtasks, and the model used to give recommendations is a linear combination of the two.

Do you have any plans to support this in the library, or should it be done case by case for each project? I'm not even sure there's a clean way to do multi-model inference, especially given that the inference functions of the different model types you offer can work in very different ways. For example, I think it's very easy for Two Tower: you can pretty much just add the models together and get the expected output. But things get a bit trickier if you add a model like DIN to the mix.
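For the case-by-case route, a naive approach is to wrap the per-task models behind a shared scoring interface and combine their scores with fixed weights. The sketch below is illustrative only: it assumes each model exposes a `predict(user, item)` method returning a float, and the `ClickModel`/`PurchaseModel` classes are dummy stand-ins, not LibRecommender code (whose actual predict signatures vary by model type).

```python
# Minimal sketch of linear score blending across two "expert" models.
# All class names and the predict(user, item) signature are assumptions
# for illustration, not part of the LibRecommender API.

class WeightedEnsemble:
    """Scores an item as sum(weight_i * model_i.predict(user, item))."""

    def __init__(self, models_and_weights):
        # models_and_weights: list of (model, weight) pairs
        self.models_and_weights = models_and_weights

    def predict(self, user, item):
        return sum(w * m.predict(user, item)
                   for m, w in self.models_and_weights)


# Dummy stand-ins so the sketch is runnable:
class ClickModel:
    def predict(self, user, item):
        return 0.8  # pretend click probability


class PurchaseModel:
    def predict(self, user, item):
        return 0.2  # pretend purchase probability


ensemble = WeightedEnsemble([(ClickModel(), 0.7), (PurchaseModel(), 0.3)])
score = ensemble.predict(user=1, item=42)  # 0.7*0.8 + 0.3*0.2 ≈ 0.62
```

This works cleanly only when every model can score an arbitrary (user, item) pair on the same scale; models whose inference depends on behavior sequences (like DIN) or that produce embeddings rather than scores (like Two Tower retrieval) would need extra adaptation at the wrapper boundary.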

budbuddy avatar Aug 14 '24 08:08 budbuddy

Hi, I wouldn't recommend doing multi-model inference with the models in this library as they are primarily designed for single tasks.

I'm planning a major upgrade of the whole library, which aims to convert all models to a PyTorch implementation, since the tf1 syntax used now is outdated. Support for multi-task models will also be included, but this new version won't be released for at least a few months.

massquantity avatar Aug 16 '24 04:08 massquantity

That's good to hear; personally I much prefer PyTorch's syntax over TensorFlow's. Thank you for the update.

budbuddy avatar Aug 16 '24 08:08 budbuddy