
[💡SUG] RepeatNet expensive tensor multiplications of very sparse matrices

Open bkersbergen opened this issue 1 year ago • 5 comments

Is your feature request related to a problem? Please describe. The RepeatNet model contains expensive tensor multiplications of very sparse matrices. These multiplications are implemented with dense operations and representations, and therefore incur high overheads when self.num_item becomes large.

Describe the solution you'd like Both the 'repeat' and the 'explore' modules one-hot encode each item in item_seq into a vector of length self.num_item, and then multiply the resulting matrices with a hidden state. Because the multiplication of these very sparse matrices is implemented with dense operations and representations, it incurs high memory and computational overheads at two locations (a sketch of the pattern follows the list):

- Dense matrix operation in Repeat_Recommendation_Decoder
- Dense matrix operation in Explore_Recommendation_Decoder
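
For concreteness, here is a minimal toy sketch of the dense pattern described above. The sizes and the `attn` weights are made-up stand-ins for the per-position scores each decoder derives from its hidden state; this is not RecBole's actual code:

```python
import torch

# Illustrative sizes only; num_item is the full item-vocabulary size.
batch_size, seq_len, num_item = 64, 50, 50_000

item_seq = torch.randint(0, num_item, (batch_size, seq_len))
# Stand-in for the per-position scores derived from the hidden state.
attn = torch.softmax(torch.randn(batch_size, seq_len), dim=-1)

# Dense formulation: one-hot encode every position in item_seq, then bmm.
# The intermediate is (batch, seq_len, num_item): ~640 MB at fp32 here,
# growing linearly with num_item even though each row holds a single 1.
one_hot = torch.zeros(batch_size, seq_len, num_item)
one_hot.scatter_(2, item_seq.unsqueeze(-1), 1.0)
scores = torch.bmm(attn.unsqueeze(1), one_hot).squeeze(1)  # (batch, num_item)
```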

Both multiplications can be performed efficiently using PyTorch's sparse API, as sketched below.
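
One possible sparse formulation (a sketch, not a proposed patch): build a sparse COO tensor whose nonzeros sit at (batch index, item id) with the attention weights as values; coalescing sums the values at duplicate indices, which is exactly what the matmul against the one-hot map computes.

```python
import torch

batch_size, seq_len, num_item = 64, 50, 50_000
item_seq = torch.randint(0, num_item, (batch_size, seq_len))
attn = torch.softmax(torch.randn(batch_size, seq_len), dim=-1)

# Nonzeros live at (batch index, item id) with the attention weight as value;
# coalesce() sums values at duplicate indices, which handles repeated items.
idx = torch.stack([
    torch.arange(batch_size).repeat_interleave(seq_len),  # batch index per position
    item_seq.reshape(-1),                                 # item id per position
])
scores = torch.sparse_coo_tensor(
    idx, attn.reshape(-1), size=(batch_size, num_item)
).coalesce().to_dense()  # only batch*seq_len nonzeros are ever materialized
```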


bkersbergen avatar Oct 23 '23 08:10 bkersbergen

@bkersbergen Thank you for your suggestion! We will consider it in our next development plan.

BishopLiu avatar Oct 24 '23 06:10 BishopLiu

You can check this modification of RepeatNet: https://github.com/iesl/softmax_CPR_recommend/commit/dccc0f631883ced3eccfee637dac015ddcf9c151#diff-83d3abd653e8e8377a420b979057df9d9064c531bc608ffc08d2e3c7b8be1004

This code lets you avoid the expensive multiplication without using PyTorch's sparse API.
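
For readers who don't follow the link, the general idea (my reconstruction of the technique, not the commit's literal code) is to scatter the attention weights directly onto the item dimension, so the one-hot map is never materialized at all:

```python
import torch

batch_size, seq_len, num_item = 64, 50, 50_000
item_seq = torch.randint(0, num_item, (batch_size, seq_len))
attn = torch.softmax(torch.randn(batch_size, seq_len), dim=-1)

# scores[b, item_seq[b, t]] += attn[b, t]; no (batch, seq_len, num_item)
# intermediate is ever built, and scatter_add_ is differentiable w.r.t. attn.
# In the real model, padded positions would be masked out beforehand.
scores = torch.zeros(batch_size, num_item)
scores.scatter_add_(1, item_seq, attn)
```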

ken77921 avatar Nov 05 '23 00:11 ken77921

@ken77921 Thank you for providing the code! We will check it and give feedback to you soon.

BishopLiu avatar Nov 06 '23 03:11 BishopLiu

@bkersbergen @ken77921 Hello! We have optimized the expensive tensor multiplication in RepeatNet. Details are available in #1916. Thanks again for your suggestion and support!

BishopLiu avatar Nov 15 '23 16:11 BishopLiu

Great to hear that the tensor multiplication in RepeatNet has been optimized! Thank you for addressing the inefficiency; I'll definitely check out the details. Your responsiveness to feedback is appreciated, and I'm glad I could contribute.

bkersbergen avatar Nov 15 '23 21:11 bkersbergen