irgan
about training batch size in Item Recommendation
I notice that in the item recommendation code, both dynamic negative sampling and the IRGAN generator are trained on a single user per iteration. From my perspective, this training strategy is equivalent to setting the training batch size to 1, which seems rather unusual to me. Could you please explain why you chose this training strategy? Thanks.
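To make the point concrete, here is a minimal sketch (not the repository's actual code; `update_fn` and the step counts are purely illustrative) contrasting the per-user loop described above, which performs one gradient step per user and hence has an effective batch size of 1, with a conventional minibatch loop:

```python
def per_user_updates(users, update_fn):
    """Per-user training as described above: one update step per user,
    i.e. an effective batch size of 1."""
    for u in users:
        update_fn([u])

def batched_updates(users, update_fn, batch_size):
    """Conventional minibatch training: one update step per batch of users."""
    for i in range(0, len(users), batch_size):
        update_fn(users[i:i + batch_size])

# Count update steps to show the difference (toy example, no real model).
steps = {"per_user": 0, "batched": 0}
users = list(range(100))
per_user_updates(users, lambda batch: steps.update(per_user=steps["per_user"] + 1))
batched_updates(users, lambda batch: steps.update(batched=steps["batched"] + 1),
                batch_size=16)
print(steps)  # per-user: 100 steps; batched (size 16): 7 steps
```

With 100 users, the per-user strategy takes 100 noisy single-sample gradient steps per epoch, while a batch size of 16 would take only 7 averaged steps, which is why the choice affects both training stability and speed.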
I am also a bit puzzled by the choice of iterating over all users one at a time when training the generator. Is there a specific reason for doing this? Thank you!
@pangolulu Any insight about this? Thank you!