SASRec.pytorch
                                
PyTorch (1.6+) implementation of https://github.com/kang205/SASRec
In [this line](https://github.com/pmixer/SASRec.pytorch/blob/master/model.py#L117), what is the best way to think about this `matmul`? I see that it is computing dot products between `final_feat` and each embedding in `item_embs`. If...
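For reference, a rough sketch of what that `matmul` computes (shapes and values below are assumptions for illustration, not repo code): with one user's final sequence representation and a set of candidate item embeddings, the operation is just a batched dot product that yields one relevance score per candidate item.

```python
import torch

# Hypothetical shapes mirroring predict():
# final_feat: representation of the user's sequence at the last position.
# item_embs:  embeddings of the candidate items to be scored.
hidden_dim, num_candidates = 50, 101
final_feat = torch.randn(hidden_dim)
item_embs = torch.randn(num_candidates, hidden_dim)

# [num_candidates, hidden_dim] @ [hidden_dim] -> [num_candidates]:
# each entry is the dot product <item_embs[i], final_feat>, i.e. the
# unnormalized relevance score of candidate i for this user.
logits = item_embs.matmul(final_feat)

# Equivalent, more explicit form of the same dot products.
logits_explicit = (item_embs * final_feat.unsqueeze(0)).sum(dim=-1)
assert torch.allclose(logits, logits_explicit, atol=1e-5)
```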
I am testing with a large dataset of 10+ million users on a machine with 64 GB of RAM. The dataset fits in RAM initially, but as training progresses, e.g. during epoch...
Is there a way to make the predict method work as follows (this could be used for real-time cases): given a new user's sequence of items that the...
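For reference, a minimal sketch of how scoring a brand-new sequence could look with the current `predict()` interface; the helper name, the dummy user id, and the candidate list are assumptions for illustration, not repo code.

```python
import numpy as np
import torch

# Hypothetical helper: score a fresh interaction sequence at serving time,
# assuming a trained SASRec `model`, its `maxlen`, and a list of candidate item ids.
def score_new_sequence(model, item_sequence, candidate_items, maxlen):
    log_seqs = np.zeros((1, maxlen), dtype=np.int64)
    recent = item_sequence[-maxlen:]               # keep only the most recent items
    log_seqs[0, maxlen - len(recent):] = recent    # left-pad with 0, the padding id
    with torch.no_grad():
        # SASRec does not use the user id itself, so a dummy id is passed here.
        logits = model.predict(np.array([0]), log_seqs, np.array(candidate_items))
    return logits.squeeze(0)                       # one score per candidate; higher = more relevant
```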
I am experiencing an issue when I feed the network sequences in which the last item is replaced by padding (0). In this case, the trained model always outputs the...
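A small illustration of what likely happens in that case, assuming the standard masking in `log2feats`: embeddings at positions whose item id is 0 are zeroed out, so when the last slot is padding, the final position used by `predict()` carries no item information.

```python
import torch

log_seqs = torch.LongTensor([[0, 0, 12, 7, 0]])   # hypothetical sequence, last slot padded
timeline_mask = (log_seqs == 0)                   # True where the position is padding
seqs = torch.randn(1, 5, 4)                       # stand-in for item embeddings [batch, len, hidden]
seqs = seqs * (~timeline_mask).unsqueeze(-1)      # padded positions become all-zero vectors
print(seqs[0, -1])                                # zero vector at the last (padded) position
```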
In the sample_function (utils module, line 36) there is a `while True` loop, where the result of sample() is appended infinitely many times.
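For context, a minimal sketch of that producer pattern (names and values are simplified assumptions, not the repo's exact code): the loop runs forever on purpose, pushing batches into a bounded queue that the training loop drains one batch per step, so memory stays bounded.

```python
import multiprocessing as mp
import random

def sample_function(result_queue, batch_size, seed):
    # Hypothetical simplified sampler: sample() stands in for drawing (user, seq, pos, neg).
    random.seed(seed)
    def sample():
        return random.randint(1, 100)
    while True:  # intentional: produce batches forever
        batch = [sample() for _ in range(batch_size)]
        result_queue.put(batch)  # blocks when the queue is full, capping memory use

if __name__ == "__main__":
    queue = mp.Queue(maxsize=10)              # bounded queue
    worker = mp.Process(target=sample_function, args=(queue, 4, 0), daemon=True)
    worker.start()
    for step in range(3):                     # the training loop pulls one batch per step
        print(queue.get())
```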
I don't quite understand why there is a minus sign in front of the prediction. Could the author please explain?
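For reference, a small illustration of the usual reason for such a negation, assuming it refers to the `-model.predict(...)` call in the evaluation code: argsort ranks in ascending order, so negating the scores makes the highest-scoring item receive rank 0.

```python
import torch

scores = torch.tensor([0.2, 1.5, -0.3, 0.9])   # hypothetical item scores, higher = better

# argsort().argsort() turns values into ranks; negating first makes
# the largest score receive rank 0, the second largest rank 1, and so on.
ranks = (-scores).argsort().argsort()
print(ranks)   # tensor([2, 0, 3, 1]) -> the item with score 1.5 is ranked first
```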
`pos_logits = (log_feats * pos_embs).sum(dim=-1)` `neg_logits = (log_feats * neg_embs).sum(dim=-1)` Could the author explain these two steps and why the tensors are multiplied element-wise? After reading this paper, I didn't...
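A rough sketch of those two lines, assuming `log_feats`, `pos_embs`, and `neg_embs` all have shape [batch, seq_len, hidden]: the element-wise multiply followed by a sum over the last dimension is a dot product at every position, giving one score per position for the positive (true next) item and one for the sampled negative item.

```python
import torch

batch, seq_len, hidden = 2, 5, 8
log_feats = torch.randn(batch, seq_len, hidden)   # sequence representations
pos_embs = torch.randn(batch, seq_len, hidden)    # embeddings of the true next items
neg_embs = torch.randn(batch, seq_len, hidden)    # embeddings of sampled negatives

# Element-wise multiply then sum over hidden == dot product per (batch, position).
pos_logits = (log_feats * pos_embs).sum(dim=-1)   # shape [batch, seq_len]
neg_logits = (log_feats * neg_embs).sum(dim=-1)   # shape [batch, seq_len]

# The same thing written as an explicit per-position dot product.
pos_check = torch.einsum('bld,bld->bl', log_feats, pos_embs)
assert torch.allclose(pos_logits, pos_check, atol=1e-5)
```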
The code runs well, but if I replace the given dataset with the dataset used in TiSASRec, which was also written by you (the first line is 1 1193...
I found `seqs *= self.item_emb.embedding_dim ** 0.5` in the function `log2feats(self, log_seqs)`. Is there any reason for scaling the sequences after embedding? `seqs = self.item_emb(torch.LongTensor(log_seqs).to(self.dev))` `seqs *= self.item_emb.embedding_dim ** 0.5`
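One common reading of that line (an assumption here, not confirmed by the repo) is the Transformer convention of scaling embeddings by sqrt(d_model) so their magnitude is comparable to the positional encodings added right after. A minimal sketch of the same scaling outside the model:

```python
import torch
import torch.nn as nn

num_items, embedding_dim, maxlen = 1000, 50, 10
item_emb = nn.Embedding(num_items + 1, embedding_dim, padding_idx=0)

log_seqs = torch.randint(1, num_items + 1, (2, maxlen))   # hypothetical item id sequences
seqs = item_emb(log_seqs)                                  # [batch, maxlen, embedding_dim]
seqs = seqs * (embedding_dim ** 0.5)                       # scale by sqrt(d), as in the Transformer
```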
https://github.com/pmixer/SASRec.pytorch/blob/72ba89d4a1d0319389ef67ee416e33b7431c8b9b/model.py#L85 Can you explain what this line does? Why is the attention output being added to Q?
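For context, a minimal sketch of that pattern, assuming the referenced line is the residual connection around attention: adding the attention output back to its query input is the standard Transformer skip connection, so each block learns a refinement of its input rather than a full replacement.

```python
import torch
import torch.nn as nn

hidden, heads, seq_len, batch = 50, 1, 10, 2
attn = nn.MultiheadAttention(hidden, heads)

seqs = torch.randn(seq_len, batch, hidden)   # [seq_len, batch, hidden], as nn.MultiheadAttention expects
Q = seqs                                     # queries are the (normalized) sequence itself
mha_outputs, _ = attn(Q, seqs, seqs)

# Residual (skip) connection: the block's output is its input plus the attention update.
seqs = Q + mha_outputs
```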