
PyTorch domain library for recommendation systems

Results: 455 torchrec issues, sorted by recently updated

Differential Revision: D38637338

CLA Signed
fb-exported

Summary: For batched dense embeddings we incorrectly provide a `named_buffers` implementation due to inheritance. The weight values of batched dense embeddings should be exposed only through `named_parameters`, not through `named_buffers`...
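
To make the intended contract concrete, here is a minimal sketch (not the actual TorchRec fix; `BatchedDenseEmbedding` is a hypothetical stand-in): the table weight is a trainable parameter, so it should appear in `named_parameters()` and not leak into `named_buffers()` through an inherited implementation.

```python
import torch
import torch.nn as nn


class BatchedDenseEmbedding(nn.Module):  # hypothetical stand-in module
    def __init__(self, num_embeddings: int, embedding_dim: int) -> None:
        super().__init__()
        # Registered as a Parameter, so it surfaces in named_parameters().
        self.weight = nn.Parameter(torch.randn(num_embeddings, embedding_dim))

    def named_buffers(self, prefix="", recurse=True, remove_duplicate=True):
        # Dense embeddings carry no non-trainable state: yield nothing here
        # instead of inheriting an implementation that also reports weights.
        return iter(())


m = BatchedDenseEmbedding(10, 4)
assert "weight" in dict(m.named_parameters())
assert not dict(m.named_buffers())
```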

CLA Signed
fb-exported

Summary: Incorporate TorchRec's OSS planner into train_module. The OSS planner can be enabled by setting the `use_torchrec_oss_planner` flag to true in SharderOptions (https://fburl.com/code/xjlbrscm). When the flag is true, `run_planner` calls `run_oss_planner`...
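
For context, here is a hedged sketch of what routing to the OSS planner looks like using public TorchRec APIs (`SharderOptions` and the flag itself are internal, so this only illustrates the planner call the flag would dispatch to):

```python
import torch
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology

ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t1", embedding_dim=64, num_embeddings=10_000, feature_names=["f1"]
        )
    ],
    device=torch.device("meta"),
)

planner = EmbeddingShardingPlanner(
    topology=Topology(world_size=2, compute_device="cuda")
)
# plan() searches the sharding option space and returns a ShardingPlan.
plan = planner.plan(module=ebc, sharders=[EmbeddingBagCollectionSharder()])
```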

CLA Signed
fb-exported

Summary: This change adds a new `PooledEmbeddingsReduceScatterV`, which uses `_reduce_scatter_v` and `_all_gather_v` from c10d to support row-wise (RW) and table-wise-row-wise (TWRW) sharding.

Differential Revision: D37735264
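
The semantics of the variable-length ("v") collective can be illustrated with a single-process sketch (the `_reduce_scatter_v` collective itself is internal, so this simulates its effect with plain tensor ops): every rank contributes pooled embeddings for all ranks, and each rank receives the elementwise sum of its own, possibly uneven, row block.

```python
import torch

world_size = 3
splits = [2, 5, 1]  # uneven per-rank row counts -- hence the "v" variant
dim = 4

# inputs[r] is what rank r would feed into the collective.
inputs = [torch.randn(sum(splits), dim) for _ in range(world_size)]

# What rank r receives: the sum over all ranks of its own row block.
reduced = torch.stack(inputs).sum(dim=0)
outputs = list(reduced.split(splits, dim=0))
assert [o.shape[0] for o in outputs] == splits
```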

CLA Signed
fb-exported

Summary: Update torchrec docs to include Docker.

Differential Revision: D38466887

CLA Signed
fb-exported

When I use Horovod to run DeepFM, it raises an error:

```
File "/home/maer/zhipeng.li/project/torch_rec_demo/torchrec_ctr/run_horovod_deepfm.py", line 451, in <module>
    main(args)
File "/home/maer/zhipeng.li/project/torch_rec_demo/torchrec_ctr/run_horovod_deepfm.py", line 380, in main
    model = init_model(device=device)
File "/home/maer/zhipeng.li/project/torch_rec_demo/torchrec_ctr/run_horovod_deepfm.py", line 187, in init_model
...
```

Summary: The current EmbeddingBagCollection/FusedEmbeddingBagCollection modules are only usable through the DistributedModelParallel wrapper, which overrides common torch.nn.Module APIs (named_parameters/state_dict, etc.). However, this makes these modules inflexible, and sometimes unusable, without using DMP....
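
For illustration, a hedged sketch of using `EmbeddingBagCollection` standalone via public TorchRec APIs, with the standard `torch.nn.Module` surface intact:

```python
import torch
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig, KeyedJaggedTensor

ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t1", embedding_dim=8, num_embeddings=100, feature_names=["f1"]
        )
    ],
    device=torch.device("cpu"),
)

# Two examples for feature "f1": ids [1, 2] and [3].
kjt = KeyedJaggedTensor(
    keys=["f1"],
    values=torch.tensor([1, 2, 3]),
    lengths=torch.tensor([2, 1]),
)
pooled = ebc(kjt)
print(pooled["f1"].shape)  # torch.Size([2, 8])

# nn.Module APIs behave as expected on the unwrapped module.
print(sorted(dict(ebc.named_parameters()).keys()))
```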

CLA Signed
fb-exported

Summary: Since we no longer rely on DistributedModelParallel (for the composability piece), we need an alternative way of getting the fused optimizer. `get_fused_optimizer` implements this; logically it's the same as the...
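
A minimal sketch of what such a helper could look like (an assumption about the shape of `get_fused_optimizer`, mirroring what DistributedModelParallel does internally): walk the module tree, collect the fused optimizer of every `FusedOptimizerModule`, and combine them.

```python
import torch.nn as nn
from torchrec.optim.fused import FusedOptimizerModule
from torchrec.optim.keyed import CombinedOptimizer


def get_fused_optimizer(model: nn.Module) -> CombinedOptimizer:
    # Hypothetical helper: collect per-module fused optimizers.
    optims = []
    for name, module in model.named_modules():
        if isinstance(module, FusedOptimizerModule):
            # Parameter updates happen inside the backward pass; the fused
            # optimizer is exposed for state_dict and hyperparameter control.
            optims.append((name, module.fused_optimizer))
    return CombinedOptimizer(optims)
```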

CLA Signed
fb-exported

## Desired behavior

We want to make TorchRec sharding composable with other sharding/parallelism techniques. Practically, this means that after applying TorchRec sharding, model characteristics remain the same (e.g. state_dict() doesn't...
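
The invariant can be stated as a small check (a sketch assuming a hypothetical `shard_model` entry point, standing in for whichever composable sharding API lands):

```python
def check_state_dict_preserved(model, shard_model):
    # shard_model is a hypothetical composable sharding call.
    before = set(model.state_dict().keys())
    sharded = shard_model(model)
    after = set(sharded.state_dict().keys())
    # Same fully-qualified names before and after sharding.
    assert before == after, before.symmetric_difference(after)
    return sharded
```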

Summary: Prune sharding options in the greedy proposer, as we already do in the grid-search proposer, to reduce the search space. By doing so, the number of proposals decreases and thus...
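
A minimal sketch of the pruning idea (hypothetical, not the TorchRec planner API): rank each table's sharding options by an estimated cost and keep only the top k before the greedy proposer enumerates proposals.

```python
from typing import Dict, List


def prune_options(
    options_per_table: Dict[str, List[dict]], top_k: int = 3
) -> Dict[str, List[dict]]:
    # Hypothetical: each option dict carries an "estimated_cost" entry.
    pruned = {}
    for table, options in options_per_table.items():
        ranked = sorted(options, key=lambda opt: opt["estimated_cost"])
        pruned[table] = ranked[:top_k]  # drop everything past top_k
    return pruned
```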

CLA Signed
fb-exported