torchrec

PyTorch domain library for recommendation systems

Results: 455 torchrec issues, sorted by recently updated

Summary: PP requires non-contiguous DMP sharding. In today's torchrec planner, ranks are assumed to be contiguous in several places, which prevents intra-host pipeline parallelism from utilizing...
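For context, a minimal sketch of how the planner is typically invoked; the table configuration and world size below are illustrative assumptions, not taken from the issue. The plan is built over the ranks described by the `Topology`, which is where the contiguous-rank assumption enters.

```python
import torch
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology

# Illustrative table config (hypothetical sizes, not from the issue).
ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t0",
            embedding_dim=64,
            num_embeddings=1_000_000,
            feature_names=["f0"],
        )
    ],
    device=torch.device("meta"),
)

# Topology describes the ranks the planner shards over; today the planner
# effectively treats them as a contiguous 0..world_size-1 range.
topology = Topology(world_size=8, local_world_size=8, compute_device="cuda")
planner = EmbeddingShardingPlanner(topology=topology)
plan = planner.plan(ebc, [EmbeddingBagCollectionSharder()])
```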

CLA Signed
fb-exported

Reviewed By: PaulZhang12 Differential Revision: D55389988

CLA Signed
fb-exported

Hi team, in `ShardedEmbeddingBagCollection` I found that torchrec explicitly wraps the data-parallel lookup in `DistributedDataParallel` ([code here](https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/embeddingbag.py#L503)). I also know that inside [DistributedModelParallel](https://github.com/pytorch/torchrec/blob/main/torchrec/distributed/model_parallel.py#L216) we have a DDP wrapper to wrap the non-sharded part...
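For reference, a minimal sketch of the usual entry point, assuming a single-rank process group purely for illustration (table sizes are made up). Per the linked code, tables that end up data-parallel are handled via DDP inside the sharded embedding module, while remaining non-sharded parameters are wrapped by `DistributedModelParallel`'s own DDP pass.

```python
import os
import torch
import torch.distributed as dist
from torchrec import EmbeddingBagCollection, EmbeddingBagConfig
from torchrec.distributed.model_parallel import DistributedModelParallel

# Single-process group for illustration; normally this is launched with torchrun.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="nccl", rank=0, world_size=1)

ebc = EmbeddingBagCollection(
    tables=[
        EmbeddingBagConfig(
            name="t0",
            embedding_dim=64,
            num_embeddings=10_000,  # hypothetical size
            feature_names=["f0"],
        )
    ],
    device=torch.device("meta"),
)

# DMP plans and applies sharding; data-parallel tables are wrapped in DDP by the
# sharded embedding module, and any non-sharded leftovers by DMP itself.
model = DistributedModelParallel(ebc, device=torch.device("cuda"))
```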

Summary: Sort group keys in embedding_sharding so that keys (lookups) with has_feature_processor=True execute first. Differential Revision: D55045404
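In pseudocode terms, the intent is a stable sort on the flag; the `GroupKey` class and field names below are stand-ins for the actual grouping structures in embedding_sharding, not the real code.

```python
from dataclasses import dataclass

# Hypothetical stand-in for the grouping config used in embedding_sharding.
@dataclass
class GroupKey:
    name: str
    has_feature_processor: bool

groups = [GroupKey("plain", False), GroupKey("with_fp", True)]

# Stable sort: groups with has_feature_processor=True come first,
# so their lookups execute before the plain ones.
groups.sort(key=lambda g: not g.has_feature_processor)
print([g.name for g in groups])  # ['with_fp', 'plain']
```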

CLA Signed
fb-exported

When I try to walk through the steps of the Colab demo for TorchRec, I get this error. Here is the link to the demo: https://colab.research.google.com/github/pytorch/torchrec/blob/main/Torchrec_Introduction.ipynb#scrollTo=4-v17rxkopQw

Summary: When the available scaleup budget is larger than the amount of memory needed to promote all eligible scaleup tables to HBM, limit the search space to this ceiling; otherwise we'll consume...
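In arithmetic terms, the change amounts to clamping the budget the proposer searches over; the variable names and sizes below are illustrative, not the planner's actual code.

```python
# Hypothetical per-table HBM cost (bytes) for the eligible scaleup tables.
eligible_table_hbm_bytes = [2 * 2**30, 1 * 2**30, 512 * 2**20]

available_scaleup_budget = 16 * 2**30  # budget granted to the proposer

# Cap the search space: there is no point searching beyond what promoting
# every eligible table to HBM would actually require.
promotion_ceiling = sum(eligible_table_hbm_bytes)
effective_budget = min(available_scaleup_budget, promotion_ceiling)
print(effective_budget)  # 3.5 GiB here, even though 16 GiB was available
```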

CLA Signed
fb-exported

Differential Revision: D54731104

CLA Signed
fb-exported

Differential Revision: D54756659

CLA Signed
fb-exported

Hello, I generated a KJT whose `lengths` is a tensor full of 1s, and then I get an error like: keys = ['f0', 'f1', 'f2', 'f3', 'f4', 'f5', ...], stride...
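For anyone reproducing this, a minimal sketch of building a `KeyedJaggedTensor` with all-ones lengths; the keys, values, and batch size are made up, and the stride is inferred from `len(lengths) // len(keys)`.

```python
import torch
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

# Hypothetical setup: 3 features, batch size 2, one value per (feature, sample),
# so lengths is a tensor of all 1s with len(keys) * batch_size entries.
kjt = KeyedJaggedTensor(
    keys=["f0", "f1", "f2"],
    values=torch.tensor([10, 11, 12, 13, 14, 15]),
    lengths=torch.ones(6, dtype=torch.int64),
)
print(kjt.stride())        # 2, inferred from len(lengths) // len(keys)
print(kjt["f1"].values())  # tensor([12, 13])
```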

Summary: As titled. Created from CodeHub with https://fburl.com/edit-in-codehub. Reviewed By: sarckk. Differential Revision: D54489049

CLA Signed
fb-exported