torchrec

PyTorch domain library for recommendation systems

Results: 455 torchrec issues, sorted by recently updated

It seems torchrec does not support combining data parallelism with row-wise parallelism for embeddings. Is there a plan to support this? Or is row-wise parallelism...
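For context, torchrec's planner already lets callers constrain individual tables to a particular sharding type; below is a minimal sketch of requesting row-wise sharding. The table name `"table_0"`, the world size, and the commented-out `collective_plan` call are illustrative assumptions, not part of the issue.

```python
# Sketch: pin one embedding table to row-wise sharding via the planner.
# Assumes a distributed process group and a TorchRec model already exist.
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology
from torchrec.distributed.planner.types import ParameterConstraints
from torchrec.distributed.types import ShardingType

constraints = {
    # "table_0" is a placeholder table name.
    "table_0": ParameterConstraints(
        sharding_types=[ShardingType.ROW_WISE.value],
    ),
}

planner = EmbeddingShardingPlanner(
    topology=Topology(world_size=8, compute_device="cuda"),
    constraints=constraints,
)
# plan = planner.collective_plan(model, sharders, process_group)
```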

# Describe the bug
A PSCollection should contain optimizer states besides weights. The optimizer state tensors are obtained directly [from the EmbeddingCollection module](https://github.com/pytorch/torchrec/blob/main/contrib/dynamic_embedding/src/torchrec_dynamic_embedding/ps.py#L165-L168). However, [sharded_module.fused_optimizer.state_dict()['state']](https://github.com/pytorch/torchrec/blob/main/contrib/dynamic_embedding/src/torchrec_dynamic_embedding/ps.py#L153C32-L153C75) does not contain the key `{table_name}.momentum2`...

bug
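A hedged inspection sketch of the reported gap, assuming `sharded_module` is an already-sharded EmbeddingCollection whose fused optimizer keeps two momentum buffers; the key suffix follows the report's `{table_name}.momentum2` format and is not verified here.

```python
# Inspect which optimizer states the fused optimizer actually exposes.
state = sharded_module.fused_optimizer.state_dict()["state"]
print(sorted(state.keys()))

# Per the report, "<table_name>.momentum2" entries are expected here so that
# PSCollection can fetch the second momentum state, but they are missing.
momentum2_keys = [k for k in state if k.endswith("momentum2")]
print("momentum2 keys found:", momentum2_keys)  # empty in the buggy case
```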

Summary: As titled. Differential Revision: D58834418

CLA Signed
fb-exported

Summary:
# context
* convert `FeatureProcessedEmbeddingBagCollection` to a custom op in IR export
* add serialization and deserialization functions for FPEBC
* add an API for the `FeatureProcessorInterface` to export necessary...

CLA Signed
fb-exported

Differential Revision: D59019375

CLA Signed
fb-exported

Summary: Fuse states for each metric to reduce computation overhead. This will help **every** model that uses RecMetrics. By fusing states we no longer all-gather per state; we see...

CLA Signed
fb-exported
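To illustrate the fusion idea in isolation (not the RecMetrics implementation itself), here is a sketch that replaces one `all_gather` per state tensor with a single collective over a concatenated buffer, assuming `torch.distributed` is already initialized and all states share a dtype.

```python
from typing import List

import torch
import torch.distributed as dist


def fused_all_gather(states: List[torch.Tensor]) -> List[List[torch.Tensor]]:
    """Gather several state tensors from every rank with one collective."""
    sizes = [s.numel() for s in states]
    fused = torch.cat([s.reshape(-1) for s in states])  # fuse into one buffer

    world_size = dist.get_world_size()
    gathered = [torch.empty_like(fused) for _ in range(world_size)]
    dist.all_gather(gathered, fused)  # one all_gather instead of len(states)

    # Split each rank's fused buffer back into the original per-state layout.
    return [list(torch.split(buf, sizes)) for buf in gathered]
```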

Summary: packaging issue when importing `_permute_tensor_by_segments`; refactor jagged_tensor.py once jagged_tensor has the changes. Differential Revision: D59014188

CLA Signed
fb-exported

Summary: Implementing feature grouping in the APS model generator to enable unified embedding onboarding to FM. This diff introduces a new config section "unified_embedding" inside `AdsFeatureArchEntityConfig` and `AdsFeatureArchConfig` (for v1/v2 compatibility)...

CLA Signed
fb-exported

Use `ast.unparse(ast.parse(code))` to normalize the source code. This ignores formatting differences and allows us to enable a code formatter on `torch` upstream. As per:
- https://github.com/pytorch/pytorch/pull/128594#issuecomment-2181558483
- https://github.com/pytorch/pytorch/pull/128594#discussion_r1649795743

CLA Signed
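The normalization is simple enough to demonstrate directly: `ast.unparse` (Python 3.9+) drops comments and formatting, so sources that differ only in layout compare equal after a parse/unparse round trip.

```python
import ast


def normalize(source: str) -> str:
    # Round-trip through the AST: formatting and comments are discarded.
    return ast.unparse(ast.parse(source))


a = "def f(x,y):  return (x+ y)  # comment"
b = "def f(x, y):\n    return x + y"
assert normalize(a) == normalize(b)
```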

Summary:
# specs
* inputs: a list of KTs
  a. the N-th KT has a shape of (batch_size, dimN); batch_size should be identical across KTs
  b. the N-th KT contains a list...

CLA Signed
fb-exported
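A hedged sketch of the described inputs using the public `KeyedTensor` API: two KTs with the same batch size are merged by concatenating their dense values along dim 1. The keys, per-key lengths, and batch size below are made-up illustrations, not values from the diff.

```python
import torch
from torchrec.sparse.jagged_tensor import KeyedTensor

batch_size = 2
kt_1 = KeyedTensor(
    keys=["f1", "f2"],
    length_per_key=[4, 8],
    values=torch.randn(batch_size, 12),  # 12 = 4 + 8
)
kt_2 = KeyedTensor(
    keys=["f3"],
    length_per_key=[16],
    values=torch.randn(batch_size, 16),
)

# Merge: identical batch_size lets us concatenate the dense values along dim 1.
merged = KeyedTensor(
    keys=kt_1.keys() + kt_2.keys(),
    length_per_key=kt_1.length_per_key() + kt_2.length_per_key(),
    values=torch.cat([kt_1.values(), kt_2.values()], dim=1),
)
assert merged["f3"].shape == (batch_size, 16)
```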