Support FSDP in JAX workloads
It is useful to shard optimizer state across devices (to save significant memory). This reflects current practice. We want to support it.
- We want to switch from no sharding to naive model parameter sharding in both frameworks.
- We will forbid (in the rules) any hacks that change the model parallelization strategy, and will provide workload-default sharding.
- Allow submitters to opt out of it on a per-workload basis.
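To make the memory motivation concrete, here is a back-of-the-envelope sketch (the model size and device count are illustrative, not taken from the workloads): an Adam-style optimizer keeps two float32 moments per parameter, so replicating that state on every device costs memory that naive sharding reclaims.

```python
# Rough arithmetic only; numbers are hypothetical.
n_params = 400e6          # hypothetical 400M-parameter model
n_devices = 8             # hypothetical accelerator count
bytes_per_float = 4       # float32
adam_state_bytes = 2 * n_params * bytes_per_float    # first + second moments

per_device_replicated = adam_state_bytes             # ~3.2 GB on every device
per_device_sharded = adam_state_bytes / n_devices    # ~0.4 GB per device
print(per_device_replicated / 1e9, per_device_sharded / 1e9)
```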
From the 9/5 meeting notes:
Sharding is possible with pmap, but easier with jit.
- How we replicate parameters is not under the control of the submitters.
- Would introduce a breaking change to existing submissions.
- Could be solved by creating a no-op.
- We currently have the workloads call flax.jax_utils.replicate (see the comparison sketch after this list).
- The DeepSpeech code would be simplified with jit, but switching would introduce breaking changes.
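A minimal sketch of the two paths discussed above (variable names are illustrative, not the workload code): under pmap the workloads replicate the parameter pytree explicitly with flax.jax_utils.replicate, which adds a leading device axis; under jit the arrays are global and their placement is a sharding annotation, so the explicit replicate call becomes unnecessary (or could be turned into a no-op for backward compatibility).

```python
import jax
import jax.numpy as jnp
from flax import jax_utils
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

params = {'w': jnp.ones((4, 4))}

# pmap path: one copy per device, stacked along a new leading axis.
params_pmap = jax_utils.replicate(params)      # w has shape (n_devices, 4, 4)

# jit path: a single global array whose replication is a sharding annotation.
mesh = Mesh(jax.devices(), ('devices',))
params_jit = jax.device_put(params, NamedSharding(mesh, P()))  # w keeps shape (4, 4)
```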
Sourabh’s Suggestion:
- Workloads shouldn’t do any across-device all-gathers.
- Give submissions control over how parameters are replicated (via a new submission function; a hypothetical sketch follows this list).
- Switch to global arrays and update the workloads accordingly.
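One possible shape for that suggestion, sketched under assumed names (the function name and signature are hypothetical, not an agreed API): the workload hands the submission unreplicated global arrays, and the new submission function decides how they are laid out across devices, defaulting to naive leading-axis sharding and falling back to replication where shapes don't divide evenly.

```python
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

def shard_model_params(params, mesh):  # hypothetical submission function
  """Return params as global arrays with a submission-chosen layout."""
  def shard_leaf(x):
    # Naively shard the leading axis when it divides evenly; otherwise replicate.
    if x.ndim > 0 and x.shape[0] % mesh.devices.size == 0:
      spec = P('devices')
    else:
      spec = P()
    return jax.device_put(x, NamedSharding(mesh, spec))
  return jax.tree_util.tree_map(shard_leaf, params)

# Example usage: mesh = Mesh(jax.devices(), ('devices',))
#                sharded_params = shard_model_params(params, mesh)
```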
From the 9/12 meeting notes:
Recap: It is useful to shard optimizer state across devices (to save significant memory). This reflects current practice and we want to support it, but we don’t want to support arbitrary model parallelism.
- Sourabh: We could allow model-agnostic model parameter sharding.
- Michael: We still want to ensure that the frameworks are comparable.
Proposal:
- Switch from no sharding to naive model parameter sharding.
- Switch from pmap to jit in JAX and allow optimizer state sharding (following the model parameter sharding) in both frameworks.
- Forbid (in the rules) any hacks that change the model parallelization strategy.
- Have workload-default sharding.
- Allow submitters to opt out of it on a per-workload basis.
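A minimal sketch of what this could look like in a jit-based workload, with hypothetical names (the opt-out flag and its plumbing are assumptions, not the merged implementation): the workload defines a default naive parameter sharding, the optimizer state is constrained to follow it via out_shardings, and a per-workload flag switches back to full replication.

```python
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

USE_DEFAULT_SHARDING = True  # hypothetical per-workload opt-out switch

mesh = Mesh(jax.devices(), ('devices',))
spec = P('devices') if USE_DEFAULT_SHARDING else P()
param_sharding = NamedSharding(mesh, spec)

# A stand-in weight, placed with the workload-default sharding.
params = jax.device_put(jnp.zeros((8192, 1024)), param_sharding)

# Adam-style first/second moments, forced onto the same sharding as the
# parameters so the optimizer-state memory is also divided across devices.
init_adam_state = jax.jit(
    lambda p: (jnp.zeros_like(p), jnp.zeros_like(p)),
    out_shardings=(param_sharding, param_sharding))
mu, nu = init_adam_state(params)
```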
Fixed in https://github.com/mlcommons/algorithmic-efficiency/pull/848