timestep scheduling with np.linspace
Fixes #8600
Description
The linspace approach generates a descending array that starts exactly at num_train_timesteps - 1 (999 for the default 1000 training timesteps) and ends exactly at 0 after rounding, ensuring the scheduler samples the entire intended trajectory.
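As a minimal sketch of the endpoint behavior (using NumPy to mirror the same linspace semantics, and assuming the default 1000 training timesteps):

```python
import numpy as np

num_train_timesteps = 1000  # assumed default, as discussed in the issue
num_inference_steps = 50

# Descending, linearly spaced timesteps covering both endpoints.
timesteps = np.round(
    np.linspace(num_train_timesteps - 1, 0, num_inference_steps)
).astype(np.int64)

print(timesteps[0], timesteps[-1])  # first and last sampled timesteps
```

Because linspace includes both endpoints by default, the schedule is guaranteed to begin at 999 and end at 0, regardless of how `num_inference_steps` divides the training range.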
Types of changes
- [x] Non-breaking change (fix or new feature that would not break existing functionality).
- [ ] Breaking change (fix or new feature that would cause existing functionality to change).
- [ ] New tests added to cover the changes.
- [ ] Integration tests passed locally by running `./runtests.sh -f -u --net --coverage`.
- [ ] Quick tests passed locally by running `./runtests.sh --quick --unittests --disttests`.
- [ ] In-line docstrings updated.
- [ ] Documentation updated, tested with the `make html` command in the `docs/` folder.
Walkthrough
The diff changes how DDPM and DDIM compute inference timesteps in set_timesteps. Both schedulers now use torch.linspace to generate a linearly spaced sequence from (num_train_timesteps - 1) down to 0 across num_inference_steps, rounding and casting to int64, and creating the tensor directly on the target device. DDIM’s construction subtracts the provided steps_offset from the linspace start, then adds steps_offset back to the result and enforces steps_offset ∈ [0, num_train_timesteps). NumPy-based arange/round/device-move logic is removed. Public APIs remain unchanged.
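The DDIM subtract-then-add pattern described above can be sketched as follows (a NumPy mirror of the described torch.linspace logic; `ddim_timesteps` is a hypothetical standalone helper, not the actual `set_timesteps` signature):

```python
import numpy as np

def ddim_timesteps(num_train_timesteps, num_inference_steps, steps_offset=0):
    # Enforce the documented range check on steps_offset.
    if not 0 <= steps_offset < num_train_timesteps:
        raise ValueError("steps_offset must lie in [0, num_train_timesteps)")
    # Subtract the offset from the linspace start...
    start = num_train_timesteps - 1 - steps_offset
    ts = np.round(np.linspace(start, 0, num_inference_steps)).astype(np.int64)
    # ...then add it back, so the schedule is shifted without exceeding T - 1.
    return ts + steps_offset

print(ddim_timesteps(1000, 4, steps_offset=1))
```

The range check keeps the shifted schedule inside `[steps_offset, num_train_timesteps - 1]`, so no timestep index can exceed the training range.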
Estimated code review effort
🎯 3 (Moderate) | ⏱️ ~25 minutes
- Files to inspect closely:
- monai/networks/schedulers/ddpm.py — verify torch.linspace endpoints, rounding, dtype, device, and behavior when num_inference_steps is 1 or equals num_train_timesteps.
- monai/networks/schedulers/ddim.py — verify steps_offset range check, the subtract-then-add pattern around linspace, rounding, dtype, device, and downstream uses of timesteps.
- Pay attention to rounding behavior, guaranteed inclusion of endpoints, extreme values (large timesteps), and any assumptions elsewhere about previous numpy-based timestep values.
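To illustrate the endpoint concern in the last bullet, a quick comparison of the two schemes (assuming the removed NumPy logic resembled the common integer-step arange pattern; the exact removed code may have differed):

```python
import numpy as np

T, N = 1000, 50  # num_train_timesteps, num_inference_steps

# Old-style integer-step arange (assumed shape of the removed logic):
# the largest value is 980, so T - 1 = 999 is never sampled.
old = (np.arange(0, N) * (T // N))[::-1]

# New linspace-based schedule: both endpoints are included by construction.
new = np.round(np.linspace(T - 1, 0, N)).astype(np.int64)

print(old[0], new[0])  # the new schedule starts at the true final timestep
```

Downstream code that assumed the old arange-derived values (e.g. cached sigma lookups keyed by timestep) would see different indices under the new scheme, which is worth checking during review.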
Pre-merge checks and finishing touches
❌ Failed checks (1 warning, 1 inconclusive)
| Check name | Status | Explanation | Resolution |
|---|---|---|---|
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 75.00% which is insufficient. The required threshold is 80.00%. | You can run @coderabbitai generate docstrings to improve docstring coverage. |
| Title check | ❓ Inconclusive | Title mentions 'np.linspace' but changes show torch.linspace used instead; somewhat misleading about the actual implementation. | Clarify whether the title should reference 'torch.linspace' or generically 'timestep scheduling' to match the actual code changes. |
✅ Passed checks (3 passed)
| Check name | Status | Explanation |
|---|---|---|
| Description check | ✅ Passed | Description references issue #8600 and explains the fix approach, but doesn't document test coverage, docstring updates, or other checklist items. |
| Linked Issues check | ✅ Passed | Changes directly address issue #8600 requirements: timestep computation now includes endpoint via linspace in both DDPM and DDIM schedulers. |
| Out of Scope Changes check | ✅ Passed | All changes confined to DDPM and DDIM scheduler timestep computation; no unrelated modifications detected. |
✨ Finishing touches
- [ ] 📝 Generate docstrings
🧪 Generate unit tests (beta)
- [ ] Create PR with unit tests
- [ ] Post copyable unit tests in a comment
Hi @virginiafdez, @KumoLiu, @Nic-Ma and @ericspod,
Sorry to bother you. Could you please review the PR or share any suggestions?
Thanks in advance!
Hi @ytl0623, thanks for the contribution. I think the changes are fine: the resulting timesteps are different, as the issue discusses, and start from the correct value as far as I can tell. @virginiafdez please also have a look and see if there's anything else to spot.
Hi, @ericspod, thanks for the review and feedback. I've incorporated the changes.
/build