Add an adaptive batching mechanism to handle large test sets based on estimated memory usage
This PR fixes #125
Description
Added an adaptive batching mechanism to handle large test sets based on estimated memory usage. Expert users can override the default behavior by adjusting the memory_saving_mode parameter.
Changes made
- Added a method to estimate memory usage.
- Modified the predict_proba method to use adaptive batching.
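The two changes above could be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: the helper names `estimate_memory_usage` and `adaptive_batches`, the per-sample overhead constant, and the default memory budget are all assumptions; only the `memory_saving_mode` parameter name comes from the PR description.

```python
import numpy as np

def estimate_memory_usage(n_samples, n_features, dtype=np.float32):
    """Rough estimate (in bytes) of the memory needed to score a batch.

    Hypothetical helper: a real estimator would also account for model
    internals; here we only count the feature matrix plus an assumed
    fixed per-sample overhead.
    """
    bytes_per_value = np.dtype(dtype).itemsize
    per_sample_overhead = 1024  # assumed fixed cost per sample
    return n_samples * (n_features * bytes_per_value + per_sample_overhead)

def adaptive_batches(X, memory_saving_mode="auto", memory_budget=2**30):
    """Yield slices of X sized so each batch fits within memory_budget.

    memory_saving_mode mirrors the PR's parameter: "auto" enables
    batching; False disables it (the expert override).
    """
    n_samples, n_features = X.shape
    if memory_saving_mode is False:
        yield X  # no batching requested
        return
    per_sample = estimate_memory_usage(1, n_features)
    batch_size = max(1, memory_budget // per_sample)
    for start in range(0, n_samples, batch_size):
        yield X[start:start + batch_size]
```

A `predict_proba` built on this would score each yielded batch and concatenate the per-batch probability arrays, keeping peak memory roughly bounded by the budget regardless of test-set size.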
Hi @Krishnadubey1008,
Thanks for tackling issue #125 and adding the adaptive batching mechanism.
To get this merged, could you please address the following points:
- Memory Estimation Function: Please move the memory estimation logic into a reusable function within our utils repository so that common utilities stay centralized. You can use this implementation as a reference: https://github.com/PriorLabs/tabpfn_common_utils/blob/524cee72cc6f33cf59fc943dc3e4b5428f3a79bc/expense_estimation.py#L9
- CI Checks: The automated tests and the Ruff linter are currently failing. Please investigate the errors shown in the CI logs and apply the necessary fixes.
- Copilot Suggestion: Please review the suggestion GitHub Copilot left on this PR, as it may contain relevant points.

Let me know if you have any questions!
@Krishnadubey1008 would you look into reusing the existing functionality so we can potentially get this merged? :-)
Closing due to inactivity.