Change torch.compile usage for the pipeline module
We have encountered an issue with torch.compile and the pipeline module: modifying a member of the module (micro_offset) during the forward function causes torch.compile to restart its analysis and treat the module as dynamic. To bypass this issue without significantly changing the way the pipeline module works, we propose to compile only the layers in the pipeline module instead of the pipeline module's forward function. This avoids the issue while still giving most of the benefit of torch.compiling the pipeline module.
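A rough sketch of the idea (not the PR's actual diff; the helper name and the stand-in layer list are assumptions, not DeepSpeed's PipelineModule internals): each layer is wrapped with torch.compile individually, so attribute mutation in the pipeline module's own forward stays outside the compiled regions.

```python
import torch
import torch.nn as nn

def compile_pipeline_layers(layers, backend="inductor"):
    """Compile each pipeline layer individually instead of the whole forward.

    Attribute mutation (e.g. micro_offset) in the pipeline module's forward
    stays outside the compiled regions, so Dynamo no longer restarts analysis.
    """
    return [
        torch.compile(layer, backend=backend) if isinstance(layer, nn.Module) else layer
        for layer in layers
    ]

# Minimal usage example with stand-in layers (not DeepSpeed's PipelineModule):
layers = [nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2)]
compiled_layers = compile_pipeline_layers(layers)
x = torch.randn(4, 8)
for layer in compiled_layers:
    x = layer(x)
```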
Hi @NirSonnenschein, thank you for the great catch! Can you also add a small test case? We want to make sure that this change works for various settings.
Hi @NirSonnenschein - following up on this ask?
Hi @loadams , yes, sorry for the delay, I have been diverted to other urgent issues before completing the test. I should get back to this soon.
No problem, thanks for the update
Hi @loadams, I've updated the PR with an additional fix and a unit test that should cover this scenario. Please re-review when convenient.
Added a fix for the test: tests that use torch.compile and run in a daemonic process fail on GPU because inductor tries to spawn a subprocess, which daemonic processes are not allowed to do.
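For context, a minimal sketch of the failure mode and one possible guard (not necessarily the fix applied in this PR; torch._inductor.config.compile_threads is assumed here to be the relevant knob):

```python
import multiprocessing as mp
import torch
import torch.nn as nn
import torch._inductor.config as inductor_config

def _worker():
    # Daemonic processes cannot spawn children, but inductor's parallel
    # compilation normally launches compile worker processes. Limiting it to
    # a single thread keeps compilation in-process. (This is one possible
    # guard; the knob actually used in the PR may differ.)
    inductor_config.compile_threads = 1
    model = torch.compile(nn.Linear(4, 4))
    model(torch.randn(2, 4))

if __name__ == "__main__":
    p = mp.Process(target=_worker, daemon=True)
    p.start()
    p.join()
```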