[xla][cpu] Add Arm Compute Library (ACL) build option for torch-xla
ACL build support has been added under the build flag `XLA_CPU_USE_ACL`.
This will enable the ACL runtime for XLA once the TensorFlow/XLA PR below is merged: https://github.com/tensorflow/tensorflow/pull/55534/commits
I want to make sure this flag is harmless (it seems like that's the case) without the corresponding TF change. We update the TF version manually; the next update will happen in a couple of weeks.
Hi @JackCaoG, yes, the ACL build flag takes effect only once the TF changes are merged. Even then, it is guarded under the `XLA_CPU_USE_ACL` build option, so it wouldn't trigger the ACL build unless a wheel/docker/manual build explicitly sets it.
@yeounoh FYI
Thanks, we can merge this one once the TF one is merged, and test it the next time we update TF.
Hi @JackCaoG, the TensorFlow XLA PR got merged.
Hi @JackCaoG, please let me know if there are any other dependencies for this PR. Thank you!
@snadampal Sorry for the delay. This PR is blocked on our TF pin update. We want to update it, but TF has moved to C++17 while PyTorch stays on C++14. This creates a bunch of compilation issues we are trying to resolve. We will merge this one when the TF pin update happens.
Thanks, @JackCaoG!
Just a quick update: we are still blocked by PyTorch on the TF pin update. The collaboration is ongoing, and I will update here once it is solved.
@JackCaoG Thanks for the update!
Hi @JackCaoG, can this PR be targeted for the upcoming release, PyTorch 1.13.0?
Sure, sorry for the delay. We just updated the TF pin to 06/22, but it seems your PR was merged into TF on 06/29. We are doing another TF pin update this week. @wonjoolee95 FYI.
I think you have done manual testing to make sure this option works, so I am not too worried about that. I will make sure 1.13 has this change (and the corresponding TF version that can actually build with ACL).
@snadampal Sorry for the delay. This C++17 migration is especially difficult; we usually do a monthly TF update, but this time we are almost two months behind and trying to catch up.
thanks, @JackCaoG .
@snadampal Let's rebase this PR. You can build pt/xla with the new flag and see if it works for you. After you confirm it builds, I can merge this PR.
Note: before building, you want to run `./scripts/apply_patches.sh` from the `pytorch/xla` directory to apply some TF patches.
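Putting the steps together, a minimal build sketch could look like the following. This is a sketch under stated assumptions, not the definitive build procedure: the `XLA_CPU_USE_ACL=1` environment-variable spelling, the side-by-side `pytorch/xla` checkout layout, and the `setup.py bdist_wheel` invocation are assumptions; check the repo's build scripts for the exact flag name and entry point.

```shell
#!/bin/sh
# Hypothetical sketch: build pytorch/xla with the ACL build option enabled.
# Assumes an aarch64 host with the usual pt/xla build dependencies installed,
# and that the xla repo is checked out inside the pytorch source tree.
set -e

cd pytorch/xla

# Apply the TF patches first, as noted above; skipping this step can
# break the build against the pinned TF commit.
./scripts/apply_patches.sh

# XLA_CPU_USE_ACL=1 is assumed to be picked up by the build scripts;
# without it, the ACL code paths are guarded out and not compiled in.
XLA_CPU_USE_ACL=1 python setup.py bdist_wheel
```

The key design point from the discussion is that the flag is opt-in: a default build (no `XLA_CPU_USE_ACL` set) should be byte-for-byte unaffected by this change.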
Hi @JackCaoG, I've rebased and tested the PR on top of the commits below, and the feature is working as expected.
pytorch: commit d7029fea5113468441cb358bced6045e6e4d4b9a
xla: commit 4226edb6d303cb9bdf4968a734269086aa278030
@JackCaoG, thanks for merging the PR!