Provide a list of supported XLA operations like TensorFlow Lite
TensorFlow Lite provides a list of currently supported ops here, and I wonder if XLA could also have such a list. It's frustrating to develop and train a model with the full TensorFlow Python API, only to get stuck during AOT compilation because of missing op kernels in the tf2xla bridge.
We now have some auto-generated tables listing the supported ops on CPU and GPU:
- https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/tf2xla/g3doc/cpu_supported_ops.md
- https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/tf2xla/g3doc/gpu_supported_ops.md
Unlike the TFLite docs, we don't have a breakdown starting from the Python APIs; the above tables are based on the op names in the GraphDef. At the moment, if we wanted the Python API breakdown, we'd need to do that manually, and that seems unlikely to remain up-to-date. I hope the above tables are still useful though.
Thanks!
@tatatodd @joker-eph @MarkDaoust Do you know who is going to re-generate the tables mentioned by @tatatodd? It seems they were last updated in 2018.
It seems it was introduced many years ago by @caisq with https://github.com/caisq/tensorflow/commit/4b0a23684852fe68ac2248fe2e04e118a6173848
@lamberta @mihaimaruseac Do you know what kind of internal infra is going to "regularly" run https://github.com/tensorflow/tensorflow/blob/07c2bcc534a6b41c4953a1db70f6386b144cc5b7/tensorflow/compiler/tf2xla/tf2xla_supported_ops_main.cc#L20 to update the Markdown tables?
The tflite page doesn't get regular updates either: https://www.tensorflow.org/lite/guide/ops_compatibility
That xla command still works.
One solution would be to integrate this into the API reference generator and add an XLA column to the https://www.tensorflow.org/api_docs/python/tf/raw_ops page:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/docs/generate2.py#L104
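As a rough sketch of what that integration could look like (the helper names are hypothetical, and it assumes the published supported-ops markdown keeps its current layout of one back-ticked operator name per table row), the generator could parse the supported-ops table into a set and use it to fill the extra column:

```python
import re


def parse_supported_ops(markdown: str) -> set:
    """Parse operator names out of a tf2xla supported-ops markdown table.

    Assumes each row starts with a back-ticked op name, e.g.
    `Abs` | `T={double,float,...}` -- based on the current layout
    of cpu_supported_ops.md (an assumption, not a stable format).
    """
    ops = set()
    for line in markdown.splitlines():
        m = re.match(r"^\s*`(\w+)`", line)
        if m:
            ops.add(m.group(1))
    return ops


def xla_column(op_name: str, supported: set) -> str:
    """Render the hypothetical XLA column for one raw_ops table row."""
    return "yes" if op_name in supported else "no"


# Example with a trimmed excerpt in the style of cpu_supported_ops.md:
sample = """
`Abs` | `T={double,float,int32,int64}`
`Acos` | `T={complex64,double,float,int32,int64}`
"""
supported = parse_supported_ops(sample)
print(sorted(supported))              # → ['Abs', 'Acos']
print(xla_column("Abs", supported))   # → yes
```

The raw_ops page is itself generated, so the column would only need to be a lookup into this set while the table is being rendered.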
tensorflow.org/xla is generated from tensorflow/compiler/xla/g3doc/; maybe someone there would be interested in pushing this through.
Is this orchestrated by publicly available GitHub Actions or by internal scripts?
If not:
- Is the Bazel target `tensorflow/compiler/tf2xla:tf2xla_supported_ops` available in the wheel?
- Is the wheel installed in the environment that runs `docs/generate2.py`?
At the very least, can we reopen this ticket and add the XLA label?
> can we reopen this ticket adding also the XLA label?

Done.
It's an internal tool that runs those. They're run from the target version's GitHub branch, with bazel available, so just calling that bazel command and merging the output into that raw-ops table would work.
Thanks, so it's probably a bit hard to contribute a PR with only OSS/GitHub visibility.
Yes.
It's possible that just integrating it into generate2.py with `subprocess.check_output(['bazel', 'run', '-c', 'opt', '--', 'tensorflow/compiler/tf2xla:tf2xla_supported_ops'])` could get the job done.
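A minimal sketch of that idea (untested against the real generate2.py; the bazel invocation is the one quoted above, and the fallback behavior is an assumption about how a docs build should degrade):

```python
import subprocess


def tf2xla_supported_ops_markdown():
    """Run the tf2xla supported-ops tool via bazel and return its
    markdown output, or None if bazel or the target is unavailable.

    Sketch only: assumes this runs from a TensorFlow source checkout
    with bazel on PATH, which may not hold in every docs job.
    """
    cmd = ['bazel', 'run', '-c', 'opt', '--',
           'tensorflow/compiler/tf2xla:tf2xla_supported_ops']
    try:
        return subprocess.check_output(cmd, text=True)
    except (OSError, subprocess.CalledProcessError):
        # No bazel (or no source tree): skip the XLA column rather
        # than failing the whole docs build.
        return None
```

If the function returns None, generate2.py could simply omit the XLA column instead of erroring out, so the docs build stays green outside a source checkout.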
I meant: can it be tested locally when we have no visibility into the CI logs?
If anyone gets it working locally, then it's my job to make sure it works in the CI.
Yes. When the orchestration/environment doesn't have public visibility, we need extra docs on how to test this locally if we want to collect community contributions. Since the TFLite markdown has also been on hold since 2020, we could ping the TFLite team as well.
Can we find an owner? I don't know if @tatatodd is still on this project.
@MarkDaoust We could make some progress with https://github.com/tensorflow/tensorflow/pull/56510
Just curious, are there any plans to keep the XLA operator information up to date in TensorFlow's documentation? Thanks!
@ganler As you can see, I cannot make progress on my PR at https://github.com/tensorflow/tensorflow/pull/56510
/cc @cheshire @theadactyl
Hi,
Thank you for opening this issue. Since this issue has been open for a long time, the code/debug information for this issue may no longer be relevant to the current state of the code base.
The TensorFlow team is constantly improving the framework by fixing bugs and adding new features. We suggest you try the latest TensorFlow version with the latest compatible hardware configuration, which could potentially resolve the issue. If you are still facing the issue, please create a new GitHub issue with your latest findings and all the debugging information that could help us investigate.
Please follow the release notes to stay up to date with the latest developments happening in the TensorFlow space.
This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.