bhack
> Is @ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda only intended to build from a docker image?

It is mainly for producing manylinux2014 compatible wheels. But as we don't want to maintain too much build...
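Just to clarify what "manylinux2014 compatible" means here: the produced wheel has to carry a manylinux platform tag so that pip accepts it on older-glibc distros. A quick sketch with the `packaging` library (the wheel filename below is a made-up placeholder of mine, not an actual artifact produced by that config):

```python
# Sketch only: inspect a wheel filename and check whether it advertises a
# manylinux platform tag. Requires the `packaging` library.
from packaging.utils import parse_wheel_filename

# Placeholder filename for illustration, not a real build output.
wheel = "tensorflow-2.9.0-cp39-cp39-manylinux2014_x86_64.whl"
name, version, build, tags = parse_wheel_filename(wheel)

if any(tag.platform.startswith("manylinux") for tag in tags):
    print(f"{wheel} advertises a manylinux-compatible platform tag")
else:
    print(f"{wheel} is not tagged as manylinux compatible")
```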
@shkarupa-alex It was closed automatically because it was linked to your PR by GitHub's "magic" keywords.
I suppose that is just a side effect of the monorepo approach: https://jmmv.dev/2021/02/google-monorepos-and-caching.html
In the TPU FAQ I see:

> How can I write a custom op for Compute Engine? TensorFlow ops that run on Compute Engine are implemented in XLA HLO, a language...
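For anyone reading, a rough sketch of what "implemented in XLA HLO" means in practice: you compose a new op out of ops XLA can already compile and let it be JIT-compiled, rather than writing a C++ kernel. The `scaled_swish` function below is just a toy example of mine, not something from the FAQ or the TPU docs:

```python
# Minimal sketch: a "custom op" expressed as a composition of existing TF ops,
# forced through XLA with jit_compile=True. On TPU the same constraint applies:
# the op has to be lowerable to XLA HLO.
import tensorflow as tf

@tf.function(jit_compile=True)
def scaled_swish(x, scale=1.5):
    # Built entirely from ops XLA knows how to compile.
    return scale * x * tf.sigmoid(x)

print(scaled_swish(tf.constant([0.5, 1.0, 2.0])))
```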
@frankchn Ok, can you reach anyone internally to fix the FAQ? Because with that text it seems that there is currently an "undocumented" path to build those custom ops.
Thanks
@ematejska As you can see also in https://github.com/keras-team/keras-cv/pull/146#issuecomment-1063974863, it is always hard to interact with/mention the right codeowner if we don't adopt codeowner transparency.
The last case was confirmed in https://github.com/keras-team/keras-cv/pull/146#issuecomment-1068486128, just like in https://github.com/keras-team/keras-cv/issues/74#issuecomment-1035597742
/cc @theadactyl
> @theadactyl To put on the agenda for the next TF steering committee mtg.

Also check the recent policy change in PyTorch: https://github.com/pytorch/pytorch/pull/78850/files