powderluv
I suggest for TF we use our `n2-highcpu-64` Ice Lake instances. Each has two NUMA nodes of 16 cores (no HT), and the Intel TF version pins TF to one NUMA...
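For reference, a minimal sketch of that kind of NUMA pinning with `numactl` (the benchmark script name here is hypothetical):

```
# Pin the process and its memory allocations to NUMA node 0 (16 cores on
# n2-highcpu-64), similar to what the Intel TF build does.
# run_tf_benchmark.py is a placeholder for the actual entry point.
numactl --cpunodebind=0 --membind=0 python run_tf_benchmark.py
```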
Today an end user installing SHARK does

```
pip install nodai-shark -f https://github.com/nod-ai/SHARK/releases -f https://github.com/llvm/torch-mlir/releases -f https://github.com/nod-ai/shark-runtime/releases --extra-index-url https://download.pytorch.org/whl/nightly/cpu
```

and the same user using IREE would do `pip...`
Here is an example of how `--pre` is used for the nightly builds without requiring a `-f` URL: https://github.com/openai/triton
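If the nightly wheels were published to PyPI as pre-releases, the whole install above could shrink to one command (a sketch, assuming `nodai-shark` nightlies land on PyPI):

```
# --pre tells pip to also consider pre-release versions, so no -f URLs
# are needed once the nightlies are published to PyPI.
pip install --pre nodai-shark
```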
@julianwa can I add this to the NOD Model Coverage Milestone? (I can't seem to add it to JAX Training either.)
> Adding new conversions is fine pending on these two conditions: one is that there is the support behind it to carry to a fully (or close to fully) supported...
We (nod.ai) have a couple of customers that require this, and we plan to support them somehow. That is why we added the initial RFC with an implementation in torch-mlir (which...
You can work around this by setting `ulimit -n unlimited`, but we probably still want to cap the maximum number of parallel threads.
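In case it helps, the interim workaround in the shell that launches the tests could look like this (the finite fallback value is just an example):

```
# Raise the open-file limit before running the tests; "unlimited" can be
# rejected when the hard limit is lower, so fall back to a large finite value.
ulimit -n unlimited 2>/dev/null || ulimit -n 65536
```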
Looks like RefBackend fails to run the test on GHA? Should we skip it by default?
Please do, since I won't be able to test end to end without PyPI access. I think this https://github.com/marketplace/actions/pypi-publish would help. We will have to uncomment this line to run...
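In the meantime, the manual equivalent of that action is roughly the following (a sketch; assumes an API token is available in `PYPI_API_TOKEN`):

```
# Build the sdist/wheel and upload with twine; "__token__" is the literal
# username twine expects when authenticating with a PyPI API token.
python -m pip install build twine
python -m build
python -m twine upload -u __token__ -p "$PYPI_API_TOKEN" dist/*
```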
I can also think of a pass-through mode to drop into a command line in the Docker container, plus the ability to execute any arbitrary command.
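Something along these lines, where the image name `nodai/shark` is just a placeholder:

```
# Drop into an interactive shell inside the container ...
docker run -it --rm nodai/shark /bin/bash
# ... or pass an arbitrary command straight through.
docker run --rm nodai/shark pip list
```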