FamousDirector
It may be possible to export the JAX models to TensorFlow format. See here: https://github.com/google/jax/tree/main/jax/experimental/jax2tf
@KshitizLohia have you tried setting the option `zipmap=false`, as shown here? http://onnx.ai/sklearn-onnx/auto_tutorial/plot_dbegin_options_zipmap.html#option-zipmap-false This should give the model an array-like probability output, which should be Triton-compatible.
I'm a bit of a noob at cross-compiling. How would I do this for an armv6 architecture?
Okay, so I followed those instructions and cross-compiled the library with the following Dockerfile:
```
FROM balenalib/raspberry-pi-python:3.7-stretch-build

ARG ONNXRUNTIME_REPO=https://github.com/Microsoft/onnxruntime
ARG ONNXRUNTIME_SERVER_BRANCH=v1.8.0

# Enforces cross-compilation through Qemu.
RUN [ "cross-build-start"...
```
I modified it from the documentation [here](https://github.com/microsoft/onnxruntime/blob/master/dockerfiles/README.md#arm-32v7). I scrolled up in the document you sent and got it from [here](https://www.onnxruntime.ai/docs/how-to/build/inferencing.html#cross-compiling-for-arm-with-simulation-linuxwindows).