tonibofarull

Results: 16 comments by tonibofarull

If there are no additional checks, maybe it's easier to encapsulate your function with a macro.

Any news on this? :cry:

I'll start working on it as soon as I have some time. I'll post a message here to let you know when I start, in case anyone else wants to...

WAMR generates the thread ID in the [spawn-thread implementation](https://github.com/bytecodealliance/wasm-micro-runtime/blob/WAMR-1.3.0/core/iwasm/libraries/lib-wasi-threads/lib_wasi_threads_wrapper.c#L114), but it is wasi-libc that sets the `pthread_t *` [here](https://github.com/WebAssembly/wasi-libc/blob/wasi-sdk-20/libc-top-half/musl/src/thread/pthread_create.c#L575), after calling `wasi_thread_spawn`. The simplest solution would be to create independent args:...

Interesting! Until we find a better solution, if you want to keep passing tids in args, you can synchronize the threads yourself at the WASM app level. I'm studying right...

You are right; in fact, the only input and output tensor datatype currently supported is [fp32](https://github.com/bytecodealliance/wasm-micro-runtime/blob/main/core/iwasm/libraries/wasi-nn/README.md#what-is-missing). We will fix this and return an error if the datatype is wrong. Thanks...

Will it be possible to use the subset of functions for inference without having to implement the training ones?

Any update regarding this?

Hi, which commit of WAMR are you using? Can it be because of https://github.com/bytecodealliance/wasm-micro-runtime/pull/3530#discussion_r1639662614 or are you using `WASM_ENABLE_WASI_EPHEMERAL_NN=0`? Thanks!

This is similar to what was reported here https://github.com/bytecodealliance/wasm-micro-runtime/issues/2611#issuecomment-1741595890. The conclusion we reached is that if the model is quantized, the de-quantization must be done within the engine,...