nopeslide
@alrevuelta With our current approach (no weak symbols) we may need to generate all ONNX operators for this to work, so #41 is related.
@alrevuelta I'm also pro testing, but dislike the way onnx does it. My approach would be: autogenerate a model for each operator, for each input permutation, for each type permutation...
> > > > Can you show where this random float generation is done? What I have seen so far is not randomly generated. [example](https://github.com/onnx/onnx/blob/master/onnx/backend/test/case/node/conv.py) [transpose](https://github.com/onnx/onnx/blob/master/onnx/backend/test/case/node/transpose.py) does this, for example....
> Yes, this is something that was definitely missing. Which specific memory errors did you have? You duplicate a string when integrating the input into the model. This string had a...
> Regarding `node_context`, there are some things that I don't understand. So a node is a given operator with its inputs and outputs. Why does a node have different `jobs`?...
> As long as all of the workers are joined before trying to access the outputs, it should be fine, I think. Though, I think this would require a reworking...
@alrevuelta Keep in mind that with #42 and #40 we already need to exchange data between the "init" (for lack of a better name) aka typeless-operator function and the actual...
> If you also parallelize operators, unless there is a more efficient way than I implemented, a lot of the time you won't get much benefit. Add, matmul, and conv...
> This is why I'm in favor of parallelizing over the batch. If using pthread.h, the create/join calls will handle all of that. As far as I see it, if...
> * Batch parallelism is interesting and a cool feature to have. Either by building on top or using zero-th axis I would say the impact is reasonable. > >...