Andrei Zhabinski

Results: 180 comments of Andrei Zhabinski

To continue our discussion, I see several possible options for implementing Julia workers.

### Option 1. Julia + libraries

Many people look at Spark.jl as a way to run custom...

> This does not solve the latency problem, but for batch/"production" jobs, using a startup script to instantiate/precompile is very reasonable. The hard part is how to do it inside...
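For the batch case, a minimal sketch of such a startup script might look like this (the `startup.jl` / `worker.jl` names and the project layout are assumptions for illustration, not part of the original setup):

```julia
# startup.jl: hypothetical warm-up script, run once before launching workers
using Pkg

Pkg.activate(@__DIR__)   # activate the job's own project environment
Pkg.instantiate()        # install the exact versions pinned in Manifest.toml
Pkg.precompile()         # pay the precompilation cost up front
```

Workers started afterwards with something like `julia --project=. worker.jl` would then reuse the cached precompilation output instead of paying the latency on every run.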

This might be related to [this commented line](https://github.com/JuliaGPU/OpenCL.jl/blob/master/src/array.jl#L136) - if I recall correctly, I copied most of the code for `transpose` from some implementation in C where the trick...

Try this:

```julia
ENV["JULIA_DEBUG"] = Main   # this will help to debug the loading

function load_node!(tape::Tape, ::OpConfig{:ONNX, :MatMul}, args::VarVec, attrs::AttrDict)
    return push_call!(tape, *, args[2], args[1])
end

function load_node!(tape::Tape, ::OpConfig{:ONNX, :Sigmoid},...
```
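As a rough sketch of how these overloads might be exercised end to end (the `ONNX.load` / `play!` calls and the input shape are assumptions on my side; adjust them to your model):

```julia
using ONNX, Umlaut

x = rand(Float32, 10, 1)            # dummy input with batch size 1 (hypothetical shape)
tape = ONNX.load("model.onnx", x)   # build a Tape, dispatching to the load_node! overloads above
y = play!(tape, x)                  # re-execute the recorded operations on the input
```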

Also, it looks like we can't load the model with batch size > 1, but it may be a limitation of the saved model itself.

MatMul needs a bit more work to be useful in the general case, so let's leave it as is for now. I'm currently close to finishing a large update in some...

According to [the spec](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Matmul), ONNX MatMul behaves like [numpy matmul](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.matmul.html), i.e.:

> The behavior depends on the arguments in the following way.
> * If both arguments are 2-D they...
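To make the 2-D case concrete, here is a small illustration of why the arguments end up swapped in the `load_node!` overload above (a sketch assuming the usual convention that ONNX's row-major tensors show up with reversed axes in column-major Julia):

```julia
# In ONNX: C = MatMul(A, B) with A of shape (m, k), B of shape (k, n), giving C of shape (m, n).
# After loading into Julia, each tensor's axes appear reversed:
m, k, n = 3, 4, 2
A = rand(Float32, k, m)   # the ONNX (m, k) tensor as seen from Julia
B = rand(Float32, n, k)   # the ONNX (k, n) tensor as seen from Julia
C = B * A                 # (n, k) * (k, m) gives (n, m): the ONNX (m, n) result with reversed axes
@assert size(C) == (n, m)
```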

Do you use `eval()` anywhere in your code?

I can't reproduce the issue using either of these snippets, so perhaps there's something else in between. Other notes:

1. Generally, it's not recommended to load resources during module precompilation...
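Regarding the first note, the usual way around this in Julia is to defer resource loading to the module's `__init__`, which runs when the module is loaded rather than while it is being precompiled. A minimal sketch, with hypothetical module and file names:

```julia
module MyPkg   # hypothetical module name

const WEIGHTS = Ref{Vector{Float32}}()   # filled in at load time, not at precompile time

function __init__()
    # __init__ runs each time the module is loaded at runtime, so the resource
    # is read fresh instead of being baked into the precompilation cache
    path = joinpath(@__DIR__, "weights.bin")   # hypothetical resource file
    WEIGHTS[] = collect(reinterpret(Float32, read(path)))
end

end # module
```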

I added a MatMul implementation for several of the most popular cases in #69. It's incomplete, but perhaps it makes more sense to open a new issue for improvements.