Danilo Guanabara

Results 71 comments of Danilo Guanabara

Hi @NovySan. I discovered that it's possible to create a load balancer that redirects HTTPS requests to HTTP-only instances.

@NovySan I'll try to explain it from what I remember: let's say you have an EC2 machine which you can access at http://ec2-xxx-xxx-xxx-xxx.sa-east-1.compute.amazonaws.com/ You can create an...
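The comment above is truncated, but the general pattern (terminate HTTPS at an Application Load Balancer and forward plain HTTP to the instance) can be sketched with the AWS CLI. This is a hedged sketch, not the original author's exact setup: every name, ID, and ARN below is a placeholder you would replace with your own resources.

```shell
# Target group speaking plain HTTP to the backend instance
aws elbv2 create-target-group \
  --name my-http-targets \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type instance

# Register the EC2 instance with the target group
aws elbv2 register-targets \
  --target-group-arn arn:aws:elasticloadbalancing:sa-east-1:123456789012:targetgroup/my-http-targets/abc123 \
  --targets Id=i-0123456789abcdef0

# The load balancer itself
aws elbv2 create-load-balancer \
  --name my-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# HTTPS listener: TLS terminates here, traffic is forwarded as HTTP
aws elbv2 create-listener \
  --load-balancer-arn arn:aws:elasticloadbalancing:sa-east-1:123456789012:loadbalancer/app/my-alb/abc123 \
  --protocol HTTPS --port 443 \
  --certificates CertificateArn=arn:aws:acm:sa-east-1:123456789012:certificate/abc-123 \
  --default-actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:sa-east-1:123456789012:targetgroup/my-http-targets/abc123
```

The key design point is that the certificate lives on the listener, so the instance never needs to handle TLS itself.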

Would you guys be interested in a Rust port with Compute Shaders of the Fluid Simulation?

Nice! I just got rid of it on my end - I don't plan to do much CPU stuff there :')

I am unable to test it for the following weeks.

Changing

```console
python .venv\Scripts\invokeai-web.exe %*
```

to

```console
set CUDA_VISIBLE_DEVICES=1 & python .venv\Scripts\invokeai-web.exe %*
```

worked on Windows.

This is a duplicate of https://github.com/ProjectNUWA/DragNUWA/issues/1

I have a similar issue. Here is the onnx file [elections.zip](https://github.com/webonnx/wonnx/files/14566335/elections.zip) (based on https://huggingface.github.io/candle/training/simplified.html) In my case, it's `IrError(OutputNodeNotFound("/ln1/Gemm_output_0"))`.

I did some debugging. The error comes from here: https://github.com/webonnx/wonnx/blob/7880ed8e6d95857e731341bd022b5e2eb8d1bb75/wonnx/src/ir.rs#L24-L26 The shapes are acquired here: https://github.com/webonnx/wonnx/blob/7880ed8e6d95857e731341bd022b5e2eb8d1bb75/wonnx/src/ir.rs#L198-L208 But I am assuming our ONNX files don't have definitions for them because...

The ONNX file needs to be pre-processed to infer the shapes and save them back into the file. @maxwellflitton https://github.com/webonnx/wonnx?tab=readme-ov-file#shape-inference