Justin Chu
Looks like this is correct https://github.com/onnx/onnx/blob/main/docs/docsgen/source/technical/float8.md
Sorry, it still doesn't look right @xadupre. Should the spec be updated? Inf -> Inf for E5M2, Inf -> Max for E4M3FNUZ?
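To make the distinction in question concrete, here is a minimal sketch of the two overflow conventions, assuming the max finite values from the ONNX float8 docs (E5M2 max = 57344, E4M3FNUZ max = 240; E5M2 has an inf encoding, E4M3FNUZ does not). The function names and structure are hypothetical, for illustration only, and mantissa rounding is omitted:

```python
import math

# Assumed max finite values per the ONNX float8 documentation.
E5M2_MAX = 57344.0
E4M3FNUZ_MAX = 240.0

def cast_e5m2(x: float) -> float:
    """E5M2 has an inf encoding, so inf/out-of-range maps to inf."""
    if math.isinf(x) or abs(x) > E5M2_MAX:
        return math.copysign(math.inf, x)
    return x  # mantissa rounding omitted for brevity

def cast_e4m3fnuz_saturating(x: float) -> float:
    """E4M3FNUZ has no inf encoding; one convention saturates to max finite."""
    if math.isinf(x) or abs(x) > E4M3FNUZ_MAX:
        return math.copysign(E4M3FNUZ_MAX, x)
    return x
```

Under this sketch, `cast_e5m2(float("inf"))` stays inf while `cast_e4m3fnuz_saturating(float("inf"))` clamps to 240.0, which is the "Inf -> Inf" vs "Inf -> Max" behavior being asked about.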
What about outer_scope_node_arg_names_? Where is it used now?
2025-06-14T22:47:04.4226131Z 1: [1;31m2025-06-14 22:47:04.420762770 [E:onnxruntime:, inference_session.cc:2488 operator()] Exception during initialization: /onnxruntime_src/onnxruntime/core/framework/allocation_planner.cc:2539 virtual onnxruntime::common::Status onnxruntime::DeviceBasedPartitioner::PartitionGraph(const onnxruntime::GraphViewer&, const onnxruntime::ExecutionProviders&, std::vector&, onnxruntime::ExecutionOrder) iter != node_stream_map.end() was false. Failed to find node "model_41/lambda_9/add" in...
The sdist is not updated, right? This is downloading from PyPI. Should we trigger the test only periodically?
I see. The change you made sounds good then!
Thank you! Could you fix the CI errors?
Could you update https://github.com/onnx/ir-py/blob/main/src/onnx_ir/_enums.py and the tensor representations e.g. https://github.com/onnx/ir-py/blob/fdee1e28e199f67ced802d785565ff6ebba6f63c/src/onnx_ir/_core.py#L258 as well, after consensus is reached? Thanks!
Out of curiosity: what are the benefits of each rounding mode? Was the difference due to the lack of a spec, or due to platform characteristics / performance considerations?
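For context on why the rounding mode matters: truncation toward zero systematically biases quantized values toward zero, while round-half-to-even is unbiased on average (which is why it is the usual IEEE 754 default). A tiny sketch, using plain Python built-ins rather than any particular implementation's cast:

```python
def round_half_to_even(x: float) -> int:
    # Python's built-in round() implements round-half-to-even
    # ("banker's rounding"): ties go to the nearest even integer.
    return round(x)

def truncate_toward_zero(x: float) -> int:
    # int() drops the fractional part, always moving toward zero,
    # which introduces a systematic bias toward zero.
    return int(x)
```

For example, `round_half_to_even(2.5)` gives 2 and `round_half_to_even(3.5)` gives 4 (ties alternate, so no net bias), whereas `truncate_toward_zero(2.9)` gives 2 and `truncate_toward_zero(-2.9)` gives -2 (always shrinking magnitude).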