onnx-mlir
Next Ops to work on
Idea: put a quick comment here to claim the operations that you are currently working on, so that we do not replicate work. You can also add a request for a new op.
Working on compress
Working on NonMaxSuppression
I am working on SpaceToDepth (#926) and DepthToSpace (#927).
FYI, here are some of the benchmarks we are focusing on that have ops that are not working yet.
high priority: (from model zoo)
- Roberta,
- Bertsquad (onehot),
- Bidaf (‘compress’, [edit: now worked on] ‘hardmax’, ‘categorymapper’ [edit: now worked on] ),
- yolo3: ‘nonmaxsuppression’ [edit: now worked on]
- tiny-yolo3: ‘round’ [edit: now supported], ‘nonmaxsuppression’ [edit: now worked on]
high priority: support models compiled down to their lowest-level components (e.g., RNNs not exported as high-level ONNX ops), without crashing.
medium priority: Hugging Face GBERTQnA
The ops currently not supported but present in the Model Zoo are listed at the end of issue #128.
I am going to look at categorymapper (#941).
Working on OneHot to support multiple types.
Working on Hardmax to support Bidaf. PR #950 (merged).
Working on Resize.
Working on IsNaN op
Working on ScatterElements (needed by fasterrcnn-10.onnx and maskrcnn-10.onnx). PR is https://github.com/onnx/onnx-mlir/pull/1352
Scatter is deprecated, but we map it to ScatterElements. PR is https://github.com/onnx/onnx-mlir/pull/1337
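For anyone picking up the scatter ops, here is a minimal NumPy sketch of the ScatterElements semantics as I read them from the ONNX operator spec (copy the input, then overwrite one element per entry of `updates`, taking the coordinate along `axis` from `indices`). This is illustrative only, not the onnx-mlir lowering:

```python
import numpy as np

def scatter_elements(data, indices, updates, axis=0):
    # Reference semantics of ONNX ScatterElements (sketch): start from a
    # copy of `data`; for every position in `updates`, replace the element
    # whose coordinate along `axis` comes from `indices` at that position.
    out = data.copy()
    for idx in np.ndindex(indices.shape):
        coord = list(idx)
        coord[axis] = indices[idx]
        out[tuple(coord)] = updates[idx]
    return out

# Example from the ONNX ScatterElements spec (axis=0):
data = np.zeros((3, 3), dtype=np.float32)
indices = np.array([[1, 0, 2], [0, 2, 1]])
updates = np.array([[1.0, 1.1, 1.2], [2.0, 2.1, 2.2]], dtype=np.float32)
result = scatter_elements(data, indices, updates, axis=0)
```

The deprecated Scatter op has the same signature, which is why a direct rewrite to ScatterElements works.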
Working on ScatterND. PR is https://github.com/onnx/onnx-mlir/pull/1370
Implemented GatherElements. PR is https://github.com/onnx/onnx-mlir/pull/1375.
Working on GatherND. PR is https://github.com/onnx/onnx-mlir/pull/1382.
The status of implemented ops is now listed here: https://github.com/onnx/onnx-mlir/blob/main/docs/SupportedONNXOps-cpu.md
Hi, thank you for your excellent work!
I am quite new to MLIR, so forgive me if the question is naive. I see that ArgMax is supported in onnx-mlir but ArgMin is not; is there any special issue with ArgMin? If not, can I open a PR for ArgMin based on ArgMax with only small modifications?
@airMeng please go ahead with a PR for ArgMin. Thank you!
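For context on why a near-copy of the ArgMax lowering should suffice: the two ops differ only in the direction of the comparison, and on real-valued data `argmin(x)` coincides with `argmax(-x)` (ties resolve to the first occurrence in both). A quick NumPy check of that equivalence:

```python
import numpy as np

# argmin picks the first index of the minimum along an axis; negating the
# input and taking argmax selects the same index, so the ArgMax pattern
# carries over to ArgMin with just the comparison flipped.
x = np.array([[3.0, 1.0, 2.0],
              [0.5, 4.0, 0.5]])
mins = np.argmin(x, axis=1)
assert (mins == np.argmax(-x, axis=1)).all()
```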
Hi, can I work on the Celu op?
Could somebody please add support for the QuantizeLinear/DequantizeLinear ops for quantized networks?
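For reference, the math behind these two ops is small; the work is in the lowering. A NumPy sketch of the uint8 case following the ONNX spec formulas (round half to even, add the zero point, saturate; dequantize reverses it). Illustrative only, not the onnx-mlir implementation:

```python
import numpy as np

def quantize_linear(x, scale, zero_point):
    # ONNX QuantizeLinear (uint8 case): y = saturate(round(x / scale) + zero_point).
    # np.round uses round-half-to-even, matching the spec.
    q = np.round(x / scale) + zero_point
    return np.clip(q, 0, 255).astype(np.uint8)

def dequantize_linear(q, scale, zero_point):
    # ONNX DequantizeLinear: x = (q - zero_point) * scale.
    return (q.astype(np.float32) - zero_point) * scale

x = np.array([0.0, 1.0, -0.5], dtype=np.float32)
q = quantize_linear(x, scale=0.5, zero_point=128)
roundtrip = dequantize_linear(q, scale=0.5, zero_point=128)
```

Per-axis scales and the int8/zero-point-free variants add bookkeeping but follow the same two formulas.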