
[tracking] ONNX Op Support #215

Open · saienduri opened this issue Dec 1, 2023 · 36 comments
Labels: tracking-issue (Tracking Issue)

@saienduri (Contributor) commented Dec 1, 2023

Tracking op support for OnnxToTorch lowering. The ops are ordered by priority (highest -> lowest) in each section.

IMPORTANT: Mark the ops you are working on by hovering over them in the list and clicking the bullseye symbol to the right. This creates an issue for the op and marks it on the list, avoiding duplicated effort.

Contact:

For people in the turbine camp: feel free to pick an op from any of the alphabetical subgroups.

Instructions on adding an ONNX or Torch op:
https://github.com/llvm/torch-mlir/blob/main/docs/add_ops.md

Please add e2e operator-level test(s) to the e2eshark test suite for your newly added ops, using the instructions at:
https://github.com/nod-ai/SHARK-TestSuite/blob/main/e2eshark/README.md
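
For orientation, below is a minimal sketch of what an OnnxToTorch lowering looks like, modeled on the Selu walkthrough in add_ops.md. The helper names (`patterns.onOp`, `OpBinder`, the attribute binders) follow the in-tree API as of this writing, but treat the current torch-mlir sources as authoritative:

```cpp
// Sketch: registered in one of the lib/Conversion/TorchOnnxToTorch/
// DefaultDomain*.cpp files. The "6" is the minimum ONNX opset version
// this pattern handles.
patterns.onOp(
    "Selu", 6,
    [](OpBinder binder, ConversionPatternRewriter &rewriter) -> LogicalResult {
      Torch::ValueTensorType resultType;
      Value operand;
      float alpha, gamma;
      // Bail out (leaving the op as an unconverted torch.operator) if the
      // operands/attributes don't match what this pattern expects.
      if (binder.tensorOperand(operand) ||
          binder.f32FloatAttr(alpha, "alpha", 1.67326f) || // ONNX defaults,
          binder.f32FloatAttr(gamma, "gamma", 1.0507f) ||  // truncated here
          binder.tensorResultType(resultType))
        return failure();
      // Materialize the attributes as torch constants and rewrite to the
      // equivalent Torch op: selu(x) == elu(x, alpha, scale=gamma).
      Value vAlpha = rewriter.create<Torch::ConstantFloatOp>(
          binder.getLoc(), rewriter.getType<Torch::FloatType>(),
          rewriter.getF64FloatAttr(alpha));
      Value vScale = rewriter.create<Torch::ConstantFloatOp>(
          binder.getLoc(), rewriter.getType<Torch::FloatType>(),
          rewriter.getF64FloatAttr(gamma));
      Value vInputScale = rewriter.create<Torch::ConstantFloatOp>(
          binder.getLoc(), rewriter.getType<Torch::FloatType>(),
          rewriter.getF64FloatAttr(1.0));
      rewriter.replaceOpWithNewOp<Torch::AtenEluOp>(
          binder.op, resultType, operand, vAlpha, vScale, vInputScale);
      return success();
    });
```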

If you have questions, please ask in the torch-mlir channel on the LLVM Discord.

For TorchToLinAlg support tracking, see #347

[Tracker] Onnx FE Support #564

@kumardeepakamd's guidance for prioritizing the ONNX work (higher to lower priority):

  1. Fix failures related to actual models such as llama2 ([tracking] E2EShark Model Tests Onnx Mode #566).
  2. Fix unit-test failures that are likely close to actual model failures (variations of the same issue); start at [Tracker] Onnx FE Support #564 and cross-check issues across the different failures to assess their importance.
  3. Implement one of the unsupported ONNX ops ([tracking] ONNX Op Support #215).
  4. Fix the remaining unit-test failures.
  • Unsupported ops (not planned to be supported) - count: 5

    • String ops
      • StringConcat
      • StringNormalizer
      • StringSplit
      • RegexFullMatch
    • Needs external image library support
  • Need revisit

Completed Ops (count: 188):

@saienduri saienduri changed the title Op Support Op Support Overview (Torch MLIR + Onnx) Dec 1, 2023
@saienduri saienduri changed the title Op Support Overview (Torch MLIR + Onnx) [tracking] Op Support Dec 1, 2023
@AmosLewis AmosLewis changed the title [tracking] Op Support [tracking] ONNX Op Support Dec 4, 2023
@AmosLewis AmosLewis added the tracking-issue Tracking Issue label Dec 4, 2023
@godot73 commented Dec 8, 2023

The list of high-priority ops:

https://gist.github.com/renxida/7510f2a0e5b1e7f0b62025f70854c553

(moved to Gist by @renxida to avoid interfering with ctrl+f)

@wu-s-john commented

Working on the MatMul changes right now.

@Shukla-Gaurav commented

Working on Gather, LeakyRelu and Pad op.

@vivekkhandelwal1 (Contributor) commented

> Working on Gather, LeakyRelu and Pad op.

Can you please create issues for these ops, so that no one else picks them up?

@vivekkhandelwal1 (Contributor) commented Dec 13, 2023

@godot73 @frafranz @renxida @frederik-h @rsuderman Please create an issue for the op you're working on right now, and do the same if you take any further ops.

@AmosLewis (Contributor) commented

> @godot73 @frafranz @renxida @frederik-h @rsuderman Please create an issue for the op you're working on right now, and do the same if you take any further ops.

@wu-s-john

@wu-s-john commented

@saienduri working on these two right now:

#248
#249

@kumardeepakamd kumardeepakamd moved this to In Progress in Shark FE Feb 16, 2024
@jinchen62 (Contributor) commented Mar 27, 2024

@vivekkhandelwal1 According to the triage of the IREE ONNX test failures, these ops are missing OnnxToTorch lowerings.

  • Adagrad
  • Adam
  • ArrayFeatureExtractor
  • Binarizer
  • LabelEncoder
  • Momentum

The listed ops seem to be ordered by priority, and I'm not sure where these missing ops should rank. Could you add them to the list?

@Peefy commented Apr 11, 2024

I have an ONNX model that uses the Unique operator, which is not yet supported. Can I take the issue and implement it?

@renxida (Contributor) commented Apr 11, 2024

@Peefy yes please!

@vivekkhandelwal1 (Contributor) commented

Classical ML and Training ops (not planned to be supported):

  • Adagrad
  • Adam
  • ArrayFeatureExtractor
  • Binarizer
  • LabelEncoder
  • Momentum

@aldesilv (Collaborator) commented

I'll take group normalization.

@archana-ramalingam commented Apr 17, 2024

I am working on the ReduceSumSquare, ReduceLogSum, and ReduceLogSumExp ops.

@NeverRaR commented

Working on GlobalMaxPool

@123epsilon (Contributor) commented May 22, 2024

Working on Onnx.Multinomial
#705

(Not sure how to set up tracking)

@andfau-amd commented May 23, 2024

I noticed that some ONNX operators are functions, which means that we can probably systematically expand them before conversion, instead of having to write bespoke conversions for all of them. I made an issue about this: llvm/torch-mlir#3384. Assuming this does actually get implemented, it might be wise for people considering implementing new conversions to avoid operators that are functions, if it is desirable to avoid redundant effort. You can tell if an operator is a function by going to https://onnx.ai/onnx/operators/ and seeing if it says "function: True".
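
For reference, an op's "function: True" status can also be checked programmatically against the ONNX schema registry. A minimal sketch, assuming you are building against the onnx C++ library (`OpSchemaRegistry::Schema` and `OpSchema::HasFunction` as I understand their current signatures):

```cpp
#include <iostream>
#include "onnx/defs/schema.h"

int main() {
  // Look up the registered schema for an op in the default ONNX domain.
  const onnx::OpSchema *schema = onnx::OpSchemaRegistry::Schema("Gelu");
  if (!schema) {
    std::cerr << "No schema registered for this op in this onnx build.\n";
    return 1;
  }
  // Operators documented with "function: True" carry a function body that
  // an importer can expand instead of needing a bespoke conversion.
  std::cout << "Gelu has a function body: " << std::boolalpha
            << schema->HasFunction() << "\n";
  return 0;
}
```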

@suryajasper commented

Working on Onnx.Scatter
#708

@manupak commented Jun 4, 2024

Working on #717

@umangyadav commented Jun 6, 2024

I'll take GlobalLpPool #727

@andfau-amd commented

Update on the operators that are functions thing: as of llvm/torch-mlir@51902ec, support for this is in the importer. This means that if there's a new op to support and the ONNX documentation says "Function: true", it may be easy to support by just adding a line to the allowlist in the importer. But this isn't guaranteed to work, and it requires judgment on a case-by-case basis as to whether this approach should be used instead of modifying TorchOnnxToTorch. Also, the imported function might, for example, depend on other operators that TorchOnnxToTorch doesn't support.

See also: llvm/torch-mlir#3464.

@jinchen62 (Contributor) commented

I'm taking CenterCropPad #741 and ReverseSequence #742

@andfau-amd commented

Bernoulli might be implemented wrong: llvm/torch-mlir#3527

@mdazz commented Jul 24, 2024

Hello, could someone clarify the rationale for picking a certain opset version for a given op? For example, I see Softmax is ticked off as done here, but actually only opset version 13 is supported. Are the supported opset versions documented somewhere I might have missed?

@AmosLewis (Contributor) commented

> Hello, could someone clarify the rationale for picking a certain opset version for a given op? For example, I see Softmax is ticked off as done here, but actually only opset version 13 is supported. Are the supported opset versions documented somewhere I might have missed?

It depends on the model requirements. When Softmax was fixed, probably only opset 13 was needed; if the next model needs opset 19, we prioritize supporting 19. If not, we leave it as is for now and prioritize ops that aren't implemented at all yet. When the work isn't model-driven, we try to support the state-of-the-art ONNX opset version for a newly implemented op. But there is a trade-off: if supporting the new version takes too much time, we pick the low-hanging fruit first.
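
To make the convention concrete: the supported version is whatever the registered pattern declares. A sketch of how this looks in TorchOnnxToTorch (my reading of the sources; the exact semantics of the version argument are defined there):

```cpp
// The integer after the op name is the minimum ONNX opset ("since version")
// the pattern handles; a node imported at an older opset will not match it
// and remains an unsupported torch.operator.
patterns.onOp(
    "Softmax", 13,
    [](OpBinder binder, ConversionPatternRewriter &rewriter) -> LogicalResult {
      // ... bind the operand and axis, emit the equivalent torch.aten op ...
      return success();
    });
```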

@vivekkhandelwal1 (Contributor) commented

Tracker #797 for the Onnx ops failing during Torch->Linalg lowering.
