[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering #3351
Conversation
And it would be better to also test with other Resize-op-related models to make sure they all pass.
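For that kind of coverage, a minimal standalone test case could look like the sketch below. It uses the `onnx` Python helpers to build a Resize model whose spatial dims are symbolic (dynamic), which is the path this patch targets. The file name, graph name, and symbolic dim names are placeholders, not anything from the test suite.

```python
import onnx
from onnx import TensorProto, helper

# Input with symbolic (dynamic) spatial dims; batch/channel stay static.
inp = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3, "H", "W"])
out = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3, None, None])

# Per-dim scale factors supplied as an initializer (2x spatial upsample).
scales = helper.make_tensor("scales", TensorProto.FLOAT, [4], [1.0, 1.0, 2.0, 2.0])

# Resize takes (X, roi, scales[, sizes]); roi is left empty here.
resize = helper.make_node(
    "Resize",
    inputs=["X", "", "scales"],
    outputs=["Y"],
    mode="linear",
    coordinate_transformation_mode="half_pixel",
)

graph = helper.make_graph([resize], "resize_dynamic", [inp], [out], initializer=[scales])
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 19)])
onnx.checker.check_model(model)
onnx.save(model, "resize_dynamic.onnx")
```

The resulting `.onnx` file can then be run through the ONNX importer and the TorchToLinalg lowering to confirm the dynamic-dim path compiles.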
Here is the issue for this. It is unrelated:
I cherry-picked this patch and tested locally. It looks like some models that passed before are failing again with this PR:
Hi @AmosLewis, thanks for testing this out. The failures in the convolution op are happening because the following PRs have not been merged yet: torch-mlir PR3341, which depends on upstream llvm-project PR92136. This is not an issue with this particular patch; it likely came about due to the work on improving operand quantization in #3327 and #3332.

I'm not sure exactly what causes the stack-allocation-limit issue. It seems to happen during some dequant ops, but as far as I am aware this is not new. I can focus my attention on these issues if you'd like, but again, I don't think they are likely to be specific to this patch. A good comparison would be to run those same tests at head and compare against this branch.
Also for reference, a few days ago, I ran all of the onnx model tests and triaged the torch-mlir failures:
All of the ones marked "grouped q convolution" have a fix incoming. This list and the flags used to run them are in my most recent comment in this issue.
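As a triage aid, something like the following could narrow a rerun down to the Resize-related models. This is a hypothetical helper, not part of the test suite, and the `models` directory layout is an assumption:

```python
import os
import onnx

def models_with_op(model_dir: str, op_type: str = "Resize") -> list[str]:
    """Scan a directory of .onnx files and report those containing op_type."""
    hits = []
    for name in sorted(os.listdir(model_dir)):
        if not name.endswith(".onnx"):
            continue
        # Skip external weight data; only the graph structure is needed here.
        model = onnx.load(os.path.join(model_dir, name), load_external_data=False)
        if any(node.op_type == op_type for node in model.graph.node):
            hits.append(name)
    return hits

if __name__ == "__main__":
    for name in models_with_op("models"):  # "models" dir is a placeholder
        print(name)
```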
Makes sense; we need to test along with that patch (#3341).
Agreed. But we still need tests to double-check.
Could you run it on your machine? It might be because my VM is running out of memory.
I'm not sure exactly what the guard is in place for. I was recently reading into someone else's similar issue: iree issue. It might be possible to remove the guard by adding the flag
[ONNX][TorchToLinalg] Add support for dynamic dims in Interpolate lowering (llvm#3351)

Addresses [Shark-Turbine #196](nod-ai/SHARK-TestSuite#196)
Related tracker [Shark-Turbine #566](nod-ai/SHARK-ModelDev#566)
Related onnx.Resize issues [Shark-Turbine #616](nod-ai/SHARK-ModelDev#616)
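For context on what the lowering has to handle, here is a minimal PyTorch sketch of an interpolate whose output extents depend on the runtime input shape. The module name and shapes are illustrative only; exported through the ONNX path, this kind of op becomes a Resize with non-static dims:

```python
import torch

class UpsampleDynamic(torch.nn.Module):
    def forward(self, x):
        # Output spatial dims are 2x whatever the runtime input provides,
        # so the lowering cannot fold them to constants.
        return torch.nn.functional.interpolate(
            x, scale_factor=2.0, mode="bilinear", align_corners=False
        )

# Any NCHW input works; the spatial dims are dynamic at compile time.
y = UpsampleDynamic()(torch.randn(1, 3, 32, 48))
print(y.shape)  # torch.Size([1, 3, 64, 96])
```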