
[tuner] Add direct TD spec generation for candidates #606

Status: Open · wants to merge 8 commits into main from tuner-candidate-spec-gen

Conversation

Max191 commented on Nov 25, 2024:

This PR adds direct transform dialect (TD) spec generation for candidate configurations. This is the first part of the large refactoring described in #577. TD specs are generated by matching against certain types of operations and then creating a named sequence with `transform.iree.match.cast_compatible_dag_from_root` rooted at the matched operation. This is done for each configuration found, and the specs are saved to the temporary tuning directory for use later in tuning.

One main difference in the candidate generation flow is that state is no longer tracked by saving files to a temporary directory. Instead, IR modules are passed between functions, and the transform dialect specs are only written to files at the very end of candidate generation. This makes things cleaner, since file paths no longer need to be coordinated between steps.
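
As a rough illustration of that flow (a minimal sketch, not this PR's actual code; the `write_specs` helper and the `{i}_spec.mlir` naming are illustrative assumptions), the specs can stay as in-memory `ir.Module` objects and only be serialized at the end:

```python
from pathlib import Path

from iree.compiler import ir  # assumed import path for the MLIR Python bindings


def write_specs(specs: list[ir.Module], tuning_dir: Path) -> list[Path]:
    """Serialize each candidate's TD spec once, at the end of candidate generation."""
    spec_paths: list[Path] = []
    for i, spec in enumerate(specs):
        # Index 0 is the empty placeholder module for the default config
        # (see the review thread below).
        path = tuning_dir / f"{i}_spec.mlir"  # hypothetical naming scheme
        path.write_text(str(spec))  # an ir.Module prints as its MLIR assembly
        spec_paths.append(path)
    return spec_paths
```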

Max191 (Author) commented on Nov 25, 2024:

This PR is currently broken because of an issue tracked in iree-org/iree#19269. Reviews are appreciated, but it won't be able to land until that issue is resolved.

Max191 requested a review from kuhar (Nov 25, 2024).
Max191 marked this pull request as ready for review and requested a review from kuhar (Dec 12, 2024).
Comment on lines +658 to +664
```python
# Index 0 is reserved for default config, so it gets no td spec.
with ir.Location.unknown() as loc:
    empty_module = ir.Module.create(loc)
    config_specs: list[ir.Module] = [empty_module]
```
kuhar (Member):

To avoid this special case, we could also look up the lowering_config and translation_info from the original module and re-materialize it here, but I don't think it's worth it...

kuhar (Member):

Or we could generate a placeholder spec that never matches anything.

Max191 (Author):

I kind of like the empty module for now. It seems relatively clean to me, and it makes it clear to someone looking through the specs that it is a placeholder. Having an explicit placeholder spec would also be good.
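
For downstream code that needs to recognize this placeholder, a check could look like the following (a hypothetical helper, not part of the PR; it assumes the empty-module convention discussed above):

```python
from iree.compiler import ir  # assumed import path for the MLIR Python bindings


def is_placeholder_spec(spec: ir.Module) -> bool:
    # The default-config placeholder is an empty module, i.e. its top-level
    # block contains no operations.
    return len(spec.body.operations) == 0
```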

Max191 force-pushed the tuner-candidate-spec-gen branch 2 times, most recently from 872be87 to 119833b, and requested a review from kuhar (Dec 12, 2024).
Max191 requested a review from kuhar (Dec 13, 2024).
Max191 force-pushed the tuner-candidate-spec-gen branch from 596647c to 2876528, then from 083018d to 41bb86d (Dec 13, 2024).
kuhar (Member) left a review:

LGTM. Make sure you drop the benchmark mlir files before landing.

Comment on lines +116 to +117
```python
if contraction_op is None:
    assert False, f"contraction op not found"
```
kuhar (Member):

Suggested change:

```python
assert contraction_op is not None, "contraction op not found"
```

Comment on lines +168 to +169
```python
if conv_op is None:
    assert False, f"convolution op not found"
```
kuhar (Member):
Also here
