[tuner] Add direct TD spec generation for candidates #606
base: main
Conversation
This PR is broken right now because of an issue tracked in iree-org/iree#19269. Reviews are appreciated, but it won't be able to land until that issue is resolved.
Force-pushed from de3e5e6 to fb7846f
# Index 0 is reserved for default config, so it gets no td spec.
with ir.Location.unknown() as loc:
    empty_module = ir.Module.create(loc)
    config_specs: list[ir.Module] = [empty_module]
To avoid this special case, we could also look up the lowering_config and translation_info from the original module and re-materialize it here, but I don't think it's worth it...
Or we could generate a placeholder spec that never matches anything.
I kind of like the empty module for now. It seems relatively clean to me, and it makes it clear to someone looking through the specs that it is a placeholder. Having an explicit placeholder spec would also be good.
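Following up on the explicit-placeholder idea, here is a minimal sketch of what a never-matching placeholder spec could look like, assuming the iree.compiler ir bindings; the attribute set IREE's tuner actually expects may differ, so treat this as illustrative only:

from iree.compiler import ir

def make_placeholder_spec(ctx: ir.Context) -> ir.Module:
    # An empty transform library: it carries the named-sequence module
    # attribute but defines no matchers, so it can never apply to anything.
    # ASSUMPTION: the tuner tolerates a spec with no entry points.
    with ctx, ir.Location.unknown():
        return ir.Module.parse(
            "module attributes { transform.with_named_sequence } {}"
        )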
Force-pushed from 872be87 to 119833b
Signed-off-by: Max Dawkins <[email protected]>
Force-pushed from 596647c to 2876528
Signed-off-by: Max Dawkins <[email protected]>
Force-pushed from 083018d to 41bb86d
LGTM. Make sure you drop the benchmark mlir files before landing.
if contraction_op is None:
    assert False, f"contraction op not found"
Suggested change:

assert contraction_op is not None, "contraction op not found"
if conv_op is None:
    assert False, f"convolution op not found"
Also here
This PR adds direct transform dialect spec generation for candidate configurations. It is the first part of the large refactoring described in #577. TD specs are generated by matching against certain types of operations and then creating a named sequence with transform.iree.match.cast_compatible_dag_from_root based on the matched operation. This is done for each configuration found, and the specs are saved to the temporary tuning directory to be used later in tuning.

One main difference in the flow of candidate generation is that state is no longer tracked by saving files to a temporary directory. Instead, IR modules are passed to each function, and only at the very end of candidate generation are the transform dialect specs written to files. This makes things cleaner, since file paths no longer need to be coordinated between stages.
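To make the new in-memory flow concrete, a minimal sketch of the final serialization step; the helper name write_config_specs and the "{i}_spec.mlir" naming scheme are assumptions for illustration, not this PR's actual API:

from pathlib import Path
from iree.compiler import ir

def write_config_specs(config_specs: list[ir.Module], tuning_dir: Path) -> list[Path]:
    # config_specs[0] is the empty placeholder module reserved for the
    # default config; the rest are per-candidate TD specs built in memory.
    spec_paths: list[Path] = []
    for i, spec in enumerate(config_specs):
        spec_path = tuning_dir / f"{i}_spec.mlir"  # naming scheme assumed
        # str() on an ir.Module prints its textual MLIR assembly.
        spec_path.write_text(str(spec))
        spec_paths.append(spec_path)
    return spec_paths

Because the modules stay in memory until this single write step, the earlier stages never need to agree on intermediate file paths.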