
Make lab_to_target more parallel for improved in-logic compilation #468

Closed
myreen opened this issue Feb 28, 2018 · 1 comment

Comments

@myreen
Contributor

myreen commented Feb 28, 2018

This issue is about investigating whether the lab_to_target compiler phase can be made faster for in-logic compilation. The current implementation of lab_to_target almost forces computation to be a tedious global process.

Suggestion: one could possibly modify the lab_to_target phase so that a large part of the computation can be done in parallel for each function during in-logic compilation. For example, one could compute a maximum encoding length for each function and then let the other parts of the compiler assume that this is the function's exact length. If the actual encoding turns out to be shorter, padding can be appended so that the function's encoding always has exactly the maximum encoding length. A rough sketch of this idea follows.
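The snippet below is only a minimal sketch of the padding idea under invented assumptions: a toy instruction type and made-up helpers (`encode_instr`, `max_instr_len`, `encode_function`) that do not correspond to the actual lab_to_target definitions. It just illustrates why a per-function length bound plus padding removes the global dependency.

```ocaml
(* Toy instruction set, purely for illustration. *)
type instr = Add | Load | Jump of int

(* Encode one instruction to bytes; encodings may be variable-length
   (e.g. short vs. long jumps), which is what normally forces a
   global fixpoint over final addresses. *)
let encode_instr = function
  | Add -> [0x01]
  | Load -> [0x02; 0x00]
  | Jump off ->
      if off < 128 then [0x03; off]              (* short form *)
      else [0x04; off land 0xff; off lsr 8]      (* long form *)

(* Cheap upper bound on the encoded length of one instruction,
   computable without knowing any final addresses. *)
let max_instr_len = function
  | Add -> 1
  | Load -> 2
  | Jump _ -> 3

let nop = 0x00  (* padding byte *)

(* Per function: encode, then pad up to the precomputed bound, so the
   rest of the compiler can treat the bound as the exact length. *)
let encode_function (body : instr list) : int list =
  let bound = List.fold_left (fun n i -> n + max_instr_len i) 0 body in
  let bytes = List.concat_map encode_instr body in
  bytes @ List.init (bound - List.length bytes) (fun _ -> nop)
```

For instance, `encode_function [Add; Jump 5]` has bound 4 but encodes to 3 bytes, so one nop is appended. Since every function's length is then known up front, the functions can be encoded independently (and hence in parallel), and address computation no longer needs a whole-program pass.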

Since the lab_to_target phase is complicated and its verification proof is delicate, I suggest trying to make minimal changes before more adventurous changes are attempted.

Note: The lab_to_target compiler must not be changed on master before the install-and-run branch has been merged. Thus any immediate work on this should happen on a branch forked from install-and-run.

@tanyongkiam
Contributor

Closed in favor of #1080

@tanyongkiam closed this as not planned on Nov 12, 2024