Optimizing vector.sou #157
So I played a bit with better array shape approximations. My code is a mess right now (it's just a prototype); it's rather about investigating what can easily be done. I discovered patterns of an interesting form that we cannot currently optimize (the comments come from inlined code, but they are useful to make sense of what is happening):
This code loops over a linked list. In practice that does not happen because the loop-exit condition […]. If we wanted this optimization to happen, we would need a constant-folding flow analysis where the current approximation environment influences the control-flow traversal (and not just the traversed values): if we have a […]
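To make the idea concrete, here is a minimal sketch (in Python, over an invented tuple-based mini-IR, not the project's actual representation) of a folder whose static environment decides which branch of the control flow gets traversed, rather than merely folding values inside a fixed traversal:

```python
# Hypothetical sketch: a constant folder whose approximation environment
# directs the control-flow traversal. Known branch conditions mean only
# the taken successor is visited; the dead branch is never folded.

def fold_block(block, env, program):
    """Fold one basic block; follow only branches whose condition is known."""
    out = []
    for instr in program[block]:
        op = instr[0]
        if op == "const":            # ("const", var, value)
            _, var, val = instr
            env[var] = val
        elif op == "branch":         # ("branch", cond_var, then_blk, else_blk)
            _, cond, then_blk, else_blk = instr
            if cond in env:
                # the environment picks the successor: no join needed,
                # the untaken branch is simply never traversed
                taken = then_blk if env[cond] else else_blk
                out += fold_block(taken, env, program)
            else:
                out.append(instr)    # unknown condition: keep the branch
            return out
        else:
            out.append(instr)
    return out

program = {
    "entry": [("const", "flag", True),
              ("branch", "flag", "fast", "slow")],
    "fast":  [("print", "fast path")],
    "slow":  [("print", "slow path")],
}
folded = fold_block("entry", {}, program)   # only "fast" is ever visited
```

The point of the sketch is the `cond in env` case: the traversal itself depends on the current approximation, which is what a plain dataflow pass over a fixed CFG does not give you.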
(Right now the resulting code has one small defect: it does not quite have the same semantics as the input code. But that's to be fixed and, besides, the point is that it's branch-free, right?)

(The reason why the code is broken is that array approximations don't currently take aliasing into account. But that's not too hard to fix, by approximating the heap, I suppose.)
So fixing the aliasing handling reveals that if you want to optimize well, you need constant-folding-directed loop unrolling (and not just constant-folding-directed branch taking). Consider a linked list with a single cell:
And a loop that wants to find the key:
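A possible reconstruction of the shape being discussed (the names and representation are invented; the original snippets were in the project's IR): a one-cell linked list and a lookup loop whose exit condition tests the list spine.

```python
# Hypothetical reconstruction: a single-cell linked list and a key lookup.

cell = {"key": "k", "value": 1, "next": None}   # the one-cell list

def find(lst, key):
    cur = lst
    while cur is not None:          # loop-exit condition tests the spine
        if cur["key"] == key:       # data test on the (dynamic) key
            return cur["value"]
        cur = cur["next"]
    return None
```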
When constant folding over this with the known value of […], […]. If we wanted to get branch-free code out of this (like what you would collect in a tracelet), we would have to duplicate […]
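The kind of constant-folding-directed unrolling this calls for might look like the following sketch (invented helper, not the project's code): when the folder can prove the list has a fixed, known shape, it peels the loop once per cell at specialization time, so every spine test folds away and only the tests on the dynamic key remain.

```python
# Sketch of constant-folding-directed unrolling: the loop over the list
# spine runs at specialization time; the residual code is a statically
# bounded sequence of key tests (the duplicated loop bodies).

def specialize_find(static_list):
    """Return a lookup specialized to a list whose shape is fully known."""
    cells = []
    cur = static_list
    while cur is not None:          # executed at specialization time only
        cells.append(cur)
        cur = cur["next"]

    def find(key):
        # this bounded loop stands for the n peeled copies of the body;
        # no spine test ("cur is not None") survives in the residual code
        for c in cells:
            if c["key"] == key:
                return c["value"]
        return None
    return find

one_cell = {"key": "k", "value": 1, "next": None}
find_k = specialize_find(one_cell)
```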
Would it make sense to use assumptions to speculate on the things we cannot analyze, like vector length and such?
Yes!

Insertion of assumptions: right now I can't think of an automatic assumption-insertion strategy besides the kind of branch pruning you implemented (branches are the natural thing to speculate on). Right now the way I think of OSRs in this context is more as story-telling: we write an example, point out that if we had a profiling apparatus it would reveal that this assumption is relevant, we manually insert the assumption (note: we could have code in place to make this easier, to "script" such examples), and then we optimize the resulting code (the new version).

Use of existing assumptions: right now our optimizations don't really take advantage of existing assumptions (in the "new version" in the paragraph above). The constant folder could recognize variable-equals-value or array-length assumptions to enrich its static environment, but we have not implemented that yet. You are very right that this would be an important thing to experiment with.

I propose to try to clean up the part I have already done (not the magic unrolling of the last message), see if we can merge some of it into the master branch, and then try to make use of assumptions as you suggest.
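The "use of existing assumptions" idea above could be sketched like this (again over an invented tuple-based mini-IR, not the project's actual instructions): a speculated variable-equals-value fact seeds the folder's static environment, while the guard itself is kept so deoptimization remains possible.

```python
# Sketch: letting existing assume instructions enrich the constant
# folder's environment, so downstream arithmetic folds to constants.

def fold_with_assumptions(instrs):
    env = {}
    out = []
    for instr in instrs:
        op = instr[0]
        if op == "assume_eq":        # ("assume_eq", var, value): speculation
            env[instr[1]] = instr[2] # the fact enriches the environment
            out.append(instr)        # but the guard stays, for deopt
        elif op == "add":            # ("add", dst, a, b)
            _, dst, a, b = instr
            va, vb = env.get(a), env.get(b)
            if va is not None and vb is not None:
                env[dst] = va + vb
                out.append(("const", dst, va + vb))   # folded away
            else:
                out.append(instr)
        else:
            out.append(instr)
    return out

folded = fold_with_assumptions([
    ("assume_eq", "x", 3),
    ("assume_eq", "y", 4),
    ("add", "z", "x", "y"),          # folds to ("const", "z", 7)
])
```

An array-length assumption would work the same way: the folder records the speculated length and uses it to eliminate bounds checks or, combined with the unrolling above, to bound a loop.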
From #131 (comment)
Also #131 (comment)
There are probably other cases we can consider.