Ralloc expands the heap when the superblock free list is empty. However, more than one thread may currently notice that the free list is empty and expand the heap simultaneously, causing superfluous expansion.
One workaround is to expand the heap by a size proportional to the number of threads, so that even if superfluous expansion occurs, it won't grab too many superblocks at once and there's no real harm. We applied this workaround to the copy in Montage.
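As a rough illustration of the workaround (the names `SBSIZE`, `thread_count`, and `map_new_superblocks` are hypothetical stand-ins, not Ralloc's actual API), a thread-count-proportional expansion might look like:

```cpp
#include <atomic>
#include <cstddef>
#include <cstdlib>

constexpr std::size_t SBSIZE = 64 * 1024;   // assumed superblock size
std::atomic<int> thread_count{0};           // threads registered so far

// Placeholder for carving fresh superblocks out of the mapped region.
void* map_new_superblocks(std::size_t bytes) {
    return std::malloc(bytes);
}

void expand_heap() {
    // Grow by an amount proportional to the thread count. Duplicate
    // expansions only stock the free list with extra superblocks that
    // later allocations will consume, so the waste is bounded.
    int t = thread_count.load(std::memory_order_relaxed);
    map_new_superblocks(SBSIZE * static_cast<std::size_t>(t < 1 ? 1 : t));
}
```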
The ultimate solution might be to introduce a helping mechanism for linking and publishing newly allocated superblocks, while still ensuring freedom from data races.
To realize this, we may want to add a mark bit to the `next_free` field of each descriptor and to `curr_addr`. When `curr_addr` is marked, threads that notice the free list is empty will help link the superblocks made available by the most recent, still unfinished expansion. Any helper that finds a descriptor with a marked `next_free` knows someone else has already linked it (it may even be in use by now) and thus moves on to link the next one. Once helpers have iterated through all descriptors, they try to CAS the new head in and unmark `curr_addr`.
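A minimal sketch of this scheme, assuming the mark lives in the low bit of `next_free` and of `curr_addr`; `Descriptor`, `free_head`, and `help_link` are simplified stand-ins for Ralloc's real structures (in particular, `next_free` is a raw tagged word here rather than a `pptr`, and threading the old head onto the tail is glossed over):

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>

constexpr std::uintptr_t MARK = 1;          // low tag bit: "already linked"

struct Descriptor {
    std::atomic<std::uintptr_t> next_free{0};
};

std::atomic<std::uintptr_t> curr_addr{0};   // low bit set while an expansion's
                                            // superblocks are being linked
std::atomic<Descriptor*> free_head{nullptr};

// Help link the n descriptors of the most recent, unfinished expansion.
// Any thread that sees the free list empty and curr_addr marked may call
// this; every step is an idempotent CAS, so concurrent helpers are safe.
void help_link(Descriptor* descs, std::size_t n) {
    for (std::size_t i = 0; i + 1 < n; ++i) {
        std::uintptr_t expected = 0;
        std::uintptr_t next =
            reinterpret_cast<std::uintptr_t>(&descs[i + 1]) | MARK;
        // A failed CAS means next_free is already marked: another helper
        // linked this descriptor (it may even be allocated by now), so we
        // just move on to the next one.
        descs[i].next_free.compare_exchange_strong(expected, next);
    }
    // Thread the current head onto the tail, then try to publish the new
    // head. This assumes the free list is empty (head typically null) while
    // the expansion is in flight; a real implementation must also cope with
    // a head that changes underneath us.
    Descriptor* old_head = free_head.load();
    std::uintptr_t tail = reinterpret_cast<std::uintptr_t>(old_head) | MARK;
    std::uintptr_t expected_tail = 0;
    descs[n - 1].next_free.compare_exchange_strong(expected_tail, tail);
    free_head.compare_exchange_strong(old_head, &descs[0]);
    // Finally, unmark curr_addr so the next expansion can begin.
    std::uintptr_t ca = curr_addr.load();
    if (ca & MARK) curr_addr.compare_exchange_strong(ca, ca & ~MARK);
}
```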
The issue with this solution is that `next_free` is currently a `pptr`, which stores an offset from the object itself, making it hard to steal a bit for marking; a sketch of one possible encoding is below. I'll come back and implement this in the future when I have more free cycles.
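For illustration only, here is a hypothetical `marked_pptr`, not Ralloc's actual `pptr`. It assumes a typical self-relative design (the stored value is the target address minus the field's own address) and that both the field and its target are at least 2-byte aligned, so the offset is always even and its low bit is spare:

```cpp
#include <cstdint>

template <typename T>
struct marked_pptr {
    std::intptr_t off = 0;  // self-relative offset; low bit is the mark

    T* load() const {
        std::intptr_t o = off & ~std::intptr_t(1);  // strip the mark
        if (o == 0) return nullptr;                 // 0 encodes null here
        return reinterpret_cast<T*>(
            reinterpret_cast<std::intptr_t>(this) + o);
    }
    bool is_marked() const { return off & 1; }
    void store(T* p, bool mark) {
        std::intptr_t o = (p == nullptr)
            ? 0
            : reinterpret_cast<std::intptr_t>(p) -
              reinterpret_cast<std::intptr_t>(this);
        off = o | std::intptr_t(mark);
    }
};
```

A real version would make `off` a `std::atomic<std::intptr_t>` and CAS on it; the point is only that alignment leaves the low bit spare even in a self-relative encoding.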