Repair responses always saturate the MTU limit #25818
Comments
Good catch; the relevant snippet where we form repair packets is `solana/core/src/repair_response.rs`, lines 10 to 23 (at 061dc53).
As you called out, the shred bytestreams are 0-padded when pulled from the blockstore. When we create the packet, we use the length of the bytestream, not the length of the shred payload (`solana/core/src/repair_response.rs`, lines 25 to 35 at 061dc53).
So, at first glance, simply adding a blockstore retrieval function that returns unpadded streams seems like it would do the trick.
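For what it's worth, a minimal sketch of what such a retrieval path could return; the function name and the explicit size parameter are assumptions for illustration, not the actual Blockstore/Shred API:

```rust
/// Sketch only: given the zero-padded bytes pulled from the blockstore and
/// the payload length recorded in the shred header, return the slice that
/// actually needs to go over the wire.
fn unpadded_shred_bytes(padded: &[u8], header_payload_len: usize) -> &[u8] {
    // Trimming trailing zeros heuristically would be wrong, since valid shred
    // data can legitimately end in 0x00 bytes; the length has to come from
    // the (signed) header, not from scanning the buffer.
    &padded[..header_payload_len.min(padded.len())]
}
```

The repair packet would then be built from this slice plus the nonce, so its length tracks the real payload instead of the fixed padded size.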
The zero padding is part of the signed message, so the receiving node would have to reconstruct that padding (restoring the shred to its full fixed size) before it can verify the signature.
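A rough sketch of that receive-side re-padding, with assumed names; `shred_payload_size` stands in for whatever fixed size the shred variant uses:

```rust
/// Sketch only: restore an unpadded repair payload to the fixed-size,
/// zero-padded buffer that the signature was actually produced over.
fn repad_for_sigverify(unpadded: &[u8], shred_payload_size: usize) -> Vec<u8> {
    let mut buf = vec![0u8; shred_payload_size];
    let n = unpadded.len().min(shred_payload_size);
    // Copy the transmitted bytes; the rest of `buf` stays zeroed, recreating
    // the exact byte string that was originally signed.
    buf[..n].copy_from_slice(&unpadded[..n]);
    buf
}
```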
That being said, current shreds are labeled as legacy, and we are in the process of moving to merkle variants (#25237), which have a different layout and signed message. So I wouldn't suggest putting effort into the above until merkle variants are fully rolled out. It is a good point to consider when designing the layout for merkle variants, though, so that the above becomes easier.
Can someone make a call as to whether we're going to fix this or let it ride 'til the new shred format is live?
I lean towards not making shred processing any more convoluted until we have fully migrated to merkle variants. Let's keep the issue open, though, because longer term we do want to address this if it has significant impact.
Just ran
Row refers to kv whereas val refers to just v; k is 16 bytes (a u64 for slot and a u64 for shred_id). So
And then
Need to double-check this, but it seems like the significant drop in network bandwidth could warrant the additional overhead that would be incurred by the process @behzadnouri mentioned above. Not a deep dive for now, just a quick comment to get this back on the radar.
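Reading those numbers, here is a rough model of the row/val distinction; the constants and names are illustrative, not the actual rocksdb column definitions:

```rust
/// A blockstore data-shred row is key + value: the key is
/// (slot: u64, index: u64) = 16 bytes, and the value is the zero-padded
/// shred payload. This only models the numbers quoted above.
const KEY_SIZE: usize = 8 + 8; // u64 slot + u64 shred index

fn row_size(value_len: usize) -> usize {
    KEY_SIZE + value_len
}

fn main() {
    // e.g. a padded value of 1228 bytes (an assumed legacy payload size)
    // yields a 1244-byte row, so "row" and "val" differ by the 16-byte key.
    assert_eq!(row_size(1228), 1244);
}
```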
An alternative way to address this issue is to optimize the coalescing/buffering of entries here:
Friendly bump, did this get implemented yet?
No, once merkle shreds are rolled out we can look into the impact of removing the zero padding; it will require a slice shift at the receiving end, though. This is because Reed-Solomon erasure coding needs a contiguous buffer, which forces us to put the Merkle proof after the zero padding and not before the data buffer.
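For context, a sketch of the slice shift this would imply on the receiving end, under an assumed Merkle layout of [header + data | zero padding | merkle proof] within one fixed-size buffer; all names and offsets are illustrative assumptions:

```rust
/// Sketch only: if the wire format dropped the padding and sent the data
/// followed immediately by the proof, the receiver would have to shift the
/// proof back out to the tail of a full-size buffer before sigverify and
/// erasure coding.
fn restore_padded_layout(
    wire: &[u8],      // data followed immediately by the merkle proof
    data_len: usize,  // header + data bytes actually transmitted
    proof_len: usize, // merkle proof bytes
    full_size: usize, // fixed, zero-padded shred size
) -> Vec<u8> {
    debug_assert!(data_len + proof_len <= full_size && wire.len() >= data_len + proof_len);
    let mut buf = vec![0u8; full_size];
    // Header + data stay at the front, exactly as stored/signed.
    buf[..data_len].copy_from_slice(&wire[..data_len]);
    // The proof lives at the very end of the buffer, after the zero padding.
    buf[full_size - proof_len..].copy_from_slice(&wire[data_len..data_len + proof_len]);
    buf
}
```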
See #25818 (comment); this should not have been closed.
I'll reopen for now and will let @behzadnouri chime in, as he is more up to date on the latest shred happenings. EDIT: Lol, never mind; github-actions said no.
This repository is no longer in use. Please re-open this issue in the agave repo: https://github.com/anza-xyz/agave |
Problem
When repair requests are serviced, shreds are pulled straight from the blockstore, have a nonce appended, and are then sent back to the requestor. However, the data returned from the blockstore is padded with trailing 0-bytes.
Repair is a non-trivial part of Solana network traffic. Doing this more efficiently and not sending the 0-byte padding should reduce overall network overhead considerably (about 3-4x in my small experiments).
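To make the "always saturate the MTU" point concrete, a sketch of the current response sizing; the constants are assumptions based on typical Solana packet sizing, not quoted from the code:

```rust
// Sketch only: the blockstore hands back the shred padded to its fixed
// payload size, and the response is that whole buffer plus a small nonce,
// so the packet length is the same no matter how little real data the
// shred actually carries. Constants below are assumed for illustration.
const ASSUMED_SHRED_PAYLOAD_SIZE: usize = 1228; // fixed, zero-padded size
const NONCE_SIZE: usize = 4; // u32 repair nonce

fn repair_response_len(_actual_data_len: usize) -> usize {
    // The actual data length is ignored today: the padded buffer is sent as-is.
    ASSUMED_SHRED_PAYLOAD_SIZE + NONCE_SIZE
}
```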
Proposed Solution
Don't send the 0-padding in repair responses.