
Mitigate pressure on socket send buffer #1402

Open
goshawk-3 opened this issue May 27, 2022 · 0 comments · May be fixed by #1404
goshawk-3 commented May 27, 2022

Describe what you want implemented
Options to consider:

  • Rate limiter - since these messages (the one-to-one Kadcast messaging) are secondary to Consensus messaging, they could be sent at e.g. 20 TPS, a config param. This would let a node hold messages in an internal queue instead of flooding the UDP buffers
  • Increase udp_sender_buffer size at startup, as Kadcast already does with udp_recv_buffer
  • Increase recovery rate in RaptorQ (the least preferable for now)
  • Any combination of the above.
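The rate-limiter option above could be sketched as a simple token bucket in front of the socket. The sketch below is illustrative only: the type name `RateLimitedSender` and the `rate` parameter (e.g. the 20 TPS config value mentioned above) are hypothetical, not part of Kadcast's actual API.

```rust
use std::collections::VecDeque;
use std::time::Instant;

/// Minimal token-bucket sketch: allows at most `rate` sends per second,
/// queueing excess messages internally instead of pushing them straight
/// onto the UDP send buffer.
struct RateLimitedSender {
    rate: u32,                // hypothetical config param, e.g. 20 TPS
    tokens: f64,              // current send budget
    last_refill: Instant,     // last time the budget was topped up
    queue: VecDeque<Vec<u8>>, // messages waiting for a token
}

impl RateLimitedSender {
    fn new(rate: u32) -> Self {
        Self {
            rate,
            tokens: rate as f64, // start with a full budget
            last_refill: Instant::now(),
            queue: VecDeque::new(),
        }
    }

    /// Queue a message instead of writing it to the socket immediately.
    fn enqueue(&mut self, msg: Vec<u8>) {
        self.queue.push_back(msg);
    }

    /// Refill the budget based on elapsed time, then drain as many queued
    /// messages as the budget allows. The caller writes the returned
    /// messages to the socket.
    fn drain_ready(&mut self) -> Vec<Vec<u8>> {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last_refill).as_secs_f64();
        // Refill, capped at one second's worth of tokens.
        self.tokens = (self.tokens + elapsed * self.rate as f64).min(self.rate as f64);
        self.last_refill = now;

        let mut ready = Vec::new();
        while self.tokens >= 1.0 {
            match self.queue.pop_front() {
                Some(msg) => {
                    self.tokens -= 1.0;
                    ready.push(msg);
                }
                None => break,
            }
        }
        ready
    }
}
```

A caller would invoke `drain_ready` from its send loop; excess messages stay queued in memory rather than being dropped silently by the kernel when the send buffer is full.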

Describe "Why" this is needed
In essence, we use RaptorQ to recover from UDP messages dropped in the outside network. We should not also lose UDP messages when writing them to the local udp_sender_buffer.

As per the finding in #1399 (comment), a strategy should be considered to avoid silently losing sent messages. First, the issue from #1399 should be reproduced on Devnet.

Additional context
Issues:
