From @mark-liu on July 26, 2017 10:24

When running the following: `docker run -e N="100" -e t="2" -e B="16" -it honeybadgerbft`

The program returns a segfault (please see screenshot); system resources looked okay at the time. Let me know if you want me to run any other tests. Same behaviour at N="200".

Copied from original issue: amiller/HoneyBadgerBFT#19
Update on this one: I suspect the "segfault" is actually caused by OOM. I did manage to get a successful run at N=50 with `docker run -e N="100" -e t="2" -e B="16" -it --memory-swap -1 honeybadgerbft`; that run took 21 hours.

Can you give any guidance on how much memory is required to run N=200?
Hi Mark, thanks for looking into this and posting it.

The simulation that the Dockerfile runs right now is probably not well suited to large numbers of nodes, since it simulates in a single process what would ordinarily run across N separate nodes.

We could do a back-of-envelope calculation to predict how much memory is required, but it would also depend on the message interleaving order. (I'm travelling at the moment; I can try to help with this in a few days.)

In the worst case, I think the asymptotic figure would be O(N^3 log N), if every message pertaining to an entire block were buffered in memory for all nodes at once.
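To make that concrete, here is a rough sketch of what that worst case implies. It is not based on measurements of the actual code: the per-message byte count is a made-up placeholder for payload plus Python object overhead, and all constant factors are ignored, so treat the output as order-of-magnitude at best.

```python
import math

# Very rough upper bound, not a measurement: assume the single-process
# simulation buffers on the order of N^3 * log(N) messages for one block
# (the worst case mentioned above), each costing roughly
# `bytes_per_message` in memory. The 1024-byte figure is a guess at
# payload plus Python object overhead, not taken from the repository.
def worst_case_buffered_bytes(n, bytes_per_message=1024):
    messages = n ** 3 * math.log2(n)  # constant factors deliberately ignored
    return messages * bytes_per_message

for n in (50, 100, 200):
    gib = worst_case_buffered_bytes(n) / 2 ** 30
    print(f"N={n:>3}: on the order of {gib:,.0f} GiB buffered in the worst case")
```

Measuring the peak memory of a real run (for example with GNU `time -v`, which reports the maximum resident set size) would give a much more reliable number than this kind of extrapolation.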