Losing quorum as soon as a node goes down #162
Comments
Have you checked with
Yes, they do lose quorum. For example, just now:
It has an UpToDate node and a Diskless node, and yet it thinks it lost quorum. That's the only volume that lost quorum; the other ones look the same but still have quorum. The local node also became Primary, so maybe it's something to do with that specific volume somehow.
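To see the state described above for yourself, DRBD 9 exposes the per-resource quorum flag on each node. A minimal sketch, assuming a resource named `pvc-xxxx` (a placeholder; substitute your own resource name):

```shell
# Overview of all resources, peers, and disk states on this node
drbdadm status

# Machine-readable current state; device lines include a quorum:yes/no
# field on recent DRBD 9 releases ("pvc-xxxx" is a placeholder name)
drbdsetup events2 --now pvc-xxxx
```

If the `quorum:no` flag appears while two nodes are still connected, that matches the symptom reported here.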
Very weird. Probably something for the DRBD folks to look at. If you just want to disable the taints, you can disable the HA Controller since 2.3.0: https://github.com/piraeusdatastore/piraeus-operator/blob/v2/docs/reference/linstorcluster.md#spechighavailabilitycontroller
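Based on the linked `LinstorCluster` reference, disabling the HA Controller (and with it the lost-quorum taints) could look roughly like this. A sketch, assuming the cluster resource is named `linstorcluster` and operator >= 2.3.0; check the linked docs for the exact field:

```shell
# Turn off the HA Controller via spec.highAvailabilityController
kubectl patch linstorcluster linstorcluster --type merge \
  -p '{"spec": {"highAvailabilityController": {"enabled": false}}}'
```

This trades automatic failover handling for not having nodes tainted on quorum loss.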
It looks like the TieBreaker / Diskless node doesn't count towards quorum when changing the Primary, so if the Primary for a volume goes down (even cleanly, it appears), the other node can't become Primary anymore and goes into a lost-quorum state. That is probably a DRBD issue, but when the Primary goes down cleanly, I wonder if the operator could make sure the Secondary switches over first, while it still has quorum?
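As a hypothetical manual workaround for the "switch first, while it has quorum" idea: drain the node before shutting it down, so the workload moves off and DRBD demotes the Primary while the cluster is still quorate. A sketch, with `node1` and `my-res` as placeholder names:

```shell
# Stop new pods landing on the node, then evict the running ones;
# evicting the pod releases the DRBD device so it can demote cleanly
kubectl cordon node1
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data

# Optionally verify the resource demoted before powering the node off
drbdadm status my-res
```

Whether this avoids the lost-quorum state in practice is untested here; it just reorders the demotion to happen before connectivity is lost.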
Nothing fancy: I'm using 3 Talos nodes, with scheduling on control-plane nodes (since there are only 3 nodes) and a replica count of 3. But this actually seems to have fixed itself; I suspect DRBD 9.2.9 is what did it. At least I used to run into this all the time, and since that upgrade I haven't seen it once, so I think this was it:
Check if the right DRBD version is in use:
I'm not sure what exact steps you run when you cordon. Could you please elaborate a bit on that?
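One way to confirm which DRBD version the kernel module actually loaded (relevant since the upgrade to 9.2.9 is suspected to have fixed this):

```shell
# The first line of /proc/drbd reports the loaded module version,
# e.g. "version: 9.2.x (api:.../proto:...)"
cat /proc/drbd

# Where the userspace tools are available, this also prints versions
drbdadm --version
```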
Hi,
I have 3 nodes and a placementCount of 2. After quite a bit of fiddling, the third node got 'TieBreaker' volumes (or Diskless, for some) set up on it, so I'd assume I'm okay to lose one node.
But sadly, as soon as any of the nodes goes down, I lose quorum and the remaining two nodes get tainted with
drbd.linbit.com/lost-quorum:NoSchedule
I have no idea why the above leads to losing quorum; there are clearly two connected nodes (even if one is the TieBreaker).
I'm not sure what I'm doing wrong, but tainting the nodes like that makes recovering pretty difficult, as most pods won't get re-scheduled. Depending on what went down, I sometimes have to manually untaint a node to let pods come back up, then slowly recover by hand, using drbdadm to decide which replica to keep for every volume.
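For reference, the manual recovery described above could be sketched like this (with `node1` and `my-res` as placeholder names; the exact resource names come from `drbdadm status`):

```shell
# Remove the lost-quorum taint so pods can be scheduled again
kubectl taint nodes node1 drbd.linbit.com/lost-quorum:NoSchedule-

# Then, per volume, decide which replica to keep. On the node whose
# data should be discarded, reconnect while throwing its copy away:
drbdadm connect --discard-my-data my-res
```

`--discard-my-data` is the standard DRBD way to resolve a split-brain in favour of the peer; only run it on the side you are willing to lose.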
Thanks