I'm trying to create a new REQ socket that can connect to multiple servers automatically, keep that set of backends updated from DNS, and load balance between them.
The code is just like the req/rep example, except that there are multiple clients connecting to multiple servers.
Kubernetes allows you to deploy a headless Service for a StatefulSet. When a DNS query is made, it returns all of the available endpoints. This bypasses kube-proxy, so we can handle load balancing on the client side.
When I run the following code, the socket connects to only one of the endpoints, chosen at random:
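Roughly, the client is just the req/rep example's dialing side; the sketch below is only illustrative (it assumes mangos v3, and the service address and port are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"go.nanomsg.org/mangos/v3/protocol/req"
	_ "go.nanomsg.org/mangos/v3/transport/tcp" // register the TCP transport
)

func main() {
	sock, err := req.NewSocket()
	if err != nil {
		log.Fatalf("cannot create REQ socket: %v", err)
	}
	defer sock.Close()

	// Headless service name; DNS returns one A record per pod.
	if err := sock.Dial("tcp://my-backend.default.svc.cluster.local:5555"); err != nil {
		log.Fatalf("cannot dial: %v", err)
	}

	if err := sock.Send([]byte("ping")); err != nil {
		log.Fatalf("cannot send: %v", err)
	}
	reply, err := sock.Recv()
	if err != nil {
		log.Fatalf("cannot recv: %v", err)
	}
	fmt.Printf("got reply: %s\n", reply)
}
```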
Inside the container, I can see from netstat that only one TCP connection is alive.
To explain the endpoint naming, please refer to how pods within a StatefulSet maintain a stable network ID here.
Let's say I set the StatefulSet to 3 replicas; each pod then gets its own stable DNS entry under the headless service.
Here is a good article on how gRPC can be set up to load balance from the client side.
I would like to do this for Mangos if it isn't already possible. Do you know what the best approach here would be?
I thought about starting with the tcp transport and adding a layer on top of it to manage each connection. We could continually resolve DNS every 5-10 seconds to see if any members have changed.
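A rough sketch of that layer, assuming one mangos Dialer per resolved address (the service name, port, and 10-second interval are placeholders):

```go
package main

import (
	"log"
	"net"
	"time"

	"go.nanomsg.org/mangos/v3"
	"go.nanomsg.org/mangos/v3/protocol/req"
	_ "go.nanomsg.org/mangos/v3/transport/tcp" // register the TCP transport
)

// refreshEndpoints re-resolves the headless service and keeps one dialer per
// returned address, dialing new members and closing ones that disappeared.
func refreshEndpoints(sock mangos.Socket, host, port string, dialers map[string]mangos.Dialer) error {
	addrs, err := net.LookupHost(host) // one A record per pod behind a headless service
	if err != nil {
		return err
	}
	seen := make(map[string]bool, len(addrs))
	for _, ip := range addrs {
		url := "tcp://" + net.JoinHostPort(ip, port)
		seen[url] = true
		if _, ok := dialers[url]; ok {
			continue // already connected to this member
		}
		d, err := sock.NewDialer(url, nil)
		if err != nil {
			return err
		}
		if err := d.Dial(); err != nil {
			return err
		}
		dialers[url] = d
	}
	for url, d := range dialers {
		if !seen[url] {
			_ = d.Close() // member no longer in DNS; drop its connection
			delete(dialers, url)
		}
	}
	return nil
}

func main() {
	sock, err := req.NewSocket()
	if err != nil {
		log.Fatal(err)
	}
	defer sock.Close()

	dialers := map[string]mangos.Dialer{}
	for ; ; time.Sleep(10 * time.Second) {
		if err := refreshEndpoints(sock, "my-backend.default.svc.cluster.local", "5555", dialers); err != nil {
			log.Printf("refresh failed: %v", err)
		}
		// ... issue requests on sock as usual
	}
}
```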
Please let me know your thoughts. Thank you!
For REQ or REP with TCP this is probably fairly straightforward to do -- except that instead of keeping just one connection alive, we would keep one to each returned server.
We would want to add a property indicating that we want to connect to all of the returned DNS entries, not just the first one. At that point, each connection would be used and considered when issuing requests, giving round-robin load balancing, for example. Hopefully that's what you want, and you don't need to be concerned about state sharing between the far sides.
This won't work for PAIR for obvious reasons. It would probably work out ok for pretty much the rest of the protocols though.
So the way to handle this is with a Dialer property (DialAll or something like that).
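None of this exists today; the sketch below is only meant to show the shape of what I mean (the "DIAL-ALL" option name is purely illustrative):

```go
package sketch

import (
	"go.nanomsg.org/mangos/v3"
)

// dialAll is a hypothetical helper: the "DIAL-ALL" option does not exist in
// mangos; it only illustrates how the proposed Dialer property might look.
func dialAll(sock mangos.Socket, url string) error {
	d, err := sock.NewDialer(url, nil)
	if err != nil {
		return err
	}
	// Proposed behavior: connect to every address the DNS lookup returns,
	// not just the first one.
	if err := d.SetOption("DIAL-ALL", true); err != nil {
		return err
	}
	return d.Dial()
}
```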
If you want to resolve to just one, well, that's what happens today. If the remote peer disconnects for any reason, we automatically reconnect. (The dialer does -- obviously the listener can't initiate a new connection.) We hit DNS each time we do that, so hopefully we get a different answer (that depends on the resolver).
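For what it's worth, the retry pacing -- and therefore how often DNS gets hit -- is already tunable; something like this, given a mangos.Socket named sock (the durations are just examples):

```go
package sketch

import (
	"time"

	"go.nanomsg.org/mangos/v3"
)

// tuneReconnect sets how quickly a dialer retries after the peer goes away,
// which also controls how often the address gets re-resolved.
func tuneReconnect(sock mangos.Socket) error {
	if err := sock.SetOption(mangos.OptionReconnectTime, 500*time.Millisecond); err != nil {
		return err
	}
	return sock.SetOption(mangos.OptionMaxReconnectTime, 10*time.Second)
}
```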
A cool enhancement might be an algorithm like Happy Eyeballs, where we dial out to all of them simultaneously but then disconnect everything except the first connection to negotiate. That would tend to resolve to whichever peer comes back first. That work could be done in the TCP dialer.
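Outside of mangos, the core of that race looks roughly like this (plain TCP dials for illustration; in the transport it would be whichever pipe finishes SP negotiation first):

```go
package sketch

import (
	"context"
	"net"
	"time"
)

// dialFirst dials every address concurrently, keeps whichever connection
// is established first, and closes the rest. Illustrative only.
func dialFirst(addrs []string, timeout time.Duration) (net.Conn, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)

	type result struct {
		conn net.Conn
		err  error
	}
	results := make(chan result, len(addrs))
	var d net.Dialer
	for _, a := range addrs {
		go func(a string) {
			c, err := d.DialContext(ctx, "tcp", a)
			results <- result{c, err}
		}(a)
	}

	var winner net.Conn
	var firstErr error
	pending := len(addrs)
	for winner == nil && pending > 0 {
		r := <-results
		pending--
		if r.err == nil {
			winner = r.conn // first successful connection wins
		} else if firstErr == nil {
			firstErr = r.err
		}
	}

	// Abort the stragglers and close any connections that land too late.
	go func(pending int) {
		cancel()
		for ; pending > 0; pending-- {
			if r := <-results; r.err == nil {
				r.conn.Close()
			}
		}
	}(pending)

	if winner == nil {
		return nil, firstErr
	}
	return winner, nil
}
```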
If this is something you need commercially, let me know and we can talk about how Staysail can help -- otherwise I'm happy to consider a PR if you feel equipped to do the work yourself.