talos_cluster_health fails, while talosctl health is fine #153

Open
stelb opened this issue Mar 15, 2024 · 3 comments

@stelb commented Mar 15, 2024

Hi,

  • I have two interfaces attached, one on the external and one on the internal network.
  • I provide the internal control plane IPs to talos_cluster_health.
  • The endpoint is one of the external control plane IPs (the call is sketched below).
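
Roughly, the data source call looks like this (a sketch; the resource names and the secrets reference are illustrative, not my literal config):

data "talos_cluster_health" "this" {
  client_configuration = talos_machine_secrets.this.client_configuration
  control_plane_nodes  = ["10.1.0.2", "10.1.0.6", "10.1.0.7"] # internal CP IPs
  endpoints            = ["xx.13.164.153"]                    # external CP IP
}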

First problem:
│ waiting for etcd members to be control plane nodes: etcd member ips ["10.1.0.6" "XX.75.176.68" "10.1.0.2"] are not subset of control plane node ips ["10.1.0.2" "10.1.0.6" "10.1.0.7"]
I added advertisedSubnets set to the internal CIDR.
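
The patch, roughly (a sketch; 10.1.0.0/24 is an assumed placeholder for the internal network, and the resource names are illustrative):

resource "talos_machine_configuration_apply" "cp" {
  client_configuration        = talos_machine_secrets.this.client_configuration
  machine_configuration_input = data.talos_machine_configuration.cp.machine_configuration
  node                        = "10.1.0.2"
  config_patches = [
    yamlencode({
      cluster = {
        etcd = {
          # limit the addresses etcd advertises to the internal network
          advertisedSubnets = ["10.1.0.0/24"]
        }
      }
    })
  ]
}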

Now etcd is OK, but there is an unexpected k8s node:
│ waiting for all k8s nodes to report: can't find expected node with IPs ["10.1.0.3"]
│ waiting for all k8s nodes to report: unexpected nodes with IPs ["XX.75.176.68"]
(I reduced the number of nodes.)

But when I check this with talosctl:

talosctl -n 10.1.0.3 -e xx.13.164.153 health

discovered nodes: ["10.1.0.3" "xx.75.176.68"]
waiting for etcd to be healthy: ...
waiting for etcd to be healthy: OK
waiting for etcd members to be consistent across nodes: ...
waiting for etcd members to be consistent across nodes: OK
waiting for etcd members to be control plane nodes: ...
waiting for etcd members to be control plane nodes: OK
waiting for apid to be ready: ...
waiting for apid to be ready: OK
waiting for all nodes memory sizes: ...
waiting for all nodes memory sizes: OK
waiting for all nodes disk sizes: ...
waiting for all nodes disk sizes: OK
waiting for kubelet to be healthy: ...
waiting for kubelet to be healthy: OK
waiting for all nodes to finish boot sequence: ...
waiting for all nodes to finish boot sequence: OK
waiting for all k8s nodes to report: ...
waiting for all k8s nodes to report: OK
waiting for all k8s nodes to report ready: ...
waiting for all k8s nodes to report ready: OK
waiting for all control plane static pods to be running: ...
waiting for all control plane static pods to be running: OK
waiting for all control plane components to be ready: ...
waiting for all control plane components to be ready: OK
waiting for kube-proxy to report ready: ...
waiting for kube-proxy to report ready: SKIP
waiting for coredns to report ready: ...
waiting for coredns to report ready: OK
waiting for all k8s nodes to report schedulable: ...
waiting for all k8s nodes to report schedulable: OK

Or with the public control plane IP:

talosctl -n xx.13.164.153 -e xx.13.164.153 health

discovered nodes: ["10.1.0.3" "xx.75.176.68"]
waiting for etcd to be healthy: ...
waiting for etcd to be healthy: OK
waiting for etcd members to be consistent across nodes: ...
waiting for etcd members to be consistent across nodes: OK
waiting for etcd members to be control plane nodes: ...
waiting for etcd members to be control plane nodes: OK
waiting for apid to be ready: ...
waiting for apid to be ready: OK
waiting for all nodes memory sizes: ...
waiting for all nodes memory sizes: OK
waiting for all nodes disk sizes: ...
waiting for all nodes disk sizes: OK
waiting for kubelet to be healthy: ...
waiting for kubelet to be healthy: OK
waiting for all nodes to finish boot sequence: ...
waiting for all nodes to finish boot sequence: OK
waiting for all k8s nodes to report: ...
waiting for all k8s nodes to report: OK
waiting for all k8s nodes to report ready: ...
waiting for all k8s nodes to report ready: OK
waiting for all control plane static pods to be running: ...
waiting for all control plane static pods to be running: OK
waiting for all control plane components to be ready: ...
waiting for all control plane components to be ready: OK
waiting for kube-proxy to report ready: ...
waiting for kube-proxy to report ready: SKIP
waiting for coredns to report ready: ...
waiting for coredns to report ready: OK
waiting for all k8s nodes to report schedulable: ...
waiting for all k8s nodes to report schedulable: OK

So what is the problem?

@JonasKop commented Sep 16, 2024

I have the same issue when using a VIP. It works with talosctl health ....

machine:
  network:
    interfaces:
      - interface: eth0
        dhcp: true
        vip:
          ip: 10.0.2.160

@spastorclovr commented Oct 8, 2024

Almost the same issue here: kubelet.nodeIP.validSubnets is set to the internal IPs and advertisedSubnets is set to the internal IPs, but the terraform data source still does not find the cluster healthy while the command talosctl health does.

The error is

 unexpected nodes with IPs

followed by the list of the private IPs of the worker nodes.
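
For reference, the relevant part of the patch looks roughly like this (a sketch; 10.1.0.0/24 stands in for our internal subnet):

config_patches = [
  yamlencode({
    machine = {
      kubelet = {
        nodeIP = {
          # force the kubelet to pick its node IP from the internal subnet
          validSubnets = ["10.1.0.0/24"]
        }
      }
    }
    cluster = {
      etcd = {
        # advertise etcd only on the internal subnet
        advertisedSubnets = ["10.1.0.0/24"]
      }
    }
  })
]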

@samos667 commented Nov 21, 2024

Same error: the node reported unhealthy by talos_cluster_health is a remote node linked via KubeSpan that is in a different network than the control planes.

[screenshot: talos_cluster_health error output]

Then talosctl with the same endpoint, but targeting only one control plane node, reports all nodes OK as it should:

[screenshot: talosctl health output]
