This repository has been archived by the owner on Oct 22, 2021. It is now read-only.

KubeCF Deploy to Kind Cluster stuck on diego-cell-0 #42

Open
bappy776 opened this issue Jun 16, 2021 · 0 comments

bappy776 commented Jun 16, 2021

I was trying to deploy CF on a kind Kubernetes cluster on my Mac by following https://kubecf.io/docs/tutorials/deploy-kind/. diego-cell-0 kept failing (10/12, Error) and the rest of the deployment made no further progress.

My Cluster Node

```
k get node -o wide
NAME                   STATUS   ROLES                  AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE       KERNEL-VERSION     CONTAINER-RUNTIME
kubecf-control-plane   Ready    control-plane,master   53m   v1.21.1   172.18.0.2    <none>        Ubuntu 21.04   5.10.25-linuxkit   containerd://1.5.2
```

Pod status

```
k get pod -A
NAMESPACE            NAME                                           READY   STATUS      RESTARTS   AGE
cfo                  quarks-cd9d4b96f-4ltpq                         1/1     Running     0          51m
cfo                  quarks-job-6d8d744bc6-qgdsq                    1/1     Running     0          51m
cfo                  quarks-secret-7d76f854dc-cnhqq                 1/1     Running     0          51m
cfo                  quarks-statefulset-f6dc85fb8-q7k8v             1/1     Running     0          51m
kube-system          coredns-558bd4d5db-6r2dz                       1/1     Running     0          51m
kube-system          coredns-558bd4d5db-vsszq                       1/1     Running     0          51m
kube-system          etcd-kubecf-control-plane                      1/1     Running     0          52m
kube-system          kindnet-2jhsw                                  1/1     Running     0          51m
kube-system          kube-apiserver-kubecf-control-plane            1/1     Running     0          52m
kube-system          kube-controller-manager-kubecf-control-plane   1/1     Running     0          52m
kube-system          kube-proxy-sqp9r                               1/1     Running     0          51m
kube-system          kube-scheduler-kubecf-control-plane            1/1     Running     0          52m
kubecf               api-0                                          0/17    Pending     0          28m
kubecf               auctioneer-0                                   6/6     Running     1          28m
kubecf               cc-worker-0                                    0/6     Init:0/11   0          28m
kubecf               cf-apps-dns-76947f98b5-rgdm5                   1/1     Running     0          45m
kubecf               coredns-quarks-7cf8f9f58d-4cfgl                1/1     Running     0          43m
kubecf               coredns-quarks-7cf8f9f58d-svznn                1/1     Running     0          43m
kubecf               credhub-0                                      8/8     Running     0          28m
kubecf               database-0                                     2/2     Running     0          43m
kubecf               database-seeder-9dbcfd8207815599-psphk         0/2     Completed   0          44m
kubecf               diego-api-0                                    9/9     Running     2          29m
kubecf               diego-cell-0                                   10/12   Error       19         28m
kubecf               doppler-0                                      6/6     Running     0          28m
kubecf               log-api-0                                      9/9     Running     0          28m
kubecf               log-cache-0                                    10/10   Running     0          29m
kubecf               nats-0                                         7/7     Running     0          29m
kubecf               router-0                                       7/7     Running     0          28m
kubecf               routing-api-0                                  6/6     Running     0          29m
kubecf               scheduler-0                                    0/12    Init:0/21   0          28m
kubecf               singleton-blobstore-0                          8/8     Running     0          29m
kubecf               tcp-router-0                                   7/7     Running     0          28m
kubecf               uaa-0                                          9/9     Running     0          29m
local-path-storage   local-path-provisioner-547f784dff-9qrs8        1/1     Running     0          51m
```
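As an aside, a quick way to pull just the stuck pods out of a listing like the one above is a small filter on the STATUS column. This is a hypothetical helper (not part of the KubeCF tutorial), shown here against a captured snippet of the output; normally you would pipe `kubectl get pod -A` straight into it:

```shell
# Hypothetical helper: keep only pods whose STATUS (column 4 in a `-A`
# listing) is neither Running nor Completed.
failing_pods() {
  awk 'NR > 1 && $4 !~ /^(Running|Completed)$/ {print $1 "/" $2 " -> " $4}'
}

# Captured sample of the listing above; live use: kubectl get pod -A | failing_pods
kubectl_output='NAMESPACE   NAME           READY   STATUS      RESTARTS   AGE
kubecf      api-0          0/17    Pending     0          28m
kubecf      diego-cell-0   10/12   Error       19         28m
kubecf      nats-0         7/7     Running     0          29m'

printf '%s\n' "$kubectl_output" | failing_pods
# prints:
# kubecf/api-0 -> Pending
# kubecf/diego-cell-0 -> Error
```

Note that statuses such as `Init:0/11` also fail the match, so init-blocked pods like cc-worker-0 and scheduler-0 show up too.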

diego-cell-0 Pod error

```
k --namespace kubecf describe pod diego-cell-0 | grep CrashLoopBackOff -A20 -B30
      /var/vcap/data/grootfs/store from diego-cell-ephemeral (rw,path="grootfs/store")
      /var/vcap/data/rep from rep-data (rw)
      /var/vcap/jobs from jobs-dir (rw)
      /var/vcap/sys from sys-dir (rw)
  rep-rep:
    Container ID:  containerd://0c24536fd91a7d3a5bd36de955d71612a9a56c70386cff9e926b03af3afe95cf
    Image:         ghcr.io/cloudfoundry-incubator/diego:SLE_15_SP2-29.1-7.0.0_374.gb8e8e6af-2.48.0
    Image ID:      ghcr.io/cloudfoundry-incubator/diego@sha256:559e8ff21e7225f4f50dba566b2ffdb9c7afb49e9c331ffae82f5d831a4c7961
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/dumb-init
      --
    Args:
      /var/vcap/all-releases/container-run/container-run
      --post-start-name
      /var/vcap/jobs/rep/bin/post-start
      --post-start-condition-name
      sh
      --post-start-condition-arg
      -c
      --post-start-condition-arg
      ss -nlt sport = 1800 | grep "LISTEN.*:1800"
      --job-name
      rep
      --process-name
      rep
      --
      /var/vcap/jobs/rep/bin/rep
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 16 Jun 2021 16:39:34 +1000
      Finished:     Wed, 16 Jun 2021 16:39:38 +1000
    Ready:          False
    Restart Count:  17
    Limits:
      memory:  3Gi
    Requests:
      memory:   2Gi
    Readiness:  exec [curl --head --fail --insecure --cert /var/vcap/jobs/rep/config/certs/tls.crt --key /var/vcap/jobs/rep/config/certs/tls.key https://127.0.0.1:1800/ping] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      AZ_INDEX:        1
      POD_ORDINAL:      (v1:metadata.labels['quarks.cloudfoundry.org/pod-ordinal'])
      REPLICAS:        1
      KUBE_AZ:         
      BOSH_AZ:         
      CF_OPERATOR_AZ:  
    Mounts:
--
--
      CF_OPERATOR_AZ:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dlqx7 (ro)
      /var/vcap/all-releases from rendering-data (rw)
      /var/vcap/data from diego-cell-ephemeral (rw)
      /var/vcap/data/container-metadata from diego-cell-ephemeral (rw,path="container-metadata")
      /var/vcap/data/garden-cni from diego-cell-ephemeral (rw,path="garden-cni")
      /var/vcap/jobs from jobs-dir (rw)
      /var/vcap/sys from sys-dir (rw)
  vxlan-policy-agent-vxlan-policy-agent:
    Container ID:  containerd://0dbbc8a22382f731193dc3f90984f22550ac46599861c4e5f7a66972757d0b47
    Image:         ghcr.io/cloudfoundry-incubator/silk:SLE_15_SP2-29.1-7.0.0_374.gb8e8e6af-2.33.0
    Image ID:      ghcr.io/cloudfoundry-incubator/silk@sha256:669ce7aa0cf6685e68eb039eec0403d4b67393582776b730189e6e83282a38ee
    Port:          <none>
    Host Port:     <none>
    Command:
      /usr/bin/dumb-init
      --
    Args:
      /var/vcap/all-releases/container-run/container-run
      --post-start-name
      /var/vcap/jobs/vxlan-policy-agent/bin/post-start
      --job-name
      vxlan-policy-agent
      --process-name
      vxlan-policy-agent
      --
      /var/vcap/packages/vxlan-policy-agent/bin/vxlan-policy-agent
      -config-file=/var/vcap/jobs/vxlan-policy-agent/config/vxlan-policy-agent.json
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Wed, 16 Jun 2021 16:39:06 +1000
      Finished:     Wed, 16 Jun 2021 16:39:06 +1000
    Ready:          False
    Restart Count:  17
    Limits:
      memory:  64Mi
    Requests:
      memory:  32Mi
    Environment:
      POD_ORDINAL:      (v1:metadata.labels['quarks.cloudfoundry.org/pod-ordinal'])
      REPLICAS:        1
      AZ_INDEX:        1
      KUBE_AZ:         
      BOSH_AZ:         
      CF_OPERATOR_AZ:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dlqx7 (ro)
    ```
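For context on the rep container's crash loop: per the describe output, container-run only treats the `rep` job as started once its post-start condition `ss -nlt sport = 1800 | grep "LISTEN.*:1800"` succeeds, i.e. once something is listening on port 1800. A minimal sketch of that gate, run against fabricated `ss -nlt` output rather than inside the real container:

```shell
# Fabricated `ss -nlt` output standing in for what the post-start condition
# would see inside the rep container once rep has bound port 1800.
ss_output='State   Recv-Q  Send-Q  Local Address:Port  Peer Address:Port
LISTEN  0       128     127.0.0.1:1800      0.0.0.0:*'

# Same grep gate as in the describe output above.
if printf '%s\n' "$ss_output" | grep -q 'LISTEN.*:1800'; then
  echo "rep listener up"
else
  # In the CrashLoopBackOff case rep exits before ever binding the port,
  # so this branch is what the post-start check keeps hitting.
  echo "rep not listening yet"
fi
# prints: rep listener up
```

Since rep here exits with code 1 after about four seconds, the port never gets bound and the same condition keeps failing, which matches the restart count of 17.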