[1.9.0] Can't start (or update) a cluster #1064
No, it should use k8ssandra-client as the target Cassandra version is 4.1.x.
Hi @Mokto, I'm testing your configuration and will report back with my findings.
@Mokto, we were able to reproduce the bug.
Thanks!
I think we can close this with the 1.9.1 release.
Thanks for updating the ticket @Mokto!
@adejanovski I am still experiencing this issue. I'm happy to open another issue if this is a misconfiguration on my side. Here's the exception:
PodInitializing for sage-cassandra/cluster1-dc1-default-sts-0 (server-system-logger)
stream logs failed container "cassandra" in pod "cluster1-dc1-default-sts-0" is waiting to start: PodInitializing for sage-cassandra/cluster1-dc1-default-sts-0 (cassandra)
server-config-init Error: open /cassandra-base-config/cassandra-env.sh: no such file or directory
server-config-init Usage:
server-config-init k8ssandra config build [flags]
server-config-init
server-config-init Examples:
server-config-init
server-config-init # Process the config files from cass-operator input
server-config-init kubectl k8ssandra config build [<args>]
server-config-init
server-config-init
server-config-init Flags:
server-config-init --as string Username to impersonate for the operation. User could be a regular user or a service account in a namespace.
server-config-init --as-group stringArray Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
server-config-init --as-uid string UID to impersonate for the operation.
server-config-init --cache-dir string Default cache directory (default "/.kube/cache")
server-config-init --certificate-authority string Path to a cert file for the certificate authority
server-config-init --client-certificate string Path to a client certificate file for TLS
server-config-init --client-key string Path to a client key file for TLS
server-config-init --cluster string The name of the kubeconfig cluster to use
server-config-init --context string The name of the kubeconfig context to use
server-config-init --disable-compression If true, opt-out of response compression for all requests to the server
server-config-init -h, --help help for build
server-config-init --input string read config files from this directory instead of default
server-config-init --insecure-skip-tls-verify If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
server-config-init --kubeconfig string Path to the kubeconfig file to use for CLI requests.
server-config-init -n, --namespace string If present, the namespace scope for this CLI request
server-config-init --output string write config files to this directory instead of default
server-config-init --request-timeout string The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
server-config-init -s, --server string The address and port of the Kubernetes API server
server-config-init --tls-server-name string Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
server-config-init --token string Bearer token for authentication to the API server
server-config-init --user string The name of the kubeconfig user to use
server-config-init
stream logs failed container "server-system-logger" in pod "cluster1-dc1-default-sts-0" is waiting to start:
As an update, I removed the …

# Ref: https://docs-v2.k8ssandra.io/reference/crd/k8ssandra-operator-crds-latest/
# Ref: https://github.com/k8ssandra/k8ssandra-operator
apiVersion: k8ssandra.io/v1alpha1
kind: K8ssandraCluster
metadata:
  name: cluster1
spec:
  cassandra:
    # Ref: https://hub.docker.com/r/k8ssandra/cass-management-api/tags
    serverVersion: "4.1.2"
    clusterName: cluster1
    # Sets whether multiple Cassandra instances can be scheduled on the same node.
    # This should normally be false to ensure cluster resilience but may be set true
    # for test/dev scenarios to minimise the number of nodes required.
    softPodAntiAffinity: true
    # Use superuserSecretName to setup superuser pre-defined credentials for the
    # database in a Kubernetes secret. Cass Operator will read the secret and pass
    # the values to the Management API when managing the cluster. If this is
    # empty, Cass Operator will generate a secret instead.
    superuserSecretRef:
      name: ""
    # Limit each pod to a fixed 2 CPU cores and 8 GB of RAM.
    resources:
      requests:
        memory: 8Gi
        cpu: 2000m
      limits:
        memory: 13Gi
        cpu: 3000m
    tolerations:
      - key: "storage"
        operator: "Equal"
        value: "cassandra"
        effect: "NoSchedule"
    datacenters:
      - metadata:
          name: dc1
        # The number of server nodes.
        size: 3
        initContainers:
          - name: server-config-init # defaults cannot be overridden ?
            resources:
              requests:
                cpu: 1000m
                memory: 1Gi
              limits:
                cpu: 1000m
                memory: 1Gi
        storageConfig:
          cassandraDataVolumeClaimSpec:
            storageClassName: server-storage
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 100Gi
        config:
          jvmOptions:
            heapSize: 4Gi
            gc: G1GC
            gc_g1_rset_updating_pause_time_percent: 5
            gc_g1_max_gc_pause_ms: 300
          cassandraYaml:
            authenticator: org.apache.cassandra.auth.PasswordAuthenticator
            authorizer: org.apache.cassandra.auth.CassandraAuthorizer
            role_manager: org.apache.cassandra.auth.CassandraRoleManager
            # Ref: https://github.com/apache/cassandra/blob/cassandra-4.0.0/NEWS.txt#L374-L380
            sasi_indexes_enabled: true
            materialized_views_enabled: true

I was attempting to set resource requests and limits on the …
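Aside: the init-container resource override being attempted above may also be expressible one layer down. As a hedged sketch only (it assumes cass-operator's merge-by-name behaviour for `podTemplateSpec` containers; verify against the CRD reference for your operator version), the generated `CassandraDatacenter` could carry those resources like this:

```yaml
# Hypothetical sketch, not a confirmed fix: cass-operator merges containers
# declared in podTemplateSpec with its generated ones by name, so the
# requests/limits below would apply to the server-config-init init container.
# Field names assume the cass-operator CassandraDatacenter CRD.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: cassandra
  serverVersion: "4.1.2"
  size: 3
  podTemplateSpec:
    spec:
      initContainers:
        - name: server-config-init
          resources:
            requests:
              cpu: 1000m
              memory: 1Gi
            limits:
              cpu: 1000m
              memory: 1Gi
```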
What happened?
I can't start new clusters (or update existing ones) with version 1.9.0.

How to reproduce it (as minimally and precisely as possible):
Start a new operator or update an existing one. Create a cluster or let one of your clusters update. It will fail.
I think the "server-config-init" initContainer is using the Docker image "k8ssandra/k8ssandra-client:v0.2.0" even though it should use "datastax/cass-config-builder:1.0-ubi7".
Environment
K8ssandra Operator version:
2.9.0
Kubernetes version information:
1.27.3
Kubernetes cluster kind:
GKE
Manifests:
Anything else we need to know?:
When using datastax/cass-config-builder:1.0-ubi7, it starts properly.
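One way that workaround might be pinned explicitly is via cass-operator's image override for the config builder. A sketch only, assuming the `configBuilderImage` field of the `CassandraDatacenter` CRD (check the CRD reference for your operator version before relying on it):

```yaml
# Hypothetical sketch: pin the config-builder image on the generated
# CassandraDatacenter so server-config-init runs cass-config-builder
# instead of k8ssandra-client. Field name assumes the cass-operator CRD.
apiVersion: cassandra.datastax.com/v1beta1
kind: CassandraDatacenter
metadata:
  name: dc1
spec:
  clusterName: cluster1
  serverType: cassandra
  serverVersion: "4.1.2"
  size: 3
  configBuilderImage: datastax/cass-config-builder:1.0-ubi7
```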