Non-helm installation is not idempotent #1393
Hi! It's actually the case when helm is used: whenever you run an upgrade, the setup job is re-run, so the job itself is idempotent. That doesn't mean there is no bug somewhere that prevents it from being run in the first place.
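(For contrast with the template-based approach described below, here is a rough sketch of the helm-managed flow being referred to. The release name and namespace are illustrative, and the variables mirror the generator script further down; this is not a command taken from this issue.)

```bash
# With a real helm release, each upgrade processes the chart's hook
# annotations and re-runs the setup job itself, which is why re-running
# is safe in the helm-managed case.
# Release name, namespace, and chart location here are illustrative.
helm upgrade --install collection "${REPO_PATH}" \
  --namespace sumologic-agent \
  --create-namespace \
  --set sumologic.accessId="${sumologic_accessid}" \
  --set sumologic.accessKey="${sumologic_accesskey}"
```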
Thanks @perk-sumo. It's helpful to know the job is idempotent. For more context, we manage our k8s config with kustomize and apply it all at regular intervals. We have a kustomize generator that constructs the necessary sumologic config and dumps it onto stdout, where it is applied by kubectl. Our generator logic, for the moment, looks like:

```bash
# Create the sumologic-agent namespace
echo 'apiVersion: v1
kind: Namespace
metadata:
  name: sumologic-agent
---'

# Use `helm template` to create a kubernetes configuration.
# For proper functioning of the sumo agent, we set the namespace on all `Kind`s
# using 'yq'.
helm template "${REPO_PATH}" \
  --name-template 'collection' \
  --namespace 'sumologic-agent' \
  --set sumologic.accessId="${sumologic_accessid}" \
  --set sumologic.accessKey="${sumologic_accesskey}" \
  --set sumologic.collectorName="$(yq eval .collector_name "${config_file}")" \
  --set sumologic.clusterName="$(yq eval .cluster_name "${config_file}")" \
  --set sumologic.metrics.enabled=false \
  --set sumologic.traces.enabled=false \
  --set fluentd.events.enabled=false \
  --set cleanupEnabled=true | yq eval '.metadata.namespace="sumologic-agent"' -
```

At the end of the kustomize pipeline, there's a plain kubectl apply of the generated YAML. Of course, the general expectation with kubectl apply is that it should be idempotent. That is, however, not true with the YAML your helm chart generates. Reapplying the YAML -- as our CI does regularly -- fails with messages to the effect that the completed setup Job cannot be re-applied because its fields are immutable.
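(A quick way to see the hook annotations discussed next is to inspect the job directly. A minimal sketch, using the job and namespace names from this report:)

```bash
# Print the annotations on the completed setup job; the helm.sh/hook
# annotations discussed below are visible here.
kubectl get job collection-sumologic-setup \
  --namespace sumologic-agent \
  -o jsonpath='{.metadata.annotations}'
```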
You can google and see that this is a common error occurring when trying to re-apply completed jobs: kubernetes/kubernetes#89657. The job clearly has helm lifecycle annotations on it, which means that its deletion and re-creation are handled by helm rather than by a plain kubectl apply. As that issue mentions, the real fix here is the k8s TTL controller; I am not sure, but that might be an avenue for you to less tightly couple your installs to helm.
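(To illustrate the TTL idea: the TTL-after-finished controller keys off `.spec.ttlSecondsAfterFinished` on a Job. A minimal sketch of bolting this onto the rendered output follows; `rendered.yaml`, the 300-second value, and the exact yq expression are assumptions, not a tested recommendation.)

```bash
# Add a TTL to every Job in the rendered multi-document stream so completed
# Jobs are garbage-collected by the cluster instead of blocking the next
# kubectl apply. Requires a cluster with the TTL-after-finished feature.
# The select-based update leaves non-Job documents unchanged.
yq eval '(select(.kind == "Job") | .spec.ttlSecondsAfterFinished) = 300' rendered.yaml
```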
We can install and start pushing logs just fine using the "non-helm" mechanism. However, if we re-apply the configuration, it fails recreating the `collection-sumologic-setup` job. Is there a recommended way to make this idempotent? If the job is run more than once, are there problematic side-effects?
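(Not an official recommendation, but one interim workaround is to delete the completed job before each re-apply, since a finished Job cannot be re-applied in place. A sketch using the names above, with `rendered.yaml` standing in for the generated configuration:)

```bash
# Remove the completed setup job, if present, so the subsequent apply can
# recreate it cleanly; --ignore-not-found keeps the first run from failing.
kubectl delete job collection-sumologic-setup \
  --namespace sumologic-agent \
  --ignore-not-found
kubectl apply -f rendered.yaml  # rendered.yaml is a placeholder for the generated config
```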