Releases: GoogleCloudPlatform/marketplace-k8s-app-tools
Fix SIGINT handling in mpdev verify
Before this change, the mpdev verify command would not reliably respond to Ctrl-C (SIGINT). The fix was to add the init option to the docker command run by the mpdev verify script.
Upgrade helm and kubectl to latest versions
- kubectl versions updated to 1.21.12, 1.22.8, 1.23.7 following https://cloud.google.com/kubernetes-engine/docs/release-notes
- helm updated to 3.9.0 from https://github.com/helm/helm/releases
Fix transient auth error when deleting deployer service account
- Bug fix: do not wait for the deployer service account to be deleted from within the deployer pod, to avoid a transient auth error
Ensure /logs dir exists within /scripts/verify
/scripts/verify is typically run via mpdev verify, which creates/mounts a logs directory, but this change should make it easier to run /scripts/verify directly without /scripts/dev.
0.11.0
- Upgrade CRDs that are still using apiextensions.k8s.io/v1beta1 (see the CRD sketch below)
- Upgrade kubectl and helm versions and change base image to Ubuntu 20.04
- Create multiple log files for mpdev verify
- Set time limits for the garbage collection script
Full Changelog: 0.10.19...0.11.0
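For reference, a minimal sketch of what upgrading a CRD from apiextensions.k8s.io/v1beta1 to v1 looks like; the myapps.example.com resource below is a placeholder, not something shipped in this repo:

```yaml
apiVersion: apiextensions.k8s.io/v1   # was: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  names:
    kind: MyApp
    plural: myapps
    singular: myapp
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
      # v1 requires a structural schema per version; v1beta1 allowed a
      # top-level spec.validation block or no schema at all.
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              x-kubernetes-preserve-unknown-fields: true
```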
Install gke-gcloud-auth-plugin in dev container
Update kubectl versions in deployer image
With these kubectl versions, we recommend that partners using these base deployers indicate a clusterConstraints.k8sVersion of at least >=1.18.
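As a hedged sketch, such a constraint might look like the following in a partner's schema.yaml; aside from clusterConstraints.k8sVersion, the surrounding fields are illustrative boilerplate:

```yaml
x-google-marketplace:
  schemaVersion: v2
  applicationApiVersion: v1beta1
  publishedVersion: "1.0.0"
  publishedVersionMetadata:
    releaseNote: Illustrative release note.
  # Require clusters running Kubernetes 1.18 or newer.
  clusterConstraints:
    k8sVersion: ">=1.18"
properties:
  name:
    type: string
    x-google-marketplace:
      type: NAME
  namespace:
    type: string
    x-google-marketplace:
      type: NAMESPACE
```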
Support kubernetes.io/no-provisioner
Add support for the kubernetes.io/no-provisioner storage provisioner option.
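For illustration, a plain Kubernetes StorageClass using that provisioner (the class name is made up; this is the standard upstream pattern, not a snippet from this repo):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
# no-provisioner classes cannot provision volumes dynamically, so binding waits
# until a pod is scheduled and a matching pre-created PV can be selected.
volumeBindingMode: WaitForFirstConsumer
```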
Deprecate helm_v2 deployer
Like Helm v2 itself, the helm_v2 deployer is deprecated and will no longer receive updates, though the existing base deployer will remain available in the repo. To continue receiving updates, switch to the helm (v3) deployer (only needed if you explicitly switched from the helm deployer to the helm_v2 deployer when it was introduced in release 0.10.4).
Use WaitForFirstConsumer for GCP PersistentDisks
Pods can be unschedulable due to a volume node affinity conflict. This can happen when a pod has multiple PVCs but the persistent volumes were created in different zones, so not all PVs can be attached.
Kubernetes documentation suggests using WaitForFirstConsumer to avoid these issues. https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
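As an example, a sketch of a StorageClass along those lines; the class name and the choice of the legacy in-tree gce-pd provisioner are illustrative (newer clusters typically use the pd.csi.storage.gke.io CSI driver):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: pd-ssd-wait
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
# Delay volume binding until a pod using the PVC is scheduled, so the PV is
# created in the same zone as that pod and node affinity conflicts are avoided.
volumeBindingMode: WaitForFirstConsumer
```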