Releases: Dynatrace/dynatrace-oneagent-operator
v0.7.1
Bug fixes
- Marked for Termination events are now reported as a point in time instead of a time range of a few minutes (#229)
- Fixed the error message shown when the OneAgent had already been removed from the cache but the node was still present (#232)
Other changes
- Added environment variable 'RELATED_IMAGE_DYNATRACE_ONEAGENT' as preparation for the Red Hat Marketplace release (#228)
- Fixed some problems with the current Travis CI build (#230)
Upgrading
The Operator can be upgraded from 0.7.0 with:

```sh
# Kubernetes
$ kubectl apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/v0.7.1/kubernetes.yaml

# OpenShift
$ oc apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/v0.7.1/openshift.yaml
```
v0.7.0
Improvements
- Added a setting to configure a proxy via the CR (#207)
- Added a setting to add custom CA certificates via the CR. Requires OneAgent image 1.39.1000 or newer (#208)
- No longer change the OneAgent .spec section to set defaults, easing integration with GitOps tools (#206)
- Show the operator phase in the `status.phase` field of the OneAgent object (#197)
- Added proper error handling for the Dynatrace API quota limit (#216)
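As a sketch, the new proxy and CA settings might look like this in the custom resource. The `proxy` and `trustedCAs` field names and their shapes below are assumptions based on the items above; check the CRD shipped with the release for the exact schema:

```yaml
apiVersion: dynatrace.com/v1alpha1
kind: OneAgent
metadata:
  name: oneagent
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  tokens: oneagent
  # Route traffic through a proxy (#207); assumed to take a literal value
  # (or, alternatively, a reference to a Secret).
  proxy:
    value: http://proxy.example.com:3128
  # Name of a ConfigMap holding custom CA certificates (#208);
  # requires OneAgent image 1.39.1000 or newer.
  trustedCAs: my-ca-certs
```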
Bug fixes
- Proxy environment variables (e.g., `http_proxy`, etc.) can be ignored on the Operator container when `skipCertCheck` is true (#204)
- Istio objects don't have an owner object, so they wouldn't get removed if the OneAgent object is deleted (#217)
- Handle sporadic (and benign) race conditions where the error below would appear (#194):
  `Operation cannot be fulfilled on oneagents.dynatrace.com "oneagent": the object has been modified; please apply your changes to the latest version and try again`
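This conflict comes from Kubernetes' optimistic concurrency: an update based on a stale `resourceVersion` is rejected, and the remedy is to re-read the object and retry. Below is a minimal stand-alone Go sketch of that pattern using a toy in-memory store; it is not the Operator's actual code, which would use client-go's `retry.RetryOnConflict` against the real API server:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// errConflict mimics the API server's rejection of updates that were
// based on a stale resourceVersion.
var errConflict = errors.New("the object has been modified; please apply your changes to the latest version and try again")

// store is a toy stand-in for the API server's optimistic concurrency.
type store struct {
	mu      sync.Mutex
	version int
	phase   string
}

// get returns the current phase and its resourceVersion.
func (s *store) get() (string, int) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.phase, s.version
}

// update succeeds only if the caller read the latest version.
func (s *store) update(basedOn int, phase string) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if basedOn != s.version {
		return errConflict
	}
	s.version++
	s.phase = phase
	return nil
}

// updateWithRetry re-reads the object and retries on conflict, the same
// idea client-go's retry.RetryOnConflict implements.
func updateWithRetry(s *store, phase string) error {
	var err error
	for i := 0; i < 5; i++ {
		_, v := s.get()
		if err = s.update(v, phase); !errors.Is(err, errConflict) {
			return err
		}
	}
	return err
}

func main() {
	s := &store{}
	_, v := s.get()
	s.update(v, "Deploying")          // a concurrent writer lands first
	fmt.Println(s.update(v, "Error")) // stale version: conflict error
	fmt.Println(updateWithRetry(s, "Running"), s.phase)
}
```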
Breaking changes
- Operator images will now be published on Docker Hub (docker.io/dynatrace/dynatrace-oneagent-operator) instead of Quay.
- This version drops support for Kubernetes 1.11, 1.12, and 1.13; support for OpenShift 3.11 and 4.1 (based on Kubernetes 1.11 and 1.13, respectively) is kept (#219, #220)
- While OpenShift 3.11 is supported, the minimum version required is 3.11.188 unless a workaround is applied to the `openshift.yaml` manifest; see the note in the README.
Other changes
- Work in progress to add support for ARM64 environments (#201, #203)
- Update to Operator SDK 0.15.1 (#200)
- Initial work to ease release automation (#198)
- Added automatic creation of CSV files for OLM (#210)
- Refactor of Marked for Termination feature, events will be sent only for deleted Nodes (#189, #196, #213, #214, #223)
- Support deprecation of `beta.kubernetes.io/arch` and `beta.kubernetes.io/os` labels (#199)
- Use `v1` instead of `v1beta1` for `rbac.authorization.k8s.io` objects (#215)
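Concretely, the last two items mean the manifests move away from the deprecated `beta.*` node labels and the `v1beta1` RBAC API group. The fragments below are illustrative, not the release's actual manifests:

```yaml
# Node selection: the beta.* labels are deprecated upstream (#199)
nodeSelector:
  kubernetes.io/arch: amd64   # was: beta.kubernetes.io/arch
  kubernetes.io/os: linux     # was: beta.kubernetes.io/os
---
# RBAC objects now use the GA API version (#215)
apiVersion: rbac.authorization.k8s.io/v1   # was: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: dynatrace-oneagent-operator
  namespace: dynatrace
rules: []
```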
Upgrading
The Operator can be upgraded from 0.6.0 with:

```sh
# Kubernetes
$ kubectl apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/v0.7.0/kubernetes.yaml

# OpenShift
$ oc apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/v0.7.0/openshift.yaml
```
v0.6.0
Improvements
Additional fields have been added to the OneAgent CRD:
- Allow custom DNS Policy for OneAgent pods (#162)
- The service account for pods can now be customized (#182, #187)
- Custom labels can be added to pods (#183)
A schema has also been added to the OneAgent CRD:
- Add OpenAPI V3 Schema to CRD objects (#171)
- Validate tokens for OneAgent and show results as conditions on OneAgent status section (#188)
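A sketch of how the new CRD fields might be combined in a OneAgent object. The field names `dnsPolicy`, `serviceAccountName`, and `labels` are inferred from the items above; verify them against the shipped CRD:

```yaml
apiVersion: dynatrace.com/v1alpha1
kind: OneAgent
metadata:
  name: oneagent
  namespace: dynatrace
spec:
  apiUrl: https://ENVIRONMENTID.live.dynatrace.com/api
  tokens: oneagent
  dnsPolicy: ClusterFirstWithHostNet      # custom DNS policy for OneAgent pods (#162)
  serviceAccountName: dynatrace-oneagent  # customized pod service account (#182, #187)
  labels:                                 # extra labels added to pods (#183)
    team: observability
```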
Other changes
- Operator Docker images have been merged, and the base image is now UBI (#179)
- Improve error logging from Dynatrace API requests on the Operator (#185)
- Operator log entries now use ISO-8601 timestamps (e.g., `"2019-10-30T12:59:43.717+0100"`) (#159)
- Most operations now use an HTTP header for authentication with the Dynatrace API (#167)
- Update to nested OLM bundle structure (#163)
- Code style improvements (#158, #175)
- Update to Operator SDK 0.12.0 and Go modules (#157, #172)
- Using istio.io/client-go to manage Istio objects (#174)
- Add OLM manifests for v0.6.0 (#193)
Breaking changes
- New OneAgent objects will be validated through the added schema by Kubernetes, so some issues may appear now, e.g., an integer value for `tokens` now needs to be a YAML string.
- From now on, we'll publish the installation manifests, `kubernetes.yaml` and `openshift.yaml`, as release attachments rather than keeping them in the repository code base. For this specific release, however, we'll keep both.
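For example, with the schema in place a bare numeric value for `tokens` is rejected, while the quoted form passes validation (illustrative values):

```yaml
spec:
  # Fails schema validation: YAML parses the bare value as an integer
  # tokens: 1234567890
  # Valid: quoting makes it a string
  tokens: "1234567890"
```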
Upgrading
The Operator can be upgraded from 0.5.x with:

```sh
# Kubernetes
$ kubectl apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/v0.6.0/kubernetes.yaml

# OpenShift
$ oc apply -f https://github.com/Dynatrace/dynatrace-oneagent-operator/releases/download/v0.6.0/openshift.yaml
```
v0.5.4
v0.5.3
v0.5.2
v0.5.1
v0.5.0
The Dynatrace OneAgent Operator v0.5.0 includes:
- Better detection and handling of node scaling events.
- Improved documentation for installation of the Operator in OCP 3.11 environments.
To upgrade from v0.4.0 you can run:
```sh
# Kubernetes
$ kubectl apply -f https://raw.githubusercontent.com/Dynatrace/dynatrace-oneagent-operator/v0.5.0/deploy/kubernetes.yaml

# OpenShift
$ oc apply -f https://raw.githubusercontent.com/Dynatrace/dynatrace-oneagent-operator/v0.5.0/deploy/openshift.yaml
```
v0.4.2
This release includes a bugfix:
- Handle a nil pointer panic in the OneAgent controller when reconciling Istio objects, caused by no VirtualService being created for IP-based communication hosts.
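The shape of the fix is a nil guard before dereferencing: for IP-based communication hosts no VirtualService is created, so the lookup can legitimately return nil. A minimal stand-alone Go sketch with hypothetical names, not the controller's real API:

```go
package main

import "fmt"

// virtualService stands in for the Istio object; the Operator's real code
// uses the Istio API types.
type virtualService struct{ host string }

// findVirtualService models the lookup that returns nil for IP-based
// communication hosts. (Hypothetical helper.)
func findVirtualService(host string, isIP bool) *virtualService {
	if isIP {
		return nil // no VirtualService is created for plain IPs
	}
	return &virtualService{host: host}
}

// reconcileHost guards against the nil dereference that caused the panic.
func reconcileHost(host string, isIP bool) string {
	vs := findVirtualService(host, isIP)
	if vs == nil {
		return "no VirtualService for " + host + ", skipping"
	}
	return "reconciled VirtualService for " + vs.host
}

func main() {
	fmt.Println(reconcileHost("abc123.live.dynatrace.com", false))
	fmt.Println(reconcileHost("198.51.100.10", true))
}
```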
v0.4.1
This release includes bugfixes:
- Error handling when new CRDs are created after deploying the Operator (relevant for automatic Istio management)
- Prevent the Operator from going into an infinite reconcile loop when no OneAgent pods are deployed
- Create appropriate Istio objects for HTTP and IP communication endpoints
Known Limitation
The `enableIstio` feature requires restarting the Operator if Istio was deployed after the Operator itself.
Background: the cache maintained by controller-runtime's Kubernetes client is not dynamic. The bug is reported in kubernetes-sigs/controller-runtime#321, and a fix is work in progress in kubernetes-sigs/controller-runtime#554.