
sumologic-eks-sumologi-metrics-collector-0 pod having wrong args --config file instead of --config-file in 4.6.0 version of chart #3706

Closed
vinayaksakharkar opened this issue May 15, 2024 · 3 comments
Labels
question Further information is requested

Comments

@vinayaksakharkar

The sumologic-eks-sumologi-metrics-collector-0 pod has the wrong argument, --config file instead of --config-file, in version 4.6.0 of the chart.

The image is target-allocator:0.93.0. My question: am I using the right image for the sumologic-eks-sumologi-metrics-collector StatefulSet?

```
[root@ ~]# kubectl logs sumologic-eks-sumologi-metrics-collector-0 -n sumologic
unknown flag: --config
Usage of target-allocator:
      --config-file string                The path to the config file. (default "/conf/targetallocator.yaml")
      --enable-prometheus-cr-watcher      Enable Prometheus CRs as target sources
      --kubeconfig-path string            absolute path to the KubeconfigPath file (default "/.kube/config")
      --listen-addr string                The address where this service serves. (default ":8080")
      --zap-devel                         Development Mode defaults(encoder=consoleEncoder,logLevel=Debug,stackTraceLevel=Warn). Production Mode defaults(encoder=jsonEncoder,logLevel=Info,stackTraceLevel=Error)
      --zap-encoder encoder               Zap log encoding (one of 'json' or 'console')
      --zap-log-level level               Zap Level to configure the verbosity of logging. Can be one of 'debug', 'info', 'error', or any integer value > 0 which corresponds to custom debug levels of increasing verbosity
      --zap-stacktrace-level level        Zap Level at and above which stacktraces are captured (one of 'info', 'error', 'panic').
      --zap-time-encoding time-encoding   Zap time encoding (one of 'epoch', 'millis', 'nano', 'iso8601', 'rfc3339' or 'rfc3339nano'). Defaults to 'epoch'.
unknown flag: --config
```
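A note for anyone hitting the same error: it means the binary inside the pod is the target allocator (which accepts `--config-file`), while the operator is passing collector-style arguments. A quick way to confirm which image and args were rendered onto the StatefulSet (a sketch; the release and namespace names are the reporter's, adjust for your cluster):

```shell
# Print the container image the operator rendered for the collector pod.
kubectl get statefulset sumologic-eks-sumologi-metrics-collector -n sumologic \
  -o jsonpath='{.spec.template.spec.containers[0].image}{"\n"}'

# Print the args passed to that container.
kubectl get statefulset sumologic-eks-sumologi-metrics-collector -n sumologic \
  -o jsonpath='{.spec.template.spec.containers[0].args}{"\n"}'
```

If the image is a target-allocator image rather than an OpenTelemetry collector image, the flags will never match.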

@vinayaksakharkar vinayaksakharkar added the question Further information is requested label May 15, 2024
@vinayaksakharkar
Author

Here is the YAML for that StatefulSet.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    meta.helm.sh/release-name: sumologic-eks
    meta.helm.sh/release-namespace: sumologic
    opentelemetry-operator-config/sha256: ec8f9b08f1dde085346d7e707f4ad2c0be864931a9779c5522b8744198ffde8f
    prometheus.io/path: /metrics
    prometheus.io/port: "8888"
    prometheus.io/scrape: "true"
  creationTimestamp: "2024-04-23T21:23:21Z"
  generation: 8
  labels:
    app.kubernetes.io/component: opentelemetry-collector
    app.kubernetes.io/instance: sumologic.sumologic-eks-sumologi-metrics
    app.kubernetes.io/managed-by: opentelemetry-operator
    app.kubernetes.io/name: sumologic-eks-sumologi-metrics-collector
    app.kubernetes.io/part-of: opentelemetry
    app.kubernetes.io/version: target-allocator-0.89.0
    chart: sumologic-4.6.0
    heritage: Helm
    release: sumologic-eks
    sumologic.com/app: otelcol
    sumologic.com/component: metrics
    sumologic.com/scrape: "true"
  name: sumologic-eks-sumologi-metrics-collector
  namespace: sumologic
  ownerReferences:
  - apiVersion: opentelemetry.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: OpenTelemetryCollector
    name: sumologic-eks-sumologi-metrics
    uid: 708ef789-e8de-4ffc-ab74-e3b1153aa74c
  resourceVersion: "170136061"
  uid: bd291b44-83f7-41a9-8406-95a2f9c9ace7
spec:
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  podManagementPolicy: Parallel
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: opentelemetry-collector
      app.kubernetes.io/instance: sumologic.sumologic-eks-sumologi-metrics
      app.kubernetes.io/managed-by: opentelemetry-operator
      app.kubernetes.io/part-of: opentelemetry
  serviceName: sumologic-eks-sumologi-metrics-collector
  template:
    metadata:
      annotations:
        meta.helm.sh/release-name: sumologic-eks
        meta.helm.sh/release-namespace: sumologic
        opentelemetry-operator-config/sha256: ec8f9b08f1dde085346d7e707f4ad2c0be864931a9779c5522b8744198ffde8f
        prometheus.io/path: /metrics
        prometheus.io/port: "8888"
        prometheus.io/scrape: "false"
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: opentelemetry-collector
        app.kubernetes.io/instance: sumologic.sumologic-eks-sumologi-metrics
        app.kubernetes.io/managed-by: opentelemetry-operator
        app.kubernetes.io/name: sumologic-eks-sumologi-metrics-collector
        app.kubernetes.io/part-of: opentelemetry
        app.kubernetes.io/version: target-allocator-0.89.0
        chart: sumologic-4.6.0
        heritage: Helm
        release: sumologic-eks
        sumologic.com/app: otelcol
        sumologic.com/component: metrics
        sumologic.com/scrape: "true"
    spec:
      containers:
      - args:
        - --config-file=/conf/collector.yaml
        env:
        - name: METADATA_METRICS_SVC
          valueFrom:
            configMapKeyRef:
              key: metadataMetrics
              name: sumologic-configmap
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: SHARD
          value: "0"
        image: :target-allocator-0.89.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /
            port: 13133
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: otc-container
        ports:
        - containerPort: 8888
          name: metrics
          protocol: TCP
        - containerPort: 1777
          name: pprof
          protocol: TCP
        resources:
          limits:
            cpu: "1"
            memory: 2Gi
          requests:
            cpu: 100m
            memory: 768Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /conf
          name: otc-internal
        - mountPath: /tmp
          name: tmp
        - mountPath: /var/lib/storage/otc
          name: file-storage
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 999
      serviceAccount: sumologic-eks-sumologi-metrics
      serviceAccountName: sumologic-eks-sumologi-metrics
      shareProcessNamespace: false
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: collector.yaml
            path: collector.yaml
          name: sumologic-eks-sumologi-metrics-collector
        name: otc-internal
      - emptyDir: {}
        name: tmp
      - emptyDir: {}
        name: file-storage
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
status:
  availableReplicas: 0
  collisionCount: 0
  currentReplicas: 1
  currentRevision: sumologic-eks-sumologi-metrics-collector-7cc7f59d8b
  observedGeneration: 8
  replicas: 1
  updateRevision: sumologic-eks-sumologi-metrics-collector-7cc7f59d8b
  updatedReplicas: 1
```
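Based on the rendered StatefulSet above, the collector container's image was overridden with a target-allocator image. A hypothetical values.yaml fragment that would reproduce this (the `sumologic.otelcolImage` key path and the registry are assumptions for illustration, not copied from the reporter's values):

```yaml
# Hypothetical override that would reproduce the error: the global
# collector image points at the target allocator, so the pod runs
# the wrong binary and rejects the collector-style --config flag.
sumologic:
  otelcolImage:
    repository: ghcr.io/open-telemetry/opentelemetry-operator/target-allocator  # wrong binary
    tag: "0.93.0"
```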

@swiatekm

Can you post your values.yaml? It looks like you're trying to use the target allocator image where the collector image is expected.

@vinayaksakharkar
Author

> Can you post your values.yaml? It looks like you're trying to use the target allocator image where the collector image is expected.

Hi, thanks @swiatekm-sumo. I suspected the wrong image was being used. The same target-allocator image worked in version 4.2, which is why I didn't catch it earlier. I tested version 4.6 with sumologic-otel-collector-0.92.0-sumo-0 and it's working.
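For completeness, a sketch of what the working configuration would look like in values.yaml. Only the tag 0.92.0-sumo-0 comes from the comment above; the key path and registry are assumptions for illustration:

```yaml
# Point the collector image back at the Sumo Logic OTel collector
# distribution instead of the target allocator.
sumologic:
  otelcolImage:
    repository: public.ecr.aws/sumologic/sumologic-otel-collector  # assumed registry
    tag: "0.92.0-sumo-0"
```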
