This repository has been archived by the owner on Nov 8, 2022. It is now read-only.

Few failures #28

Open
evilezh opened this issue Nov 9, 2016 · 5 comments

Comments

@evilezh

evilezh commented Nov 9, 2016

this happens with DF plugin -> KAFKA plugin (0.17.0 and 0.16.1-beta tested):

time="2016-11-09T20:35:05Z" level=error msg="error with publisher job" _module=scheduler-job block=run error="Publish call error: Cannot marshal metrics to JSON format, err=json: unsupported value: NaN" job-type=publisher plugin-config=map[] plugin-name=kafka plugin-version=-1 

this happens with 0.18.0:

time="2016-11-09T20:32:56Z" level=error msg="error with publisher job" _module=scheduler-job block=run error="Publish call error: Cannot initialize a new Sarama SyncProducer using the given broker addresses ([localhost:9092]), err=kafka: client has run out of available brokers to talk to (Is your cluster reachable?)" job-type=publisher plugin-config=map[brokers:{localhost:9092} topic:{snap}] plugin-name=kafka plugin-version=-1 

Same configuration, only snapd binary changed.

@kindermoumoute

The first issue comes from a bug introduced in 0.16.1 and fixed in 0.18 (intelsdi-x/snap#1316), where the default values were not set in the config map. You can see plugin-config=map[] in the first example, and plugin-config=map[brokers:{localhost:9092} topic:{snap}] in the second example (0.18.0).

Cannot initialize a new Sarama SyncProducer using the given broker addresses ([localhost:9092]), err=kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

Are you sure you can reach Kafka on localhost:9092?
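
For reference, that message is Sarama's ErrOutOfBrokers, which the client returns when none of the configured brokers can be reached. A minimal standalone sketch of the call the publisher makes, assuming the Shopify/sarama client named in the error (this is not the plugin's actual code):

package main

import (
	"fmt"

	"github.com/Shopify/sarama"
)

func main() {
	cfg := sarama.NewConfig()
	cfg.Producer.Return.Successes = true // required for a SyncProducer

	// With nothing listening on localhost:9092 this returns
	// "kafka: client has run out of available brokers to talk to (Is your cluster reachable?)"
	producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, cfg)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer producer.Close()
}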

@evilezh
Author

evilezh commented Nov 10, 2016

As I wrote, I did not change the configuration files; I just downloaded the new snapd and snapctl. I tested at least twice, switching back and forth to confirm it is specifically a 0.18 problem.
I did not check whether the configuration file format changed. That could be an issue if a new config format is in place; let me check and confirm.
I do not have localhost:9092 defined in the configuration file (I have several servers on different ports), so I assume it is the default used when there is no config at all.

@jcooklin
Collaborator

Will you share what your task looks like? If you are starting snapd with a config, it would be helpful to see that too. Specifically, I'm interested in the config blocks for the publisher.

"plugin_name": "kafka", "config": {
"topic": "test",
"brokers": "172.17.0.14:9092"
}


@evilezh
Author

evilezh commented Nov 11, 2016

So, I have kafka configured in the global config rather than per task. Having it inside the tasks is actually much better, but here are both files for reference.

Global config:

---
log_level: 4
log_path: /var/log
log_truncate: false
log_colors: false
gomaxprocs: 1
control:
  cache_expiration: 750ms
  listen_addr: 127.0.0.1
  listen_port: 8082
  max_running_plugins: 10
  plugin_trust_level: 0

  plugins:
    collector:
      mesos:
        all:
          agent: 127.0.0.1:5051
    publisher:
      kafka:
        all:
          topic: "snap-metrics-1"
          brokers: "xx.xx.xx.xx:31000;yy.yy.yy.yy:31000;zz.zz.zz.zz:31000"
scheduler:
  work_manager_queue_size: 25
  work_manager_pool_size: 4
restapi:
  enable: true
  https: false
  rest_auth: false
  port: 8181
tribe:
  enable: false

Task:

{
  "version": 1,
  "schedule": {
    "type": "simple",
    "interval": "10s"
  },
  "name" : "collector-mesos",
  "max-failures": -1,
  "workflow": {
    "collect": {
      "metrics": {
        "/intel/mesos/*": {}
      },
      "config": {},
      "process": null,
      "publish": [
        {
          "plugin_name": "kafka"
        }
      ]
    }
  }
}

@kindermoumoute

It seems snapd is applying the default config over the global config. The issue doesn't come from this plugin; you can work around it by putting your config in the task manifest, as shown in the sketch below. But that's not the right behavior. Do you want to open an issue on Snap?
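
A sketch of that workaround, reusing the topic and brokers from the global config above (the broker addresses are the placeholders from that file; adjust to your cluster): replace the bare kafka entry in the task's publish block with one that carries its own config:

"publish": [
  {
    "plugin_name": "kafka",
    "config": {
      "topic": "snap-metrics-1",
      "brokers": "xx.xx.xx.xx:31000;yy.yy.yy.yy:31000;zz.zz.zz.zz:31000"
    }
  }
]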
