
Performance Evaluation (Memo)

Date: 2015/09/16 20:00

Setting

  • Machine1
    • OS: CentOS7
    • Components
      • Hatohol server
        • events-statistics-system-wide-4 (3b5de1868da2e)
      • RabbitMQ server
      • MariaDB
    • OpenStack instance
      • # of CPUs: 4
      • bogomips: 6983.82
      • Memory: 8GB
  • Machine2
    • OS: Ubuntu 15.04
    • Component: events_generator.py

Result

  • Event generation rate
    • approx. 310 events/sec.
  • Event record rate
    • approx. 35 events/sec.

Date: 2015/09/25 20:00

Concurrent event write

I changed the number of test plugin processes (<g> in the following command line) that generate dummy events concurrently. The write rate in the Hatohol server increased with the number of processes, up to approx. 32 processes.

$  PYTHONPATH=~/hatohol/server/tools ./events_generator.py 192.168.15.10 192.168.15.10 -n 10000 -g <g>
  • g=1: Elapsed time: 281.050 [s], rate: 35.6 [events/s]
  • g=2: Elapsed time: 256.141 [s], rate: 39.0 [events/s]
  • g=4: Elapsed time: 130.551 [s], rate: 76.6 [events/s]
  • g=8: Elapsed time: 66.859 [s], rate: 149.6 [events/s]
  • g=16: Elapsed time: 37.858 [s], rate: 264.1 [events/s]
  • g=32: Elapsed time: 22.657 [s], rate: 441.4 [events/s]
  • g=64: Elapsed time: 18.916 [s], rate: 528.6 [events/s]
  • g=128: Elapsed time: 26.421 [s], rate: 378.5 [events/s]
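For reference, the sweep above can be driven by a small script along the following lines. This is only a sketch: the host addresses and PYTHONPATH are copied from the command above, and it assumes events_generator.py prints a line of the form "Elapsed time: ... [s], rate: ... [events/s]" as shown in this memo.

#!/usr/bin/env python
# Sketch: run events_generator.py with several numbers of generator
# processes (-g) and collect the reported write rate.
import os
import re
import subprocess

HATOHOL_HOST = "192.168.15.10"   # taken from the command above
AMQP_HOST = "192.168.15.10"
ENV = dict(os.environ, PYTHONPATH=os.path.expanduser("~/hatohol/server/tools"))

def run_once(num_generators, num_events=10000):
    cmd = ["./events_generator.py", HATOHOL_HOST, AMQP_HOST,
           "-n", str(num_events), "-g", str(num_generators)]
    out = subprocess.check_output(cmd, env=ENV, universal_newlines=True)
    match = re.search(r"rate:\s*([\d.]+)\s*\[events/s\]", out)
    return float(match.group(1)) if match else None

for g in (1, 2, 4, 8, 16, 32, 64, 128):
    print("g=%d: %s [events/s]" % (g, run_once(g)))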

I think the write rate increases because the CPU is used more efficiently as the number of threads of the HatoholArmPluginGateHAPI2 instances grows: each thread often waits for I/O inside the MariaDB (MySQL) client library, so additional threads keep the CPU busy.
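The effect can be seen in isolation with a toy example: when each write is dominated by I/O wait (simulated here with sleep), throughput grows almost linearly with the number of worker threads until the non-waiting part saturates. This is only a Python illustration, not Hatohol code.

# Toy illustration: throughput of an I/O-bound task vs. number of threads.
# Each "write" just sleeps 25 ms to mimic waiting for the DB library.
import time
from concurrent.futures import ThreadPoolExecutor

def fake_write(_):
    time.sleep(0.025)   # simulated I/O wait, no CPU use

def measure(num_threads, num_events=200):
    start = time.time()
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        list(pool.map(fake_write, range(num_events)))
    return num_events / (time.time() - start)

for n in (1, 2, 4, 8, 16, 32):
    print("%2d threads: %.1f events/s" % (n, measure(n)))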

Note that the machines and configuration are basically the same as in the 2015/09/16 log above, although the following components were updated.

  • Hatohol: hatohol-server-15.07_dev1_20150916_105927-1.el7.centos.x86_64
  • test-chocolat: fcd52bf

Date: 2015/09/25

Chunk size dependency

I changed the chunk size (the number of events per putEvent call) and measured the performance. The write rate increased with the chunk size.

Although I had expected a large effect because batching reduces the number of transactions, the dependency on the chunk size is weaker than I expected, as the following measured results show.

Condition: <g>=16

  • c=1 Elapsed time: 37.858 [s], rate: 264.1 [events/s] (cited from the above)
  • c=2 Elapsed time: 29.197 [s], rate: 342.5 [events/s]
  • c=3 Elapsed time: 26.621 [s], rate: 375.6 [events/s]
  • c=5 Elapsed time: 22.928 [s], rate: 436.2 [events/s]
  • c=10 Elapsed time: 22.092 [s], rate: 452.7 [events/s]
  • c=100 Elapsed time: 22.358 [s], rate: 447.3 [events/s]
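For context, the transaction-count argument at the SQL level looks like the sketch below: writing events one at a time commits once per event, while a chunk of c events can be committed in a single transaction. The table name, columns, and connection settings are hypothetical and not Hatohol's actual schema.

# Sketch: one commit per event vs. one commit per chunk (PyMySQL).
# Table/column names and connection settings are hypothetical.
import pymysql

conn = pymysql.connect(host="127.0.0.1", user="hatohol",
                       password="hatohol", database="hatohol_bench")

def write_per_event(events):
    with conn.cursor() as cur:
        for ev in events:
            cur.execute("INSERT INTO dummy_events (time, brief) VALUES (%s, %s)", ev)
            conn.commit()              # one transaction per event

def write_chunked(events, chunk_size):
    with conn.cursor() as cur:
        for i in range(0, len(events), chunk_size):
            cur.executemany(
                "INSERT INTO dummy_events (time, brief) VALUES (%s, %s)",
                events[i:i + chunk_size])
            conn.commit()              # one transaction per chunk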

Date: 2015/09/30 11:56

Simple benchmark with distributed virtual machines

When the Hatohol server, the DB (MariaDB) server, and the RabbitMQ server run on different virtual machines, the simple benchmark (g=1, c=1) yields approx. 20 events/s.

$ PYTHONPATH=~/hatohol/server/tools ./events_generator.py 192.168.16.2 192.168.16.6 -n 1000
Delete existing monitoring server: events-generator:0 (ID: 3)
Registered a monitoring server for event generation: events-generator:0 (ID: 4)
Generator: events-generator:0, # of events: 1000
Chunk size: 1
Receiver: events-generator:0
Got response of exchange profile
309.935 events/sec.
309.499 events/sec.
310.673 events/sec.
Completed: Generator: events-generator:0
Joined one process: 19762
Joined one process: 19763
Elapsed time: 50.051 [s], rate: 20.0 [events/s]
Saved parameters: parameters.yaml
$ sudo nova-manage vm list | grep cho | awk '{ print $1 "\t" $2 }'
chocolat-hatohol-server opy3
chocolat-db     opy2
chocolat-hatohol-web    opy1
chocolat-rabbitmq       botcom1