fiorbd test failed #274
@liaow8 , hi, thanks for your report. It looks like there is some incorrect configuration that prevented fio from running the test. Can you send me the logs under CeTune/Log/? Or, if you're familiar with python and fio, you can check yourself: the process log records every command executed during the test, so it is easy to find the real failure point.
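(For anyone following along: a minimal sketch of inspecting those logs, using the file names attached later in this thread; paths are illustrative, not CeTune-documented.)

```bash
# Illustrative commands; file names taken from the attachments in this thread.
tail -n 50 CeTune/Log/cetune_process_log_file.log      # last commands CeTune executed
grep -in "error" CeTune/Log/cetune_error_log_file.log  # any recorded error lines
```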
Sorry, I'm not familiar with python and fio. I've uploaded the logs for you:
cetune_python_log_file.log: https://github.com/01org/CeTune/files/1389767/cetune_python_log_file.log
cetune_error_log_file.log: https://github.com/01org/CeTune/files/1389768/cetune_error_log_file.log
cetune_process_log_file.log: https://github.com/01org/CeTune/files/1389769/cetune_process_log_file.log
Hi,
I noticed that cetune gracefully interrupted on that failure, so the logs were moved to //mnt/data//15-3-fiorbd-seqwrite-4k-qd64-2g-100-400-rbd. Can you also send the logs under that folder?
Best regards,
Chendi
Please see the new logs, thank you.
@liaow8 , hi, from the logs I can see your test ran normally from 2017/10/17 09:47 - 09:55, without any error interrupts. Can you check?
Yes, I found the problem, but I don't know the reason. I have 3 osd nodes. When I set list_client to one of the nodes in the cluster configuration, the test succeeds; when I set list_client to all 3 nodes, the test fails. I don't know why, or how list_client should be set.
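(For reference, a sketch of the setting in question, assuming CeTune's key: value style in conf/all.conf; the exact syntax is an assumption and should be verified against your checkout.)

```bash
# Hypothetical all.conf lines (assumed syntax, not confirmed in this thread).
# Working single-client setup:
#   list_client: node01
# Failing three-client setup; every listed node must be reachable by
# passwordless ssh from the head and have fio with rbd support installed:
#   list_client: node01,node02,node03
grep -n "list_client" /CeTune/conf/all.conf   # check what is actually configured
```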
Make sure all 3 nodes can be auto-ssh'd from the head node, and that all three nodes have fio and ceph rbd installed.
And if you can reproduce the issue, please send me those logs.
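(A minimal sketch of that check, assuming node01-node03 as the client hostnames, taken from the log below, and a root login, which is an assumption, not stated in the thread.)

```bash
# Passwordless ssh from the head node to each client.
ssh-keygen -t rsa                      # skip if a key already exists
for h in node01 node02 node03; do
    ssh-copy-id "root@$h"              # root login is an assumption
done

# Verify fio is installed with the rbd ioengine on every client.
for h in node01 node02 node03; do
    ssh "root@$h" "fio --enghelp | grep -w rbd"
done
```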
I reproduced the problem, please see the logs.
When I start the benchmark test, it fails. How can I resolve this problem?
[2017-10-17T08:33:28.541710][LOG]: start to run performance test
[2017-10-17T08:33:28.546344][LOG]: Calculate Difference between Current Ceph Cluster Configuration with tuning
[2017-10-17T08:33:33.438428][LOG]: Tuning[analyzer] is not same with current configuration
[2017-10-17T08:33:33.908256][LOG]: Tuning has applied to ceph cluster, ceph is Healthy now
[2017-10-17T08:33:36.914239][LOG]: ============start deploy============
[2017-10-17T08:33:39.591453][LOG]: Shutting down mon daemon
[2017-10-17T08:33:40.237892][LOG]: Shutting down osd daemon
[2017-10-17T08:33:40.576835][LOG]: Starting mon daemon
[2017-10-17T08:33:40.943608][LOG]: Started mon.node02 daemon on node02
[2017-10-17T08:33:41.319379][LOG]: Started mon.node03 daemon on node03
[2017-10-17T08:33:41.696928][LOG]: Started mon.node01 daemon on node01
[2017-10-17T08:33:41.697034][LOG]: Starting osd daemon
[2017-10-17T08:33:42.005184][LOG]: Started osd.0 daemon on node01
[2017-10-17T08:33:42.317962][LOG]: Started osd.1 daemon on node01
[2017-10-17T08:33:42.634781][LOG]: Started osd.2 daemon on node02
[2017-10-17T08:33:42.954045][LOG]: Started osd.3 daemon on node03
[2017-10-17T08:33:43.432979][LOG]: not need create mgr
[2017-10-17T08:33:43.443442][LOG]: Clean process log file.
[2017-10-17T08:33:43.919403][WARNING]: Applied tuning, waiting ceph to be healthy
[2017-10-17T08:33:47.403901][WARNING]: Applied tuning, waiting ceph to be healthy
[2017-10-17T08:33:50.888564][LOG]: Tuning has applied to ceph cluster, ceph is Healthy now
[2017-10-17T08:33:52.350882][LOG]: RUNID: 13, RESULT_DIR: //mnt/data//13-3-fiorbd-seqwrite-4k-qd64-2g-100-400-rbd
[2017-10-17T08:33:52.351263][LOG]: Prerun_check: check if sysstat installed
[2017-10-17T08:33:52.658104][LOG]: Prerun_check: check if blktrace installed
[2017-10-17T08:33:53.332501][LOG]: check if FIO rbd engine installed
[2017-10-17T08:33:53.720802][LOG]: check if rbd volume fully initialized
[2017-10-17T08:33:54.206562][WARNING]: Ceph cluster used data occupied: 2.698 KB, planned_space: 10485760.0 KB
[2017-10-17T08:33:54.206722][WARNING]: rbd volume initialization has not be done
[2017-10-17T08:33:54.206871][LOG]: Preparing rbd volume
[2017-10-17T08:33:55.164264][LOG]: 1 FIO Jobs starts on node02
[2017-10-17T08:33:55.474774][LOG]: 1 FIO Jobs starts on node03
[2017-10-17T08:33:55.783428][LOG]: 1 FIO Jobs starts on node01
[2017-10-17T08:33:57.122272][WARNING]: 0 fio job still runing
[2017-10-17T08:33:57.122398][ERROR]: Planed to run 0 Fio Job, please check all.conf
[2017-10-17T08:33:57.123074][ERROR]: The test has been stopped, error_log: Traceback (most recent call last):
File "/CeTune/benchmarking/mod/benchmark.py", line 46, in go
self.prerun_check()
File "/CeTune/benchmarking/mod/bblock/fiorbd.py", line 89, in prerun_check
self.prepare_images()
File "/CeTune/benchmarking/mod/bblock/fiorbd.py", line 52, in prepare_images
raise KeyboardInterrupt
KeyboardInterrupt
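(Given that the traceback above originates in prepare_images in fiorbd.py, a minimal sketch of sanity checks one might run on the head node, using standard ceph/rbd tooling; the pool name rbd is an assumption.)

```bash
# Quick sanity checks after the failure above (standard ceph tooling).
ceph -s                  # overall cluster health
ceph osd pool ls         # confirm the benchmark pool exists
rbd ls rbd               # list images in the 'rbd' pool (pool name is an assumption)
rbd info rbd/<image>     # inspect one of the volumes CeTune prepared
```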