-
Hello! I use the Jobs plugin with the AMQP driver (RabbitMQ) and moved our daemons from separate processes to RoadRunner workers. RoadRunner version: 2.12.3. I expected a performance improvement, but instead I see degradation. I'm trying to bring performance back up by changing values in the config; so far nothing has worked, so I'm looking for help. More detailed information about the queue and load:
Old daemons:
New worker:
I assume that the workers block the execution of requests and wait for each other. Apparently this does not happen with separately running daemons? I tried increasing and decreasing the number of workers, the prefetch size, and the number of pollers. The best result came from changing the prefetch size, but it is still very far from the previous result with separately running daemons. My rr.yaml config looks like this:
Replies: 3 comments 18 replies
-
Hey @embargo2710 👋🏻
Workers execute requests asynchronously. This means that if you have 100 workers, 100 messages will be sent to them at once. I tested locally (using a local RabbitMQ instance in Docker), and with a simple worker you can consume up to 100k messages per second. That depends on disk IOPS, CPU, and many other factors, but a 10s average is too much. The general advice here is to profile your code. You can try Buggregator to find slow spots in your code.
You may set the number of concurrent pollers to CPU_CORES * 2. There is no need to set it to super-high numbers like 200.
This is the RabbitMQ prefetch: how many messages to pull and store locally in RR (like a cache) to save network round trips to RabbitMQ and increase speed.
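The two knobs discussed above (concurrent pollers and prefetch) live in the `jobs` section of rr.yaml. A minimal sketch, assuming RoadRunner 2.x configuration keys; the pipeline name, queue name, and numbers are placeholders, not recommendations:

```yaml
amqp:
  addr: amqp://guest:guest@127.0.0.1:5672/

jobs:
  # Concurrent pollers; a reasonable starting point is CPU_CORES * 2
  num_pollers: 16
  pool:
    num_workers: 16
  pipelines:
    example:
      driver: amqp
      config:
        # How many messages RR pulls and buffers locally per round trip
        prefetch: 100
        queue: example-queue
  consume: ["example"]
```

A larger prefetch amortizes round trips to RabbitMQ at the cost of more unacknowledged messages held per consumer, so it should be tuned against how long each job takes in the worker.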
Do not use… As for…
-
Let me try to summarise: you have an empty worker with a random sleep function inside. Then you bombard RabbitMQ with messages, which RR reads and sends to the worker. And the problem is that the latency is too high? (How do you measure it?)
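On the "how do you measure it?" question: one common approach is to embed a publish timestamp in the message payload and compute the delta when the worker picks it up. A minimal Python sketch of just the measurement logic, independent of RabbitMQ and RoadRunner (the payload shape and field names are illustrative assumptions):

```python
import json
import time


def make_message(body: str) -> str:
    # Embed a wall-clock publish timestamp in the payload.
    # Wall clock (time.time) is used because publisher and consumer
    # are normally different processes, so a monotonic clock would
    # not be comparable between them.
    return json.dumps({"body": body, "published_at": time.time()})


def handle(raw: str) -> tuple[str, float]:
    # Consumer side: decode the payload and compute queue + dispatch latency.
    msg = json.loads(raw)
    latency = time.time() - msg["published_at"]
    return msg["body"], latency


if __name__ == "__main__":
    raw = make_message("job-1")
    time.sleep(0.05)  # simulate time spent in the queue and in dispatch
    body, latency = handle(raw)
    print(f"{body}: latency ~{latency:.3f}s")
```

Aggregating these per-message deltas into an average (or better, percentiles) gives a latency figure that can be compared between the old standalone daemons and the RR workers.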
-
This depends on what your tasks are and how fast your code runs inside a worker; it should be chosen based on your application.
That is not strange, because
I don't agree with this statement. You don't seem to have read the RabbitMQ documentation (sorry for pointing that out). Batch consumption blocks execution by design, because you have to ACK all the messages. If you just want to read everything without acknowledging, you may use the AutoAck option or multiple ack. However, I see that I need to include a more verbose explanation of this in the docs, thanks for pointing it out 👍🏻
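For reference, acknowledgement behaviour for the AMQP driver is configured per pipeline. A sketch assuming 2.x AMQP pipeline option names; the pipeline and queue names and the values are placeholders:

```yaml
jobs:
  pipelines:
    example:
      driver: amqp
      config:
        prefetch: 100
        queue: example-queue
        # Acknowledge several deliveries with a single frame
        # instead of one-by-one
        multiple_ack: false
        # Requeue the message when the worker returns an error
        requeue_on_fail: false
```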
`prefetch` is a well-known parameter of many queues (SQS, RabbitMQ, Kafka, etc.). Literally the first statement: