Slow task submission from Prefect DaskExecutor prevents cluster scaling. #1
Comments
Can you reproduce this without Pangeo Forge? Like, if you just create a really simple flow with 400k mapped tasks, do you see the same behavior?
☝️ @rabernat I will run a test to verify.
Can you also explain why there are so many tasks? Is that basically just how many files there are?
Correct. That is the total number of input files.
I wonder if we should be batching together some of these into larger tasks. That is possible at the
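A rough sketch of what that batching might look like, assuming Prefect 1.x; the task name, batch size, and 400k stand-in inputs are illustrative, not taken from the actual recipe:

```python
from prefect import Flow, task

BATCH_SIZE = 100  # illustrative value

@task
def cache_inputs_batch(file_keys):
    """Process a whole batch of input files inside a single mapped task."""
    for key in file_keys:
        pass  # download / cache one input file here

def make_batches(keys, size=BATCH_SIZE):
    # Split the full key list into contiguous batches at flow-build time.
    return [keys[i:i + size] for i in range(0, len(keys), size)]

with Flow("batched-cache-inputs") as flow:
    all_keys = list(range(400_000))                 # stand-in for the real file keys
    cache_inputs_batch.map(make_batches(all_keys))  # ~4k mapped tasks instead of 400k
```

This keeps the total work the same while cutting the number of task submissions the scheduler has to absorb by the batch size.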
@rabernat I tested with the following example flow:
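(The flow itself did not survive in this copy of the thread; the sketch below is a plausible stand-in, assuming Prefect 1.x: a trivial task mapped over 400,000 items, run against the existing cluster with a DaskExecutor. The scheduler address is a placeholder.)

```python
from prefect import Flow, task
from prefect.executors import DaskExecutor

@task
def noop(i):
    # Trivial per-item work, just to exercise task submission at scale.
    return i

with Flow("mapped-400k-test") as flow:
    noop.map(list(range(400_000)))

# Placeholder address; in practice this pointed at the deployed Dask cluster.
flow.run(executor=DaskExecutor(address="tcp://dask-scheduler:8786"))
```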
We did not see the same slow task submission issue. However, this revealed another, deeper issue: my scheduler pod (which is configured with 10GB of memory) was killed almost instantly with an OOM error. Additionally, we see worker unmanaged memory growing rapidly. As shown in the image above, unmanaged memory has inexplicably grown to 11GB after only 311 tasks have been processed. I'm flummoxed that this simple example map is not manageable with Prefect on instances with this resource capacity.
@rabernat I'm quickly putting together a pure Dask demonstration to try and narrow this down to a Prefect issue.
To quickly verify whether the issues we are experiencing are Prefect specific (there have been some related Dask issue discussions previously, e.g. dask/distributed#2757), I ran a small experiment to approximate the task submission behavior used by Prefect's DaskExecutor.
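The experiment code is not reproduced here; a rough sketch of that kind of test, submitting a large number of tiny futures directly through a distributed Client (the address and task body are placeholders), might look like:

```python
from distributed import Client

def noop(i):
    return i

client = Client("tcp://dask-scheduler:8786")  # placeholder address

# Submit futures one at a time, roughly mimicking per-task submission,
# rather than handing the whole collection to a single client.map call.
futures = [client.submit(noop, i, pure=False) for i in range(400_000)]
results = client.gather(futures)
```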
I did not encounter the task submission delays or the memory spikes demonstrated in the equivalent Prefect operation using the DaskExecutor.
We are experiencing extremely slow task submission via the DaskExecutor for very large mapped tasks. With previous flow tests where a task was mapped over roughly 20K items, task submission was sufficiently fast that our Dask cluster scaled workers up to the worker limit. But with a task mapped over 400K items, the DaskExecutor's task submission to the scheduler appears rate limited, and there are never sufficient tasks on the scheduler for it to create more workers and scale, so we are stuck with the cluster crawling along with the minimum number of workers.
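For reference, a DaskExecutor wired up for adaptive scaling might be configured roughly as below; the dask-gateway cluster class and worker limits are placeholders, not taken from this issue's actual deployment:

```python
from prefect.executors import DaskExecutor
from dask_gateway import GatewayCluster  # assumed cluster backend

# Adaptive scaling adds workers only when the scheduler has enough queued
# tasks to justify them, which is why slow task submission keeps the
# cluster pinned at the minimum worker count.
executor = DaskExecutor(
    cluster_class=GatewayCluster,
    adapt_kwargs={"minimum": 1, "maximum": 100},  # placeholder limits
)
```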
Note the relatively small number of tasks which the scheduler has received. Normally the number of cache_inputs tasks should grow very rapidly and the workers should be saturated, forcing the cluster to scale, but as you can see in the dashboard image below, task submission to the scheduler is slow for some reason.
The Prefect Slack discussion is at https://prefect-community.slack.com/archives/C01TK7XSLT0/p1631896033088800. @rabernat This might be another topic of technical discussion for when you meet with the Prefect team.