One worker does not shut down after code has finished running #58

Open
kjgoodrick opened this issue Nov 21, 2020 · 7 comments

Comments

@kjgoodrick
Contributor

What happened:
The code is run with mpirun across two nodes, each with 24 cores. Dask launches the scheduler on one core and 46 workers, as expected. The code executes correctly and Dask begins shutting down the workers. However, one worker is never shut down and keeps trying to reconnect to the scheduler until the job is canceled by the machine's job scheduler.

What you expected to happen:
All workers close and the program ends.

Minimal Complete Verifiable Example:

import pickle

from dask_mpi import initialize
initialize(interface='ib0', nanny=False)

from dask.distributed import Client, as_completed
client = Client()

# The code running is too much to share, but it is essentially as follows
def main():
    inputs = generate_inputs()

    # worker is an expensive function (2 seconds to 3 minutes depending on inputs)
    futures = [client.submit(worker, i) for i in inputs]

    results = [f.result() for f in as_completed(futures)]

    with open('output.pkl', 'wb') as f:
        pickle.dump(results, f)

if __name__ == '__main__':
    main()

Anything else we need to know?:
The worker that does not stop is apparently never sent a stop command:

distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:40013
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:40013
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:40013', name: 17, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:40013
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:40013', name: 17, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:40013

For comparison, here is a worker that does stop:

distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:45938
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:45938
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:45938', name: 24, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:45938
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:45938
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:45938', name: 24, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:45938

The script is launched with: mpirun -n 48 python real_script.py
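
As a possible workaround (not yet verified), I have considered shutting the cluster down explicitly before the script returns instead of relying on dask-mpi's exit handling. A minimal sketch, assuming Client.shutdown() reaches every worker:

# Hypothetical variant of main(): tell the scheduler to close all workers and
# itself explicitly before the interpreter exits, rather than relying on the exit hook.
def main():
    inputs = generate_inputs()
    futures = [client.submit(worker, i) for i in inputs]
    results = [f.result() for f in as_completed(futures)]

    with open('output.pkl', 'wb') as f:
        pickle.dump(results, f)

    # Explicit cluster shutdown before exit.
    client.shutdown()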

I also sometimes get messages like: distributed.core - INFO - Event loop was unresponsive in Worker for 37.01s. This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability. I am not sure whether this is related, but it is the only thing that looks out of place in the logs. My workers also write their results to disk.
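
If the unresponsive event loop is what makes the scheduler drop that worker before the stop message arrives, raising the comm timeouts might help. A sketch of what I could try (these are the standard distributed comm timeout keys; the values are guesses, not tuned), placed at the very top of the script so every MPI rank picks it up:

import dask

# Allow longer pauses before connections are considered dead; set before
# dask_mpi.initialize() so the scheduler and all workers inherit it.
dask.config.set({
    'distributed.comm.timeouts.connect': '120s',
    'distributed.comm.timeouts.tcp': '120s',
})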

I also tried using the nanny option, but it does not work on the machine I'm using.
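
For reference, the nanny attempt was the same setup with the flag flipped (sketch of what I ran; as noted, it did not work on this machine):

from dask_mpi import initialize

# Same call as in the example above, but with nanny processes enabled.
initialize(interface='ib0', nanny=True)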

Environment:

  • Dask version: 2.30.0 (dask-mpi: 2.21.0)
  • Python version: 3.8.6
  • Operating System: Red Hat Enterprise Linux Server release 7.4 (Maipo)
  • Install method (conda, pip, source): pip

Full log:

distributed.http.proxy - INFO - To route to workers diagnostics web server please install jupyter-server-proxy: python -m pip install jupyter-server-proxy
distributed.scheduler - INFO - Clear task state
distributed.scheduler - INFO -   Scheduler at:  tcp://10.225.6.104:44912
distributed.scheduler - INFO -   dashboard at:                     :8787
distributed.scheduler - INFO - Receive client connection: Client-d9107e91-2b7c-11eb-97eb-7cd30ab15d36
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.diskutils - INFO - Found stale lock file and directory '/gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-ba4tdd5_', purging
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:43113
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:43113
distributed.worker - INFO -          dashboard at:         10.225.6.104:35418
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-z0grodjc
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:39284
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:38768
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:39284
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:38768
distributed.worker - INFO -          dashboard at:         10.225.6.104:37192
distributed.worker - INFO -          dashboard at:         10.225.6.104:39707
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-enblknqx
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-fzhf05_d
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:34559
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:34559
distributed.worker - INFO -          dashboard at:         10.225.6.104:34188
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-xywktaxn
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:45376
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:45376
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:44944
distributed.worker - INFO -          dashboard at:         10.225.6.104:35331
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:44944
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          dashboard at:         10.225.6.104:40318
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-zz36b0gf
distributed.worker - INFO -               Threads:                          1
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:40013
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-n1k201ri
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:40013
distributed.worker - INFO -          dashboard at:         10.225.6.104:45114
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:33488
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:33488
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          dashboard at:         10.225.6.104:44399
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-9yuzlgug
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:43447
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:43447
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-fjyf9nr6
distributed.worker - INFO -          dashboard at:         10.225.6.104:45311
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-m68db4nv
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:35031
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:35031
distributed.worker - INFO -          dashboard at:         10.225.6.104:39546
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-p7trnsmk
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:43432
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:43432
distributed.worker - INFO -          dashboard at:         10.225.6.104:40536
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-3tlpc2xa
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:42101
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:33746
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:42101
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:33415
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:33746
distributed.worker - INFO -          dashboard at:         10.225.6.104:42755
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO -          dashboard at:         10.225.6.104:35304
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:33415
distributed.worker - INFO -          dashboard at:         10.225.6.104:39469
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-56s8wlwo
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-s79044xj
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-wjle8lcv
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:38000
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:38000
distributed.worker - INFO -          dashboard at:         10.225.6.104:46190
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-mhm6h_v8
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:42900
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:42900
distributed.worker - INFO -          dashboard at:         10.225.6.104:35810
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-tv24j2wo
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:36236
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:36236
distributed.worker - INFO -          dashboard at:         10.225.6.104:46029
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-6zl_co85
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:36233
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:36233
distributed.worker - INFO -          dashboard at:         10.225.6.104:40601
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-qc4ncig0
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:44390
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:44390
distributed.worker - INFO -          dashboard at:         10.225.6.104:42854
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-yappalm4
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:34029
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:34029
distributed.worker - INFO -          dashboard at:         10.225.6.104:46262
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-79y2ehjz
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:40996
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:40996
distributed.worker - INFO -          dashboard at:         10.225.6.104:38444
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-wzpgfxr3
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.104:40832
distributed.worker - INFO -          Listening to:   tcp://10.225.6.104:40832
distributed.worker - INFO -          dashboard at:         10.225.6.104:38454
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-8u_k7kul
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:43113', name: 10, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:43113
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:38768', name: 8, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:38768
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:39284', name: 11, memory: 0, processing: 0>
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:39284
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:34559', name: 9, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:34559
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:45376', name: 6, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:45376
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:44944', name: 16, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:44944
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:40013', name: 17, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:40013
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:33488', name: 21, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:33488
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:43447', name: 5, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:43447
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:35031', name: 7, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:35031
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:43432', name: 20, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:43432
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:42101', name: 22, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:42101
distributed.core - INFO - Starting established connection
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:33415', name: 13, memory: 0, processing: 0>
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:33415
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:33746', name: 3, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:33746
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:38000', name: 19, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:38000
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:42900', name: 23, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:42900
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:36236', name: 4, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:36236
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:36233', name: 2, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:36233
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:44390', name: 15, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:44390
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:34029', name: 14, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:34029
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:40996', name: 18, memory: 0, processing: 0>
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:40996
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.104:40832', name: 12, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.104:40832
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:44103
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:44103
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:44927
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:44927
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:43228
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:43228
distributed.worker - INFO -          dashboard at:         10.225.6.159:35445
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:40590
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:40590
distributed.worker - INFO -          dashboard at:         10.225.6.159:38631
distributed.worker - INFO -          dashboard at:         10.225.6.159:44185
distributed.worker - INFO -          dashboard at:         10.225.6.159:37303
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -               Threads:                          1
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:44333
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:44333
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-uezp5ofx
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-fo3lt7ma
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-4k1yy3bq
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-8i3gm77d
distributed.worker - INFO -          dashboard at:         10.225.6.159:38596
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-n8g1mmro
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:44440
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:44440
distributed.worker - INFO -          dashboard at:         10.225.6.159:43946
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-wz3j2411
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:44953
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:44953
distributed.worker - INFO -          dashboard at:         10.225.6.159:45055
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-krh9x27h
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:41592
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:37663
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:41592
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:37663
distributed.worker - INFO -          dashboard at:         10.225.6.159:44769
distributed.worker - INFO -          dashboard at:         10.225.6.159:33547
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-7xr_3j0v
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-xekb4lc1
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:36325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:36325
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -          dashboard at:         10.225.6.159:44743
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-qhud_jb2
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:34208
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:34208
distributed.worker - INFO -          dashboard at:         10.225.6.159:34413
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-nkrvs6b0
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:40590', name: 34, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:40590
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:43228', name: 33, memory: 0, processing: 0>
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:43228
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:44103', name: 26, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:44103
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:44927', name: 32, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:44927
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:44333', name: 39, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:44333
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:42154
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:42154
distributed.core - INFO - Starting established connection
distributed.worker - INFO -          dashboard at:         10.225.6.159:34141
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:34808
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:34808
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -          dashboard at:         10.225.6.159:36731
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-14gfprk1
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-6y2g43t0
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:44440', name: 27, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:44440
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:34466
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:34466
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:44953', name: 38, memory: 0, processing: 0>
distributed.worker - INFO -          dashboard at:         10.225.6.159:41691
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-mo90cfq0
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:44953
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:37663', name: 29, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:37663
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:41592', name: 28, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:41592
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:36325', name: 30, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:36325
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:34208', name: 44, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:42255
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:42255
distributed.worker - INFO -          dashboard at:         10.225.6.159:46453
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:34208
distributed.core - INFO - Starting established connection
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-dy6qlnby
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:42154', name: 42, memory: 0, processing: 0>
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:45994
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:45994
distributed.worker - INFO -          dashboard at:         10.225.6.159:44846
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-hd6zjz98
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:42154
distributed.core - INFO - Starting established connection
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:34466', name: 45, memory: 0, processing: 0>
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:34466
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:34808', name: 36, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:34808
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:45994', name: 46, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:45994
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:42255', name: 40, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:42255
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:37798
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:37798
distributed.worker - INFO -          dashboard at:         10.225.6.159:38649
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-8m1t189a
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:35120
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:35120
distributed.worker - INFO -          dashboard at:         10.225.6.159:39303
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-hommlr17
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:46204
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:46204
distributed.worker - INFO -          dashboard at:         10.225.6.159:43005
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:42744
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:42744
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -          dashboard at:         10.225.6.159:39842
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-_j7iqol0
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-d63_uex6
distributed.worker - INFO - -------------------------------------------------
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:33401
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:33401
distributed.worker - INFO -          dashboard at:         10.225.6.159:46755
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:37798', name: 35, memory: 0, processing: 0>
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-vr5hv790
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:37798
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:46204', name: 43, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:46204
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:35120', name: 37, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:35120
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:35201
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:35201
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:42744', name: 25, memory: 0, processing: 0>
distributed.worker - INFO -          dashboard at:         10.225.6.159:39119
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-ypr2xh75
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:42744
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:33401', name: 47, memory: 0, processing: 0>
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:33401
distributed.core - INFO - Starting established connection
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:40599
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:40599
distributed.worker - INFO -          dashboard at:         10.225.6.159:45595
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-8rm12_ao
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:35201', name: 41, memory: 0, processing: 0>
/projects/kygo7210/.conda/envs/simo_automator/lib/python3.8/site-packages/distributed/worker.py:482: UserWarning: The local_dir keyword has moved to local_directory
warnings.warn("The local_dir keyword has moved to local_directory")
distributed.worker - INFO -       Start worker at:   tcp://10.225.6.159:45938
distributed.worker - INFO -          Listening to:   tcp://10.225.6.159:45938
distributed.worker - INFO -          dashboard at:         10.225.6.159:42668
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:35201
distributed.core - INFO - Starting established connection
distributed.worker - INFO -               Threads:                          1
distributed.worker - INFO -                Memory:                  122.00 GB
distributed.worker - INFO -       Local Directory: /gpfs/summit/scratch/kygo7210/SIMO-Automator/dask-worker-space/worker-po5qe660
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:40599', name: 31, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:40599
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.scheduler - INFO - Register worker <Worker 'tcp://10.225.6.159:45938', name: 24, memory: 0, processing: 0>
distributed.scheduler - INFO - Starting worker compute stream, tcp://10.225.6.159:45938
distributed.core - INFO - Starting established connection
distributed.worker - INFO -         Registered to:   tcp://10.225.6.104:44912
distributed.worker - INFO - -------------------------------------------------
distributed.core - INFO - Starting established connection
distributed.core - INFO - Event loop was unresponsive in Worker for 37.01s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[... 21 more occurrences of the same warning, durations 36.94s–37.05s ...]
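The unresponsive-event-loop warnings above line up with the worker function holding the GIL for long stretches; while a task is blocking the event loop, a worker also cannot react to messages from the scheduler, which could be relevant to a worker missing or delaying its shutdown handling. A minimal sketch for quieting the warning (it does not make the work itself any friendlier to the event loop), assuming the distributed.admin.tick.limit config key is available in this version of distributed:

import dask

# Assumption: "distributed.admin.tick.limit" is the threshold that triggers the
# "Event loop was unresponsive" warning in this version of distributed.
# Setting it before dask_mpi.initialize() lets every MPI rank pick it up; the
# same value could also be exported as DASK_DISTRIBUTED__ADMIN__TICK__LIMIT
# in the job script.
dask.config.set({"distributed.admin.tick.limit": "60s"})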
[11/20/2020 03:09:14 PM] Run 1/420
[11/20/2020 03:09:14 PM] Run 2/420
[11/20/2020 03:09:14 PM] Run 3/420
[11/20/2020 03:09:14 PM] Run 4/420
[11/20/2020 03:09:14 PM] Run 5/420
[11/20/2020 03:09:14 PM] Run 6/420
[11/20/2020 03:09:16 PM] Run 7/420
[11/20/2020 03:09:16 PM] Run 8/420
[11/20/2020 03:09:16 PM] Run 9/420
[11/20/2020 03:09:16 PM] Run 10/420
[11/20/2020 03:09:16 PM] Run 11/420
[11/20/2020 03:09:16 PM] Run 12/420
[11/20/2020 03:09:16 PM] Run 13/420
[11/20/2020 03:09:17 PM] Run 14/420
[11/20/2020 03:09:17 PM] Run 15/420
[11/20/2020 03:09:18 PM] Run 16/420
[11/20/2020 03:09:18 PM] Run 17/420
[11/20/2020 03:09:18 PM] Run 18/420
[11/20/2020 03:09:18 PM] Run 19/420
[11/20/2020 03:09:18 PM] Run 20/420
[11/20/2020 03:09:20 PM] Run 21/420
[11/20/2020 03:09:20 PM] Run 22/420
[11/20/2020 03:09:21 PM] Run 23/420
[11/20/2020 03:09:22 PM] Run 24/420
[11/20/2020 03:09:22 PM] Run 25/420
[11/20/2020 03:09:22 PM] Run 26/420
[11/20/2020 03:09:23 PM] Run 27/420
[11/20/2020 03:09:23 PM] Run 28/420
[11/20/2020 03:09:23 PM] Run 29/420
[11/20/2020 03:09:23 PM] Run 30/420
distributed.core - INFO - Event loop was unresponsive in Worker for 48.80s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[... 23 more occurrences of the same warning, durations 48.73s–48.81s ...]
[11/20/2020 03:09:23 PM] Run 31/420
[11/20/2020 03:09:23 PM] Run 32/420
[11/20/2020 03:09:23 PM] Run 33/420
[11/20/2020 03:09:24 PM] Run 34/420
[11/20/2020 03:09:25 PM] Run 35/420
[11/20/2020 03:09:25 PM] Run 36/420
[11/20/2020 03:09:25 PM] Run 37/420
[11/20/2020 03:09:25 PM] Run 38/420
[11/20/2020 03:09:25 PM] Run 39/420
[11/20/2020 03:09:25 PM] Run 40/420
[11/20/2020 03:09:25 PM] Run 41/420
[11/20/2020 03:09:25 PM] Run 42/420
[11/20/2020 03:09:25 PM] Run 43/420
[11/20/2020 03:09:25 PM] Run 44/420
[11/20/2020 03:09:25 PM] Run 45/420
[11/20/2020 03:09:25 PM] Run 46/420
[11/20/2020 03:09:25 PM] Run 47/420
[11/20/2020 03:09:25 PM] Run 48/420
[11/20/2020 03:09:25 PM] Run 49/420
[11/20/2020 03:09:26 PM] Run 50/420
[11/20/2020 03:09:26 PM] Run 51/420
[11/20/2020 03:09:26 PM] Run 52/420
[11/20/2020 03:09:26 PM] Run 53/420
[11/20/2020 03:09:26 PM] Run 54/420
[11/20/2020 03:09:26 PM] Run 55/420
[11/20/2020 03:09:26 PM] Run 56/420
[11/20/2020 03:09:27 PM] Run 57/420
[11/20/2020 03:09:27 PM] Run 58/420
[11/20/2020 03:09:27 PM] Run 59/420
[11/20/2020 03:09:27 PM] Run 60/420
[11/20/2020 03:09:28 PM] Run 61/420
[11/20/2020 03:09:28 PM] Run 62/420
[11/20/2020 03:09:28 PM] Run 63/420
[11/20/2020 03:09:28 PM] Run 64/420
[11/20/2020 03:09:28 PM] Run 65/420
[11/20/2020 03:09:28 PM] Run 66/420
[11/20/2020 03:09:28 PM] Run 67/420
[11/20/2020 03:09:28 PM] Run 68/420
[11/20/2020 03:09:28 PM] Run 69/420
[11/20/2020 03:09:29 PM] Run 70/420
[11/20/2020 03:09:29 PM] Run 71/420
[11/20/2020 03:09:29 PM] Run 72/420
[11/20/2020 03:09:29 PM] Run 73/420
[11/20/2020 03:09:29 PM] Run 74/420
[11/20/2020 03:09:29 PM] Run 75/420
[11/20/2020 03:09:29 PM] Run 76/420
[11/20/2020 03:09:30 PM] Run 77/420
[11/20/2020 03:09:30 PM] Run 78/420
[11/20/2020 03:09:30 PM] Run 79/420
[11/20/2020 03:09:30 PM] Run 80/420
[11/20/2020 03:09:30 PM] Run 81/420
[11/20/2020 03:09:30 PM] Run 82/420
[11/20/2020 03:09:30 PM] Run 83/420
[11/20/2020 03:09:30 PM] Run 84/420
[11/20/2020 03:09:30 PM] Run 85/420
[11/20/2020 03:09:30 PM] Run 86/420
[11/20/2020 03:09:30 PM] Run 87/420
[11/20/2020 03:09:30 PM] Run 88/420
[11/20/2020 03:09:30 PM] Run 89/420
[11/20/2020 03:09:30 PM] Run 90/420
[11/20/2020 03:09:30 PM] Run 91/420
[11/20/2020 03:09:31 PM] Run 92/420
[11/20/2020 03:09:31 PM] Run 93/420
[11/20/2020 03:09:31 PM] Run 94/420
[11/20/2020 03:09:31 PM] Run 95/420
[11/20/2020 03:09:31 PM] Run 96/420
[11/20/2020 03:09:31 PM] Run 97/420
[11/20/2020 03:09:31 PM] Run 98/420
[11/20/2020 03:09:31 PM] Run 99/420
[11/20/2020 03:09:31 PM] Run 100/420
[11/20/2020 03:09:31 PM] Run 101/420
[11/20/2020 03:09:32 PM] Run 102/420
[11/20/2020 03:09:32 PM] Run 103/420
[11/20/2020 03:09:32 PM] Run 104/420
[11/20/2020 03:09:32 PM] Run 105/420
[11/20/2020 03:09:32 PM] Run 106/420
[11/20/2020 03:09:33 PM] Run 107/420
[11/20/2020 03:09:33 PM] Run 108/420
[11/20/2020 03:09:33 PM] Run 109/420
[11/20/2020 03:09:33 PM] Run 110/420
[11/20/2020 03:09:33 PM] Run 111/420
[11/20/2020 03:09:33 PM] Run 112/420
[11/20/2020 03:09:33 PM] Run 113/420
[11/20/2020 03:09:33 PM] Run 114/420
[11/20/2020 03:09:33 PM] Run 115/420
[11/20/2020 03:09:34 PM] Run 116/420
[11/20/2020 03:09:34 PM] Run 117/420
[11/20/2020 03:09:34 PM] Run 118/420
[11/20/2020 03:09:34 PM] Run 119/420
[11/20/2020 03:09:34 PM] Run 120/420
[11/20/2020 03:09:35 PM] Run 121/420
[11/20/2020 03:09:35 PM] Run 122/420
[11/20/2020 03:09:35 PM] Run 123/420
[11/20/2020 03:09:35 PM] Run 124/420
[11/20/2020 03:09:35 PM] Run 125/420
[11/20/2020 03:09:35 PM] Run 126/420
[11/20/2020 03:09:35 PM] Run 127/420
[11/20/2020 03:09:35 PM] Run 128/420
[11/20/2020 03:09:35 PM] Run 129/420
[11/20/2020 03:09:35 PM] Run 130/420
[11/20/2020 03:09:35 PM] Run 131/420
[11/20/2020 03:09:36 PM] Run 132/420
[11/20/2020 03:09:36 PM] Run 133/420
[11/20/2020 03:09:36 PM] Run 134/420
[11/20/2020 03:09:36 PM] Run 135/420
[11/20/2020 03:09:36 PM] Run 136/420
[11/20/2020 03:09:36 PM] Run 137/420
[11/20/2020 03:09:37 PM] Run 138/420
[11/20/2020 03:09:37 PM] Run 139/420
[11/20/2020 03:09:37 PM] Run 140/420
[11/20/2020 03:09:37 PM] Run 141/420
[11/20/2020 03:09:37 PM] Run 142/420
[11/20/2020 03:09:37 PM] Run 143/420
[11/20/2020 03:09:37 PM] Run 144/420
[11/20/2020 03:09:37 PM] Run 145/420
[11/20/2020 03:09:38 PM] Run 146/420
[11/20/2020 03:09:38 PM] Run 147/420
[11/20/2020 03:09:38 PM] Run 148/420
[11/20/2020 03:09:38 PM] Run 149/420
[11/20/2020 03:09:38 PM] Run 150/420
[11/20/2020 03:09:38 PM] Run 151/420
[11/20/2020 03:09:38 PM] Run 152/420
[11/20/2020 03:09:38 PM] Run 153/420
[11/20/2020 03:09:38 PM] Run 154/420
[11/20/2020 03:09:38 PM] Run 155/420
[11/20/2020 03:09:38 PM] Run 156/420
[11/20/2020 03:09:38 PM] Run 157/420
[11/20/2020 03:09:38 PM] Run 158/420
[11/20/2020 03:09:38 PM] Run 159/420
[11/20/2020 03:09:38 PM] Run 160/420
[11/20/2020 03:09:38 PM] Run 161/420
[11/20/2020 03:09:39 PM] Run 162/420
[11/20/2020 03:09:39 PM] Run 163/420
[11/20/2020 03:09:39 PM] Run 164/420
[11/20/2020 03:09:39 PM] Run 165/420
[11/20/2020 03:09:39 PM] Run 166/420
[11/20/2020 03:09:39 PM] Run 167/420
[11/20/2020 03:09:39 PM] Run 168/420
[11/20/2020 03:09:39 PM] Run 169/420
[11/20/2020 03:09:39 PM] Run 170/420
[11/20/2020 03:09:39 PM] Run 171/420
[11/20/2020 03:09:39 PM] Run 172/420
[11/20/2020 03:09:40 PM] Run 173/420
[11/20/2020 03:09:40 PM] Run 174/420
[11/20/2020 03:09:40 PM] Run 175/420
[11/20/2020 03:09:40 PM] Run 176/420
[11/20/2020 03:09:40 PM] Run 177/420
[11/20/2020 03:09:40 PM] Run 178/420
[11/20/2020 03:09:40 PM] Run 179/420
[11/20/2020 03:09:40 PM] Run 180/420
[11/20/2020 03:09:40 PM] Run 181/420
[11/20/2020 03:09:40 PM] Run 182/420
[11/20/2020 03:09:40 PM] Run 183/420
[11/20/2020 03:09:40 PM] Run 184/420
[11/20/2020 03:09:41 PM] Run 185/420
[11/20/2020 03:09:41 PM] Run 186/420
[11/20/2020 03:09:41 PM] Run 187/420
[11/20/2020 03:09:41 PM] Run 188/420
[11/20/2020 03:09:41 PM] Run 189/420
[11/20/2020 03:09:41 PM] Run 190/420
[11/20/2020 03:09:41 PM] Run 191/420
[11/20/2020 03:09:41 PM] Run 192/420
[11/20/2020 03:09:41 PM] Run 193/420
[11/20/2020 03:09:41 PM] Run 194/420
[11/20/2020 03:09:42 PM] Run 195/420
[11/20/2020 03:09:42 PM] Run 196/420
[11/20/2020 03:09:42 PM] Run 197/420
[11/20/2020 03:09:42 PM] Run 198/420
[11/20/2020 03:09:42 PM] Run 199/420
[11/20/2020 03:09:42 PM] Run 200/420
[11/20/2020 03:09:42 PM] Run 201/420
[11/20/2020 03:09:42 PM] Run 202/420
[11/20/2020 03:09:42 PM] Run 203/420
[11/20/2020 03:09:43 PM] Run 204/420
[11/20/2020 03:09:43 PM] Run 205/420
[11/20/2020 03:09:43 PM] Run 206/420
[11/20/2020 03:09:43 PM] Run 207/420
[11/20/2020 03:09:43 PM] Run 208/420
[11/20/2020 03:09:43 PM] Run 209/420
[11/20/2020 03:09:43 PM] Run 210/420
[11/20/2020 03:09:43 PM] Run 211/420
[11/20/2020 03:09:43 PM] Run 212/420
[11/20/2020 03:09:43 PM] Run 213/420
[11/20/2020 03:09:43 PM] Run 214/420
[11/20/2020 03:09:43 PM] Run 215/420
[11/20/2020 03:09:43 PM] Run 216/420
[11/20/2020 03:09:43 PM] Run 217/420
[11/20/2020 03:09:44 PM] Run 218/420
[11/20/2020 03:09:44 PM] Run 219/420
[11/20/2020 03:09:44 PM] Run 220/420
[11/20/2020 03:09:44 PM] Run 221/420
[11/20/2020 03:09:44 PM] Run 222/420
[11/20/2020 03:09:44 PM] Run 223/420
[11/20/2020 03:09:44 PM] Run 224/420
[11/20/2020 03:09:45 PM] Run 225/420
[11/20/2020 03:09:45 PM] Run 226/420
[11/20/2020 03:09:45 PM] Run 227/420
[11/20/2020 03:09:45 PM] Run 228/420
[11/20/2020 03:09:45 PM] Run 229/420
[11/20/2020 03:09:45 PM] Run 230/420
[11/20/2020 03:09:45 PM] Run 231/420
[11/20/2020 03:09:45 PM] Run 232/420
[11/20/2020 03:09:45 PM] Run 233/420
[11/20/2020 03:09:45 PM] Run 234/420
[11/20/2020 03:09:45 PM] Run 235/420
[11/20/2020 03:09:45 PM] Run 236/420
[11/20/2020 03:09:46 PM] Run 237/420
[11/20/2020 03:09:46 PM] Run 238/420
[11/20/2020 03:09:46 PM] Run 239/420
[11/20/2020 03:09:46 PM] Run 240/420
[11/20/2020 03:09:46 PM] Run 241/420
[11/20/2020 03:09:46 PM] Run 242/420
[11/20/2020 03:09:46 PM] Run 243/420
[11/20/2020 03:09:46 PM] Run 244/420
[11/20/2020 03:09:46 PM] Run 245/420
[11/20/2020 03:09:46 PM] Run 246/420
[11/20/2020 03:09:46 PM] Run 247/420
[11/20/2020 03:09:46 PM] Run 248/420
[11/20/2020 03:09:47 PM] Run 249/420
[11/20/2020 03:09:47 PM] Run 250/420
[11/20/2020 03:09:47 PM] Run 251/420
[11/20/2020 03:09:47 PM] Run 252/420
[11/20/2020 03:09:47 PM] Run 253/420
[11/20/2020 03:09:47 PM] Run 254/420
[11/20/2020 03:09:47 PM] Run 255/420
[11/20/2020 03:09:47 PM] Run 256/420
[11/20/2020 03:09:47 PM] Run 257/420
[11/20/2020 03:09:47 PM] Run 258/420
[11/20/2020 03:09:48 PM] Run 259/420
[11/20/2020 03:09:48 PM] Run 260/420
[11/20/2020 03:09:48 PM] Run 261/420
[11/20/2020 03:09:48 PM] Run 262/420
[11/20/2020 03:09:48 PM] Run 263/420
[11/20/2020 03:09:48 PM] Run 264/420
[11/20/2020 03:09:48 PM] Run 265/420
[11/20/2020 03:09:48 PM] Run 266/420
[11/20/2020 03:09:49 PM] Run 267/420
[11/20/2020 03:09:49 PM] Run 268/420
[11/20/2020 03:09:49 PM] Run 269/420
[11/20/2020 03:09:49 PM] Run 270/420
[11/20/2020 03:09:49 PM] Run 271/420
[11/20/2020 03:09:49 PM] Run 272/420
[11/20/2020 03:09:50 PM] Run 273/420
[11/20/2020 03:09:50 PM] Run 274/420
[11/20/2020 03:09:50 PM] Run 275/420
[11/20/2020 03:09:50 PM] Run 276/420
[11/20/2020 03:09:50 PM] Run 277/420
[11/20/2020 03:09:50 PM] Run 278/420
[11/20/2020 03:09:50 PM] Run 279/420
[11/20/2020 03:09:50 PM] Run 280/420
[11/20/2020 03:09:50 PM] Run 281/420
[11/20/2020 03:09:50 PM] Run 282/420
[11/20/2020 03:09:50 PM] Run 283/420
[11/20/2020 03:09:50 PM] Run 284/420
[11/20/2020 03:09:50 PM] Run 285/420
[11/20/2020 03:09:51 PM] Run 286/420
[11/20/2020 03:09:51 PM] Run 287/420
[11/20/2020 03:09:51 PM] Run 288/420
[11/20/2020 03:09:51 PM] Run 289/420
[11/20/2020 03:09:51 PM] Run 290/420
[11/20/2020 03:09:51 PM] Run 291/420
[11/20/2020 03:09:51 PM] Run 292/420
[11/20/2020 03:09:51 PM] Run 293/420
[11/20/2020 03:09:51 PM] Run 294/420
[11/20/2020 03:09:51 PM] Run 295/420
[11/20/2020 03:09:52 PM] Run 296/420
[11/20/2020 03:09:52 PM] Run 297/420
[11/20/2020 03:09:52 PM] Run 298/420
[11/20/2020 03:09:52 PM] Run 299/420
[11/20/2020 03:09:52 PM] Run 300/420
[11/20/2020 03:09:52 PM] Run 301/420
[11/20/2020 03:09:52 PM] Run 302/420
[11/20/2020 03:09:52 PM] Run 303/420
[11/20/2020 03:09:52 PM] Run 304/420
[11/20/2020 03:09:53 PM] Run 305/420
[11/20/2020 03:09:53 PM] Run 306/420
[11/20/2020 03:09:53 PM] Run 307/420
[11/20/2020 03:09:53 PM] Run 308/420
[11/20/2020 03:09:53 PM] Run 309/420
[11/20/2020 03:09:53 PM] Run 310/420
[11/20/2020 03:09:53 PM] Run 311/420
[11/20/2020 03:09:53 PM] Run 312/420
[11/20/2020 03:09:53 PM] Run 313/420
[11/20/2020 03:09:53 PM] Run 314/420
[11/20/2020 03:09:53 PM] Run 315/420
[11/20/2020 03:09:53 PM] Run 316/420
[11/20/2020 03:09:53 PM] Run 317/420
[11/20/2020 03:09:53 PM] Run 318/420
[11/20/2020 03:09:54 PM] Run 319/420
[11/20/2020 03:09:54 PM] Run 320/420
[11/20/2020 03:09:54 PM] Run 321/420
[11/20/2020 03:09:54 PM] Run 322/420
[11/20/2020 03:09:54 PM] Run 323/420
[11/20/2020 03:09:54 PM] Run 324/420
[11/20/2020 03:09:54 PM] Run 325/420
[11/20/2020 03:09:55 PM] Run 326/420
[11/20/2020 03:09:55 PM] Run 327/420
[11/20/2020 03:09:55 PM] Run 328/420
[11/20/2020 03:09:55 PM] Run 329/420
[11/20/2020 03:09:55 PM] Run 330/420
[11/20/2020 03:09:55 PM] Run 331/420
[11/20/2020 03:09:55 PM] Run 332/420
[11/20/2020 03:09:55 PM] Run 333/420
[11/20/2020 03:09:55 PM] Run 334/420
[11/20/2020 03:09:55 PM] Run 335/420
[11/20/2020 03:09:55 PM] Run 336/420
[11/20/2020 03:09:55 PM] Run 337/420
[11/20/2020 03:09:55 PM] Run 338/420
[11/20/2020 03:09:55 PM] Run 339/420
[11/20/2020 03:09:56 PM] Run 340/420
[11/20/2020 03:09:56 PM] Run 341/420
[11/20/2020 03:09:56 PM] Run 342/420
[11/20/2020 03:09:56 PM] Run 343/420
[11/20/2020 03:09:56 PM] Run 344/420
[11/20/2020 03:09:56 PM] Run 345/420
[11/20/2020 03:09:56 PM] Run 346/420
[11/20/2020 03:09:56 PM] Run 347/420
[11/20/2020 03:09:56 PM] Run 348/420
[11/20/2020 03:09:56 PM] Run 349/420
[11/20/2020 03:09:56 PM] Run 350/420
[11/20/2020 03:09:57 PM] Run 351/420
[11/20/2020 03:09:57 PM] Run 352/420
[11/20/2020 03:09:57 PM] Run 353/420
[11/20/2020 03:09:57 PM] Run 354/420
[11/20/2020 03:09:57 PM] Run 355/420
[11/20/2020 03:09:57 PM] Run 356/420
[11/20/2020 03:09:57 PM] Run 357/420
[11/20/2020 03:09:57 PM] Run 358/420
[11/20/2020 03:09:57 PM] Run 359/420
[11/20/2020 03:09:57 PM] Run 360/420
[11/20/2020 03:09:57 PM] Run 361/420
[11/20/2020 03:09:57 PM] Run 362/420
[11/20/2020 03:09:58 PM] Run 363/420
[11/20/2020 03:09:58 PM] Run 364/420
[11/20/2020 03:09:58 PM] Run 365/420
[11/20/2020 03:09:58 PM] Run 366/420
[11/20/2020 03:09:58 PM] Run 367/420
[11/20/2020 03:09:58 PM] Run 368/420
[11/20/2020 03:09:59 PM] Run 369/420
[11/20/2020 03:09:59 PM] Run 370/420
[11/20/2020 03:09:59 PM] Run 371/420
[11/20/2020 03:09:59 PM] Run 372/420
[11/20/2020 03:09:59 PM] Run 373/420
[11/20/2020 03:09:59 PM] Run 374/420
[11/20/2020 03:09:59 PM] Run 375/420
[11/20/2020 03:09:59 PM] Run 376/420
[11/20/2020 03:09:59 PM] Run 377/420
[11/20/2020 03:09:59 PM] Run 378/420
[11/20/2020 03:09:59 PM] Run 379/420
[11/20/2020 03:09:59 PM] Run 380/420
[11/20/2020 03:09:59 PM] Run 381/420
[11/20/2020 03:10:00 PM] Run 382/420
[11/20/2020 03:10:00 PM] Run 383/420
[11/20/2020 03:10:00 PM] Run 384/420
[11/20/2020 03:10:00 PM] Run 385/420
[11/20/2020 03:10:00 PM] Run 386/420
[11/20/2020 03:10:00 PM] Run 387/420
[11/20/2020 03:10:00 PM] Run 388/420
[11/20/2020 03:10:00 PM] Run 389/420
[11/20/2020 03:10:01 PM] Run 390/420
[11/20/2020 03:10:01 PM] Run 391/420
[11/20/2020 03:10:01 PM] Run 392/420
[11/20/2020 03:10:01 PM] Run 393/420
[11/20/2020 03:10:01 PM] Run 394/420
[11/20/2020 03:10:01 PM] Run 395/420
[11/20/2020 03:10:02 PM] Run 396/420
[11/20/2020 03:10:02 PM] Run 397/420
[11/20/2020 03:10:02 PM] Run 398/420
[11/20/2020 03:10:02 PM] Run 399/420
[11/20/2020 03:10:02 PM] Run 400/420
[11/20/2020 03:10:02 PM] Run 401/420
[11/20/2020 03:10:02 PM] Run 402/420
[11/20/2020 03:10:03 PM] Run 403/420
[11/20/2020 03:10:03 PM] Run 404/420
[11/20/2020 03:10:03 PM] Run 405/420
[11/20/2020 03:10:04 PM] Run 406/420
[11/20/2020 03:10:04 PM] Run 407/420
[11/20/2020 03:10:04 PM] Run 408/420
[11/20/2020 03:10:05 PM] Run 409/420
[11/20/2020 03:10:05 PM] Run 410/420
[11/20/2020 03:10:06 PM] Run 411/420
[11/20/2020 03:10:08 PM] Run 412/420
[11/20/2020 03:10:08 PM] Run 413/420
[11/20/2020 03:10:08 PM] Run 414/420
[11/20/2020 03:10:10 PM] Run 415/420
[11/20/2020 03:10:51 PM] Run 416/420
[11/20/2020 03:10:54 PM] Run 417/420
[11/20/2020 03:10:55 PM] Run 418/420
[11/20/2020 03:10:56 PM] Run 419/420
[11/20/2020 03:11:05 PM] Run 420/420
[11/20/2020 03:11:05 PM] Processed Run 1/420
[11/20/2020 03:11:06 PM] Processed Run 2/420
[11/20/2020 03:11:07 PM] Processed Run 3/420
[11/20/2020 03:11:08 PM] Processed Run 4/420
[11/20/2020 03:11:09 PM] Processed Run 5/420
[11/20/2020 03:11:11 PM] Processed Run 6/420
[11/20/2020 03:11:11 PM] Processed Run 7/420
[11/20/2020 03:11:11 PM] Processed Run 8/420
[11/20/2020 03:11:11 PM] Processed Run 9/420
[11/20/2020 03:11:11 PM] Processed Run 10/420
[11/20/2020 03:11:12 PM] Processed Run 11/420
[11/20/2020 03:11:12 PM] Processed Run 12/420
[11/20/2020 03:11:12 PM] Processed Run 13/420
[11/20/2020 03:11:12 PM] Processed Run 14/420
[11/20/2020 03:11:12 PM] Processed Run 15/420
[11/20/2020 03:11:12 PM] Processed Run 16/420
[11/20/2020 03:11:12 PM] Processed Run 17/420
[11/20/2020 03:11:12 PM] Processed Run 18/420
[11/20/2020 03:11:12 PM] Processed Run 19/420
[11/20/2020 03:11:12 PM] Processed Run 20/420
[11/20/2020 03:11:12 PM] Processed Run 21/420
[11/20/2020 03:11:12 PM] Processed Run 22/420
[11/20/2020 03:11:13 PM] Processed Run 23/420
[11/20/2020 03:11:13 PM] Processed Run 24/420
[11/20/2020 03:11:13 PM] Processed Run 25/420
[11/20/2020 03:11:13 PM] Processed Run 26/420
[11/20/2020 03:11:13 PM] Processed Run 27/420
[11/20/2020 03:11:13 PM] Processed Run 28/420
[11/20/2020 03:11:13 PM] Processed Run 29/420
[11/20/2020 03:11:13 PM] Processed Run 30/420
[11/20/2020 03:11:13 PM] Processed Run 31/420
[11/20/2020 03:11:21 PM] Processed Run 32/420
[11/20/2020 03:11:33 PM] Processed Run 33/420
[11/20/2020 03:11:41 PM] Processed Run 34/420
[11/20/2020 03:11:50 PM] Processed Run 35/420
[11/20/2020 03:12:01 PM] Processed Run 36/420
[11/20/2020 03:12:02 PM] Processed Run 37/420
[11/20/2020 03:12:02 PM] Processed Run 38/420
[11/20/2020 03:12:02 PM] Processed Run 39/420
[11/20/2020 03:12:10 PM] Processed Run 40/420
[11/20/2020 03:12:10 PM] Processed Run 41/420
[11/20/2020 03:12:10 PM] Processed Run 42/420
[11/20/2020 03:12:10 PM] Processed Run 43/420
[11/20/2020 03:12:10 PM] Processed Run 44/420
[11/20/2020 03:12:10 PM] Processed Run 45/420
[11/20/2020 03:12:10 PM] Processed Run 46/420
[11/20/2020 03:12:10 PM] Processed Run 47/420
[11/20/2020 03:12:10 PM] Processed Run 48/420
[11/20/2020 03:12:10 PM] Processed Run 49/420
[11/20/2020 03:12:11 PM] Processed Run 50/420
[11/20/2020 03:12:11 PM] Processed Run 51/420
[11/20/2020 03:12:11 PM] Processed Run 52/420
[11/20/2020 03:12:11 PM] Processed Run 53/420
[11/20/2020 03:12:11 PM] Processed Run 54/420
[11/20/2020 03:12:11 PM] Processed Run 55/420
[11/20/2020 03:12:11 PM] Processed Run 56/420
[11/20/2020 03:12:11 PM] Processed Run 57/420
[11/20/2020 03:12:11 PM] Processed Run 58/420
[11/20/2020 03:12:11 PM] Processed Run 59/420
[11/20/2020 03:12:11 PM] Processed Run 60/420
[11/20/2020 03:12:11 PM] Processed Run 61/420
[11/20/2020 03:12:11 PM] Processed Run 62/420
[11/20/2020 03:12:12 PM] Processed Run 63/420
[11/20/2020 03:12:12 PM] Processed Run 64/420
[11/20/2020 03:12:13 PM] Processed Run 65/420
[11/20/2020 03:12:14 PM] Processed Run 66/420
[11/20/2020 03:12:14 PM] Processed Run 67/420
[11/20/2020 03:12:14 PM] Processed Run 68/420
[11/20/2020 03:12:14 PM] Processed Run 69/420
[11/20/2020 03:12:14 PM] Processed Run 70/420
[11/20/2020 03:12:14 PM] Processed Run 71/420
[11/20/2020 03:12:14 PM] Processed Run 72/420
[11/20/2020 03:12:14 PM] Processed Run 73/420
[11/20/2020 03:12:14 PM] Processed Run 74/420
[11/20/2020 03:12:14 PM] Processed Run 75/420
[11/20/2020 03:12:14 PM] Processed Run 76/420
[11/20/2020 03:12:14 PM] Processed Run 77/420
[11/20/2020 03:12:14 PM] Processed Run 78/420
[11/20/2020 03:12:14 PM] Processed Run 79/420
[11/20/2020 03:12:14 PM] Processed Run 80/420
[11/20/2020 03:12:14 PM] Processed Run 81/420
[11/20/2020 03:12:14 PM] Processed Run 82/420
[11/20/2020 03:12:14 PM] Processed Run 83/420
[11/20/2020 03:12:14 PM] Processed Run 84/420
[11/20/2020 03:12:14 PM] Processed Run 85/420
[11/20/2020 03:12:14 PM] Processed Run 86/420
[11/20/2020 03:12:14 PM] Processed Run 87/420
[11/20/2020 03:12:14 PM] Processed Run 88/420
[11/20/2020 03:12:14 PM] Processed Run 89/420
[11/20/2020 03:12:14 PM] Processed Run 90/420
[11/20/2020 03:12:14 PM] Processed Run 91/420
[11/20/2020 03:12:25 PM] Processed Run 92/420
[11/20/2020 03:12:25 PM] Processed Run 93/420
[11/20/2020 03:12:26 PM] Processed Run 94/420
[11/20/2020 03:12:26 PM] Processed Run 95/420
[11/20/2020 03:12:27 PM] Processed Run 96/420
[11/20/2020 03:12:27 PM] Processed Run 97/420
[11/20/2020 03:12:27 PM] Processed Run 98/420
[11/20/2020 03:12:27 PM] Processed Run 99/420
[11/20/2020 03:12:27 PM] Processed Run 100/420
[11/20/2020 03:12:27 PM] Processed Run 101/420
[11/20/2020 03:12:27 PM] Processed Run 102/420
[11/20/2020 03:12:27 PM] Processed Run 103/420
[11/20/2020 03:12:27 PM] Processed Run 104/420
[11/20/2020 03:12:27 PM] Processed Run 105/420
[11/20/2020 03:12:27 PM] Processed Run 106/420
[11/20/2020 03:12:27 PM] Processed Run 107/420
[11/20/2020 03:12:28 PM] Processed Run 108/420
[11/20/2020 03:12:28 PM] Processed Run 109/420
[11/20/2020 03:12:28 PM] Processed Run 110/420
[11/20/2020 03:12:28 PM] Processed Run 111/420
[11/20/2020 03:12:28 PM] Processed Run 112/420
[11/20/2020 03:12:28 PM] Processed Run 113/420
[11/20/2020 03:12:28 PM] Processed Run 114/420
[11/20/2020 03:12:28 PM] Processed Run 115/420
[11/20/2020 03:12:28 PM] Processed Run 116/420
[11/20/2020 03:12:28 PM] Processed Run 117/420
[11/20/2020 03:12:28 PM] Processed Run 118/420
[11/20/2020 03:12:28 PM] Processed Run 119/420
[11/20/2020 03:12:28 PM] Processed Run 120/420
[11/20/2020 03:12:28 PM] Processed Run 121/420
[11/20/2020 03:12:28 PM] Processed Run 122/420
[11/20/2020 03:12:29 PM] Processed Run 123/420
[11/20/2020 03:12:29 PM] Processed Run 124/420
[11/20/2020 03:12:30 PM] Processed Run 125/420
[11/20/2020 03:12:30 PM] Processed Run 126/420
[11/20/2020 03:12:30 PM] Processed Run 127/420
[11/20/2020 03:12:30 PM] Processed Run 128/420
[11/20/2020 03:12:30 PM] Processed Run 129/420
[11/20/2020 03:12:31 PM] Processed Run 130/420
[11/20/2020 03:12:31 PM] Processed Run 131/420
[11/20/2020 03:12:31 PM] Processed Run 132/420
[11/20/2020 03:12:31 PM] Processed Run 133/420
[11/20/2020 03:12:31 PM] Processed Run 134/420
[11/20/2020 03:12:31 PM] Processed Run 135/420
[11/20/2020 03:12:31 PM] Processed Run 136/420
[11/20/2020 03:12:31 PM] Processed Run 137/420
[11/20/2020 03:12:31 PM] Processed Run 138/420
[11/20/2020 03:12:31 PM] Processed Run 139/420
[11/20/2020 03:12:31 PM] Processed Run 140/420
[11/20/2020 03:12:41 PM] Processed Run 141/420
[11/20/2020 03:12:41 PM] Processed Run 142/420
[11/20/2020 03:12:41 PM] Processed Run 143/420
[11/20/2020 03:12:41 PM] Processed Run 144/420
[11/20/2020 03:12:41 PM] Processed Run 145/420
[11/20/2020 03:12:42 PM] Processed Run 146/420
[11/20/2020 03:12:42 PM] Processed Run 147/420
[11/20/2020 03:12:42 PM] Processed Run 148/420
[11/20/2020 03:12:42 PM] Processed Run 149/420
[11/20/2020 03:12:42 PM] Processed Run 150/420
[11/20/2020 03:12:42 PM] Processed Run 151/420
[11/20/2020 03:12:42 PM] Processed Run 152/420
[11/20/2020 03:12:42 PM] Processed Run 153/420
[11/20/2020 03:12:43 PM] Processed Run 154/420
[11/20/2020 03:12:43 PM] Processed Run 155/420
[11/20/2020 03:12:44 PM] Processed Run 156/420
[11/20/2020 03:12:44 PM] Processed Run 157/420
[11/20/2020 03:12:44 PM] Processed Run 158/420
[11/20/2020 03:12:44 PM] Processed Run 159/420
[11/20/2020 03:12:44 PM] Processed Run 160/420
[11/20/2020 03:12:44 PM] Processed Run 161/420
[11/20/2020 03:12:44 PM] Processed Run 162/420
[11/20/2020 03:12:44 PM] Processed Run 163/420
[11/20/2020 03:12:44 PM] Processed Run 164/420
[11/20/2020 03:12:44 PM] Processed Run 165/420
[11/20/2020 03:12:44 PM] Processed Run 166/420
[11/20/2020 03:12:44 PM] Processed Run 167/420
[11/20/2020 03:12:44 PM] Processed Run 168/420
[11/20/2020 03:12:44 PM] Processed Run 169/420
[11/20/2020 03:12:44 PM] Processed Run 170/420
[11/20/2020 03:12:44 PM] Processed Run 171/420
[11/20/2020 03:12:45 PM] Processed Run 172/420
[11/20/2020 03:12:45 PM] Processed Run 173/420
[11/20/2020 03:12:45 PM] Processed Run 174/420
[11/20/2020 03:12:45 PM] Processed Run 175/420
[11/20/2020 03:12:45 PM] Processed Run 176/420
[11/20/2020 03:12:45 PM] Processed Run 177/420
[11/20/2020 03:12:45 PM] Processed Run 178/420
[11/20/2020 03:12:45 PM] Processed Run 179/420
[11/20/2020 03:12:45 PM] Processed Run 180/420
[11/20/2020 03:12:45 PM] Processed Run 181/420
[11/20/2020 03:12:45 PM] Processed Run 182/420
[11/20/2020 03:12:46 PM] Processed Run 183/420
[11/20/2020 03:12:46 PM] Processed Run 184/420
[11/20/2020 03:12:47 PM] Processed Run 185/420
[11/20/2020 03:12:47 PM] Processed Run 186/420
[11/20/2020 03:12:47 PM] Processed Run 187/420
[11/20/2020 03:12:47 PM] Processed Run 188/420
[11/20/2020 03:12:47 PM] Processed Run 189/420
[11/20/2020 03:12:47 PM] Processed Run 190/420
[11/20/2020 03:12:47 PM] Processed Run 191/420
[11/20/2020 03:12:48 PM] Processed Run 192/420
[11/20/2020 03:12:48 PM] Processed Run 193/420
[11/20/2020 03:12:48 PM] Processed Run 194/420
[11/20/2020 03:12:48 PM] Processed Run 195/420
[11/20/2020 03:12:48 PM] Processed Run 196/420
[11/20/2020 03:12:48 PM] Processed Run 197/420
[11/20/2020 03:12:48 PM] Processed Run 198/420
[11/20/2020 03:12:48 PM] Processed Run 199/420
[11/20/2020 03:12:48 PM] Processed Run 200/420
[11/20/2020 03:12:48 PM] Processed Run 201/420
[11/20/2020 03:12:48 PM] Processed Run 202/420
[11/20/2020 03:12:48 PM] Processed Run 203/420
[11/20/2020 03:12:48 PM] Processed Run 204/420
[11/20/2020 03:12:48 PM] Processed Run 205/420
[11/20/2020 03:12:48 PM] Processed Run 206/420
[11/20/2020 03:12:48 PM] Processed Run 207/420
[11/20/2020 03:12:48 PM] Processed Run 208/420
[11/20/2020 03:12:48 PM] Processed Run 209/420
[11/20/2020 03:12:48 PM] Processed Run 210/420
[11/20/2020 03:12:48 PM] Processed Run 211/420
[11/20/2020 03:12:48 PM] Processed Run 212/420
[11/20/2020 03:13:02 PM] Processed Run 213/420
[11/20/2020 03:13:02 PM] Processed Run 214/420
[11/20/2020 03:13:03 PM] Processed Run 215/420
[11/20/2020 03:13:03 PM] Processed Run 216/420
[11/20/2020 03:13:03 PM] Processed Run 217/420
[11/20/2020 03:13:03 PM] Processed Run 218/420
[11/20/2020 03:13:03 PM] Processed Run 219/420
[11/20/2020 03:13:03 PM] Processed Run 220/420
[11/20/2020 03:13:04 PM] Processed Run 221/420
[11/20/2020 03:13:04 PM] Processed Run 222/420
[11/20/2020 03:13:04 PM] Processed Run 223/420
[11/20/2020 03:13:04 PM] Processed Run 224/420
[11/20/2020 03:13:04 PM] Processed Run 225/420
[11/20/2020 03:13:04 PM] Processed Run 226/420
[11/20/2020 03:13:04 PM] Processed Run 227/420
[11/20/2020 03:13:04 PM] Processed Run 228/420
[11/20/2020 03:13:04 PM] Processed Run 229/420
[11/20/2020 03:13:04 PM] Processed Run 230/420
[11/20/2020 03:13:04 PM] Processed Run 231/420
[11/20/2020 03:13:04 PM] Processed Run 232/420
[11/20/2020 03:13:04 PM] Processed Run 233/420
[11/20/2020 03:13:04 PM] Processed Run 234/420
[11/20/2020 03:13:04 PM] Processed Run 235/420
[11/20/2020 03:13:04 PM] Processed Run 236/420
[11/20/2020 03:13:04 PM] Processed Run 237/420
[11/20/2020 03:13:04 PM] Processed Run 238/420
[11/20/2020 03:13:04 PM] Processed Run 239/420
[11/20/2020 03:13:04 PM] Processed Run 240/420
[11/20/2020 03:13:04 PM] Processed Run 241/420
[11/20/2020 03:13:05 PM] Processed Run 242/420
[11/20/2020 03:13:05 PM] Processed Run 243/420
[11/20/2020 03:13:06 PM] Processed Run 244/420
[11/20/2020 03:13:06 PM] Processed Run 245/420
[11/20/2020 03:13:07 PM] Processed Run 246/420
[11/20/2020 03:13:07 PM] Processed Run 247/420
[11/20/2020 03:13:07 PM] Processed Run 248/420
[11/20/2020 03:13:07 PM] Processed Run 249/420
[11/20/2020 03:13:07 PM] Processed Run 250/420
[11/20/2020 03:13:07 PM] Processed Run 251/420
[11/20/2020 03:13:07 PM] Processed Run 252/420
[11/20/2020 03:13:07 PM] Processed Run 253/420
[11/20/2020 03:13:07 PM] Processed Run 254/420
[11/20/2020 03:13:07 PM] Processed Run 255/420
[11/20/2020 03:13:07 PM] Processed Run 256/420
[11/20/2020 03:13:07 PM] Processed Run 257/420
[11/20/2020 03:13:07 PM] Processed Run 258/420
[11/20/2020 03:13:07 PM] Processed Run 259/420
[11/20/2020 03:13:07 PM] Processed Run 260/420
[11/20/2020 03:13:07 PM] Processed Run 261/420
[11/20/2020 03:13:07 PM] Processed Run 262/420
[11/20/2020 03:13:08 PM] Processed Run 263/420
[11/20/2020 03:13:08 PM] Processed Run 264/420
[11/20/2020 03:13:08 PM] Processed Run 265/420
[11/20/2020 03:13:08 PM] Processed Run 266/420
[11/20/2020 03:13:08 PM] Processed Run 267/420
[11/20/2020 03:13:08 PM] Processed Run 268/420
[11/20/2020 03:13:08 PM] Processed Run 269/420
[11/20/2020 03:13:08 PM] Processed Run 270/420
[11/20/2020 03:13:08 PM] Processed Run 271/420
[11/20/2020 03:13:08 PM] Processed Run 272/420
[11/20/2020 03:13:09 PM] Processed Run 273/420
[11/20/2020 03:13:09 PM] Processed Run 274/420
[11/20/2020 03:13:10 PM] Processed Run 275/420
[11/20/2020 03:13:26 PM] Processed Run 276/420
[11/20/2020 03:13:26 PM] Processed Run 277/420
[11/20/2020 03:13:26 PM] Processed Run 278/420
[11/20/2020 03:13:26 PM] Processed Run 279/420
[11/20/2020 03:13:26 PM] Processed Run 280/420
[11/20/2020 03:13:27 PM] Processed Run 281/420
[11/20/2020 03:13:27 PM] Processed Run 282/420
[11/20/2020 03:13:27 PM] Processed Run 283/420
[11/20/2020 03:13:27 PM] Processed Run 284/420
[11/20/2020 03:13:27 PM] Processed Run 285/420
[11/20/2020 03:13:27 PM] Processed Run 286/420
[11/20/2020 03:13:27 PM] Processed Run 287/420
[11/20/2020 03:13:27 PM] Processed Run 288/420
[11/20/2020 03:13:27 PM] Processed Run 289/420
[11/20/2020 03:13:27 PM] Processed Run 290/420
[11/20/2020 03:13:27 PM] Processed Run 291/420
[11/20/2020 03:13:27 PM] Processed Run 292/420
[11/20/2020 03:13:27 PM] Processed Run 293/420
[11/20/2020 03:13:27 PM] Processed Run 294/420
[11/20/2020 03:13:27 PM] Processed Run 295/420
[11/20/2020 03:13:27 PM] Processed Run 296/420
[11/20/2020 03:13:27 PM] Processed Run 297/420
[11/20/2020 03:13:27 PM] Processed Run 298/420
[11/20/2020 03:13:27 PM] Processed Run 299/420
[11/20/2020 03:13:27 PM] Processed Run 300/420
[11/20/2020 03:13:27 PM] Processed Run 301/420
[11/20/2020 03:13:27 PM] Processed Run 302/420
[11/20/2020 03:13:28 PM] Processed Run 303/420
[11/20/2020 03:13:28 PM] Processed Run 304/420
[11/20/2020 03:13:29 PM] Processed Run 305/420
[11/20/2020 03:13:29 PM] Processed Run 306/420
[11/20/2020 03:13:29 PM] Processed Run 307/420
[11/20/2020 03:13:30 PM] Processed Run 308/420
[11/20/2020 03:13:30 PM] Processed Run 309/420
[11/20/2020 03:13:30 PM] Processed Run 310/420
[11/20/2020 03:13:30 PM] Processed Run 311/420
[11/20/2020 03:13:30 PM] Processed Run 312/420
[11/20/2020 03:13:30 PM] Processed Run 313/420
[11/20/2020 03:13:30 PM] Processed Run 314/420
[11/20/2020 03:13:30 PM] Processed Run 315/420
[11/20/2020 03:13:30 PM] Processed Run 316/420
[11/20/2020 03:13:30 PM] Processed Run 317/420
[11/20/2020 03:13:30 PM] Processed Run 318/420
[11/20/2020 03:13:30 PM] Processed Run 319/420
[11/20/2020 03:13:30 PM] Processed Run 320/420
[11/20/2020 03:13:30 PM] Processed Run 321/420
[11/20/2020 03:13:30 PM] Processed Run 322/420
[11/20/2020 03:13:30 PM] Processed Run 323/420
[11/20/2020 03:13:30 PM] Processed Run 324/420
[11/20/2020 03:13:30 PM] Processed Run 325/420
[11/20/2020 03:13:30 PM] Processed Run 326/420
[11/20/2020 03:13:30 PM] Processed Run 327/420
[11/20/2020 03:13:30 PM] Processed Run 328/420
[11/20/2020 03:13:30 PM] Processed Run 329/420
[11/20/2020 03:13:30 PM] Processed Run 330/420
[11/20/2020 03:13:30 PM] Processed Run 331/420
[11/20/2020 03:13:31 PM] Processed Run 332/420
[11/20/2020 03:13:31 PM] Processed Run 333/420
[11/20/2020 03:13:32 PM] Processed Run 334/420
[11/20/2020 03:13:32 PM] Processed Run 335/420
[11/20/2020 03:13:33 PM] Processed Run 336/420
[11/20/2020 03:13:33 PM] Processed Run 337/420
[11/20/2020 03:13:33 PM] Processed Run 338/420
[11/20/2020 03:13:33 PM] Processed Run 339/420
[11/20/2020 03:13:33 PM] Processed Run 340/420
[11/20/2020 03:13:33 PM] Processed Run 341/420
[11/20/2020 03:13:33 PM] Processed Run 342/420
[11/20/2020 03:13:33 PM] Processed Run 343/420
[11/20/2020 03:13:33 PM] Processed Run 344/420
[11/20/2020 03:13:33 PM] Processed Run 345/420
[11/20/2020 03:13:33 PM] Processed Run 346/420
[11/20/2020 03:13:33 PM] Processed Run 347/420
[11/20/2020 03:13:33 PM] Processed Run 348/420
[11/20/2020 03:13:33 PM] Processed Run 349/420
[11/20/2020 03:13:33 PM] Processed Run 350/420
[11/20/2020 03:13:33 PM] Processed Run 351/420
[11/20/2020 03:13:33 PM] Processed Run 352/420
[11/20/2020 03:13:33 PM] Processed Run 353/420
[11/20/2020 03:13:33 PM] Processed Run 354/420
[11/20/2020 03:13:33 PM] Processed Run 355/420
[11/20/2020 03:13:33 PM] Processed Run 356/420
[11/20/2020 03:13:33 PM] Processed Run 357/420
[11/20/2020 03:13:33 PM] Processed Run 358/420
[11/20/2020 03:13:33 PM] Processed Run 359/420
[11/20/2020 03:13:33 PM] Processed Run 360/420
[11/20/2020 03:13:33 PM] Processed Run 361/420
[11/20/2020 03:13:34 PM] Processed Run 362/420
[11/20/2020 03:13:35 PM] Processed Run 363/420
[11/20/2020 03:13:35 PM] Processed Run 364/420
[11/20/2020 03:13:36 PM] Processed Run 365/420
[11/20/2020 03:13:36 PM] Processed Run 366/420
[11/20/2020 03:13:36 PM] Processed Run 367/420
[11/20/2020 03:13:36 PM] Processed Run 368/420
[11/20/2020 03:13:36 PM] Processed Run 369/420
[11/20/2020 03:13:37 PM] Processed Run 370/420
[11/20/2020 03:13:37 PM] Processed Run 371/420
[11/20/2020 03:13:37 PM] Processed Run 372/420
[11/20/2020 03:13:37 PM] Processed Run 373/420
[11/20/2020 03:13:37 PM] Processed Run 374/420
[11/20/2020 03:13:37 PM] Processed Run 375/420
[11/20/2020 03:13:37 PM] Processed Run 376/420
[11/20/2020 03:13:37 PM] Processed Run 377/420
[11/20/2020 03:13:37 PM] Processed Run 378/420
[11/20/2020 03:13:37 PM] Processed Run 379/420
[11/20/2020 03:13:37 PM] Processed Run 380/420
[11/20/2020 03:13:37 PM] Processed Run 381/420
[11/20/2020 03:13:37 PM] Processed Run 382/420
[11/20/2020 03:13:37 PM] Processed Run 383/420
[11/20/2020 03:13:37 PM] Processed Run 384/420
[11/20/2020 03:13:37 PM] Processed Run 385/420
[11/20/2020 03:13:37 PM] Processed Run 386/420
[11/20/2020 03:13:37 PM] Processed Run 387/420
[11/20/2020 03:13:37 PM] Processed Run 388/420
[11/20/2020 03:13:37 PM] Processed Run 389/420
[11/20/2020 03:13:37 PM] Processed Run 390/420
[11/20/2020 03:13:37 PM] Processed Run 391/420
[11/20/2020 03:13:56 PM] Processed Run 392/420
[11/20/2020 03:13:56 PM] Processed Run 393/420
[11/20/2020 03:13:57 PM] Processed Run 394/420
[11/20/2020 03:13:57 PM] Processed Run 395/420
[11/20/2020 03:13:58 PM] Processed Run 396/420
[11/20/2020 03:13:58 PM] Processed Run 397/420
[11/20/2020 03:13:58 PM] Processed Run 398/420
[11/20/2020 03:13:58 PM] Processed Run 399/420
[11/20/2020 03:13:58 PM] Processed Run 400/420
[11/20/2020 03:13:58 PM] Processed Run 401/420
[11/20/2020 03:13:58 PM] Processed Run 402/420
[11/20/2020 03:13:58 PM] Processed Run 403/420
[11/20/2020 03:13:58 PM] Processed Run 404/420
[11/20/2020 03:13:58 PM] Processed Run 405/420
[11/20/2020 03:13:58 PM] Processed Run 406/420
[11/20/2020 03:13:58 PM] Processed Run 407/420
[11/20/2020 03:13:58 PM] Processed Run 408/420
[11/20/2020 03:13:58 PM] Processed Run 409/420
[11/20/2020 03:13:58 PM] Processed Run 410/420
[11/20/2020 03:13:58 PM] Processed Run 411/420
[11/20/2020 03:13:58 PM] Processed Run 412/420
[11/20/2020 03:13:58 PM] Processed Run 413/420
[11/20/2020 03:13:58 PM] Processed Run 414/420
[11/20/2020 03:13:59 PM] Processed Run 415/420
[11/20/2020 03:13:59 PM] Processed Run 416/420
[11/20/2020 03:13:59 PM] Processed Run 417/420
[11/20/2020 03:13:59 PM] Processed Run 418/420
[11/20/2020 03:13:59 PM] Processed Run 419/420
[11/20/2020 03:13:59 PM] Processed Run 420/420
[11/20/2020 03:14:43 PM] Run 1/420
[11/20/2020 03:14:44 PM] Run 2/420
[11/20/2020 03:14:44 PM] Run 3/420
[11/20/2020 03:14:45 PM] Run 4/420
[11/20/2020 03:14:45 PM] Run 5/420
[11/20/2020 03:14:45 PM] Run 6/420
[11/20/2020 03:14:45 PM] Run 7/420
[11/20/2020 03:14:45 PM] Run 8/420
[11/20/2020 03:14:45 PM] Run 9/420
[11/20/2020 03:14:45 PM] Run 10/420
[11/20/2020 03:14:46 PM] Run 11/420
[11/20/2020 03:14:46 PM] Run 12/420
[11/20/2020 03:14:46 PM] Run 13/420
[11/20/2020 03:14:46 PM] Run 14/420
[11/20/2020 03:14:46 PM] Run 15/420
[11/20/2020 03:14:46 PM] Run 16/420
[11/20/2020 03:14:47 PM] Run 17/420
[11/20/2020 03:14:47 PM] Run 18/420
[11/20/2020 03:14:47 PM] Run 19/420
[11/20/2020 03:14:47 PM] Run 20/420
[11/20/2020 03:14:47 PM] Run 21/420
[11/20/2020 03:14:47 PM] Run 22/420
[11/20/2020 03:14:48 PM] Run 23/420
[11/20/2020 03:14:48 PM] Run 24/420
[11/20/2020 03:14:48 PM] Run 25/420
[11/20/2020 03:14:48 PM] Run 26/420
[11/20/2020 03:14:48 PM] Run 27/420
[11/20/2020 03:14:48 PM] Run 28/420
[11/20/2020 03:14:48 PM] Run 29/420
[11/20/2020 03:14:49 PM] Run 30/420
[11/20/2020 03:14:49 PM] Run 31/420
[11/20/2020 03:14:49 PM] Run 32/420
[11/20/2020 03:14:49 PM] Run 33/420
[11/20/2020 03:14:49 PM] Run 34/420
[11/20/2020 03:14:49 PM] Run 35/420
[11/20/2020 03:14:49 PM] Run 36/420
[11/20/2020 03:14:49 PM] Run 37/420
[11/20/2020 03:14:49 PM] Run 38/420
[11/20/2020 03:14:49 PM] Run 39/420
[11/20/2020 03:14:50 PM] Run 40/420
[11/20/2020 03:14:50 PM] Run 41/420
[11/20/2020 03:14:50 PM] Run 42/420
[11/20/2020 03:14:50 PM] Run 43/420
[11/20/2020 03:14:50 PM] Run 44/420
[11/20/2020 03:14:50 PM] Run 45/420
[11/20/2020 03:14:51 PM] Run 46/420
[11/20/2020 03:14:51 PM] Run 47/420
[11/20/2020 03:14:51 PM] Run 48/420
[11/20/2020 03:14:51 PM] Run 49/420
[11/20/2020 03:14:52 PM] Run 50/420
[11/20/2020 03:14:52 PM] Run 51/420
[11/20/2020 03:14:52 PM] Run 52/420
[11/20/2020 03:14:52 PM] Run 53/420
[11/20/2020 03:14:52 PM] Run 54/420
[11/20/2020 03:14:52 PM] Run 55/420
[11/20/2020 03:14:52 PM] Run 56/420
[11/20/2020 03:14:52 PM] Run 57/420
[11/20/2020 03:14:52 PM] Run 58/420
[11/20/2020 03:14:52 PM] Run 59/420
[11/20/2020 03:14:53 PM] Run 60/420
[11/20/2020 03:14:53 PM] Run 61/420
[11/20/2020 03:14:53 PM] Run 62/420
[11/20/2020 03:14:53 PM] Run 63/420
[11/20/2020 03:14:54 PM] Run 64/420
[11/20/2020 03:14:54 PM] Run 65/420
[11/20/2020 03:14:54 PM] Run 66/420
[11/20/2020 03:14:54 PM] Run 67/420
[11/20/2020 03:14:55 PM] Run 68/420
[11/20/2020 03:14:55 PM] Run 69/420
[11/20/2020 03:14:55 PM] Run 70/420
[11/20/2020 03:14:55 PM] Run 71/420
[11/20/2020 03:14:55 PM] Run 72/420
[11/20/2020 03:14:55 PM] Run 73/420
[11/20/2020 03:14:55 PM] Run 74/420
[11/20/2020 03:14:55 PM] Run 75/420
[11/20/2020 03:14:55 PM] Run 76/420
[11/20/2020 03:14:56 PM] Run 77/420
[11/20/2020 03:14:56 PM] Run 78/420
[11/20/2020 03:14:56 PM] Run 79/420
[11/20/2020 03:14:56 PM] Run 80/420
[11/20/2020 03:14:56 PM] Run 81/420
[11/20/2020 03:14:57 PM] Run 82/420
[11/20/2020 03:14:57 PM] Run 83/420
[11/20/2020 03:14:57 PM] Run 84/420
[11/20/2020 03:14:57 PM] Run 85/420
[11/20/2020 03:14:57 PM] Run 86/420
[11/20/2020 03:14:57 PM] Run 87/420
[11/20/2020 03:14:57 PM] Run 88/420
[11/20/2020 03:14:57 PM] Run 89/420
[11/20/2020 03:14:57 PM] Run 90/420
[11/20/2020 03:14:58 PM] Run 91/420
[11/20/2020 03:14:58 PM] Run 92/420
[11/20/2020 03:14:58 PM] Run 93/420
[11/20/2020 03:14:58 PM] Run 94/420
[11/20/2020 03:14:58 PM] Run 95/420
[11/20/2020 03:14:58 PM] Run 96/420
[11/20/2020 03:14:58 PM] Run 97/420
[11/20/2020 03:14:58 PM] Run 98/420
[11/20/2020 03:14:58 PM] Run 99/420
[11/20/2020 03:14:58 PM] Run 100/420
[11/20/2020 03:14:59 PM] Run 101/420
[11/20/2020 03:14:59 PM] Run 102/420
[11/20/2020 03:14:59 PM] Run 103/420
[11/20/2020 03:14:59 PM] Run 104/420
[11/20/2020 03:14:59 PM] Run 105/420
[11/20/2020 03:14:59 PM] Run 106/420
[11/20/2020 03:15:00 PM] Run 107/420
[11/20/2020 03:15:00 PM] Run 108/420
[11/20/2020 03:15:00 PM] Run 109/420
[11/20/2020 03:15:00 PM] Run 110/420
[11/20/2020 03:15:00 PM] Run 111/420
[11/20/2020 03:15:00 PM] Run 112/420
[11/20/2020 03:15:01 PM] Run 113/420
[11/20/2020 03:15:01 PM] Run 114/420
[11/20/2020 03:15:01 PM] Run 115/420
[11/20/2020 03:15:01 PM] Run 116/420
[11/20/2020 03:15:01 PM] Run 117/420
[11/20/2020 03:15:01 PM] Run 118/420
[11/20/2020 03:15:01 PM] Run 119/420
[11/20/2020 03:15:01 PM] Run 120/420
[11/20/2020 03:15:01 PM] Run 121/420
[11/20/2020 03:15:02 PM] Run 122/420
[11/20/2020 03:15:02 PM] Run 123/420
[11/20/2020 03:15:02 PM] Run 124/420
[11/20/2020 03:15:02 PM] Run 125/420
[11/20/2020 03:15:02 PM] Run 126/420
[11/20/2020 03:15:02 PM] Run 127/420
[11/20/2020 03:15:02 PM] Run 128/420
[11/20/2020 03:15:02 PM] Run 129/420
[11/20/2020 03:15:03 PM] Run 130/420
[11/20/2020 03:15:03 PM] Run 131/420
[11/20/2020 03:15:03 PM] Run 132/420
[11/20/2020 03:15:03 PM] Run 133/420
[11/20/2020 03:15:03 PM] Run 134/420
[11/20/2020 03:15:03 PM] Run 135/420
[11/20/2020 03:15:03 PM] Run 136/420
[11/20/2020 03:15:03 PM] Run 137/420
[11/20/2020 03:15:03 PM] Run 138/420
[11/20/2020 03:15:03 PM] Run 139/420
[11/20/2020 03:15:03 PM] Run 140/420
[11/20/2020 03:15:04 PM] Run 141/420
[11/20/2020 03:15:04 PM] Run 142/420
[11/20/2020 03:15:04 PM] Run 143/420
[11/20/2020 03:15:04 PM] Run 144/420
[11/20/2020 03:15:04 PM] Run 145/420
[11/20/2020 03:15:05 PM] Run 146/420
[11/20/2020 03:15:05 PM] Run 147/420
[11/20/2020 03:15:05 PM] Run 148/420
[11/20/2020 03:15:05 PM] Run 149/420
[11/20/2020 03:15:05 PM] Run 150/420
[11/20/2020 03:15:05 PM] Run 151/420
[11/20/2020 03:15:06 PM] Run 152/420
[11/20/2020 03:15:06 PM] Run 153/420
[11/20/2020 03:15:06 PM] Run 154/420
[11/20/2020 03:15:06 PM] Run 155/420
[11/20/2020 03:15:06 PM] Run 156/420
[11/20/2020 03:15:06 PM] Run 157/420
[11/20/2020 03:15:06 PM] Run 158/420
[11/20/2020 03:15:06 PM] Run 159/420
[11/20/2020 03:15:06 PM] Run 160/420
[11/20/2020 03:15:06 PM] Run 161/420
[11/20/2020 03:15:06 PM] Run 162/420
[11/20/2020 03:15:06 PM] Run 163/420
[11/20/2020 03:15:07 PM] Run 164/420
[11/20/2020 03:15:07 PM] Run 165/420
[11/20/2020 03:15:07 PM] Run 166/420
[11/20/2020 03:15:07 PM] Run 167/420
[11/20/2020 03:15:07 PM] Run 168/420
[11/20/2020 03:15:07 PM] Run 169/420
[11/20/2020 03:15:08 PM] Run 170/420
[11/20/2020 03:15:08 PM] Run 171/420
[11/20/2020 03:15:08 PM] Run 172/420
[11/20/2020 03:15:08 PM] Run 173/420
[11/20/2020 03:15:08 PM] Run 174/420
[11/20/2020 03:15:08 PM] Run 175/420
[11/20/2020 03:15:08 PM] Run 176/420
[11/20/2020 03:15:09 PM] Run 177/420
[11/20/2020 03:15:09 PM] Run 178/420
[11/20/2020 03:15:09 PM] Run 179/420
[11/20/2020 03:15:09 PM] Run 180/420
[11/20/2020 03:15:09 PM] Run 181/420
[11/20/2020 03:15:09 PM] Run 182/420
[11/20/2020 03:15:09 PM] Run 183/420
[11/20/2020 03:15:10 PM] Run 184/420
[11/20/2020 03:15:10 PM] Run 185/420
[11/20/2020 03:15:10 PM] Run 186/420
[11/20/2020 03:15:10 PM] Run 187/420
[11/20/2020 03:15:10 PM] Run 188/420
[11/20/2020 03:15:10 PM] Run 189/420
[11/20/2020 03:15:10 PM] Run 190/420
[11/20/2020 03:15:10 PM] Run 191/420
[11/20/2020 03:15:10 PM] Run 192/420
[11/20/2020 03:15:10 PM] Run 193/420
[11/20/2020 03:15:10 PM] Run 194/420
[11/20/2020 03:15:11 PM] Run 195/420
[11/20/2020 03:15:11 PM] Run 196/420
[11/20/2020 03:15:12 PM] Run 197/420
[11/20/2020 03:15:12 PM] Run 198/420
[11/20/2020 03:15:12 PM] Run 199/420
[11/20/2020 03:15:12 PM] Run 200/420
[11/20/2020 03:15:13 PM] Run 201/420
[11/20/2020 03:15:13 PM] Run 202/420
[11/20/2020 03:15:13 PM] Run 203/420
[11/20/2020 03:15:13 PM] Run 204/420
[11/20/2020 03:15:13 PM] Run 205/420
[11/20/2020 03:15:13 PM] Run 206/420
[11/20/2020 03:15:14 PM] Run 207/420
[11/20/2020 03:15:14 PM] Run 208/420
[11/20/2020 03:15:14 PM] Run 209/420
[11/20/2020 03:15:14 PM] Run 210/420
[11/20/2020 03:15:14 PM] Run 211/420
[11/20/2020 03:15:14 PM] Run 212/420
[11/20/2020 03:15:14 PM] Run 213/420
[11/20/2020 03:15:14 PM] Run 214/420
[11/20/2020 03:15:15 PM] Run 215/420
[11/20/2020 03:15:15 PM] Run 216/420
[11/20/2020 03:15:15 PM] Run 217/420
[11/20/2020 03:15:15 PM] Run 218/420
[11/20/2020 03:15:15 PM] Run 219/420
[11/20/2020 03:15:15 PM] Run 220/420
[11/20/2020 03:15:15 PM] Run 221/420
[11/20/2020 03:15:16 PM] Run 222/420
[11/20/2020 03:15:16 PM] Run 223/420
[11/20/2020 03:15:16 PM] Run 224/420
[11/20/2020 03:15:16 PM] Run 225/420
[11/20/2020 03:15:16 PM] Run 226/420
[11/20/2020 03:15:16 PM] Run 227/420
[11/20/2020 03:15:16 PM] Run 228/420
[11/20/2020 03:15:16 PM] Run 229/420
[11/20/2020 03:15:17 PM] Run 230/420
[11/20/2020 03:15:17 PM] Run 231/420
[11/20/2020 03:15:17 PM] Run 232/420
[11/20/2020 03:15:17 PM] Run 233/420
[11/20/2020 03:15:17 PM] Run 234/420
[11/20/2020 03:15:17 PM] Run 235/420
[11/20/2020 03:15:17 PM] Run 236/420
[11/20/2020 03:15:18 PM] Run 237/420
[11/20/2020 03:15:18 PM] Run 238/420
[11/20/2020 03:15:18 PM] Run 239/420
[11/20/2020 03:15:18 PM] Run 240/420
[11/20/2020 03:15:18 PM] Run 241/420
[11/20/2020 03:15:18 PM] Run 242/420
[11/20/2020 03:15:19 PM] Run 243/420
[11/20/2020 03:15:19 PM] Run 244/420
[11/20/2020 03:15:19 PM] Run 245/420
[11/20/2020 03:15:19 PM] Run 246/420
[11/20/2020 03:15:19 PM] Run 247/420
[11/20/2020 03:15:19 PM] Run 248/420
[11/20/2020 03:15:19 PM] Run 249/420
[11/20/2020 03:15:19 PM] Run 250/420
[11/20/2020 03:15:19 PM] Run 251/420
[11/20/2020 03:15:19 PM] Run 252/420
[11/20/2020 03:15:20 PM] Run 253/420
[11/20/2020 03:15:20 PM] Run 254/420
[11/20/2020 03:15:20 PM] Run 255/420
[11/20/2020 03:15:20 PM] Run 256/420
[11/20/2020 03:15:20 PM] Run 257/420
[11/20/2020 03:15:20 PM] Run 258/420
[11/20/2020 03:15:20 PM] Run 259/420
[11/20/2020 03:15:20 PM] Run 260/420
[11/20/2020 03:15:21 PM] Run 261/420
[11/20/2020 03:15:21 PM] Run 262/420
[11/20/2020 03:15:21 PM] Run 263/420
[11/20/2020 03:15:21 PM] Run 264/420
[11/20/2020 03:15:22 PM] Run 265/420
[11/20/2020 03:15:22 PM] Run 266/420
[11/20/2020 03:15:22 PM] Run 267/420
[11/20/2020 03:15:22 PM] Run 268/420
[11/20/2020 03:15:22 PM] Run 269/420
[11/20/2020 03:15:22 PM] Run 270/420
[11/20/2020 03:15:22 PM] Run 271/420
[11/20/2020 03:15:23 PM] Run 272/420
[11/20/2020 03:15:23 PM] Run 273/420
[11/20/2020 03:15:23 PM] Run 274/420
[11/20/2020 03:15:23 PM] Run 275/420
[11/20/2020 03:15:23 PM] Run 276/420
[11/20/2020 03:15:23 PM] Run 277/420
[11/20/2020 03:15:23 PM] Run 278/420
[11/20/2020 03:15:23 PM] Run 279/420
[11/20/2020 03:15:24 PM] Run 280/420
[11/20/2020 03:15:24 PM] Run 281/420
[11/20/2020 03:15:24 PM] Run 282/420
[11/20/2020 03:15:24 PM] Run 283/420
[11/20/2020 03:15:24 PM] Run 284/420
[11/20/2020 03:15:24 PM] Run 285/420
[11/20/2020 03:15:24 PM] Run 286/420
[11/20/2020 03:15:24 PM] Run 287/420
[11/20/2020 03:15:25 PM] Run 288/420
[11/20/2020 03:15:25 PM] Run 289/420
[11/20/2020 03:15:25 PM] Run 290/420
[11/20/2020 03:15:25 PM] Run 291/420
[11/20/2020 03:15:25 PM] Run 292/420
[11/20/2020 03:15:26 PM] Run 293/420
[11/20/2020 03:15:26 PM] Run 294/420
[11/20/2020 03:15:26 PM] Run 295/420
[11/20/2020 03:15:26 PM] Run 296/420
[11/20/2020 03:15:26 PM] Run 297/420
[11/20/2020 03:15:26 PM] Run 298/420
[11/20/2020 03:15:27 PM] Run 299/420
[11/20/2020 03:15:27 PM] Run 300/420
[11/20/2020 03:15:27 PM] Run 301/420
[11/20/2020 03:15:27 PM] Run 302/420
[11/20/2020 03:15:27 PM] Run 303/420
[11/20/2020 03:15:27 PM] Run 304/420
[11/20/2020 03:15:27 PM] Run 305/420
[11/20/2020 03:15:27 PM] Run 306/420
[11/20/2020 03:15:27 PM] Run 307/420
[11/20/2020 03:15:27 PM] Run 308/420
[11/20/2020 03:15:28 PM] Run 309/420
[11/20/2020 03:15:28 PM] Run 310/420
[11/20/2020 03:15:28 PM] Run 311/420
[11/20/2020 03:15:28 PM] Run 312/420
[11/20/2020 03:15:28 PM] Run 313/420
[11/20/2020 03:15:28 PM] Run 314/420
[11/20/2020 03:15:28 PM] Run 315/420
[11/20/2020 03:15:29 PM] Run 316/420
[11/20/2020 03:15:29 PM] Run 317/420
[11/20/2020 03:15:29 PM] Run 318/420
[11/20/2020 03:15:29 PM] Run 319/420
[11/20/2020 03:15:29 PM] Run 320/420
[11/20/2020 03:15:29 PM] Run 321/420
[11/20/2020 03:15:29 PM] Run 322/420
[11/20/2020 03:15:29 PM] Run 323/420
[11/20/2020 03:15:29 PM] Run 324/420
[11/20/2020 03:15:30 PM] Run 325/420
[11/20/2020 03:15:30 PM] Run 326/420
[11/20/2020 03:15:30 PM] Run 327/420
[11/20/2020 03:15:30 PM] Run 328/420
[11/20/2020 03:15:30 PM] Run 329/420
[11/20/2020 03:15:31 PM] Run 330/420
[11/20/2020 03:15:31 PM] Run 331/420
[11/20/2020 03:15:31 PM] Run 332/420
[11/20/2020 03:15:31 PM] Run 333/420
[11/20/2020 03:15:31 PM] Run 334/420
[11/20/2020 03:15:31 PM] Run 335/420
[11/20/2020 03:15:32 PM] Run 336/420
[11/20/2020 03:15:32 PM] Run 337/420
[11/20/2020 03:15:32 PM] Run 338/420
[11/20/2020 03:15:32 PM] Run 339/420
[11/20/2020 03:15:32 PM] Run 340/420
[11/20/2020 03:15:32 PM] Run 341/420
[11/20/2020 03:15:32 PM] Run 342/420
[11/20/2020 03:15:33 PM] Run 343/420
[11/20/2020 03:15:33 PM] Run 344/420
[11/20/2020 03:15:33 PM] Run 345/420
[11/20/2020 03:15:34 PM] Run 346/420
[11/20/2020 03:15:34 PM] Run 347/420
[11/20/2020 03:15:34 PM] Run 348/420
[11/20/2020 03:15:34 PM] Run 349/420
[11/20/2020 03:15:34 PM] Run 350/420
[11/20/2020 03:15:34 PM] Run 351/420
[11/20/2020 03:15:34 PM] Run 352/420
[11/20/2020 03:15:34 PM] Run 353/420
[11/20/2020 03:15:34 PM] Run 354/420
[11/20/2020 03:15:35 PM] Run 355/420
[11/20/2020 03:15:35 PM] Run 356/420
[11/20/2020 03:15:35 PM] Run 357/420
[11/20/2020 03:15:35 PM] Run 358/420
[11/20/2020 03:15:35 PM] Run 359/420
[11/20/2020 03:15:35 PM] Run 360/420
[11/20/2020 03:15:36 PM] Run 361/420
[11/20/2020 03:15:36 PM] Run 362/420
[11/20/2020 03:15:36 PM] Run 363/420
[11/20/2020 03:15:36 PM] Run 364/420
[11/20/2020 03:15:36 PM] Run 365/420
[11/20/2020 03:15:36 PM] Run 366/420
[11/20/2020 03:15:36 PM] Run 367/420
[11/20/2020 03:15:37 PM] Run 368/420
[11/20/2020 03:15:37 PM] Run 369/420
[11/20/2020 03:15:37 PM] Run 370/420
[11/20/2020 03:15:37 PM] Run 371/420
[11/20/2020 03:15:37 PM] Run 372/420
[11/20/2020 03:15:37 PM] Run 373/420
[11/20/2020 03:15:38 PM] Run 374/420
[11/20/2020 03:15:38 PM] Run 375/420
[11/20/2020 03:15:38 PM] Run 376/420
[11/20/2020 03:15:38 PM] Run 377/420
[11/20/2020 03:15:38 PM] Run 378/420
[11/20/2020 03:15:38 PM] Run 379/420
[11/20/2020 03:15:38 PM] Run 380/420
[11/20/2020 03:15:38 PM] Run 381/420
[11/20/2020 03:15:39 PM] Run 382/420
[11/20/2020 03:15:39 PM] Run 383/420
[11/20/2020 03:15:39 PM] Run 384/420
[11/20/2020 03:15:39 PM] Run 385/420
[11/20/2020 03:15:39 PM] Run 386/420
[11/20/2020 03:15:39 PM] Run 387/420
[11/20/2020 03:15:39 PM] Run 388/420
[11/20/2020 03:15:40 PM] Run 389/420
[11/20/2020 03:15:40 PM] Run 390/420
[11/20/2020 03:15:40 PM] Run 391/420
[11/20/2020 03:15:40 PM] Run 392/420
[11/20/2020 03:15:40 PM] Run 393/420
[11/20/2020 03:15:40 PM] Run 394/420
[11/20/2020 03:15:40 PM] Run 395/420
[11/20/2020 03:15:40 PM] Run 396/420
[11/20/2020 03:15:41 PM] Run 397/420
[11/20/2020 03:15:41 PM] Run 398/420
[11/20/2020 03:15:41 PM] Run 399/420
[11/20/2020 03:15:41 PM] Run 400/420
[11/20/2020 03:15:41 PM] Run 401/420
[11/20/2020 03:15:41 PM] Run 402/420
[11/20/2020 03:15:41 PM] Run 403/420
[11/20/2020 03:15:41 PM] Run 404/420
[11/20/2020 03:15:42 PM] Run 405/420
[11/20/2020 03:15:43 PM] Run 406/420
[11/20/2020 03:15:43 PM] Run 407/420
[11/20/2020 03:15:43 PM] Run 408/420
[11/20/2020 03:15:44 PM] Run 409/420
[11/20/2020 03:15:45 PM] Run 410/420
[11/20/2020 03:15:46 PM] Run 411/420
[11/20/2020 03:15:47 PM] Run 412/420
[11/20/2020 03:15:49 PM] Run 413/420
[11/20/2020 03:15:50 PM] Run 414/420
[11/20/2020 03:15:50 PM] Run 415/420
[11/20/2020 03:19:28 PM] Run 416/420
[11/20/2020 03:19:28 PM] Run 417/420
[11/20/2020 03:19:30 PM] Run 418/420
[11/20/2020 03:19:38 PM] Run 419/420
[11/20/2020 03:19:41 PM] Run 420/420
[11/20/2020 03:19:41 PM] Processed Run 1/420
[11/20/2020 03:19:43 PM] Processed Run 2/420
[11/20/2020 03:19:43 PM] Processed Run 3/420
[11/20/2020 03:19:44 PM] Processed Run 4/420
[11/20/2020 03:19:45 PM] Processed Run 5/420
[11/20/2020 03:19:46 PM] Processed Run 6/420
[11/20/2020 03:19:46 PM] Processed Run 7/420
[11/20/2020 03:19:46 PM] Processed Run 8/420
[11/20/2020 03:19:46 PM] Processed Run 9/420
[11/20/2020 03:19:46 PM] Processed Run 10/420
[11/20/2020 03:19:46 PM] Processed Run 11/420
[11/20/2020 03:19:46 PM] Processed Run 12/420
[11/20/2020 03:19:46 PM] Processed Run 13/420
[11/20/2020 03:19:46 PM] Processed Run 14/420
[11/20/2020 03:19:46 PM] Processed Run 15/420
[11/20/2020 03:19:46 PM] Processed Run 16/420
[11/20/2020 03:19:46 PM] Processed Run 17/420
[11/20/2020 03:19:47 PM] Processed Run 18/420
[11/20/2020 03:19:47 PM] Processed Run 19/420
[11/20/2020 03:19:47 PM] Processed Run 20/420
[11/20/2020 03:19:47 PM] Processed Run 21/420
[11/20/2020 03:19:47 PM] Processed Run 22/420
[11/20/2020 03:19:47 PM] Processed Run 23/420
[11/20/2020 03:19:47 PM] Processed Run 24/420
[11/20/2020 03:19:47 PM] Processed Run 25/420
[11/20/2020 03:19:47 PM] Processed Run 26/420
[11/20/2020 03:19:47 PM] Processed Run 27/420
[11/20/2020 03:19:47 PM] Processed Run 28/420
[11/20/2020 03:19:47 PM] Processed Run 29/420
[11/20/2020 03:19:47 PM] Processed Run 30/420
[11/20/2020 03:19:47 PM] Processed Run 31/420
[11/20/2020 03:19:53 PM] Processed Run 32/420
distributed.core - INFO - Event loop was unresponsive in Worker for 3.05s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[11/20/2020 03:20:04 PM] Processed Run 33/420
distributed.core - INFO - Event loop was unresponsive in Worker for 3.01s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[11/20/2020 03:20:20 PM] Processed Run 34/420
distributed.core - INFO - Event loop was unresponsive in Worker for 3.15s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[11/20/2020 03:20:32 PM] Processed Run 35/420
distributed.core - INFO - Event loop was unresponsive in Worker for 3.05s.  This is often caused by long-running GIL-holding functions or moving large chunks of data. This can cause timeouts and instability.
[11/20/2020 03:20:46 PM] Processed Run 36/420
[11/20/2020 03:20:46 PM] Processed Run 37/420
[11/20/2020 03:20:47 PM] Processed Run 38/420
[11/20/2020 03:20:47 PM] Processed Run 39/420
[11/20/2020 03:20:48 PM] Processed Run 40/420
[11/20/2020 03:20:48 PM] Processed Run 41/420
[11/20/2020 03:20:48 PM] Processed Run 42/420
[11/20/2020 03:20:48 PM] Processed Run 43/420
[11/20/2020 03:20:48 PM] Processed Run 44/420
[11/20/2020 03:20:48 PM] Processed Run 45/420
[11/20/2020 03:20:48 PM] Processed Run 46/420
[11/20/2020 03:20:48 PM] Processed Run 47/420
[11/20/2020 03:20:48 PM] Processed Run 48/420
[11/20/2020 03:20:48 PM] Processed Run 49/420
[11/20/2020 03:20:48 PM] Processed Run 50/420
[11/20/2020 03:20:48 PM] Processed Run 51/420
[11/20/2020 03:20:49 PM] Processed Run 52/420
[11/20/2020 03:20:49 PM] Processed Run 53/420
[11/20/2020 03:20:49 PM] Processed Run 54/420
[11/20/2020 03:20:49 PM] Processed Run 55/420
[11/20/2020 03:20:49 PM] Processed Run 56/420
[11/20/2020 03:20:49 PM] Processed Run 57/420
[11/20/2020 03:20:49 PM] Processed Run 58/420
[11/20/2020 03:20:49 PM] Processed Run 59/420
[11/20/2020 03:20:49 PM] Processed Run 60/420
[11/20/2020 03:20:49 PM] Processed Run 61/420
[11/20/2020 03:20:49 PM] Processed Run 62/420
[11/20/2020 03:20:50 PM] Processed Run 63/420
[11/20/2020 03:20:51 PM] Processed Run 64/420
[11/20/2020 03:20:52 PM] Processed Run 65/420
[11/20/2020 03:21:02 PM] Processed Run 66/420
[11/20/2020 03:21:02 PM] Processed Run 67/420
[11/20/2020 03:21:03 PM] Processed Run 68/420
[11/20/2020 03:21:03 PM] Processed Run 69/420
[11/20/2020 03:21:03 PM] Processed Run 70/420
[11/20/2020 03:21:03 PM] Processed Run 71/420
[11/20/2020 03:21:03 PM] Processed Run 72/420
[11/20/2020 03:21:03 PM] Processed Run 73/420
[11/20/2020 03:21:03 PM] Processed Run 74/420
[11/20/2020 03:21:03 PM] Processed Run 75/420
[11/20/2020 03:21:03 PM] Processed Run 76/420
[11/20/2020 03:21:03 PM] Processed Run 77/420
[11/20/2020 03:21:03 PM] Processed Run 78/420
[11/20/2020 03:21:03 PM] Processed Run 79/420
[11/20/2020 03:21:03 PM] Processed Run 80/420
[11/20/2020 03:21:03 PM] Processed Run 81/420
[11/20/2020 03:21:03 PM] Processed Run 82/420
[11/20/2020 03:21:03 PM] Processed Run 83/420
[11/20/2020 03:21:03 PM] Processed Run 84/420
[11/20/2020 03:21:03 PM] Processed Run 85/420
[11/20/2020 03:21:03 PM] Processed Run 86/420
[11/20/2020 03:21:03 PM] Processed Run 87/420
[11/20/2020 03:21:03 PM] Processed Run 88/420
[11/20/2020 03:21:03 PM] Processed Run 89/420
[11/20/2020 03:21:03 PM] Processed Run 90/420
[11/20/2020 03:21:03 PM] Processed Run 91/420
[11/20/2020 03:21:05 PM] Processed Run 92/420
[11/20/2020 03:21:06 PM] Processed Run 93/420
[11/20/2020 03:21:07 PM] Processed Run 94/420
[11/20/2020 03:21:07 PM] Processed Run 95/420
[11/20/2020 03:21:08 PM] Processed Run 96/420
[11/20/2020 03:21:08 PM] Processed Run 97/420
[11/20/2020 03:21:08 PM] Processed Run 98/420
[11/20/2020 03:21:09 PM] Processed Run 99/420
[11/20/2020 03:21:09 PM] Processed Run 100/420
[11/20/2020 03:21:09 PM] Processed Run 101/420
[11/20/2020 03:21:09 PM] Processed Run 102/420
[11/20/2020 03:21:09 PM] Processed Run 103/420
[11/20/2020 03:21:09 PM] Processed Run 104/420
[11/20/2020 03:21:09 PM] Processed Run 105/420
[11/20/2020 03:21:09 PM] Processed Run 106/420
[11/20/2020 03:21:09 PM] Processed Run 107/420
[11/20/2020 03:21:09 PM] Processed Run 108/420
[11/20/2020 03:21:09 PM] Processed Run 109/420
[11/20/2020 03:21:09 PM] Processed Run 110/420
[11/20/2020 03:21:09 PM] Processed Run 111/420
[11/20/2020 03:21:09 PM] Processed Run 112/420
[11/20/2020 03:21:09 PM] Processed Run 113/420
[11/20/2020 03:21:09 PM] Processed Run 114/420
[11/20/2020 03:21:09 PM] Processed Run 115/420
[11/20/2020 03:21:09 PM] Processed Run 116/420
[11/20/2020 03:21:09 PM] Processed Run 117/420
[11/20/2020 03:21:09 PM] Processed Run 118/420
[11/20/2020 03:21:09 PM] Processed Run 119/420
[11/20/2020 03:21:09 PM] Processed Run 120/420
[11/20/2020 03:21:09 PM] Processed Run 121/420
[11/20/2020 03:21:10 PM] Processed Run 122/420
[11/20/2020 03:21:22 PM] Processed Run 123/420
[11/20/2020 03:21:24 PM] Processed Run 124/420
[11/20/2020 03:21:25 PM] Processed Run 125/420
[11/20/2020 03:21:26 PM] Processed Run 126/420
[11/20/2020 03:21:26 PM] Processed Run 127/420
[11/20/2020 03:21:26 PM] Processed Run 128/420
[11/20/2020 03:21:26 PM] Processed Run 129/420
[11/20/2020 03:21:26 PM] Processed Run 130/420
[11/20/2020 03:21:26 PM] Processed Run 131/420
[11/20/2020 03:21:26 PM] Processed Run 132/420
[11/20/2020 03:21:26 PM] Processed Run 133/420
[11/20/2020 03:21:26 PM] Processed Run 134/420
[11/20/2020 03:21:26 PM] Processed Run 135/420
[11/20/2020 03:21:26 PM] Processed Run 136/420
[11/20/2020 03:21:26 PM] Processed Run 137/420
[11/20/2020 03:21:26 PM] Processed Run 138/420
[11/20/2020 03:21:27 PM] Processed Run 139/420
[11/20/2020 03:21:27 PM] Processed Run 140/420
[11/20/2020 03:21:27 PM] Processed Run 141/420
[11/20/2020 03:21:27 PM] Processed Run 142/420
[11/20/2020 03:21:27 PM] Processed Run 143/420
[11/20/2020 03:21:27 PM] Processed Run 144/420
[11/20/2020 03:21:27 PM] Processed Run 145/420
[11/20/2020 03:21:27 PM] Processed Run 146/420
[11/20/2020 03:21:27 PM] Processed Run 147/420
[11/20/2020 03:21:27 PM] Processed Run 148/420
[11/20/2020 03:21:27 PM] Processed Run 149/420
[11/20/2020 03:21:27 PM] Processed Run 150/420
[11/20/2020 03:21:27 PM] Processed Run 151/420
[11/20/2020 03:21:27 PM] Processed Run 152/420
[11/20/2020 03:21:28 PM] Processed Run 153/420
[11/20/2020 03:21:29 PM] Processed Run 154/420
[11/20/2020 03:21:30 PM] Processed Run 155/420
[11/20/2020 03:21:30 PM] Processed Run 156/420
[11/20/2020 03:21:31 PM] Processed Run 157/420
[11/20/2020 03:21:31 PM] Processed Run 158/420
[11/20/2020 03:21:31 PM] Processed Run 159/420
[11/20/2020 03:21:31 PM] Processed Run 160/420
[11/20/2020 03:21:31 PM] Processed Run 161/420
[11/20/2020 03:21:31 PM] Processed Run 162/420
[11/20/2020 03:21:31 PM] Processed Run 163/420
[11/20/2020 03:21:31 PM] Processed Run 164/420
[11/20/2020 03:21:31 PM] Processed Run 165/420
[11/20/2020 03:21:31 PM] Processed Run 166/420
[11/20/2020 03:21:31 PM] Processed Run 167/420
[11/20/2020 03:21:31 PM] Processed Run 168/420
[11/20/2020 03:21:31 PM] Processed Run 169/420
[11/20/2020 03:21:31 PM] Processed Run 170/420
[11/20/2020 03:21:31 PM] Processed Run 171/420
[11/20/2020 03:21:31 PM] Processed Run 172/420
[11/20/2020 03:21:31 PM] Processed Run 173/420
[11/20/2020 03:21:32 PM] Processed Run 174/420
[11/20/2020 03:21:32 PM] Processed Run 175/420
[11/20/2020 03:21:32 PM] Processed Run 176/420
[11/20/2020 03:21:32 PM] Processed Run 177/420
[11/20/2020 03:21:32 PM] Processed Run 178/420
[11/20/2020 03:21:32 PM] Processed Run 179/420
[11/20/2020 03:21:32 PM] Processed Run 180/420
[11/20/2020 03:21:32 PM] Processed Run 181/420
[11/20/2020 03:21:32 PM] Processed Run 182/420
[11/20/2020 03:21:46 PM] Processed Run 183/420
[11/20/2020 03:21:47 PM] Processed Run 184/420
[11/20/2020 03:21:48 PM] Processed Run 185/420
[11/20/2020 03:21:48 PM] Processed Run 186/420
[11/20/2020 03:21:48 PM] Processed Run 187/420
[11/20/2020 03:21:49 PM] Processed Run 188/420
[11/20/2020 03:21:49 PM] Processed Run 189/420
[11/20/2020 03:21:49 PM] Processed Run 190/420
[11/20/2020 03:21:49 PM] Processed Run 191/420
[11/20/2020 03:21:49 PM] Processed Run 192/420
[11/20/2020 03:21:49 PM] Processed Run 193/420
[11/20/2020 03:21:49 PM] Processed Run 194/420
[11/20/2020 03:21:49 PM] Processed Run 195/420
[11/20/2020 03:21:49 PM] Processed Run 196/420
[11/20/2020 03:21:49 PM] Processed Run 197/420
[11/20/2020 03:21:49 PM] Processed Run 198/420
[11/20/2020 03:21:49 PM] Processed Run 199/420
[11/20/2020 03:21:49 PM] Processed Run 200/420
[11/20/2020 03:21:49 PM] Processed Run 201/420
[11/20/2020 03:21:49 PM] Processed Run 202/420
[11/20/2020 03:21:49 PM] Processed Run 203/420
[11/20/2020 03:21:49 PM] Processed Run 204/420
[11/20/2020 03:21:49 PM] Processed Run 205/420
[11/20/2020 03:21:49 PM] Processed Run 206/420
[11/20/2020 03:21:49 PM] Processed Run 207/420
[11/20/2020 03:21:49 PM] Processed Run 208/420
[11/20/2020 03:21:49 PM] Processed Run 209/420
[11/20/2020 03:21:50 PM] Processed Run 210/420
[11/20/2020 03:21:50 PM] Processed Run 211/420
[11/20/2020 03:21:50 PM] Processed Run 212/420
[11/20/2020 03:21:51 PM] Processed Run 213/420
[11/20/2020 03:21:51 PM] Processed Run 214/420
[11/20/2020 03:21:52 PM] Processed Run 215/420
[11/20/2020 03:21:53 PM] Processed Run 216/420
[11/20/2020 03:21:53 PM] Processed Run 217/420
[11/20/2020 03:21:53 PM] Processed Run 218/420
[11/20/2020 03:21:53 PM] Processed Run 219/420
[11/20/2020 03:21:53 PM] Processed Run 220/420
[11/20/2020 03:21:54 PM] Processed Run 221/420
[11/20/2020 03:21:54 PM] Processed Run 222/420
[11/20/2020 03:21:54 PM] Processed Run 223/420
[11/20/2020 03:21:54 PM] Processed Run 224/420
[11/20/2020 03:21:54 PM] Processed Run 225/420
[11/20/2020 03:21:54 PM] Processed Run 226/420
[11/20/2020 03:21:54 PM] Processed Run 227/420
[11/20/2020 03:21:54 PM] Processed Run 228/420
[11/20/2020 03:21:54 PM] Processed Run 229/420
[11/20/2020 03:21:54 PM] Processed Run 230/420
[11/20/2020 03:21:54 PM] Processed Run 231/420
[11/20/2020 03:21:54 PM] Processed Run 232/420
[11/20/2020 03:21:54 PM] Processed Run 233/420
[11/20/2020 03:21:54 PM] Processed Run 234/420
[11/20/2020 03:21:54 PM] Processed Run 235/420
[11/20/2020 03:21:54 PM] Processed Run 236/420
[11/20/2020 03:21:54 PM] Processed Run 237/420
[11/20/2020 03:21:54 PM] Processed Run 238/420
[11/20/2020 03:21:54 PM] Processed Run 239/420
[11/20/2020 03:21:54 PM] Processed Run 240/420
[11/20/2020 03:21:54 PM] Processed Run 241/420
[11/20/2020 03:21:55 PM] Processed Run 242/420
[11/20/2020 03:21:56 PM] Processed Run 243/420
[11/20/2020 03:21:56 PM] Processed Run 244/420
[11/20/2020 03:21:57 PM] Processed Run 245/420
[11/20/2020 03:21:58 PM] Processed Run 246/420
[11/20/2020 03:22:15 PM] Processed Run 247/420
[11/20/2020 03:22:15 PM] Processed Run 248/420
[11/20/2020 03:22:15 PM] Processed Run 249/420
[11/20/2020 03:22:15 PM] Processed Run 250/420
[11/20/2020 03:22:15 PM] Processed Run 251/420
[11/20/2020 03:22:15 PM] Processed Run 252/420
[11/20/2020 03:22:15 PM] Processed Run 253/420
[11/20/2020 03:22:15 PM] Processed Run 254/420
[11/20/2020 03:22:15 PM] Processed Run 255/420
[11/20/2020 03:22:15 PM] Processed Run 256/420
[11/20/2020 03:22:15 PM] Processed Run 257/420
[11/20/2020 03:22:15 PM] Processed Run 258/420
[11/20/2020 03:22:15 PM] Processed Run 259/420
[11/20/2020 03:22:15 PM] Processed Run 260/420
[11/20/2020 03:22:15 PM] Processed Run 261/420
[11/20/2020 03:22:16 PM] Processed Run 262/420
[11/20/2020 03:22:16 PM] Processed Run 263/420
[11/20/2020 03:22:16 PM] Processed Run 264/420
[11/20/2020 03:22:16 PM] Processed Run 265/420
[11/20/2020 03:22:16 PM] Processed Run 266/420
[11/20/2020 03:22:16 PM] Processed Run 267/420
[11/20/2020 03:22:16 PM] Processed Run 268/420
[11/20/2020 03:22:16 PM] Processed Run 269/420
[11/20/2020 03:22:16 PM] Processed Run 270/420
[11/20/2020 03:22:16 PM] Processed Run 271/420
[11/20/2020 03:22:16 PM] Processed Run 272/420
[11/20/2020 03:22:17 PM] Processed Run 273/420
[11/20/2020 03:22:18 PM] Processed Run 274/420
[11/20/2020 03:22:19 PM] Processed Run 275/420
[11/20/2020 03:22:19 PM] Processed Run 276/420
[11/20/2020 03:22:19 PM] Processed Run 277/420
[11/20/2020 03:22:20 PM] Processed Run 278/420
[11/20/2020 03:22:20 PM] Processed Run 279/420
[11/20/2020 03:22:20 PM] Processed Run 280/420
[11/20/2020 03:22:20 PM] Processed Run 281/420
[11/20/2020 03:22:20 PM] Processed Run 282/420
[11/20/2020 03:22:20 PM] Processed Run 283/420
[11/20/2020 03:22:20 PM] Processed Run 284/420
[11/20/2020 03:22:20 PM] Processed Run 285/420
[11/20/2020 03:22:20 PM] Processed Run 286/420
[11/20/2020 03:22:20 PM] Processed Run 287/420
[11/20/2020 03:22:20 PM] Processed Run 288/420
[11/20/2020 03:22:20 PM] Processed Run 289/420
[11/20/2020 03:22:20 PM] Processed Run 290/420
[11/20/2020 03:22:20 PM] Processed Run 291/420
[11/20/2020 03:22:20 PM] Processed Run 292/420
[11/20/2020 03:22:20 PM] Processed Run 293/420
[11/20/2020 03:22:20 PM] Processed Run 294/420
[11/20/2020 03:22:20 PM] Processed Run 295/420
[11/20/2020 03:22:20 PM] Processed Run 296/420
[11/20/2020 03:22:20 PM] Processed Run 297/420
[11/20/2020 03:22:20 PM] Processed Run 298/420
[11/20/2020 03:22:20 PM] Processed Run 299/420
[11/20/2020 03:22:20 PM] Processed Run 300/420
[11/20/2020 03:22:20 PM] Processed Run 301/420
[11/20/2020 03:22:21 PM] Processed Run 302/420
[11/20/2020 03:22:22 PM] Processed Run 303/420
[11/20/2020 03:22:23 PM] Processed Run 304/420
[11/20/2020 03:22:24 PM] Processed Run 305/420
[11/20/2020 03:22:24 PM] Processed Run 306/420
[11/20/2020 03:22:24 PM] Processed Run 307/420
[11/20/2020 03:22:25 PM] Processed Run 308/420
[11/20/2020 03:22:25 PM] Processed Run 309/420
[11/20/2020 03:22:25 PM] Processed Run 310/420
[11/20/2020 03:22:25 PM] Processed Run 311/420
[11/20/2020 03:22:25 PM] Processed Run 312/420
[11/20/2020 03:22:25 PM] Processed Run 313/420
[11/20/2020 03:22:25 PM] Processed Run 314/420
[11/20/2020 03:22:25 PM] Processed Run 315/420
[11/20/2020 03:22:25 PM] Processed Run 316/420
[11/20/2020 03:22:25 PM] Processed Run 317/420
[11/20/2020 03:22:25 PM] Processed Run 318/420
[11/20/2020 03:22:25 PM] Processed Run 319/420
[11/20/2020 03:22:25 PM] Processed Run 320/420
[11/20/2020 03:22:25 PM] Processed Run 321/420
[11/20/2020 03:22:25 PM] Processed Run 322/420
[11/20/2020 03:22:25 PM] Processed Run 323/420
[11/20/2020 03:22:25 PM] Processed Run 324/420
[11/20/2020 03:22:25 PM] Processed Run 325/420
[11/20/2020 03:22:26 PM] Processed Run 326/420
[11/20/2020 03:22:26 PM] Processed Run 327/420
[11/20/2020 03:22:26 PM] Processed Run 328/420
[11/20/2020 03:22:26 PM] Processed Run 329/420
[11/20/2020 03:22:26 PM] Processed Run 330/420
[11/20/2020 03:22:26 PM] Processed Run 331/420
[11/20/2020 03:22:26 PM] Processed Run 332/420
[11/20/2020 03:22:27 PM] Processed Run 333/420
[11/20/2020 03:22:28 PM] Processed Run 334/420
[11/20/2020 03:22:48 PM] Processed Run 335/420
[11/20/2020 03:22:49 PM] Processed Run 336/420
[11/20/2020 03:22:49 PM] Processed Run 337/420
[11/20/2020 03:22:49 PM] Processed Run 338/420
[11/20/2020 03:22:50 PM] Processed Run 339/420
[11/20/2020 03:22:50 PM] Processed Run 340/420
[11/20/2020 03:22:50 PM] Processed Run 341/420
[11/20/2020 03:22:50 PM] Processed Run 342/420
[11/20/2020 03:22:50 PM] Processed Run 343/420
[11/20/2020 03:22:50 PM] Processed Run 344/420
[11/20/2020 03:22:50 PM] Processed Run 345/420
[11/20/2020 03:22:50 PM] Processed Run 346/420
[11/20/2020 03:22:50 PM] Processed Run 347/420
[11/20/2020 03:22:50 PM] Processed Run 348/420
[11/20/2020 03:22:50 PM] Processed Run 349/420
[11/20/2020 03:22:50 PM] Processed Run 350/420
[11/20/2020 03:22:50 PM] Processed Run 351/420
[11/20/2020 03:22:50 PM] Processed Run 352/420
[11/20/2020 03:22:50 PM] Processed Run 353/420
[11/20/2020 03:22:50 PM] Processed Run 354/420
[11/20/2020 03:22:50 PM] Processed Run 355/420
[11/20/2020 03:22:50 PM] Processed Run 356/420
[11/20/2020 03:22:50 PM] Processed Run 357/420
[11/20/2020 03:22:50 PM] Processed Run 358/420
[11/20/2020 03:22:50 PM] Processed Run 359/420
[11/20/2020 03:22:50 PM] Processed Run 360/420
[11/20/2020 03:22:50 PM] Processed Run 361/420
[11/20/2020 03:22:51 PM] Processed Run 362/420
[11/20/2020 03:22:52 PM] Processed Run 363/420
[11/20/2020 03:22:52 PM] Processed Run 364/420
[11/20/2020 03:22:53 PM] Processed Run 365/420
[11/20/2020 03:22:54 PM] Processed Run 366/420
[11/20/2020 03:22:54 PM] Processed Run 367/420
[11/20/2020 03:22:54 PM] Processed Run 368/420
[11/20/2020 03:22:54 PM] Processed Run 369/420
[11/20/2020 03:22:54 PM] Processed Run 370/420
[11/20/2020 03:22:54 PM] Processed Run 371/420
[11/20/2020 03:22:54 PM] Processed Run 372/420
[11/20/2020 03:22:54 PM] Processed Run 373/420
[11/20/2020 03:22:54 PM] Processed Run 374/420
[11/20/2020 03:22:54 PM] Processed Run 375/420
[11/20/2020 03:22:54 PM] Processed Run 376/420
[11/20/2020 03:22:54 PM] Processed Run 377/420
[11/20/2020 03:22:54 PM] Processed Run 378/420
[11/20/2020 03:22:54 PM] Processed Run 379/420
[11/20/2020 03:22:55 PM] Processed Run 380/420
[11/20/2020 03:22:55 PM] Processed Run 381/420
[11/20/2020 03:22:55 PM] Processed Run 382/420
[11/20/2020 03:22:55 PM] Processed Run 383/420
[11/20/2020 03:22:55 PM] Processed Run 384/420
[11/20/2020 03:22:55 PM] Processed Run 385/420
[11/20/2020 03:22:55 PM] Processed Run 386/420
[11/20/2020 03:22:55 PM] Processed Run 387/420
[11/20/2020 03:22:55 PM] Processed Run 388/420
[11/20/2020 03:22:55 PM] Processed Run 389/420
[11/20/2020 03:22:55 PM] Processed Run 390/420
[11/20/2020 03:22:55 PM] Processed Run 391/420
[11/20/2020 03:22:55 PM] Processed Run 392/420
[11/20/2020 03:22:56 PM] Processed Run 393/420
[11/20/2020 03:22:57 PM] Processed Run 394/420
[11/20/2020 03:22:57 PM] Processed Run 395/420
[11/20/2020 03:22:58 PM] Processed Run 396/420
[11/20/2020 03:22:58 PM] Processed Run 397/420
[11/20/2020 03:22:58 PM] Processed Run 398/420
[11/20/2020 03:22:59 PM] Processed Run 399/420
[11/20/2020 03:22:59 PM] Processed Run 400/420
[11/20/2020 03:22:59 PM] Processed Run 401/420
[11/20/2020 03:22:59 PM] Processed Run 402/420
[11/20/2020 03:22:59 PM] Processed Run 403/420
[11/20/2020 03:22:59 PM] Processed Run 404/420
[11/20/2020 03:22:59 PM] Processed Run 405/420
[11/20/2020 03:22:59 PM] Processed Run 406/420
[11/20/2020 03:22:59 PM] Processed Run 407/420
[11/20/2020 03:22:59 PM] Processed Run 408/420
[11/20/2020 03:22:59 PM] Processed Run 409/420
[11/20/2020 03:22:59 PM] Processed Run 410/420
[11/20/2020 03:22:59 PM] Processed Run 411/420
[11/20/2020 03:22:59 PM] Processed Run 412/420
[11/20/2020 03:22:59 PM] Processed Run 413/420
[11/20/2020 03:22:59 PM] Processed Run 414/420
[11/20/2020 03:22:59 PM] Processed Run 415/420
[11/20/2020 03:22:59 PM] Processed Run 416/420
[11/20/2020 03:22:59 PM] Processed Run 417/420
[11/20/2020 03:22:59 PM] Processed Run 418/420
[11/20/2020 03:22:59 PM] Processed Run 419/420
[11/20/2020 03:22:59 PM] Processed Run 420/420
distributed.scheduler - INFO - Receive client connection: Client-14d1f8cd-2b7f-11eb-97eb-7cd30ab15d36
distributed.core - INFO - Starting established connection
distributed.worker - INFO - Run out-of-band function 'stop'
distributed.scheduler - INFO - Scheduler closing...
distributed.scheduler - INFO - Scheduler closing all comms
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:42101
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:37798
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:46204
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:40996
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:44953
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:40832
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:43113', name: 10, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:43113
distributed.batched - INFO - Batched Comm Closed: 
distributed.worker - INFO - Connection to scheduler broken.  Reconnecting...
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:38768', name: 8, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:38768
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:39284', name: 11, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:39284
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:34559', name: 9, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:34559
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:45376', name: 6, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:45376
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:44944', name: 16, memory: 0, processing: 0>
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:45994
distributed.core - INFO - Removing comms to tcp://10.225.6.104:44944
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:40013', name: 17, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:40013
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:33488', name: 21, memory: 0, processing: 0>
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:45938
distributed.core - INFO - Removing comms to tcp://10.225.6.104:33488
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:43447', name: 5, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:43447
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:35031', name: 7, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:35031
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:43447
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:43432', name: 20, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:43432
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:42101', name: 22, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:42101
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:33415', name: 13, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:33415
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:39284
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:33746', name: 3, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:33746
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:38000', name: 19, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:38000
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:42900', name: 23, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:42900
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:36236', name: 4, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:36236
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:36233', name: 2, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:36233
distributed.batched - INFO - Batched Comm Closed: 
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:44390', name: 15, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:44390
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:35201
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:34029', name: 14, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:34029
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:40996', name: 18, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:40996
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.104:40832', name: 12, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.104:40832
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:40590', name: 34, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:40590
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:43228', name: 33, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:43228
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:44103', name: 26, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:44103
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:44927', name: 32, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:44927
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:44333', name: 39, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:44333
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:44440', name: 27, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:44440
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:44953', name: 38, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:44953
distributed.batched - INFO - Batched Comm Closed: 
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:37663', name: 29, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:37663
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:41592', name: 28, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:41592
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:36325', name: 30, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:36325
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:34208', name: 44, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:34208
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:42154', name: 42, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:42154
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:34466', name: 45, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:34466
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:34808', name: 36, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:34808
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:45994', name: 46, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:45994
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:42255', name: 40, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:42255
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:37798', name: 35, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:37798
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:46204', name: 43, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:46204
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:35120', name: 37, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:35120
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:42744', name: 25, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:42744
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:33401', name: 47, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:33401
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:35201', name: 41, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:35201
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:34808
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:40599', name: 31, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:40599
distributed.scheduler - INFO - Remove worker <Worker 'tcp://10.225.6.159:45938', name: 24, memory: 0, processing: 0>
distributed.core - INFO - Removing comms to tcp://10.225.6.159:45938
distributed.scheduler - INFO - Lost all workers
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:34029
distributed.batched - INFO - Batched Comm Closed: 
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:33746
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:40599
distributed.batched - INFO - Batched Comm Closed: 
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:35120
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:44390
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:41592
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:43432
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:33488
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:38768
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:36236
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:33415
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:40590
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:34466
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:42154
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:44333
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:37663
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:36325
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:44440
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:44927
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:34208
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:35031
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:34559
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:44944
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:42744
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:36233
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:42255
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:44103
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:43228
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:45376
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:43113
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:38000
distributed.worker - INFO - Stopping worker at tcp://10.225.6.104:42900
distributed.worker - INFO - Stopping worker at tcp://10.225.6.159:33401
distributed.worker - INFO - -------------------------------------------------
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
distributed.worker - INFO - Waiting to connect to:   tcp://10.225.6.104:44912
[the line above repeats continuously until the job hits its time limit]
slurmstepd: error: *** STEP 6215493.0 ON shas0352 CANCELLED AT 2020-11-20T15:37:38 DUE TO TIME LIMIT ***
slurmstepd: error: *** JOB 6215493 ON shas0352 CANCELLED AT 2020-11-20T15:37:38 DUE TO TIME LIMIT ***
srun: Job step aborted: Waiting up to 32 seconds for job step to finish.
@kjgoodrick
Contributor Author

I've reproduced this again with a smaller version of the problem that eliminates the unresponsive event loop warnings, so those appear not to be the cause. I've also run it with a debug-level logger (attached below); the number between the logger name and the level is the process ID. One thing I notice on this run is that, after the scheduler shuts down, the worker that fails to shut down tries to reconnect to it, unlike the other workers: [16:09:33] distributed.core - 59123 - DEBUG - Lost connection to 'tcp://10.225.102.247:59454' while reading message: in <closed TCP>: Stream is closed. Last operation: get_data (log line 7957). None of the other workers do this. The same thing happened once in the earlier run, but I was not logging the process ID then, so it wasn't clear that the message came from the worker that did not stop; I imagine it did.

As a temporary workaround, is there a way to have the worker quit if it disconnects from the scheduler for some amount of time?
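
Something along these lines is what I have in mind as a stopgap. It is completely untested and purely illustrative: the function name, scheduler address, and timeouts are placeholders I made up, and only client.run and the standard-library calls are real.

# Hypothetical watchdog sketch (not an existing dask option): a daemon thread
# started on every worker via client.run that hard-exits the worker process
# if the scheduler's port has been unreachable for longer than `timeout` seconds.
import os
import socket
import threading
import time

def start_scheduler_watchdog(host="10.225.6.104", port=44912, timeout=300, poll=15):
    def watch():
        last_ok = time.time()
        while True:
            try:
                # plain TCP probe of the scheduler's listening port
                with socket.create_connection((host, port), timeout=5):
                    last_ok = time.time()
            except OSError:
                if time.time() - last_ok > timeout:
                    os._exit(1)  # give up so the MPI rank can be torn down
            time.sleep(poll)

    threading.Thread(target=watch, daemon=True).start()

# launched on every worker, e.g.:
# client.run(start_scheduler_watchdog)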

The log is too big to paste, so it's attached, along with the full-size log from the earlier run. Just as an FYI, the file names refer to the size of the problem, not the size of the log file, so they look backwards: the full-size run was logged at INFO level only, while the smaller run includes debug output.

smaller_log.txt
Full_size_log.txt

@kmpaul
Collaborator

kmpaul commented Nov 23, 2020

Hey, @kjgoodrick. Thanks for the issue! I haven't had time to look into this yet, but I've noticed worker hangs and other issues in a different set of work I'm doing. The two problems may or may not be related, but both appear on Linux, so there could be a common cause.

@kmpaul
Collaborator

kmpaul commented Nov 30, 2020

@kjgoodrick: I can't reproduce this issue. Can you post a working minimal example script?

@kjgoodrick
Contributor Author

kjgoodrick commented Nov 30, 2020

Hi @kmpaul: Thanks for looking into this! I will see what I can do to make this reproducible with some generic code, and if I can get it to happen consistently I'll let you know. Based on the second log I posted (smaller_log.txt), I think it may be difficult, though, because this looks like a race condition: if the scheduler shuts down while a worker is communicating with it, the worker tries to send a heartbeat and then attempts to reconnect forever.

Based on the log this is what I think is happening:

1.) A worker is reading the message from the scheduler telling it to close:
https://github.com/dask/distributed/blob/b4dfc925bac32a488be2016a5930a9b7dd95cec5/distributed/core.py#L565

2.) While this is happening, the scheduler closes and the worker raises a CommClosedError which is handled in the handle_stream function that is reading the messages from the scheduler: https://github.com/dask/distributed/blob/b4dfc925bac32a488be2016a5930a9b7dd95cec5/distributed/core.py#L591

3.) handle_stream returns without any errors and we enter the finally block of the try statement in handle_scheduler: https://github.com/dask/distributed/blob/b4dfc925bac32a488be2016a5930a9b7dd95cec5/distributed/worker.py#L977

4.) The connection was broken while the worker was being sent the close message, so the worker's status is still running, which gives us the first log message at log line 7957: [16:09:33] distributed.worker - 59123 - INFO - Connection to scheduler broken. Reconnecting....

5.) The worker then runs its heartbeat, and for some reason the channel is busy, so the heartbeat is skipped (log line 7961). Things get a little fuzzy for me here, but maybe a normal heartbeat was running at the same time? In any case, at some point a heartbeat gets a response with a status of missing and then tries to re-register with the scheduler. When this happens, we get an EnvironmentError in the _register_with_scheduler function and get stuck in that while loop forever: https://github.com/dask/distributed/blob/b4dfc925bac32a488be2016a5930a9b7dd95cec5/distributed/worker.py#L853-L896
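
To make that sequence concrete, here is a boiled-down toy of what I think is happening. This is not distributed's actual code; the class and every method name in it are made up for illustration, and the max_tries argument exists only so the toy terminates.

# Toy reproduction of the suspected sequence (not distributed's actual code):
# the close message is lost with the connection, the error is swallowed, the
# worker still believes it is running, and re-registration then retries with
# no upper bound against a scheduler that no longer exists.
import time

class ToyWorker:
    reconnect = True
    status = "running"  # never flipped to "closing": the close message was lost

    def connect_to_scheduler(self):
        raise OSError("scheduler is gone")  # stands in for the EnvironmentError

    def handle_scheduler_closed(self):
        # mirrors the finally block: reconnect instead of shutting down
        if self.reconnect and self.status == "running":
            self.register_with_scheduler(max_tries=5)  # the real loop has no limit
        else:
            print("closing worker")

    def register_with_scheduler(self, max_tries=None):
        tries = 0
        while True:
            try:
                self.connect_to_scheduler()
                return
            except OSError:
                print("Waiting to connect to scheduler...")
                tries += 1
                if max_tries is not None and tries >= max_tries:
                    return  # only here so the toy terminates
                time.sleep(0.1)

ToyWorker().handle_scheduler_closed()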

As a hacky workaround I should just be able to set reconnect to False when setting up the workers, but that could cause problems if there is ever a genuine disconnection. As a more permanent fix, is there a way to tell the scheduler to wait until the workers have closed before it closes itself?
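
For reference, here is a minimal sketch of the reconnect idea, assuming a worker were started directly through distributed rather than through dask-mpi (whose initialize does not appear to expose this option, so this is illustrative only; the scheduler address is a placeholder taken from the logs above):

# Illustrative only: a worker started with reconnect=False closes itself once
# the scheduler connection is lost instead of retrying forever.
import asyncio
from distributed import Worker

async def main():
    async with Worker("tcp://10.225.6.104:44912", nthreads=1, reconnect=False) as worker:
        await worker.finished()  # returns once the worker has shut down

asyncio.run(main())

Whether that flag could be threaded through dask-mpi's initialize is a separate question.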

@kmpaul
Collaborator

kmpaul commented Dec 1, 2020

@kjgoodrick: If you are correct, then it seems like this error should occur with non-MPI Dask, right? I can't see anything that is MPI-specific cropping up in the chain of events. Perhaps this should be a Distributed issue?

@mrocklin: What do you think? (Or should I ask someone else for advice?)

@kjgoodrick
Contributor Author

@kmpaul If I am right, then the issue does not seem to have anything to do with the MPI side of things. That said, I have only seen this problem when running in an MPI environment. If I run the same code on just one CPU without the MPI initialization, it stops as it should every time. I don't know whether that is caused by some part of the MPI setup or is just a coincidence.

@kmpaul
Collaborator

kmpaul commented Dec 1, 2020

Yeah. It's hard to diagnose, for sure. The test without the MPI initialization is suspicious, though.
