The Core, and particularly the singcvmfs work, could really use the `--exclusive` flag in SLURM. It means the job will be the only one running on a given node. Relevant documentation below:
> `--exclusive[={user|mcs}]`
>
> The job allocation can not share nodes with other running jobs (or just other users with the "=user" option or with the "=mcs" option). If user/mcs are not specified (i.e. the job allocation can not share nodes with other running jobs), the job is allocated all CPUs and GRES on all nodes in the allocation, but is only allocated as much memory as it requested. This is by design to support gang scheduling, because suspended jobs still reside in memory. To request all the memory on a node, use --mem=0. The default shared/exclusive behavior depends on system configuration and the partition's OverSubscribe option takes precedence over the job's option.
>
> NOTE: Since shared GRES (MPS) cannot be allocated at the same time as a sharing GRES (GPU) this option only allocates all sharing GRES and no underlying shared GRES.
>
> NOTE: This option is mutually exclusive with --oversubscribe.
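For illustration, here is a minimal sbatch script sketching how the flag might be used. The job name, time limit, and payload command are placeholders I made up, not taken from this issue; `--exclusive` and `--mem=0` are the real options described in the docs above.

```bash
#!/bin/bash
#SBATCH --job-name=singcvmfs-test   # hypothetical job name
#SBATCH --exclusive                 # no other jobs may share the node
#SBATCH --mem=0                     # request all memory on the node;
                                    # --exclusive alone only grants the
                                    # memory explicitly requested
#SBATCH --time=01:00:00

# Placeholder payload; replace with the actual singcvmfs workload.
srun hostname
```

The same flag also works on the `srun`/`salloc` command line (e.g. `srun --exclusive ...`) for interactive testing.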
More info: https://slurm.schedmd.com/sbatch.html