# form.yml.erb
# Batch Connect app configuration file
#
# @note Used to define the submitted cluster, title, description, and
#   hard-coded/user-defined attributes that make up this Batch Connect app.
cluster: "quests"
form:
  # Different containers have different kernels installed
  - default_kernel
  # This allows the user to select which partition they submit to
  - slurm_partition
  # This allows a user to request a GPU
  - gres_value
  # This allows a user to specify the account they are submitting under
  - slurm_account
  # This allows a user to use more than 1 core
  - num_cores
  # This checkbox encourages users to pause and consider before asking for more than 1 node
  - request_more_than_one_node
  # How many nodes do you want to request
  - number_of_nodes
  # This allows a user to request RAM
  - memory_per_node
  # How many hours do you want to run this job for
  - bc_num_hours
  # This allows a user to decide whether they want to run JupyterLab or Jupyter Notebook
  - jupyterlab_switch
  # What folder do you want to serve as the notebook "root" directory
  - notebook_directory_root
  # Users can supply an e-mail address if they would like to be e-mailed when the session begins
  - user_email
  # Allow the user to provide a Slurm "constraint" option
  - constraint
  # Name of the Slurm job
  - job_name
  # These variables are for the JavaScript
  - raw_data
  - raw_group_data
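  # Note: Open OnDemand renders the form fields in the order they are listed
  # above, so reordering this list reorders the web form.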
attributes:
  default_kernel:
    label: "Pre-Installed Kernel"
    help: |
      The Quest OnDemand Jupyter instances come with a pre-installed kernel/virtual environment. You can use the dropdown below to choose which pre-installed kernel you would like to have available for your Jupyter session (in addition to all of your user-installed kernels). We recommend leaving this as the default value of `ml-data-science-kernel-py310`.
    widget: select
    options:
      - [ "ml-data-science-kernel-py310", "/software/openondemand/jupyter/jupyter-ood.sif" ]
      - [ "ml-data-science-kernel-py311", "/software/rhel8/quest_ondemand/quest_ood_jupyter/rhel8-jupyter-new-ml-data-science.sif" ]
      - [ "NicheCompass", "/software/rhel8/quest_ondemand/quest_ood_jupyter/niche-compass/nichecompass_jupyter.sif" ]
  jupyterlab_switch:
    widget: "check_box"
    label: "Use JupyterLab instead of Jupyter Notebook?"
    help: |
      JupyterLab is the next generation of Jupyter, and is completely compatible with existing Jupyter Notebooks.
  job_name:
    label: "Name of the Job"
    value: "Jupyter Notebook"
  notebook_directory_root:
    label: "Jupyter root directory (Optional)"
    value: ""
    help: |
      If you leave this blank, Jupyter will run from your HOME folder. Otherwise, specify a full path to a folder on Quest. This is especially important to consider when running Jupyter Notebook, as it limits the folders and files that Jupyter can see.
  bc_num_hours:
    label: "Wall Time (in number of hours)"
    help: |
      Select the maximum number of *hours* you want this session/job to last. You can always terminate your session/job early, but it will automatically be terminated by Slurm once the walltime is hit. This will set the value of the `--time` flag for your job. Your available options for walltime are affected by the partition that you have selected. For more information on walltimes, please see the [walltime section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-walltimes) of our Everything You Need to Know about Using Slurm on Quest page.
    value: 1
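    # Illustrative mapping (an assumption; the exact translation happens in
    # this app's submit script, which is not part of this file): a value of 4
    # would end up as a Slurm option along the lines of `--time=04:00:00`.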
  slurm_partition:
    label: "SLURM Partition"
    help: |
      Select the SLURM partition you will submit this job to. This will set the value of the `-p` or `--partition` flag for your job. This selection will impact a number of other options on this form, including which accounts/allocations and maximum walltimes you can choose and whether or not you have the option to select GPUs. For more information on partitions, please see the [partitions section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-partitions) of our Everything You Need to Know about Using Slurm on Quest page.
    widget: select
  slurm_account:
    label: "SLURM Account"
    help: |
      The Quest allocation/account under which you will be submitting this job. This will set the value of the `-A` or `--account` flag for your job. *Please note that if you do not see an allocation you expect, it means one of the following:* it is a storage-only allocation; it has expired, in which case please request to renew it using the appropriate form that can be [found here](https://www.it.northwestern.edu/secure/forms/research/allocation-request-forms.html); or the partition that you have selected cannot be submitted to from that allocation. More information on this parameter can be found in the [account section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-account) of our Everything You Need to Know about Using Slurm on Quest page.
    widget: select
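    # Note: no partition or account options are hard-coded in this file; they
    # appear to be populated by the app's JavaScript from the hidden
    # raw_data/raw_group_data attributes defined at the bottom of this form.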
  gres_value:
    label: "GPU(s) that you would like to request"
    help: |
      The number and type of GPUs you are requesting for this job, only available for certain partitions. This will set the value of the `--gres` flag for your job. More information on GPUs can be found on the [GPUs on Quest page](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1112).
    widget: select
    default: ""
  num_cores:
    widget: "number_field"
    label: "Number of CPUs/cores/processors"
    value: 1
    help: |
      The number of CPUs you want for this session. This will set the value of the `--ntasks-per-node` flag for your job. Please note that you likely do not need to request more than a single CPU unless you know that your application can be parallelized. Also, please note that the more cores requested, the longer the wait will be for the session to start. For more information on setting the number of cores, please see the [number of cores section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-number-of-cores) of our Everything You Need to Know about Using Slurm on Quest page.
    min: 1
    max: 64
    step: 1
    id: 'num_cores'
  request_more_than_one_node:
    widget: "check_box"
    label: "Request more than a single node (Optional)"
    help: |
      Only check this box if you are confident that your application of choice can utilize CPUs across more than one node. In Python, some of these applications include MPI4py, Dask, and PySpark. Once checked, you will be shown another field where you can request multiple nodes.
  number_of_nodes:
    widget: "number_field"
    label: "Number of nodes being requested"
    value: 1
    min: 1
    step: 1
    id: 'number_of_nodes'
    help: |
      How many nodes you want for this session. This will set the value of the `-N` or `--nodes` flag for your job. As a reminder, check that your application of choice can utilize CPUs across more than one node; such applications/packages include MPI4py, Dask, PySpark, etc. For more information on setting the number of nodes, please see the [number of nodes section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-number-of-nodes) of our Everything You Need to Know about Using Slurm on Quest page.
  memory_per_node:
    widget: "number_field"
    label: "Total memory/RAM needed, in GB"
    value: 5
    help: |
      How much total memory/RAM (in GB) you want for this session. This will set the value of the `--mem` flag for your job. Knowing ahead of time how much memory you need can be difficult. That said, you can use the utility `seff` to profile *completed* jobs on Slurm, which reports the maximum memory used by that job. For more information on setting memory, please see the [memory section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-required-memory) of our Everything You Need to Know about Using Slurm on Quest page.
    min: 1
    max: 243
    step: 1
    id: 'memory_per_node'
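    # Usage sketch for the `seff` utility mentioned above (the job ID is
    # hypothetical):
    #   seff 1234567
    # Its "Memory Utilized" line is a reasonable starting point for this field
    # on future runs.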
  constraint:
    label: "Constraint (Optional)"
    help: |
      Not all Quest compute nodes are the same. We currently have four different generations or architectures of compute nodes which we refer to as quest9, quest10, quest11, and quest12. This will set the value of the `--constraint` flag for your job. For more information on setting the constraint option, please see the [constraint section](https://services.northwestern.edu/TDClient/30/Portal/KB/ArticleDet?ID=1964#section-constraints) of our Everything You Need to Know about Using Slurm on Quest page.
    widget: "select"
  user_email:
    label: "Email Address (Optional)"
    help: |
      Enter your email address if you would like to receive an email when the session starts. Leave blank for no email.
  raw_data:
    label: "Account associations for use by JavaScript"
    widget: hidden_field
    value: "<%= RawSlurmData.read_sinfo_from_file %>"
    cacheable: false
  raw_group_data:
    label: "Linux groups for use by JavaScript"
    widget: hidden_field
    value: "<%= RawSlurmData.rawgroupdata %>"
    cacheable: false
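  # Note on the two hidden fields above: the ERB tags (`<%= ... %>`) are
  # evaluated server-side when the form is rendered, and `cacheable: false`
  # should keep Open OnDemand from reusing a previously cached value, so the
  # JavaScript sees fresh cluster/group data each time. `RawSlurmData` is a
  # site-provided helper that is not defined in this file.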