Specify default address to look for scheduler #239
This is already possible today using the `defaultURL` setting value. Doing this as part of a deployment would look like:

1. Identify the relevant server address.
2. Prior to users loading the page (not necessarily prior to the Jupyter server startup, but might as well be), put the setting value in an [overrides.json](https://jupyterlab.readthedocs.io/en/stable/user/directories.html#overrides-json) file for JupyterLab to pick up. This could be baked into the environment if it's a stable URL, or done as part of some setup script.

Clearly, I should put a bit of effort into docs here...

---

Is there an environment variable that I could set somewhere? See dask/distributed#6737 for context.

---
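The `overrides.json` approach described above can be sketched as a small deployment helper. This is a minimal sketch, not the extension's documented API: the settings namespace `dask-labextension:plugin`, the `write_overrides` helper, and the example URL are assumptions to be checked against the installed extension's settings schema.

```python
import json
import os
import sys


def write_overrides(settings_dir: str, default_url: str) -> str:
    """Write an overrides.json presetting the labextension's scheduler URL.

    The "dask-labextension:plugin" namespace is an assumption -- check the
    extension's settings schema for the exact key.
    """
    os.makedirs(settings_dir, exist_ok=True)
    path = os.path.join(settings_dir, "overrides.json")
    overrides = {"dask-labextension:plugin": {"defaultURL": default_url}}
    with open(path, "w") as f:
        json.dump(overrides, f, indent=2)
    return path


# In a deployment, JupyterLab reads overrides from its application settings
# directory, typically <sys.prefix>/share/jupyter/lab/settings/:
lab_settings_dir = os.path.join(sys.prefix, "share", "jupyter", "lab", "settings")
```

A setup script would call `write_overrides(lab_settings_dir, "http://localhost:8787")` before users load the page, either at image build time (if the URL is stable) or at startup.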
Not with the current design -- the default URL that populates the search bar is decided on the frontend, and feeding information to the frontend goes through the config system (i.e., environment variables aren't directly visible to it). Is there an issue with writing a small config file in that case, or is it just more convenient to set an environment variable?

---
So I would do something like the following before starting up the Jupyter server?

```python
import json

with open("overrides.json", mode="w") as f:
    f.write(json.dumps(...))
```

---
Yes, something like that, at least for a proof of concept. A more complete solution might be to use json5 and merge with other possible config options. To be clear, we could have some kind of translation layer between the Dask config system and the JupyterLab one, but we'd have to build it. I'm a little reticent to build out a new set of special-case environment variables rather than go through the existing path. I know that some JupyterHub/QHub/2i2c deployments also need to distribute custom settings.

---
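The merge idea above can be sketched as follows. This is a hedged sketch using plain `json` rather than json5 (the latter would need a third-party parser); the `merge_into_overrides` helper and the `dask-labextension:plugin` namespace are illustrative assumptions, not existing API.

```python
import json
import os


def merge_into_overrides(path: str, namespace: str, settings: dict) -> dict:
    """Merge new settings into an existing overrides.json instead of clobbering it.

    The namespace passed in below ("dask-labextension:plugin") is an assumed
    settings key; check the extension's settings schema for the real one.
    """
    current = {}
    if os.path.exists(path):
        with open(path) as f:
            current = json.load(f)
    # Only update the one namespace; leave any other plugins' overrides intact.
    current.setdefault(namespace, {}).update(settings)
    with open(path, "w") as f:
        json.dump(current, f, indent=2)
    return current


merged = merge_into_overrides(
    "overrides.json",
    "dask-labextension:plugin",               # assumed settings namespace
    {"defaultURL": "http://localhost:8787"},  # hypothetical scheduler dashboard URL
)
```

Calling it again for a different namespace preserves the earlier entries, which is the point of merging rather than overwriting.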
The frontend chooses in order:

I also noticed when kicking the tires on this that the user-populated URL can be a bit too sticky at the moment (you can reset it with a

---
@ian-r-rose and I spoke. There is some possibility of using the system that currently sends the default at-start-time clusters up to the frontend. This is low enough priority, though, that we're going to wait until jupyter-on-dask becomes more of a major thing (maybe never).

---
If we switched out the internals for `dask-ctl` this would be handled automatically by the cluster discovery. Discovered clusters would be listed automatically in the sidebar. xref #189

---

Recall, Jacob, that in this case we don't have any Cluster objects, just a scheduler address.

---
I am not sure that this would be insurmountable in a refactor to use

---
@mrocklin that should be fine.

I've been down the same thought process too. The trouble is that cluster objects are generally the only place where we can actually represent the abstract concept of a cluster. Dask Gateway and the Dask Kubernetes Operator both have other ways to store and represent this internally, but most other deployment mechanisms don't. My goal with

---
This seems like it could be a good solution -- thanks for the explanation @jacobtomlinson. I'll see if I can put together an example using dask/distributed#6737. I'm getting more excited about the possibility of integrating

---
I'm also interested in providing a default address. I tried the following in

Thanks for your help.

---
So, I'm in an interesting situation where I'm running a Jupyter server and I know that it will have exactly one Dask cluster attached to it. I would like to populate the Dask labextension with that scheduler address on startup. Is this easy to do?