
Config from file outside home directory #953

Open
ctcjab opened this issue Aug 13, 2024 · 17 comments
@ctcjab
Contributor

ctcjab commented Aug 13, 2024

Context: I am developing a custom image for my JupyterHub users that has jupyter-ai installed and configured out-of-the-box. The custom image is built on top of the jupyter-minimal base image.

Right now I can only get this to work if I put the configuration in $HOME/.local/share/jupyter/jupyter_ai/config.json. It would be preferable if I could put this in a jupyter data dir outside the home directory and still have jupyter-ai pick it up (e.g. /usr/share/jupyter/jupyter_ai/config.json), but that doesn't work.

I also tried following the docs I found here and moving the config to /etc/jupyter/jupyter_jupyter_ai_config.json, but that didn't work either. Any help would be much appreciated.

Thanks, and thanks for maintaining Jupyter-AI!

@krassowski
Member

What is your end goal? To hide the tokens?

@ctcjab
Contributor Author

ctcjab commented Aug 13, 2024

No, it's fine for users to be able to view the config file (including tokens).

The goal is to have all default config that I provide living outside users' home directories, i.e. coming from my custom image. Config within users' home directories should be all their own.

When my system-level config has to live in users' home directories, it is persisted to their backing persistent volume in Kubernetes, which shadows the home directory in my custom image. This means that if I have to push an update out to my system-level config, I can't just change a file in my custom image, I instead have to run some migration script at container startup to first check their potentially-customized config in their home directory, and ideally detect and preserve any customizations they made that don't conflict with the changes I need to make.
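For illustration, the kind of startup migration I mean could be sketched like this (the paths and the shallow merge policy are made up for the sketch; a real script would need per-key conflict handling):

```python
import json
from pathlib import Path

def merge_config(image_defaults: dict, user_config: dict) -> dict:
    """Overlay the user's customizations on top of the image-provided
    defaults. User keys win; new default keys are added. This is a
    shallow merge; nested conflicts need smarter handling."""
    merged = dict(image_defaults)
    merged.update(user_config)
    return merged

def migrate(default_path: Path, user_path: Path) -> None:
    """Run at container startup: seed or update the user's config."""
    defaults = json.loads(default_path.read_text())
    if user_path.exists():
        merged = merge_config(defaults, json.loads(user_path.read_text()))
    else:
        merged = defaults
    user_path.parent.mkdir(parents=True, exist_ok=True)
    user_path.write_text(json.dumps(merged, indent=2))
```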

@ctcjab
Contributor Author

ctcjab commented Aug 13, 2024

If this isn't currently supported, is there a workaround you'd suggest? E.g. some way to pass a CLI param to specify a custom path where jupyter-ai should look for its config?

Also meant to say before, thanks for the quick response!

@krassowski
Member

I am surprised that moving it to /etc/jupyter/jupyter_jupyter_ai_config.json does not work for you. If this is in jupyter --paths, it should be picked up. Do users have read access to /etc/jupyter/jupyter_jupyter_ai_config.json?

Of note, jupyter-ai has two types of config files, both JSON: one is the standard traitlets config, and one is an internal format that should not be modified by the user.

@ctcjab
Contributor Author

ctcjab commented Aug 22, 2024

/etc/jupyter does appear in jupyter --paths, and the container user does have read access to /etc/jupyter/jupyter_jupyter_ai_config.json. However, when I move $HOME/.local/share/jupyter/jupyter_ai/config.json to /etc/jupyter/jupyter_jupyter_ai_config.json, the AI extension no longer picks up the config. Here is a screenshot where you can see this, as well as the contents of my config:
[Screenshot: 2024-08-22 at 10 35 21 AM]
I also see nothing about the AI extension's config logged by the server (including whether the file was even found) under the above setup.

I also tried wrapping this config inside the two extra layers that appear here ({"AiExtension": {"model_parameters": {...}}}), but that resulted in the following error logged by the server:

[C 2024-08-22 14:47:17.211 AiExtension] Bad config encountered during initialization: Values of the 'model_parameters' trait of an AiExtension instance must be a dict, but a value of 'openai-chat:dls-gpt-4o' <class 'str'> was specified.

I don't understand this error message, since the JSON I'm providing does associate the "model_parameters" key with a dict (JSON object), not a string.

Can you please confirm what you expect the contents of my /etc/jupyter/jupyter_jupyter_ai_config.json should be in order for this to work? If there is anything else I should be doing to debug this and report back (e.g. enabling more logging output), please let me know.

Again, when /etc/jupyter/jupyter_jupyter_ai_config.json is moved back to $HOME/.local/share/jupyter/jupyter_ai/config.json, the AI extension loads it as expected, and the user gets the extension already-configured the first time they click into it:
[Screenshot: 2024-08-22 at 11 47 50 AM]

Thanks again for your help with this.

@krassowski
Member

The screenshot confirms what I mentioned about the two config file formats. You are mixing them up (not that the documentation is super clear here). The jupyter_jupyter_ai_config.json file is the traitlets approach as documented in https://jupyter-ai.readthedocs.io/en/latest/users/index.html#configuration, and only the options in the format specified there are supported; this format lives in the directories reported under the config section of jupyter --paths.

The config.json file is a separate, internal format which lives under a subdirectory of the directories in the data paths reported by jupyter --paths.
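To summarize the distinction (example paths taken from a typical jupyter --paths output):

```
# Supported traitlets config - lives in a *config* path:
/etc/jupyter/jupyter_jupyter_ai_config.json
    {"AiExtension": {...}}

# Internal storage, not meant for manual editing - lives under a *data* path:
~/.local/share/jupyter/jupyter_ai/config.json
    {"model_provider_id": ..., "api_keys": {...}}
```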

[C 2024-08-22 14:47:17.211 AiExtension] Bad config encountered during initialization: Values of the 'model_parameters' trait of an AiExtension instance must be a dict, but a value of 'openai-chat:dls-gpt-4o' <class 'str'> was specified.

Can you show the exact config that you tried?

@ctcjab
Contributor Author

ctcjab commented Aug 23, 2024

Thanks @krassowski, I thought that might be the case. Here is my working config.json:

(base) jovyan@6a1f0f3964bf:~$ cat ~/.local/share/jupyter/jupyter_ai/config.json
{
  "model_provider_id": "openai-chat:dls-gpt-4o",
  "embeddings_provider_id": null,
  "send_with_shift_enter": false,
  "fields": {
    "openai-chat:dls-gpt-4o": {
      "openai_api_base": "https://ailab-litellm-proxy-dev.chicagotrading.io",
      "openai_proxy": ""
    }
  },
  "api_keys": {
    "OPENAI_API_KEY": "<redacted>"
  },
  "completions_model_provider_id": "openai-chat:dls-gpt-4o",
  "completions_fields": {}
}

...except that it doesn't work in JupyterHub on Kubernetes due to the home directory being shadowed by the persistent volume.

I tried moving this config file into a subdirectory of one of the data paths that is outside the home directory, but then the AI extension no longer picks it up:

$ jupyter --paths
config:
    /home/jovyan/.jupyter
    /home/jovyan/.local/etc/jupyter
    /opt/conda/etc/jupyter
    /usr/local/etc/jupyter
    /etc/jupyter
data:
    /home/jovyan/.local/share/jupyter
    /opt/conda/share/jupyter
    /usr/local/share/jupyter
    /usr/share/jupyter
runtime:
    /home/jovyan/.local/share/jupyter/runtime

$ mkdir /opt/conda/share/jupyter/jupyter_ai

$ mv -iv .local/share/jupyter/jupyter_ai/config.json /opt/conda/share/jupyter/jupyter_ai/
renamed '.local/share/jupyter/jupyter_ai/config.json' -> '/opt/conda/share/jupyter/jupyter_ai/config.json'

$ start-notebook.py  # AI extension is no longer pre-configured, did not find the config file in the new location

@krassowski
Member

krassowski commented Aug 24, 2024

I would recommend using jupyter_jupyter_ai_config.json because config.json is not the supported way to configure the extension but an internal configuration storage which may change, as per #503 (see also my comment advocating for making it possible to use jupyter_jupyter_ai_config.json in use cases like yours in this PR discussion).

As for the path, here is the code:

import os

from jupyter_core.paths import jupyter_data_dir

# default path to config
DEFAULT_CONFIG_PATH = os.path.join(jupyter_data_dir(), "jupyter_ai", "config.json")

The data dir is documented in more detail here: https://docs.jupyter.org/en/latest/use/jupyter-directories.html#data-files

Briefly, you can check which of the data dirs is "the" selected data dir with:

jupyter --data-dir

and you can adjust it by setting the JUPYTER_DATA_DIR environment variable.
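Roughly, the resolution works like this (a simplified stdlib sketch, not the actual jupyter_core code; only the Linux default is shown):

```python
import os
from pathlib import Path

def data_dir() -> str:
    """Simplified sketch of jupyter_core.paths.jupyter_data_dir():
    an explicit JUPYTER_DATA_DIR always wins; otherwise fall back
    to the platform default (Linux shown here)."""
    env = os.environ.get("JUPYTER_DATA_DIR")
    if env:
        return env
    return str(Path.home() / ".local" / "share" / "jupyter")

# jupyter-ai then reads its internal config from:
#   <data_dir>/jupyter_ai/config.json
```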

@ctcjab
Contributor Author

ctcjab commented Aug 26, 2024

Thank you for the warning about using config.json. I would switch to jupyter_jupyter_ai_config.json, but it appears that it does not support all the settings that are required in order for the user to get a fully-configured AI extension out-of-the-box. I guess that is tracked by #505? In the meantime, I can keep using config.json and pin the jupyter-ai version we're using to control when I pick up any breaking changes. Does that sound right?

Running jupyter --data-dir (in any of the jupyter-docker-stacks containers) results in /home/jovyan/.local/share/jupyter. I wouldn't want to use JUPYTER_DATA_DIR to change this, as the user should still be able to store their data in their home directory and have it persisted in their PV on Kubernetes. The need here is for administrators to be able to supply a configuration file in any of the data dirs outside the home directory, which the AI extension picks up when no config file is found in the home directory data dir. But it seems this is not supported based on the results I shared in my previous comment, is that right?

I am already using /opt/conda/share/jupyter/lab/settings/overrides.json to accomplish this with all the other extensions whose settings I need to provide custom defaults for my users[1], and it works great – users get all these extensions configured out-of-the-box, but if they need to customize their settings further, they can do so (including through the JupyterLab settings UI) and their customizations get persisted in their PV-backed home directory, which takes precedence over my overrides.json. If jupyter-ai could look in overrides.json like these other extensions, that would make things simpler and more consistent for users and for administrators like me. What do you think?

[1] Here are all the other extensions that this approach works with, in case it's helpful:

❯ grep '@' overrides.json
  "@jupyterlab-contrib/spellchecker:plugin": {
  "@jupyterlab/apputils-extension:notification": {
  "@jupyterlab/completer-extension:inline-completer": {
      "@jupyterlab/inline-completer:history": {
      "@jupyterlab/jupyter-ai": {
  "@jupyterlab/completer-extension:manager": {
  "@jupyterlab/console-extension:tracker": {
  "@jupyterlab/extensionmanager-extension:plugin": {
  "@jupyterlab/filebrowser-extension:browser": {
  "@jupyterlab/fileeditor-extension:plugin": {
  "@jupyterlab/notebook-extension:tracker": {
  "@jupyterlab/shortcuts-extension:shortcuts": {
  "@jupyterlab/terminal-extension:plugin": {
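For reference, one entry in my overrides.json looks roughly like this (the setting values here are illustrative, not my real ones):

```json
{
  "@jupyterlab/completer-extension:inline-completer": {
    "providers": {
      "@jupyterlab/inline-completer:history": {
        "enabled": false
      },
      "@jupyterlab/jupyter-ai": {
        "enabled": true
      }
    }
  }
}
```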

@krassowski
Member

krassowski commented Aug 26, 2024

I would switch to jupyter_jupyter_ai_config.json, but it appears that it does not support all the settings that are required in order for the user to get a fully-configured AI extension out-of-the-box

Which specific ones?

@krassowski
Member

If jupyter-ai could look in overrides.json like these other extensions, that would make things simpler and more consistent for users and for administrators like me. What do you think?

Yes and no. I do think that some settings, like the current provider/model, should be configured via the JupyterLab configuration system (which overrides.json feeds into). However, the API keys should never be sent to the frontend (and all other JupyterLab configuration settings are); I think this is why the folks who originally developed jupyter-ai created config.json. Only later was the jupyter_jupyter_ai_config.json approach integrated (which is the supported Jupyter way of configuring server-side settings that do not get propagated to the frontend).

Wouldn't want to use JUPYTER_DATA_DIR to change this, as the user should still be able to store their data in their home directory and have it persisted in their PV on Kubernetes. The need here is for administrators to be able to supply a configuration file in any of the data dirs that is outside the home directory which the AI extension picks up when no config file is found in the home directory data dir. But it seems this is not supported based on the results I shared in my previous comment, is that right?

Yes, because config.json is a private implementation detail which is not meant to be manipulated by system administrators. Instead, you should use jupyter_jupyter_ai_config.json, which is the supported way to configure jupyter-ai. If specific things are not yet configurable that way, please open issues for them and contribute a patch (or enable someone to contribute one).

@ctcjab
Contributor Author

ctcjab commented Aug 27, 2024

Yes and no. I do think that some settings, like the current provider/model, should be configured via the JupyterLab configuration system (which overrides.json feeds into). However, the API keys should never be sent to the frontend (and all other JupyterLab configuration settings are); I think this is why the folks who originally developed jupyter-ai created config.json. Only later was the jupyter_jupyter_ai_config.json approach integrated (which is the supported Jupyter way of configuring server-side settings that do not get propagated to the frontend).

Thanks for explaining. As long as there is some way for administrators to provide default values for all the extension's required settings in a file outside each user's home directory, that would satisfy the need here.

If specific things are not yet configurable that way please open issues for them and contribute a patch (or enable someone to contribute a patch).

Will do. To facilitate this (and to answer your previous question), could the https://jupyter-ai.readthedocs.io/en/latest/users/index.html#configuring-as-a-config-file documentation please be updated to document the schema for this configuration? Currently it only provides a small example config that is apparently an incomplete sample of the settings that can be configured via this file. In particular, it is not clear how to translate the settings I shared previously into this other format. And when you provide invalid config, the error message makes it hard to figure out what's wrong:

[C 2024-08-22 14:47:17.211 AiExtension] Bad config encountered during initialization: Values of the 'model_parameters' trait of an AiExtension instance must be a dict, but a value of 'openai-chat:dls-gpt-4o' <class 'str'> was specified.

I don't understand this error message, since the JSON I'm providing does associate the "model_parameters" key with a dict (JSON object), not a string.

Thank you for your ongoing help with this!

@krassowski
Member

could the https://jupyter-ai.readthedocs.io/en/latest/users/index.html#configuring-as-a-config-file documentation please be updated to document the schema for this configuration

This is a fair ask. Traitlets has a way to auto-generate documentation for traits (.document_config_options()); I think this should be added to the jupyter-ai docs. For example, this is how jupyter-server does it.

As a user you can either submit a PR for the above, or use command line:

python -c 'import jupyter_ai; print(jupyter_ai.AiExtension().document_config_options())'

@ctcjab
Contributor Author

ctcjab commented Aug 30, 2024

Thanks. I'm still having trouble getting this to work, but the trick you shared helped me get farther:

I figured out that if I replace the ~/.local/share/jupyter/jupyter_ai/config.json that I shared above with an /opt/conda/etc/jupyter/jupyter_jupyter_ai_config.json that contains the following...

{
  "AiExtension": {
    "default_language_model": "openai-chat:dls-gpt-4o",
    "default_api_keys": {"OPENAI_API_KEY": "<redacted>"},
    "default_url": "https://ailab-litellm-proxy-dev.chicagotrading.io",
    "model_parameters": {
      "openai-chat:dls-gpt-4o": {
        "model_kwargs": {
        }
      }
    }
  }
}

...then the AI extension UI starts out appearing to be configured (i.e. it looks like the second screenshot in this comment above rather than the first one). That is good progress.

However, when I try to use the chatbot, I get errors like:

[E 2024-08-30 00:53:38.671 AiExtension] Error code: 401 - {'error': {'message': 'Incorrect API key provided: <redacted>. You can find your API key at https://platform.openai.com/account/api-keys.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_api_key'}}

This occurs even though I'm setting OPENAI_API_KEY to the same value I used in the old, working config.json that I shared above. So there must be something about the new config that isn't properly translated over from the old. Can you tell what's wrong?

I'll paste the old, working config again here, for convenience:

$ cat ~/.local/share/jupyter/jupyter_ai/config.json
{
  "model_provider_id": "openai-chat:dls-gpt-4o",
  "embeddings_provider_id": null,
  "send_with_shift_enter": false,
  "fields": {
    "openai-chat:dls-gpt-4o": {
      "openai_api_base": "https://ailab-litellm-proxy-dev.chicagotrading.io",
      "openai_proxy": ""
    }
  },
  "api_keys": {
    "OPENAI_API_KEY": "<redacted>"
  },
  "completions_model_provider_id": "openai-chat:dls-gpt-4o",
  "completions_fields": {}
}
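To make the question concrete, here is my best guess at how the old internal fields translate into the traitlets format, based only on what we've established in this thread (the mapping is approximate, and any mistake in it may be exactly what's going wrong):

```python
def internal_to_traitlets(internal: dict) -> dict:
    """Best-guess translation of the internal config.json fields into a
    jupyter_jupyter_ai_config.json body. Only the fields used in this
    thread are handled; this is not an official mapping."""
    ai = {}
    if "model_provider_id" in internal:
        ai["default_language_model"] = internal["model_provider_id"]
    if internal.get("api_keys"):
        ai["default_api_keys"] = dict(internal["api_keys"])
    if internal.get("fields"):
        # Guess: per-model "fields" become per-model "model_parameters".
        ai["model_parameters"] = {
            model: dict(params) for model, params in internal["fields"].items()
        }
    return {"AiExtension": ai}
```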

I combed through the AiExtension().document_config_options() output, but it still leaves several questions unanswered. For example, some fields have no description:

AiExtension.default_url : Unicode
    Default: ``''``

    No description

Other fields just document an underspecified shape like Dict, without describing the schema within that dict or giving any example:

AiExtension.model_parameters : Dict
    Default: ``{}``

    Key-value pairs for model id and corresponding parameters that
            are passed to the provider class. The values are unpacked and passed to
            the provider class as-is.


AiExtension.settings : Dict
    Default: ``{}``

    Settings that will passed to the server.

Thanks again for your ongoing help with this.

@krassowski
Member

I do not see anything wrong with how you specify default_api_keys; I know it works, as I am using it in production. I would suggest checking what gets populated in the internal config.json.

default_url is inherited from the ExtensionApp class; it is irrelevant for most use cases (unless you are running the extension standalone, which has some security benefits). Please feel welcome to submit a PR improving the model_parameters documentation; it is defined here:

model_parameters = Dict(
    key_trait=Unicode(),
    value_trait=Dict(),
    default_value={},
    help="""Key-value pairs for model id and corresponding parameters that
    are passed to the provider class. The values are unpacked and passed to
    the provider class as-is.""",
    allow_none=True,
    config=True,
)
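This trait definition also explains the "must be a dict" error seen earlier: every value in model_parameters has to be a dict, keyed by model id. A minimal stdlib re-implementation of the check (not the actual traitlets code) shows the failure mode:

```python
def validate_model_parameters(value: dict) -> None:
    """Mimics traitlets' Dict(key_trait=Unicode(), value_trait=Dict()):
    keys must be strings and every value must itself be a dict."""
    for model_id, params in value.items():
        if not isinstance(model_id, str):
            raise TypeError(f"model id must be a str, got {model_id!r}")
        if not isinstance(params, dict):
            raise TypeError(
                "Values of the 'model_parameters' trait must be a dict, "
                f"but a value of {params!r} ({type(params).__name__}) was specified."
            )

# OK: value is a dict of parameters
validate_model_parameters({"openai-chat:gpt-4o": {"temperature": 0}})
# Fails: value is a bare string, like in the logged error
# validate_model_parameters({"some_key": "openai-chat:gpt-4o"})  -> TypeError
```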

@dcieslak19973

I work with @ctcjab and have made some progress based on your suggestions. The last remaining issue seems to be getting inline completions configured this way.

The current configuration that I have is as follows (some strings replaced with REDACTED; if the altered strings look potentially misaligned, it is more likely a copy-paste error than a configuration error):

{
  "AiExtension": {
    "completions_model_provider_id": "azure-chat-openai:REDACTED",
    "completion_fields": {
      "azure-chat-openai:REDACTED": {
        "azure_endpoint": "https://lite-llmproxy.REDACTED.REDACTED",
        "openai_api_version": "2024-05-01-preview",
        "openai_api_key": "REDACTED"
      }
    },
    "default_language_model": "azure-chat-openai:REDACTED",
    "default_api_keys": {"AZURE_OPENAI_API_KEY":"REDACTED"},
    "embeddings_provider_id": null,
    "fields": {
      "azure-chat-openai:REDACTED": {
        "azure_endpoint": "https://lite-llmproxy.REDACTED.REDACTED",
        "openai_api_version": "2024-05-01-preview",
        "openai_api_key": "REDACTED"
      }
    },
    "model_provider_id": "azure-chat-openai:REDACTED",
    "model_parameters": {
      "azure-chat-openai:REDACTED": {
        "azure_endpoint": "https://lite-llmproxy.REDACTED.REDACTED",
        "openai_api_version": "2024-05-01-preview",
        "openai_api_key": "REDACTED"
      }
    },
    "send_with_shift_enter": false
  }
}

It seems like completions_model_provider_id may only get picked up from the $HOME/.local/share/jupyter/jupyter_ai/config.json file and not from the jupyter_jupyter_ai_config.json file.

@krassowski
Member

That's correct; this is tracked in #781. Please see the discussion there: your feedback on what the configuration should look like will help move it forward (as would opening a PR or enabling someone to spend time on it).
