Update model fields immediately on save #1125
Conversation
(force-pushed from 906887b to 27116fc)
- Tested the `deepcopy` change by removing the code for it and failing `pytest`. Reinstating it works as intended.
- Tested that changes to model fields are saved with no need to restart Jupyter AI. The system logs report the change as intended. Resolves issue "Provider text fields values saved from Chat UI are only applied after Jupyter Lab is restarted" #1118.
- Issue "`profile_name` field ignored for Amazon Bedrock Chat Models in jupyter-ai Extension" #1007: used a blank `profile_name` and a new region, and the saved model works well.
- Issue "Missing Request Schema for SageMaker Endpoint in jupyter-ai Extension" #1006 (SageMaker endpoint request schema error): this still needs to be tried, though there may not be JAI users who are calling SageMaker for their models.
- Issue "api_version in GUI not considered, need to set OPENAI_API_VERSION" #807: to be tested, but I have no access to Azure. Hopefully @OliverKleinBST can test this.
I think this should go in, but I suspect I do not have a way to test it (no access to the Amazon models nor Azure, which are the ones that are parametrized). Maybe it would be good to expose some model parameters (e.g. temperature) as fields so that this could be tested easily by others?
To clarify, I think this is not a result of this PR but has existed since #711. Still, it might be that fixing these together is easier than fixing them separately.
All LGTM.
@krassowski Thank you for the callout. I agree it would be good to also include a fix for completion fields before the next patch release. However, I think it should be done in a separate PR since it can be done independently. I can help with these changes; it shouldn't be too much work. Proceeding to merge. Thank you @krassowski and @srdas for the testing & feedback!
(force-pushed from 27116fc to 5697888, then from 5697888 to 61a4de7)
I'm unlikely to find time this week; if you see how to fix it easily, feel free to go ahead. Also, I agree a separate PR would be better.
@krassowski No worries! Take care.
@meeseeksdev please backport to v3-dev |
Co-authored-by: david qiu <[email protected]>
Confirm that it also solves the problem reported in #807.
Description

Fixes issue "`profile_name` field ignored for Amazon Bedrock Chat Models in jupyter-ai Extension" #1007. Fixes usage of model fields by respecting the updated values immediately on save (without requiring a restart).

Model fields are used to specify provider-specific keyword arguments directly from Jupyter AI's Settings UI. These include `Base API URL` for OpenAI, `Region name` for Amazon Bedrock, etc.

Demo
Screen.Recording.2024-11-27.at.10.33.12.AM.mov
Details for contributors
- Fixes a bug introduced by "Setting default model providers" #421: `BaseProvider.get_model_parameters()` would return the fields set by the existing `config.json` on init, instead of the user-specified overrides in the `AiExtension.model_parameters` trait. This was caused by `ConfigManager` accidentally merging the existing config into `self.settings["model_parameters"]` itself, instead of into a deep copy of that dictionary.
- Fixes a bug introduced by "Distinguish between completion and chat models" #711: model fields were not being returned at all in the dictionary returned by `ConfigManager.lm_provider_params`. However, the fields from `config.json` on init still got passed to the LLM due to the bug introduced by #421. Essentially, the bug introduced by #421 "covered up" the bug introduced by #711. This explains why the issue was difficult to reproduce and why a restart fixed it.
- Makes the `allowed_providers`, `blocked_providers`, `allowed_models`, and `blocked_models` arguments to `ConfigManager` optional, defaulting to `None` as indicated by their type signatures.
- Adds unit test coverage for both aforementioned bugs.
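The deep-copy bug described above can be illustrated with a minimal, self-contained sketch (the dictionary shape and function names here are hypothetical stand-ins, not the actual `ConfigManager` code): calling `.update()` on the dictionary returned by `dict.get()` mutates the stored defaults in place, whereas deep-copying first leaves them untouched.

```python
from copy import deepcopy

# Hypothetical stand-in for the manager's stored defaults.
defaults = {"model_parameters": {"region_name": "us-east-1"}}

def merge_buggy(overrides):
    # BUG: `get` returns a reference to the nested dict, so this
    # update mutates `defaults` itself; later reads see merged values.
    params = defaults.get("model_parameters")
    params.update(overrides)
    return params

def merge_fixed(overrides):
    # FIX: deep-copy first so `defaults` is never mutated.
    params = deepcopy(defaults.get("model_parameters"))
    params.update(overrides)
    return params

merge_fixed({"region_name": "eu-west-1"})
assert defaults["model_parameters"]["region_name"] == "us-east-1"  # untouched

merge_buggy({"region_name": "eu-west-1"})
assert defaults["model_parameters"]["region_name"] == "eu-west-1"  # polluted
```

The same aliasing applies to any mutable value fetched from a shared settings dictionary, which is why the fix reverts cleanly by removing the `deepcopy` call.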
Testing instructions

- In `config.py`, revert `deepcopy(self._defaults.get(config_key))` to `self._defaults.get(config_key)` near line 240.
- In `config.py`, remove `**fields,` near line 475.
- Run `pytest` and verify that the 2 added tests fail. This asserts that the tests actually capture the bugs introduced by "Setting default model providers" #421 and "Distinguish between completion and chat models" #711.
- Undo the two changes above, run `pytest` again, and verify that the 2 added tests now pass.
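A regression test for this class of bug can be sketched roughly as follows (a standalone illustration under assumed names, not the actual tests added in this PR): it checks both that user overrides are applied and that the stored defaults are not mutated by the merge.

```python
from copy import deepcopy

def load_params(defaults: dict, user_overrides: dict) -> dict:
    """Mirror the fixed behavior: deep-copy defaults before merging overrides."""
    params = deepcopy(defaults)
    params.update(user_overrides)
    return params

def test_overrides_do_not_leak_into_defaults():
    defaults = {"region_name": "us-east-1"}
    merged = load_params(defaults, {"region_name": "eu-west-1"})
    # The caller sees the override...
    assert merged["region_name"] == "eu-west-1"
    # ...while the stored defaults stay pristine, so no restart is needed
    # for a later save to take effect.
    assert defaults["region_name"] == "us-east-1"
```

With the buggy (non-copying) merge, the second assertion fails, which is exactly the failure mode the reverted-code step above is meant to surface.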