Allow to swap the DefaultInlineCompletionHandler
#702
The traitlet approach is commonly used for swapping classes and can be configured in standard Jupyter config files. For example, it is used by `jupyter-scheduler`:

```python
scheduler_class = Type(
    default_value="jupyter_scheduler.scheduler.Scheduler",
    klass="jupyter_scheduler.scheduler.BaseScheduler",
    config=True,
    help=_i18n("The scheduler class to use."),
)
```

and extensively in …. While ….

However, currently these entry points act as extension points rather than points for overriding the default handlers. In particular, a long-standing issue is that we cannot use ….

@dlqqq @JasonWeill any thoughts/preferences?
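As a hedged sketch of what the traitlet-based swap could look like in jupyter-ai — the class names and the `handler_class` trait below are hypothetical, modelled on the `scheduler_class` pattern above, not the project's actual API:

```python
from traitlets import Type
from traitlets.config import Config, Configurable


class BaseCompletionHandler:
    """Hypothetical base class that custom handlers would subclass."""


class DefaultCompletionHandler(BaseCompletionHandler):
    """Hypothetical stand-in for DefaultInlineCompletionHandler."""


class MyInfillHandler(BaseCompletionHandler):
    """A user-provided replacement tuned for code-infill models."""


class CompletionApp(Configurable):
    # Mirrors the scheduler_class pattern: any subclass of the base
    # class can be supplied via standard Jupyter config files.
    handler_class = Type(
        default_value=DefaultCompletionHandler,
        klass=BaseCompletionHandler,
        config=True,
        help="The inline completion handler class to use.",
    )


# Equivalent of `c.CompletionApp.handler_class = "mymod.MyInfillHandler"`
# in a jupyter_server_config.py:
config = Config()
config.CompletionApp.handler_class = MyInfillHandler
app = CompletionApp(config=config)
```

With no config supplied, the default class is used, so the swap is opt-in and backward compatible.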
Thinking more about it, there is a clear case to favour the traitlets approach: ….
Sorry if I'm intruding, but as a user who has been extending jupyter-ai for use in my organisation, my personal preference would be for an entry-point approach with different completion providers, mostly for consistency with the LLM and embedding providers (I've been a bit more detailed in #669). There are customisations in the current default handler that would make more sense to me to implement in a completion-provider subclass instead. My preference would be to generalise the completion handler and have most of the current logic (prompt templates and `llm_chain`) live in a subclass of a base completions-provider class that works easily with the current LLM providers, while the base completions-provider class would be more general, allowing extensions to implement other logic (potentially non-langchain) to provide completions. Thanks for your consideration.
I think I agree. FYI, I previously moved the prompt templates from handlers to providers in #581. The provider could get two methods with default implementations: ….

This would be backward compatible. The only downside is extending the API surface of the ….
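A minimal sketch of what two such overridable methods might look like — the method names (`get_completion_prompt`, `post_process_completion`), the default bodies, and the `<EOT>` stop token are assumptions for illustration, not jupyter-ai's actual API:

```python
class BaseProvider:
    """Hypothetical provider base with two overridable completion hooks."""

    def get_completion_prompt(self, prefix: str, suffix: str) -> str:
        # Default: a generic template, roughly matching today's behaviour.
        return f"Complete the following code:\n{prefix}"

    def post_process_completion(self, suggestion: str) -> str:
        # Default: simple cleanup, e.g. stripping stray backticks.
        return suggestion.strip("`").strip()


class FimProvider(BaseProvider):
    """Provider for a fill-in-the-middle model (token names assumed)."""

    def get_completion_prompt(self, prefix: str, suffix: str) -> str:
        # Infill models consume prefix AND suffix, not just the prefix.
        return f"<PRE>{prefix}<SUF>{suffix}<MID>"

    def post_process_completion(self, suggestion: str) -> str:
        # Infill models need different cleanup: cut at the stop token.
        return suggestion.split("<EOT>")[0]
```

Existing providers would inherit the defaults untouched, which is what makes the change non-breaking.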
Although I am not totally against overloading the same LLM provider class, as I can see how you could get it to work, I would have a preference for a separate provider for inline completion logic. I think it would be cleaner to separate the provider class for chat from the provider class for completion.
I do not disagree.
True
Right, this is getting addressed by the ability to mark specific models as suitable for completion only, for chat only, or for both in #711.
Sure. It does not seem pretty to me, but of course we can code anything up. All this is to say that while I mostly agree with you, I also think that the way to get things merged and released today is by making small incremental changes in a non-breaking fashion, and once it is time for a new major release, then rewriting the class hierarchy following the lessons we learned along the way.
Problem
`DefaultInlineCompletionHandler` makes some choices which may be sub-optimal when using a model tuned for code-infill tasks. For example, the reliance on the langchain template system here:
jupyter-ai/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
Lines 45 to 47 in 4722dc7
makes it hard to use structured prefix/suffix queries as described in
#669.
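To illustrate why a single flat template is limiting: infill-tuned models expect the prefix and suffix arranged with model-specific sentinel tokens, which a prefix-only template cannot express. The token layouts below follow the published Code Llama and StarCoder infilling formats, but treat the exact spacing as illustrative:

```python
def generic_template(prefix: str, suffix: str) -> str:
    # Roughly what a single shared "continue the code" template can
    # express: the suffix after the cursor is simply discarded.
    return f"Continue this code:\n{prefix}"


def codellama_infill(prefix: str, suffix: str) -> str:
    # Code Llama's infilling layout: prefix and suffix framed by
    # sentinel tokens, with generation happening at <MID>.
    return f"<PRE> {prefix} <SUF>{suffix} <MID>"


def starcoder_infill(prefix: str, suffix: str) -> str:
    # StarCoder's fill-in-the-middle (FIM) token layout.
    return f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
```

A swappable handler (or provider hook) could pick the right layout per model, which the current single template cannot.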
Further, the post-processing assumptions may differ when using an infill model:
jupyter-ai/packages/jupyter-ai/jupyter_ai/completions/handlers/default.py
Lines 130 to 132 in 4722dc7
Proposed Solution
Either:

- allow swapping `DefaultInlineCompletionHandler` to a different class via a traitlet, or
- allow swapping `DefaultInlineCompletionHandler` to a different class via entry points.

Any preferences?
Additional context
Chat slash command handlers can be added (and possibly swapped?) by using entry points: