Code completion for jupyter lab/notebook #419
Conversation
A simple version that completes the continuation
perfect readme
Fixed wrong method directive
Add request animation
Format code
Fix bug when getting all cell codes
…auto-completion
As jupyterlab/jupyterlab#15160 is neither merged nor released yet, there may still be modifications to this PR. But you can take a quick look to see if the overall implementation logic makes sense.
* Add E2E tests (jupyterlab#350)
  * add basic e2e testing setup
  * adjust execute test step name
  * test sidebar chat icon, add testing class
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
  * add sidebar snapshot
  * test chat sidepanel, extend helper class
  * adjust welcome message test, add snapshot
  * [pre-commit.ci] auto fixes from pre-commit.com hooks (for more information, see https://pre-commit.ci)
  * adjust naming
  * remove empty line
  * move ui-tests to packages/jupyter-ai/
  * update e2e ci workflow for ui-tests folder move
  * update ui-tests folder location for yarn.lock hash
  * run lint locally
  * Add "Update Playwright Snapshots" CI workflow
  * change if clause
  * specify npm client
  * remove report and artifact specifiers
  * Update README.md to have correct commands and folders
  * update e2e/integration test README
  * Add Integration / E2E testing section to the docs
  * update wording of docs on snapshots
  * Ignore all non-linux snapshots
  * Update packages/jupyter-ai/ui-tests/README.md (Co-authored-by: Piyush Jain <[email protected]>)
  * remove cd command that would return users back to root
  * remove cd ../../../
  * Remove repeating setup instructions
  * Add suggestion to generate snapshots before the 1st run
  * remove unnecessary link anchor
  * remove rudimentary jlpm build

  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
  Co-authored-by: Piyush Jain <[email protected]>
* Adds chat anthropic provider, new models (jupyterlab#391)
  * Adds chat anthropic provider, new models
  * Added docs for anthropic chat
  * Upgraded LangChain, fixed prompts for Bedrock
  * Updated docs for bedrock-chat
  * Added bedrock embeddings, refactored chat vs reg models
  * Fixed magics
  * Removed unused import
  * Updated provider list. Added bedrock and bedrock-chat to provider list.
* Publish 2.3.0

  SHA256 hashes:
  jupyter-ai-core-2.3.0.tgz: 8f37fe0f15b6f09b2eeb649ac972d2749427ed3668a03ffae9cf5b5f8f37a8ce
  jupyter_ai-2.3.0-py3-none-any.whl: 09e264c40f05ef34cd188dd5804d22afe43a91e4c82ed729428377bd5c581263
  jupyter_ai-2.3.0.tar.gz: 8ce44b88528195e6de1f9086994d68731b1dbbc03f0e8709baf5a8819c254462
  jupyter_ai_magics-2.3.0-py3-none-any.whl: 714cf33746c121ef2b5d5c45b4460e690d6815b306a3f5f7224008866e794602
  jupyter_ai_magics-2.3.0.tar.gz: 37554e53d3576a6c8938e5812764efe7749dfeda2f47d5e551111d2f6d8a5c48
* [pre-commit.ci] pre-commit autoupdate (jupyterlab#344), updates:
  - [github.com/pre-commit/pre-commit-hooks: v4.4.0 → v4.5.0](pre-commit/pre-commit-hooks@v4.4.0...v4.5.0)
  - [github.com/psf/black: 23.7.0 → 23.9.1](psf/black@23.7.0...23.9.1)
  - [github.com/asottile/pyupgrade: v3.10.1 → v3.15.0](asottile/pyupgrade@v3.10.1...v3.15.0)
  - [github.com/sirosen/check-jsonschema: 0.23.3 → 0.27.0](python-jsonschema/check-jsonschema@0.23.3...0.27.0)

  Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
* Add key completion code completion
* Add developer readme
* Fix the bug of remote call not using stream
* Comment out useless methods and improve development documentation
* Updates for more stable generate feature
  * Refactored generate for better stability with all providers/models.
  * Upgraded LangChain to 0.0.318
  * Updated to use memory instead of chat history, fix for Bedrock Anthropic
* Optimize code naming
* Fixed the issue where enter would intercept automatic requests
* Fixed the issue where enter fails when pressing the ghost text to start.
* Allow to define block and allow lists for providers (jupyterlab#415)
  * Allow to block or allow-list providers by id
  * Add tests for block/allow-lists
  * Fix "No language model is associated with" issue. This was appearing because the models which are blocked were not returned (correctly!), but the previous validation logic did not know that sometimes models may be missing for a valid reason even if there are existing settings for them.
  * Add docs for allow listing and block listing providers
  * Updated docs
  * Added an intro block to docs
  * Updated the docs

  Co-authored-by: Piyush Jain <[email protected]>
* Publish 2.4.0

  SHA256 hashes:
  jupyter-ai-core-2.4.0.tgz: 04773e2b888853cd1c27785ac3c8434226e9a279a2fd253962cb20e5e9f72c1d
  jupyter_ai-2.4.0-py3-none-any.whl: a5880cc108a107c746935d7eaa2513dffa29d2812e6628fd22a972a97aba4e2a
  jupyter_ai-2.4.0.tar.gz: 0d065b18f4985fb726010e76d9c6059932e21327ea2951ccaa18b6e7b5189240
  jupyter_ai_magics-2.4.0-py3-none-any.whl: 585bd960ac5c254e28ea165db840276883155a0a720720aa850e3272edc2001e
  jupyter_ai_magics-2.4.0.tar.gz: 2cdfb1e084aad46cdbbfb4eed64b4e7abc96ad7fde31da2ddb6899225dfa0684
* Complete autocomplete
* Animations now also exist on automatic requests
* Remove DEVELOP.md
* Added toggle for selecting mock tests
* Make the code neater

Co-authored-by: Andrii Ieroshenko <[email protected]>
Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Co-authored-by: Piyush Jain <[email protected]>
Co-authored-by: dlqqq <[email protected]>
Co-authored-by: cjq <[email protected]>
Co-authored-by: Michał Krassowski <[email protected]>
Co-authored-by: 3coins <[email protected]>
Hi everyone, I am currently working on this PR that requires simultaneous development with JupyterLab core. I would like to ensure a smooth integration process and have a query regarding dependency management. Would it be suitable to reference JupyterLab's internal
Or is there a better way you folks recommend? Thanks!
…cceptable, and re-plan the partition for the sidebar.
Code changes

Main details: we complete this PR mainly in the following steps.
Major code changes or additions to code files:
Simple code structure diagram

index.ts:
```ts
import {
  ILayoutRestorer,
  JupyterFrontEnd,
  JupyterFrontEndPlugin
} from '@jupyterlab/application';
import { ICompletionProviderManager } from '@jupyterlab/completer';
import { ITranslator } from '@jupyterlab/translation';

// Add inline code completion extension
const inlineCompletionPlugin: JupyterFrontEndPlugin<void> = {
id: 'jupyter_ai:plugin:inline-completion',
description: 'Adds an inline completion provider whose suggestions come from jupyter-ai.',
requires: [ICompletionProviderManager],
optional: [ILayoutRestorer],
autoStart: true,
activate: (
app: JupyterFrontEnd,
completionManager: ICompletionProviderManager,
restorer: ILayoutRestorer | null,
translator: ITranslator | null
): void => {
// This extension completes the loading of the sidebar and handles the user's keyboard requests
// ...
}
};
const plugins: JupyterFrontEndPlugin<void>[] = [..., inlineCompletionPlugin];
export default plugins;
```
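The `activate` body above is elided ("..."). As a rough sketch of what it could do, assuming the `registerInlineProvider` API that jupyterlab/jupyterlab#15160 adds to `ICompletionProviderManager` (not the PR's actual code):

```ts
// Hypothetical activate() body: register the provider with the inline
// completion manager and wire up keydown handling.
const activate = (
  app: JupyterFrontEnd,
  completionManager: ICompletionProviderManager,
  restorer: ILayoutRestorer | null,
  translator: ITranslator | null
): void => {
  const provider = new BigcodeInlineCompletionProvider();
  // registerInlineProvider comes from the inline completer API in
  // jupyterlab/jupyterlab#15160 (assumed available at this point).
  completionManager.registerInlineProvider(provider);
  handleCodeCompletionKeyDown(app, completionManager, provider);
};
```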
inline-completion-handler.ts:
```ts
import { JupyterFrontEnd } from '@jupyterlab/application';
import { ICompletionProviderManager } from '@jupyterlab/completer';
import { Extension, Prec } from '@codemirror/state';
import { EditorView, keymap } from '@codemirror/view';

/**
* The main function to handle code completion on keydown events.
* It initializes the keydown handlers after ensuring that the notebook is fully loaded.
* @param {JupyterFrontEnd} app - The JupyterFrontEnd application instance.
* @returns {Promise<void>}
*/
export const handleCodeCompletionKeyDown = async (
app: JupyterFrontEnd,
completionManager: ICompletionProviderManager,
bigcodeInlineCompletionProvider: BigcodeInlineCompletionProvider
): Promise<void> => {
// Wait for the application to finish starting
await app.start();
initializeKeyDownHandlers();
};
// Generates a keydown extension for handling various keypress events.
const generateKeyDownExtension = (
app: JupyterFrontEnd,
completionManager: ICompletionProviderManager,
bigcodeInlineCompletionProvider: BigcodeInlineCompletionProvider
): Extension => {
return Prec.highest(
keymap.of([
{
any: (view: EditorView, event: KeyboardEvent) => {
// Only the main logic is shown here (code changes).
// Get the request shortcut configured by the user
const parsedShortcut = parseKeyboardEventToShortcut(event);
// If the pressed combination matches the configured shortcut, call the inline completer's "invoke" function.
if (parsedShortcut === CodeCompletionContextStore.shortcutStr) {
completionManager.inline?.invoke(app.shell.currentWidget?.id);
return true;
}
// When Enter is pressed, call the "accept" function.
if (event.code === 'Enter' && ...) {
completionManager.inline?.accept(app.shell.currentWidget?.id);
return true;
}
return false;
}
}
])
);
};
/**
* Initializes keydown event handlers for the JupyterFrontEnd application.
* This function sets up listeners for changes in the current widget and mounts the editor accordingly.
* @param {JupyterFrontEnd} app - The JupyterFrontEnd application instance.
*/
const initializeKeyDownHandlers = (
app: JupyterFrontEnd,
completionManager: ICompletionProviderManager,
bigcodeInlineCompletionProvider: BigcodeInlineCompletionProvider
) => {
const extension = generateKeyDownExtension(app, completionManager, bigcodeInlineCompletionProvider);
// Inject this extension into the cell's editor when switching cells
};
```
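`parseKeyboardEventToShortcut` is only referenced above. A possible implementation (assumed, for illustration) that normalizes a `KeyboardEvent` into a string such as `'Ctrl + Space'` for comparison against the user-configured shortcut:

```ts
// Hypothetical helper (the PR shows only its call site).
export const parseKeyboardEventToShortcut = (event: KeyboardEvent): string => {
  const parts: string[] = [];
  if (event.ctrlKey) {
    parts.push('Ctrl');
  }
  if (event.altKey) {
    parts.push('Alt');
  }
  if (event.shiftKey) {
    parts.push('Shift');
  }
  if (event.metaKey) {
    parts.push('Meta');
  }
  // event.code is e.g. 'Space' or 'KeyA'; skip pure modifier keys (already
  // covered above) and strip the 'Key' prefix for letter keys.
  if (!/^(Control|Alt|Shift|Meta)/.test(event.code)) {
    const key = event.code.startsWith('Key') ? event.code.slice(3) : event.code;
    parts.push(key);
  }
  return parts.join(' + ');
};
```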
types/cell.ts:
```ts
export type ICellType = null | 'markdown' | 'code' | 'output';

// This cell shape is used by the code completion logic
export interface ICell {
content: string;
type: ICellType;
}
```
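For illustration, the notebook context the completion logic works with might look like this as `ICell` values (hypothetical cell contents):

```ts
// Hypothetical example: the first two cells of a notebook, with the cursor
// at the end of the second cell's content.
const context: ICell[] = [
  { content: 'import pandas as pd', type: 'code' },
  { content: 'df = pd.read_csv("data.csv")\ndf.head(', type: 'code' }
];
```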
utils/cell-context.ts:
```ts
import { NotebookPanel } from '@jupyterlab/notebook';
import { ICell, ICellType } from '../types/cell';

/**
* Fetches the content of the notebook up to the current cursor position.
*
* @param widget - The notebook panel widget.
* @returns An array of cell contents up to the cursor, or null.
*/
export const retrieveNotebookContentUntilCursor = (
widget: NotebookPanel
): ICell[] | null;
```
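Only the signature appears in the diagram. Below is a minimal sketch of one way to implement it, assuming standard JupyterLab 4 notebook and editor APIs (`sharedModel.getSource()`, `getCursorPosition()`, `getOffsetAt()`); mapping 'raw' cells onto `ICellType` is an assumption of this sketch:

```ts
// Map a JupyterLab cell model type onto ICellType; 'raw' cells have no
// counterpart here, so they fall back to null (assumption of this sketch).
const toCellType = (t: string): ICellType =>
  t === 'code' || t === 'markdown' ? t : null;

export const retrieveNotebookContentUntilCursor = (
  widget: NotebookPanel
): ICell[] | null => {
  const notebook = widget.content;
  const activeCell = notebook.activeCell;
  if (!activeCell) {
    return null;
  }
  const cells: ICell[] = [];
  // Cells above the active one are included in full.
  for (let i = 0; i < notebook.activeCellIndex; i++) {
    const model = notebook.widgets[i].model;
    cells.push({
      content: model.sharedModel.getSource(),
      type: toCellType(model.type)
    });
  }
  // The active cell is truncated at the cursor position.
  const editor = activeCell.editor;
  const offset = editor ? editor.getOffsetAt(editor.getCursorPosition()) : 0;
  cells.push({
    content: activeCell.model.sharedModel.getSource().slice(0, offset),
    type: toCellType(activeCell.model.type)
  });
  return cells;
};
```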
utils/bigcode-request.ts:
```ts
import { makeObservable, computed } from 'mobx';

export type BigCodeServiceStreamResponseItem = {
token: {
id: number;
text: string;
logprob: number;
special: boolean;
};
generated_text: string | null;
details: null;
};
export type BigCodeServiceNotStreamResponse = {
generated_text: string;
}[];
class Bigcode {
// prompt for request
private _prompt: string;
constructor(private store: IGlobalStore) {
makeObservable(this);
this._prompt = '';
}
@computed get bigcodeUrl() {
return this.store.bigcodeUrl;
}
@computed get accessToken() {
return this.store.accessToken;
}
@computed get maxTokens() {
return this.store.maxPromptToken;
}
get prompt() {
return this._prompt;
}
// For stream requests
async send(stream: true): Promise<ReadableStream<Uint8Array>>;
// For non-streaming requests
async send(stream: false): Promise<BigCodeServiceNotStreamResponse>;
// Construct a continuation prompt from the "ICell" entries defined in types/cell.ts
constructContinuationPrompt(context: ICell[] | null): string;
}
```
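As a hypothetical usage example of the `Bigcode` client above (names exactly as declared in the diagram, not taken from the PR's actual call sites):

```ts
// Build a prompt from the notebook content up to the cursor, then request a
// streamed completion; per the declaration above, send(true) resolves to a
// ReadableStream<Uint8Array>.
const cells = retrieveNotebookContentUntilCursor(panel); // panel: a NotebookPanel
bigcodeRequestInstance.constructContinuationPrompt(cells);
const stream = await bigcodeRequestInstance.send(true);
```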
bigcode-Inline-completion-provider.ts:
```ts
import { NotebookPanel } from '@jupyterlab/notebook';
import {
  CompletionHandler,
  IInlineCompletionContext,
  IInlineCompletionItem,
  IInlineCompletionList,
  IInlineCompletionProvider,
  InlineCompletionTriggerKind
} from '@jupyterlab/completer';
import { retrieveNotebookContentUntilCursor } from './utils/cell-context';
import bigcodeRequestInstance, { BigCodeServiceStreamResponseItem } from './utils/bigcode-request';
export class BigcodeInlineCompletionProvider
implements IInlineCompletionProvider
{
// Used to record information from the last user request
private _lastRequestInfo: {
insertText: string;
cellCode: string;
} = {
insertText: '',
cellCode: ''
};
private _requesting = false;
// Set in "fetch", then read in "stream" to decide which function to call
private _requestMode: InlineCompletionTriggerKind = 0;
// Whether to stop the stream immediately. While a stream is in progress, a new user request sets this to true, interrupting the ongoing stream request.
private _streamStop = false;
// Whether the request finished without error
private _finish = false;
// Debounce use
private _timeoutId: number | null = null;
// Debounce use
private _callCounter = 0;
// Construct the next request prompt
private constructContinuationPrompt(
context: IInlineCompletionContext
): string {
if (context.widget instanceof NotebookPanel) {
const widget = context.widget as NotebookPanel;
// Get the code from the cells before the current cursor (including the current cell)
const notebookCellContent = retrieveNotebookContentUntilCursor(widget);
// Construct prompt
bigcodeRequestInstance.constructContinuationPrompt(notebookCellContent);
return bigcodeRequestInstance.prompt;
}
return '';
}
/**
* This function implements the fetch of IInlineCompletionProvider.
* Its main logic calls a different handler depending on context.triggerKind, and returns an incomplete item plus a token so that the upstream API will call the provider's stream function.
*
*/
async fetch(
request: CompletionHandler.IRequest,
context: IInlineCompletionContext
): Promise<IInlineCompletionList<IInlineCompletionItem>>{
// shortCutCompletionHandler() if triggered by an explicit invoke, otherwise autoCompletionHandler()
}
/**
* Handle requests when the user presses the keyboard
*/
async shortCutCompletionHandler(
request: CompletionHandler.IRequest,
context: IInlineCompletionContext
): Promise<IInlineCompletionList<IInlineCompletionItem>>{
const prompt = this.constructContinuationPrompt(context);
// ...
return {
items: [
{
token: prompt,
isIncomplete: true, // All requests in the "provider" are completed by the streaming function
insertText: ''
}
]
}
}
/**
* Handles requests when upstream API automatic events occur.
* This function decides whether to issue an automatic request based on debounceAutoRequest.
* While ghost text is displayed, it also suppresses new requests when the user's typing still matches the start of the ghost text (no request is made).
*/
async autoCompletionHandler(
request: CompletionHandler.IRequest,
context: IInlineCompletionContext
): Promise<IInlineCompletionList<IInlineCompletionItem>> {
// if debounceAutoRequest() === '<auto_stream>':
// The function return value is also the same as shortCutCompletionHandler
// pass...
// else:
// pass...
}
/**
* Trailing-debounce strategy: only the last request within the specified time window resolves to '<auto_stream>'.
*/
debounceAutoRequest(): Promise<'<auto_stream>' | '<debounce>'>;
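// One possible trailing-debounce implementation using the _callCounter field
// above (assumed; the diagram elides the body):
//
// async debounceAutoRequest(): Promise<'<auto_stream>' | '<debounce>'> {
//   const callId = ++this._callCounter;
//   // Assumed debounce window of 300 ms.
//   await new Promise(resolve => window.setTimeout(resolve, 300));
//   // Only the most recent call within the window proceeds to a request;
//   // every superseded call resolves to '<debounce>'.
//   return callId === this._callCounter ? '<auto_stream>' : '<debounce>';
// }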
/**
* When a request ends, this function must be called to record the status for later use by "accept".
*/
setRequestFinish(error: boolean): void {
this._requesting = false;
this._streamStop = false;
this._finish = !error;
}
/**
* When the user executes the "accept" function, this function will be called to clear the status.
*/
clearState(): void {
this._streamStop = true;
this._finish = false;
this._requesting = false;
this._lastRequestInfo = {
insertText: '',
cellCode: ''
};
}
/**
* This function implements the stream of IInlineCompletionProvider.
* One of the following functions is called, depending on user configuration and whether the request was triggered automatically.
*/
async *stream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>;
private async *completionStream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>;
private async *keypressCompletionStream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>{
const responseData = await bigcodeRequestInstance.send(true);
// Parse responseData and yield IInlineCompletionItem results (streamed)
}
private async *automaticCompletionStream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>{
const responseData = await bigcodeRequestInstance.send(false);
// Parse responseData and yield the IInlineCompletionItem structure (only once)
}
private async *mockCompletionStream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>;
private async *mockKeypressCompletionStream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>;
private async *mockAutomaticCompletionStream(
token: string
): AsyncGenerator<{ response: IInlineCompletionItem }, undefined, unknown>;
} contexts/code-completion-context-store.tsimport { makeObservable, observable, action } from 'mobx';
contexts/code-completion-context-store.ts:
```ts
import { makeObservable, observable, action } from 'mobx';

class CodeCompletionContextStore {
/**
* Whether to enable code completion function.
*/
@observable enableCodeCompletion = false;
/**
* Observable Hugging Face token for authentication purposes.
*/
@observable accessToken = '';
/**
* Observable URL for the BigCode service.
*/
@observable bigcodeUrl = '';
/**
* Whether simulation testing is enabled (without using the real API)
*/
@observable enableMockTest = false;
/**
* Observable string representing the shortcut key combination for triggering code completion.
* Default is set to 'Ctrl + Space'.
*/
@observable shortcutStr = 'Ctrl + Space';
/**
* Maximum prompt tokens when requested
*/
@observable maxPromptToken = 400;
/**
* Maximum response tokens when requested
*/
@observable maxResponseToken = 20;
constructor() {
makeObservable(this);
const dataPersistenceStr = localStorage.getItem(
'@jupyterlab-ai/CodeCompletionState'
);
if (dataPersistenceStr) {
const dataPersistence: IGlobalStore = JSON.parse(dataPersistenceStr);
this.enableCodeCompletion = dataPersistence.enableCodeCompletion;
this.bigcodeUrl = dataPersistence.bigcodeUrl;
this.shortcutStr = dataPersistence.shortcutStr;
this.maxPromptToken = dataPersistence.maxPromptToken;
this.maxResponseToken = dataPersistence.maxResponseToken;
this.enableMockTest = dataPersistence.enableMockTest;
}
}
// This function is called every time the settings change
saveDataToLocalStorage() {
// Do not store sensitive information
localStorage.setItem(
'@jupyterlab-ai/CodeCompletionState',
JSON.stringify({
enableCodeCompletion: this.enableCodeCompletion,
bigcodeUrl: this.bigcodeUrl,
shortcutStr: this.shortcutStr,
maxPromptToken: this.maxPromptToken,
maxResponseToken: this.maxResponseToken,
enableMockTest: this.enableMockTest
})
);
}
}
```
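Hypothetical usage, assuming the class is exported as a singleton instance (as its use via `CodeCompletionContextStore.shortcutStr` elsewhere in the diagram suggests):

```ts
// Toggle a setting and persist it; note that accessToken is deliberately
// excluded from the persisted payload above.
CodeCompletionContextStore.enableCodeCompletion = true;
CodeCompletionContextStore.saveDataToLocalStorage();
```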
I have a few concerns about this PR:
In other words, while this PR does a great job of offering a PoC, it does not integrate with the existing JupyterLab or jupyter-ai systems, to the detriment of the user. I will open a pull request proposing an infrastructure for supporting multiple providers/models for inline completions, which should help inform how to move forward with this PR afterwards.
Thank you very much for starting the conversation on this feature. This has been implemented in #582, and is now available in Jupyter AI v2.10.0 with JupyterLab 4.1+. 🎉
Where is this feature in the docs? I don't see it in the README or on the ReadTheDocs main page.
@nick-youngblut The code completion feature is documented in the JupyterLab docs here: https://jupyterlab.readthedocs.io/en/latest/user/completer.html To request improvements to Jupyter AI's README or docs, please open a new Jupyter AI issue. Thanks!
And if you are looking for developer documentation for the completer in jupyter-ai, it is here: https://jupyter-ai.readthedocs.io/en/latest/developers/index.html#custom-completion-providers Indeed, a showcase of this feature in the docs could be useful, but in the latest version it should be more discoverable, since it is mentioned and linked in the chat settings panel.
Thanks @JasonWeill! I will submit an issue, given that I think users would greatly benefit from at least some mention of the completion feature in the Jupyter AI docs, and not just in the JupyterLab docs. Also, it's not clear (to a naive user) whether the completions feature is compatible with a GitHub Copilot license. My institute has GitHub Copilot licenses, which work great with the Jupyter Notebook extension in VS Code (very easy to set up); however, ssh-remote in VS Code kills all running kernels if the ssh connection is lost (unlike what can be done when running Jupyter directly). Due to this "killed-kernels" issue, large-scale data analysis is not practical with VS Code + Jupyter, since one must re-run their notebooks if they lose their ssh connection. So we would like to use JupyterLab directly (running on a Slurm cluster) instead of running inside VS Code, but it appears that our GitHub Copilot licenses will then be a waste of money if we cannot use GitHub Copilot when running Jupyter outside of VS Code.
It would be lovely if Copilot provided integration with JupyterLab. Make sure to upvote https://github.com/orgs/community/discussions/63345
Implementing #290, a second iteration of #378.
In this PR, we leverage the inline completer (jupyterlab/jupyterlab#15160) to provide Copilot-like code completion in Jupyter AI.
This plugin currently supports StarCoder, an open-source code completion model fine-tuned on Jupyter notebooks.
Here is the expected behavior:
stream_demo.mp4