Update `/generate` to not split classes & functions across cells #1158
Conversation
@srdas Awesome work! Love the creativity applied here. Few minor comments below.
@srdas One other question: with this change, are you able to run every notebook generated via `/generate`?
@srdas Awesome work, this is fantastic! I used this to generate 3 notebooks and they all ran well. By contrast, the existing implementation only generates a runnable notebook <50% of the time.
The only issue I noticed was that `/generate` doesn't add a `%pip install ...` command for third-party libraries, so sometimes you need to add a cell to install them (see the sketch below).
Left some minor points of feedback below.
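For reference, the workaround mentioned above amounts to prepending a cell like the following to the generated notebook (the package names are placeholders, not something `/generate` produces):

```python
# Placeholder workaround cell: install missing third-party dependencies
# before running the rest of the generated notebook (package names are
# examples only).
%pip install pandas matplotlib
```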
@srdas Awesome work, thank you!! ❤️
@meeseeksdev please backport to v3-dev
Fixes #1111
`/generate` will sometimes split a class or a function across multiple code cells. After tracing the code in `generate.py`, the error arises from unanticipated LLM behavior during the curation of the resultant Jupyter notebook, i.e. in the creation of the `outline` object. This is best handled by post-processing the notebook to merge "hanging" code cells with the preceding code cell (a rough sketch of this idea follows the examples below).

The error looks like this:
After the fix, the error is no longer present. See this example:
Here is another example of the rectified notebook generation:
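As a rough illustration of the post-processing idea described above (a sketch only, assuming an indentation heuristic for detecting "hanging" cells; this is not the actual implementation in `generate.py`):

```python
# Illustrative sketch only (not the actual generate.py implementation):
# merge "hanging" code cells -- cells whose first line is indented and so
# likely continues a class or function started in the previous code cell.
import nbformat


def merge_hanging_code_cells(nb):
    merged = []
    for cell in nb.cells:
        first_line = cell.source.lstrip("\n").split("\n", 1)[0]
        hanging = (
            cell.cell_type == "code"
            and merged
            and merged[-1].cell_type == "code"
            and first_line.startswith((" ", "\t"))  # assumed heuristic
        )
        if hanging:
            # Re-attach the hanging cell to the preceding code cell.
            merged[-1].source += "\n" + cell.source
        else:
            merged.append(cell)
    nb.cells = merged
    return nb


# Example usage on a generated notebook file:
# nb = nbformat.read("generated.ipynb", as_version=4)
# nbformat.write(merge_hanging_code_cells(nb), "generated.ipynb")
```

Merging on indentation keeps a class or function body attached to the cell where its definition starts, which is the behavior the fix is after.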
If necessary, to check the log, add print statements at lines 236, 240, and 245 of the code, as shown below.
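A hypothetical sketch of what such debug prints could look like (the file path, variable names, and placement are assumptions, not the actual `generate.py` code):

```python
# Hypothetical debug helper (names assumed, not the actual generate.py code):
# dump each code cell so class/function splits can be spotted in the log.
import nbformat

nb = nbformat.read("generated.ipynb", as_version=4)  # placeholder path
for i, cell in enumerate(nb.cells):
    if cell.cell_type == "code":
        print(f"***** CELL {i} *****")
        print(cell.source)
```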
The before and after versions of the logged notebook can be compared on the lines containing `***** CELL 1 *****`, as shown here:
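One way to make that comparison, as a sketch with placeholder log file names, is to pull the cell-marker lines out of both logs and diff them:

```python
# Sketch: extract the "***** CELL n *****" marker lines from the before/after
# logs and diff them. The log file names below are placeholders.
import difflib


def cell_marker_lines(path):
    with open(path) as f:
        return [line.rstrip("\n") for line in f if line.startswith("***** CELL")]


before = cell_marker_lines("generate_before.log")
after = cell_marker_lines("generate_after.log")
print("\n".join(difflib.unified_diff(before, after, "before", "after", lineterm="")))
```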