Google Gemini API support #395
@richlysakowski Here are the steps to add a new provider:
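A rough, self-contained sketch of the general shape a provider class takes (the stub classes below are illustrative stand-ins, not jupyter-ai's actual API — in the real package the provider subclasses `BaseProvider` from `jupyter_ai_magics` together with a LangChain model class):

```python
# Illustrative sketch only: a jupyter-ai provider declares an id, a
# human-readable name, and a list of model ids, which the UI uses to
# populate the model dropdown. The stub below mimics that shape without
# requiring jupyter-ai to be installed.
from dataclasses import dataclass, field


@dataclass
class BaseProviderStub:
    """Minimal stand-in for jupyter_ai_magics.BaseProvider (hypothetical)."""
    id: str = ""
    name: str = ""
    models: list = field(default_factory=list)
    model_id_key: str = "model"


class GeminiProviderSketch(BaseProviderStub):
    """Hypothetical Gemini provider: metadata drives model discovery."""
    def __init__(self):
        super().__init__(
            id="gemini",
            name="Gemini",
            models=["gemini-pro"],
            model_id_key="model",
        )


provider = GeminiProviderSketch()
print(provider.id, provider.models)  # → gemini ['gemini-pro']
```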
Hi @richlysakowski @3coins - I believe we can make this easier. If this looks useful (we're used in production), please let me know how we can help.

Usage

PaLM request:

```shell
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "palm/chat-bison",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  }'
```

gpt-3.5-turbo request:

```shell
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  }'
```

claude-2 request:

```shell
curl http://0.0.0.0:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-2",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
  }'
```
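The same OpenAI-compatible request can be built from Python with only the standard library; a minimal sketch against the proxy URL shown above (the request is constructed but not sent, so it runs without a live server):

```python
import json
import urllib.request

# Build a chat completion request in the OpenAI wire format, as accepted
# by the proxy at http://0.0.0.0:8000 in the curl examples above.
payload = {
    "model": "palm/chat-bison",  # or "gpt-3.5-turbo", "claude-2"
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7,
}
req = urllib.request.Request(
    "http://0.0.0.0:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_method(), req.full_url)
# response = urllib.request.urlopen(req)  # uncomment with a running proxy
```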
The Gemini API is superseding the PaLM API. I'll follow @3coins's advice to add a new provider, but I can't figure out how to use jupyter-ai from local sources, so I might need some help with testing.
@simonff do you need help setting up jupyter-ai locally for development? Did you encounter an error, or get stuck at a particular step?
@krassowski: I think various installed versions got mixed up; the last error I got is below. Yes, it would be great to have some self-contained instructions for running jupyter-ai locally. For now I'm just using GitHub Actions to run tests.

```
Traceback (most recent call last):
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
```
Yes, it looks like it stems from a conflict between packages installed at the system level and a local Python version. I would strongly suggest developing in a self-contained virtual environment.
@simonff To do a development installation locally, run these in your terminal:

```shell
# verify that you are at the root of the repo
pwd

# create & activate a new isolated Python environment named `jupyter-ai-dev`
# I recommend using `micromamba`, but you can use `conda` or `venv`
micromamba create -yn jupyter-ai-dev python=3.11 jupyterlab
micromamba activate jupyter-ai-dev

# install JS dependencies
jlpm

# build JS code & install `jupyter-ai` locally in this environment
jlpm dev-install
```
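If conda/micromamba aren't available, a plain `venv` gives the same isolation; a quick sketch (the path is illustrative) that also shows how to confirm the environment is actually active before running `jlpm dev-install`:

```shell
# Create a throwaway virtual environment and activate it.
python3 -m venv /tmp/jai-dev-venv
. /tmp/jai-dev-venv/bin/activate

# sys.prefix should now point into the venv, not the system Python:
python -c 'import sys; print(sys.prefix)'   # prints /tmp/jai-dev-venv
```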
I don't have conda or micromamba installed, so I'm trying venv. The first step worked, but for the next step I get an error. Sorry for the beginner questions; I usually just run `pip install --break-system-packages`. :)
In the example above
Ok, `jlpm` by itself runs (and I see it's coming from the myenv environment), but then, running from the root of the git repo, I get:

```
jlpm dev-install
lerna notice cli v6.6.2
@jupyter-ai/magics: error: externally-managed-environment
Tasks not run because their dependencies failed or --nx-bail=true:
Failed tasks:
```
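For context, that `externally-managed-environment` error is pip implementing PEP 668: distro-managed interpreters (e.g. Debian/Ubuntu system Python) ship an `EXTERNALLY-MANAGED` marker file, and pip refuses to install into them, while a venv has no marker, which is why activating one avoids the error. A quick check of whether the current interpreter is marked:

```python
import os
import sysconfig

# PEP 668 marker file: lives in the interpreter's stdlib directory on
# distro-managed Pythons; absent inside a virtual environment.
marker = os.path.join(sysconfig.get_path("stdlib"), "EXTERNALLY-MANAGED")
print("externally managed:", os.path.exists(marker))
```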
Meanwhile, this is the new code so far:
I tried to add the Gemini provider, but could not get Gemini to show up in the model list. Any help would be appreciated.
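One common reason a new provider doesn't appear in the model list: jupyter-ai discovers providers via Python entry points, so the class needs to be registered in the package metadata and the package reinstalled (`jlpm dev-install` or `pip install -e .`) before it shows up. If I recall the group name correctly, the registration in `pyproject.toml` looks roughly like this (the module path and class name here are illustrative):

```toml
[project.entry-points."jupyter_ai.model_providers"]
gemini = "jupyter_ai_magics:GeminiProvider"
```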
@dlqqq Thank you for your help! I couldn't have done it without your guidance. Thank you for accepting the PR.
Problem
Support for (or a wrapper around) Google PaLM and Bard is missing. I don't find any mention of it here, yet PaLM is one of the major LLMs. (@JasonWeill on 2023-12-29: The Gemini API is replacing PaLM; thanks @simonff)
Proposed Solution
Can we start a discussion about the requirements, packages, and tools needed to bring PaLM and Bard functionality into Jupyter AI?
Is there a description of how to wrap new LLM engines?
Are there template wrappers or decorators to make this easy?
What limitations do we need to take into account for this integration? I know that Google has been blocking Python wrappers that reverse-engineer Bard, but what about the PaLM API? What are the most effective workarounds for using a wrapper around Bard chat?
Please help with this integration so we can get it done soon; I am getting requests from the Jupyter AI users I am training.