I'm a software developer currently building an integration of llamaindexTS that works with several LLM implementations in a configurable platform. Recently I started adding VertexAI support through your Gemini implementation.
The implementation was very straightforward until I received this error message when calling the .chat() method:
{"level":"error","message":"LLM error: [VertexAI.ClientError]: got status: 400 Bad Request. {"error":{"code":400,"message":"User has requested a restricted HarmBlockThreshold setting BLOCK_NONE. You can get access either (
a) through an allowlist via your Google account team, or (b) by switching your account type to monthly invoiced billing via this instruction: https://cloud.google.com/billing/docs/how-to/invoiced-billing.\",\"status\":\"INVALID_ARGUMENT\"}}","reqId":"67645e3e9ee275d83956c68e"}
After some research I realized that the BLOCK_NONE setting is now behind a significant paywall, so I looked at the source code, only to find that the safety settings are not configurable in any way in your implementation. Worse, there is a DEFAULT_SAFETY_SETTINGS constant, imported at line 37 of https://github.com/run-llama/LlamaIndexTS/blob/main/packages/llamaindex/src/llm/gemini/base.ts and declared at line 319 of https://github.com/run-llama/LlamaIndexTS/blob/main/packages/llamaindex/src/llm/gemini/utils.ts, that sets every HarmBlockThreshold level to BLOCK_NONE. This DEFAULT_SAFETY_SETTINGS is hardcoded into the streamChat and nonStreamChat methods for Gemini (see lines 243, 247, 286 and 290 in https://github.com/run-llama/LlamaIndexTS/blob/main/packages/llamaindex/src/llm/gemini/base.ts), which makes it impossible to configure those values without writing a custom implementation of those classes, which in turn leads to several other issues.
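For reference, the hardcoded constant has roughly this shape (a sketch: the types below mirror the Vertex AI SafetySetting enums but are restated as local string-literal types so the snippet is self-contained, and the exact declaration in utils.ts may differ):

```typescript
// Sketch of the DEFAULT_SAFETY_SETTINGS constant described above.
// HarmCategory / HarmBlockThreshold values mirror the Vertex AI enums,
// restated locally so this snippet stands alone.
type SafetySetting = {
  category:
    | "HARM_CATEGORY_HARASSMENT"
    | "HARM_CATEGORY_HATE_SPEECH"
    | "HARM_CATEGORY_SEXUALLY_EXPLICIT"
    | "HARM_CATEGORY_DANGEROUS_CONTENT";
  threshold:
    | "BLOCK_NONE"
    | "BLOCK_ONLY_HIGH"
    | "BLOCK_MEDIUM_AND_ABOVE"
    | "BLOCK_LOW_AND_ABOVE";
};

// Every category is pinned to BLOCK_NONE, the exact value Vertex AI now
// rejects for accounts without an allowlist or invoiced billing.
const DEFAULT_SAFETY_SETTINGS: SafetySetting[] = [
  { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_NONE" },
  { category: "HARM_CATEGORY_HATE_SPEECH", threshold: "BLOCK_NONE" },
  { category: "HARM_CATEGORY_SEXUALLY_EXPLICIT", threshold: "BLOCK_NONE" },
  { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_NONE" },
];
```

Because this array is passed unconditionally, every request from an account without the BLOCK_NONE entitlement fails with the 400 error above.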
There is a related issue for the Python version of LlamaIndex which has already been resolved: run-llama/llama_index#10788.
My request is simply to allow users to configure their own safety_settings and pass them as a parameter to the Gemini constructor, or set them via a function.
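A sketch of what such a configuration could look like. Note that the `safetySettings` constructor option shown in the comment is hypothetical (it is the requested feature, not an existing API), and the types are restated locally rather than imported from @google-cloud/vertexai:

```typescript
// Local restatement of the Vertex AI SafetySetting shape for illustration.
type SafetySetting = {
  category:
    | "HARM_CATEGORY_HARASSMENT"
    | "HARM_CATEGORY_HATE_SPEECH"
    | "HARM_CATEGORY_SEXUALLY_EXPLICIT"
    | "HARM_CATEGORY_DANGEROUS_CONTENT";
  threshold:
    | "BLOCK_NONE"
    | "BLOCK_ONLY_HIGH"
    | "BLOCK_MEDIUM_AND_ABOVE"
    | "BLOCK_LOW_AND_ABOVE";
};

// User-supplied settings that avoid the restricted BLOCK_NONE value.
const mySafetySettings: SafetySetting[] = [
  { category: "HARM_CATEGORY_HARASSMENT", threshold: "BLOCK_ONLY_HIGH" },
  { category: "HARM_CATEGORY_HATE_SPEECH", threshold: "BLOCK_ONLY_HIGH" },
  { category: "HARM_CATEGORY_SEXUALLY_EXPLICIT", threshold: "BLOCK_ONLY_HIGH" },
  { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_ONLY_HIGH" },
];

// Hypothetical usage -- `safetySettings` is the option being requested:
// const llm = new Gemini({ model, safetySettings: mySafetySettings });
```

This would let callers without the BLOCK_NONE entitlement choose a permitted threshold instead of hitting the 400 error.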
Thanks for your time.