Embedding Error: Requested 9064 tokens instead of 8192 tokens #4654
githubdebugger started this conversation in General | Discussion
Replies: 1 comment 5 replies
-
What's your document? It seems it didn't chunk correctly.
-
After uploading a document, lobe-chat did the chunking and when it was embedding the document it failed with this error:
```
embeddingChunks error {
  message: {
    "endpoint": "https://api.openai.com/v1",
    "error": {
      "message": "This model's maximum context length is 8192 tokens, however you requested 9064 tokens (9064 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.",
      "type": "invalid_request_error",
      "param": null,
      "code": null
    },
    "errorType": "ProviderBizError",
    "provider": "openai"
  },
  name: 'EmbeddingError'
}
```
I have not modified any token limits for the models; I don't even know how to change the embedding model's settings.
Any idea how I can fix this error so that I can upload the document?
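The error means one chunk produced by the splitter came out at 9064 tokens, above the 8192-token context window of OpenAI's embedding models, so the embedding request itself is rejected. As a workaround until the chunker is fixed, an oversized chunk can be re-split before embedding. Below is a minimal, hypothetical sketch (not lobe-chat's actual code): `split_oversized_chunk` and its default whitespace-based `count_tokens` are illustrative stand-ins; a real implementation would count tokens with tiktoken's `cl100k_base` encoding to match OpenAI's accounting.

```python
# Hypothetical helper: recursively split a chunk until every piece fits
# under the embedding model's context window (8192 tokens for OpenAI's
# text-embedding models). The default token counter is a crude
# whitespace-word proxy; swap in a real tokenizer (e.g. tiktoken) in practice.

def split_oversized_chunk(text, max_tokens=8192,
                          count_tokens=lambda s: len(s.split())):
    """Return a list of sub-chunks, each within max_tokens."""
    if count_tokens(text) <= max_tokens:
        return [text]
    mid = len(text) // 2
    # Prefer splitting at whitespace near the midpoint so words stay intact.
    space = text.rfind(" ", 0, mid)
    if space > 0:
        mid = space
    return (split_oversized_chunk(text[:mid], max_tokens, count_tokens)
            + split_oversized_chunk(text[mid:], max_tokens, count_tokens))
```

For a chunk of, say, 9064 "words", this yields pieces that each fit under the limit while preserving the original text when concatenated back together.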