I got the same HTTP 500 error on the front end, and the llama-gpt-llama-gpt-api server exited with code 139 (i.e. SIGSEGV).
That occurred with an incomplete model file under the /models folder. Try downloading the complete model from Hugging Face and replacing the *.gguf file with it.
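Before pointing llama-gpt at a re-downloaded file, a quick sanity check can catch a truncated or corrupt download: every valid GGUF file starts with the four ASCII bytes "GGUF". This is only a minimal sketch (it cannot prove the download is complete; comparing the file size or SHA-256 against the values on the Hugging Face model page is more reliable), and the model path below is a hypothetical example based on the log output:

```python
import os

GGUF_MAGIC = b"GGUF"  # every valid GGUF file begins with these four bytes

def looks_like_gguf(path: str, min_bytes: int = 1024) -> bool:
    """Return True if the file exists, is non-trivially sized, and starts with the GGUF magic."""
    if not os.path.isfile(path) or os.path.getsize(path) < min_bytes:
        return False
    with open(path, "rb") as f:
        return f.read(4) == GGUF_MAGIC

if __name__ == "__main__":
    # Hypothetical path for illustration; adjust to your mounted /models folder.
    path = "/models/llama-2-7b-chat.gguf"
    print("looks ok" if looks_like_gguf(path) else "missing, truncated, or corrupt")
```

If this check fails, replace the file with a fresh, complete download before restarting the containers.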
Terminal log:
INFO: Started server process [2986]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:3001/ (Press CTRL+C to quit)
llama-gpt-llama-gpt-ui-mac-1 | making request to http://host.docker.internal:3001/v1/models
INFO: 127.0.0.1:49485 - "GET /v1/models HTTP/1.1" 200 OK
llama-gpt-llama-gpt-ui-mac-1 | making request to http://host.docker.internal:3001/v1/models
INFO: 127.0.0.1:49488 - "GET /v1/models HTTP/1.1" 200 OK
llama-gpt-llama-gpt-ui-mac-1 | {
llama-gpt-llama-gpt-ui-mac-1 | id: '/models/llama-2-7b-chat.bin',
llama-gpt-llama-gpt-ui-mac-1 | name: 'Llama 2 7B',
llama-gpt-llama-gpt-ui-mac-1 | maxLength: 12000,
llama-gpt-llama-gpt-ui-mac-1 | tokenLimit: 4000
llama-gpt-llama-gpt-ui-mac-1 | } 'You are a helpful and friendly AI assistant. Respond very concisely.' 0.5 '' [ { role: 'user', content: 'hi' } ]
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: /private/var/folders/nw/8b8162fj3sq3667m79wm49cw0000gn/T/pip-install-hcpw_ie5/llama-cpp-python_7f3b8275091343838f2dd60c58213caf/vendor/llama.cpp/ggml-metal.m:1094: false
llama-gpt-llama-gpt-ui-mac-1 | [TypeError: fetch failed] {
llama-gpt-llama-gpt-ui-mac-1 | cause: [SocketError: other side closed] {
llama-gpt-llama-gpt-ui-mac-1 | name: 'SocketError',
llama-gpt-llama-gpt-ui-mac-1 | code: 'UND_ERR_SOCKET',
llama-gpt-llama-gpt-ui-mac-1 | socket: {
llama-gpt-llama-gpt-ui-mac-1 | localAddress: '172.18.0.2',
llama-gpt-llama-gpt-ui-mac-1 | localPort: 58896,
llama-gpt-llama-gpt-ui-mac-1 | remoteAddress: '192.168.65.254',
llama-gpt-llama-gpt-ui-mac-1 | remotePort: 3001,
llama-gpt-llama-gpt-ui-mac-1 | remoteFamily: 'IPv4',
llama-gpt-llama-gpt-ui-mac-1 | timeout: undefined,
llama-gpt-llama-gpt-ui-mac-1 | bytesWritten: 587,
llama-gpt-llama-gpt-ui-mac-1 | bytesRead: 0
llama-gpt-llama-gpt-ui-mac-1 | }
llama-gpt-llama-gpt-ui-mac-1 | }
llama-gpt-llama-gpt-ui-mac-1 | }
Full Docker Version Terminal Log - https://ctxt.io/2/AABQSslxFQ