The latest version of Xinference fails to start the qwen2-vl-instruct model #2554
What is your transformers version? You could try updating transformers.
@codingl2k1 Really? Isn't the version bundled in the image enough? I'm using the Docker image.
@codingl2k1 2024-11-17 16:25:41 ImportError: [address=0.0.0.0:46487, pid=176] cannot import name 'Qwen2AudioForConditionalGeneration' from 'transformers' (/usr/local/lib/python3.10/dist-packages/transformers/__init__.py)
I ran into this problem too; my Docker image version is 0.16.0.
After upgrading transformers to the latest version, the model starts.
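The ImportError above means the installed transformers predates the `Qwen2AudioForConditionalGeneration` class. A minimal sketch for checking the installed version before deciding to upgrade — the 4.46 threshold is an assumption taken from this thread, not an official requirement:

```python
# Sketch: check whether the installed transformers meets a minimum version.
# MIN_VERSION = (4, 46) is an assumption based on reports in this thread.
from importlib.metadata import version, PackageNotFoundError

MIN_VERSION = (4, 46)

def parse_version(v: str) -> tuple:
    """Turn '4.46.1' into (4, 46, 1), stopping at non-numeric parts."""
    parts = []
    for p in v.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def transformers_ok() -> bool:
    """Return True if transformers is installed and new enough."""
    try:
        installed = parse_version(version("transformers"))
    except PackageNotFoundError:
        return False
    return installed >= MIN_VERSION
```

If this returns False inside the container, `pip install -U transformers` is the fix the commenters describe.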
Same here. |
Yes. Please also confirm vllm >= 0.6.4.
Confirmed that updating transformers to >4.46 and rebuilding the Docker image fixed this issue. However, changing the base Docker image from vllm 0.6.0 to 0.6.4 introduces a number of errors, mainly because Python is also updated from 3.10 to 3.12. Still figuring out how to build a Docker image based on vllm 0.6.4.
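If the only blocker is the transformers version, a lighter-weight alternative to rebuilding from the vllm base is extending the published image and upgrading transformers in place. A hedged sketch — the `>=4.46` pin follows the report above and is an assumption, and the patched image is untested against other dependency pins:

```shell
# Sketch: layer a transformers upgrade on top of the published image,
# avoiding the vllm 0.6.0 -> 0.6.4 base-image (and Python 3.10 -> 3.12) jump.
cat > Dockerfile.patched <<'EOF'
FROM xprobe/xinference:latest
RUN pip install --no-cache-dir "transformers>=4.46"
EOF
# Build with: docker build -f Dockerfile.patched -t xinference:patched .
```

Then run the patched tag in place of `xprobe/xinference:latest` in the `docker run` command below.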
Calling this model twice doubles the GPU memory usage. Is there any release mechanism? It runs out of memory (OOM) very quickly.
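One way to reclaim VRAM between runs is to terminate the model through Xinference's REST API (DELETE on `/v1/models/<model_uid>`). A sketch using only the standard library — the endpoint shape follows Xinference's documented RESTful API, but verify it against your installed version:

```python
# Sketch: ask a running Xinference server to unload a model, freeing its
# GPU memory. base_url and model_uid are placeholders for your deployment.
import urllib.request

def build_terminate_request(base_url: str, model_uid: str) -> urllib.request.Request:
    """Build a DELETE request for Xinference's model-termination endpoint."""
    return urllib.request.Request(
        f"{base_url}/v1/models/{model_uid}", method="DELETE"
    )

def terminate_model(base_url: str, model_uid: str) -> int:
    """Send the termination request and return the HTTP status code."""
    with urllib.request.urlopen(build_terminate_request(base_url, model_uid)) as resp:
        return resp.status
```

This unloads the whole model rather than trimming its cache, so it suits batch-style usage; whether per-call memory growth itself is a leak would need a separate report.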
System Info
CUDA 12.2, CentOS 7

Running Xinference with Docker?
Yes.

Version info
v0.16.3

The command used to start Xinference
docker run -d -v /home/llm-test/embedding_and_rerank_model:/root/models -p 9998:9997 --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
Reproduction
1. Place Qwen2-VL-7B-Instruct in the target directory: /home/llm-test/embedding_and_rerank_model
2. Start the container: docker run -d -v /home/llm-test/embedding_and_rerank_model:/root/models -p 9998:9997 --gpus all xprobe/xinference:latest xinference-local -H 0.0.0.0
3. In the Xinference UI, register /root/models/Qwen2-VL-7B-Instruct and click the Launch button.

Expected behavior
The model should start normally.