[BUG] Version 2.6 is bound to flash_attn by default and this cannot be disabled; no matching flash_attn version or installation example is provided. #429
Comments
I don't understand Chinese, but I think we have a similar problem with flash_attn. I use Python 3.11 and install the libraries in this order: numpy==1.24.3 ...
Hey, is there a solution to this? I'm stuck here too. I'm on Windows with cu117 and torch 2.1.0 and couldn't find a matching flash_attn package.
Haha, I gave up on Windows and switched to a desktop machine; it worked on the first try:
Wow, this is so hard. I don't seem to have a Linux machine.
On Windows, just use the prebuilt .whl file that matches your CUDA, cuDNN, and torch versions.
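To pick a matching wheel, it helps to print the exact build details first. A minimal sketch in plain Python (nothing project-specific assumed); the flash-attn wheel filenames encode the same fields (CUDA, torch, C++ ABI, Python version):

```python
# Print the details needed to choose a flash-attn wheel that matches this environment.
import platform
import sys

import torch

print("python:", sys.version.split()[0])                  # e.g. 3.11.x -> cp311 wheels
print("torch:", torch.__version__)                        # e.g. 2.1.0 -> torch2.1 wheels
print("cuda (torch build):", torch.version.cuda)          # e.g. 11.7 -> cu117 wheels
print("cxx11 abi:", torch._C._GLIBCXX_USE_CXX11_ABI)      # matches cxx11abiTRUE/FALSE in the wheel name
print("platform:", platform.system(), platform.machine())
```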
Try installing flash-attn==1.0.4; that works on my side.
Solved it on macOS. Write-up here: https://bothsavage.github.io/article/240810-minicpm2.6 and submitted PR #461, which modifies web_demo_2.6.py.
Hi,
with python web_demo_2.6.py --device cuda the web demo now starts, but I got a CUDA out-of-memory error :) (I have a 12 GB VRAM RTX 3060). That's another problem, though.
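The out-of-memory part is a separate problem, but on a 12 GB card a quantized checkpoint usually fits. A rough sketch; the repo id `openbmb/MiniCPM-V-2_6-int4` and the plain AutoModel loading path are assumptions, not something confirmed in this thread:

```python
# Hedged sketch: load an assumed 4-bit MiniCPM-V 2.6 checkpoint to reduce VRAM usage.
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-V-2_6-int4"  # assumed quantized variant of the 2.6 checkpoint
model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```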
You can also just remove the flash_attn package and it still runs; I saw that an alternative attention implementation is documented as a fallback.
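For reference, a minimal sketch of running without flash_attn by asking transformers for PyTorch's built-in SDPA attention instead. The exact arguments web_demo_2.6.py uses may differ, so treat the checkpoint name and options below as assumptions:

```python
# Hedged sketch: load MiniCPM-V 2.6 with SDPA attention so flash-attn is not required.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "openbmb/MiniCPM-V-2_6"        # assumed checkpoint name
model = AutoModel.from_pretrained(
    model_id,
    trust_remote_code=True,               # the model ships custom modeling code
    attn_implementation="sdpa",           # PyTorch SDPA instead of "flash_attention_2"
    torch_dtype=torch.bfloat16,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```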
I caught the import error; it looked like a torch/flash-attn version mismatch, so I use flash-attn==2.5.8 with torch==2.3.0.
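A quick way to check whether the installed wheel actually matches the torch build (a generic check, not code from the repo):

```python
# Check that flash_attn imports cleanly against the installed torch build;
# ABI mismatches typically surface here as ImportError (undefined symbol).
import torch

try:
    import flash_attn
    print(f"OK: flash_attn {flash_attn.__version__} with torch {torch.__version__}")
except (ImportError, OSError) as err:
    print(f"flash_attn unusable with torch {torch.__version__}: {err}")
    print("Install a wheel built for this torch/CUDA pair, e.g. flash-attn==2.5.8 for torch 2.3.0.")
```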
It's very useful, thx bro.
是否已有关于该错误的issue或讨论? | Is there an existing issue / discussion for this?
该问题是否在FAQ中有解答? | Is there an existing answer for this in FAQ?
当前行为 | Current Behavior
The 2.6 demo reports an error like this.
期望行为 | Expected Behavior
I hope the hard binding to flash_attn can be made optional, or that an installation tutorial with the exact required version is provided.
复现方法 | Steps To Reproduce
The 2.6 demo script, on a Linux system.
运行环境 | Environment
备注 | Anything else?
No response