
[BUG] Why does changing max_slice_number between 1 and 9 make such a huge difference in results for low-resolution 256*256 input images? #665

Open
flawsss opened this issue Nov 12, 2024 · 1 comment


@flawsss

flawsss commented Nov 12, 2024

Is there an existing issue / discussion for this?

  • I have searched the existing issues / discussions

Is there an existing answer for this in FAQ?

  • I have searched FAQ

Current Behavior

I trained on images with a resolution of 256*256. When max_slice_number = 1, 2, and so on, the fine-tuned model performs very poorly (the language style does not match and the descriptions are inaccurate), but when max_slice_number = 9 the model performs very well.

Expected Behavior

According to the tutorial for 2.5, each slice is processed at 448*448, so my previous understanding was that a low-resolution image should not need a large number of slices.
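To illustrate that understanding, here is a minimal sketch (not the repository's actual code) assuming the slice count is derived from the image area relative to a 448*448 patch; the function name and formula are illustrative only:

```python
import math

def estimated_slice_count(width, height, max_slice_number, patch=448):
    # How many 448x448 patches the image "covers"; small images cover less than one.
    ratio = (width * height) / (patch * patch)
    return max(1, min(math.ceil(ratio), max_slice_number))

print(estimated_slice_count(256, 256, 1))    # 1
print(estimated_slice_count(256, 256, 9))    # 1 -- a 256x256 image maps to one slice either way
print(estimated_slice_count(1344, 1344, 9))  # 9 -- only large images are capped by max_slice_number
```

Under that assumption, max_slice_number = 1 and max_slice_number = 9 should produce identical slicing for a 256*256 input, which is why the large difference in training results surprised me.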

Steps To Reproduce

  1. Train with different values of max_slice_number
  2. Test with web_demo_2.6.py

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

@LDLINGLINGLING
Collaborator

Hi, are you using the latest code? In theory, a 256*256 image should not be sliced at all.
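One way to check this at inference time is to compare outputs with different slice limits. A hedged sketch, assuming the chat() interface of MiniCPM-V 2.6 accepts a max_slice_nums override as in the repository's examples (verify the argument name, model ID, and image path against the code you are running; they are illustrative here):

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True,
                                  torch_dtype=torch.bfloat16).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("openbmb/MiniCPM-V-2_6", trust_remote_code=True)

image = Image.open("sample_256x256.png").convert("RGB")  # hypothetical 256x256 test image
msgs = [{"role": "user", "content": [image, "Describe this image."]}]

# If a 256x256 input is never sliced, both settings should give essentially the same answer.
for n in (1, 9):
    answer = model.chat(image=None, msgs=msgs, tokenizer=tokenizer, max_slice_nums=n)
    print(f"max_slice_nums={n}: {answer}")
```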
