Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior

I am training on images with a resolution of 256*256. When max_slice_number = 1 or 2, the trained model performs very poorly (the style of its descriptions is off and their accuracy is low), but when max_slice_number = 9 the model performs very well.

Expected Behavior

From the 2.5 tutorial, each slice is processed at 448*448, so my understanding was that a low-resolution image should not need a large number of slices (see the sketch below).

Steps To Reproduce

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response
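For reference, a back-of-the-envelope sketch of the area-based slicing heuristic implied by the tutorial (each slice covering roughly 448*448 pixels, capped at max_slice_number). The function name and exact formula are my own illustration under that assumption, not the repository's actual implementation:

```python
import math

def estimated_slice_count(width: int, height: int,
                          scale_resolution: int = 448,
                          max_slice_nums: int = 9) -> int:
    """Rough illustration of how many ~448x448 slices an image 'needs'.

    Mirrors the area-based reasoning above (image area divided by the
    area of one scale_resolution x scale_resolution slice), capped at
    max_slice_nums. NOT the repository's code.
    """
    ideal = math.ceil((width * height) / (scale_resolution ** 2))
    return max(1, min(ideal, max_slice_nums))

# A 256x256 input covers less area than a single 448x448 slice,
# so by this reasoning it should not need to be sliced at all:
print(estimated_slice_count(256, 256))    # -> 1
print(estimated_slice_count(1344, 1344))  # -> 9 (capped by max_slice_nums)
```

By this arithmetic the max_slice_number setting should not matter for a 256*256 image, which is what makes the observed quality gap between max_slice_number = 1/2 and 9 surprising.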
Hi, are you using the latest code? In theory, the image should not be sliced at all.