SAM2 more than one point and label error #3484
Comments
@frankfliu
Let me see if I can improve it.

@frankfliu
I created a PR to address this issue: #3495. You need to use 0.31.0-SNAPSHOT to make it work.
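Since the fix targets the 0.31.0-SNAPSHOT build, the project needs to pull DJL snapshots rather than a release. A minimal Gradle sketch, assuming the Sonatype snapshots repository that DJL publishes its snapshot builds to (check the DJL documentation for the exact coordinates your setup needs):

```groovy
repositories {
    mavenCentral()
    // Assumption: DJL snapshot builds are published to Sonatype snapshots
    maven { url "https://oss.sonatype.org/content/repositories/snapshots/" }
}

dependencies {
    // Pin all DJL modules to the snapshot via the BOM
    implementation platform("ai.djl:bom:0.31.0-SNAPSHOT")
    implementation "ai.djl:api"
    runtimeOnly "ai.djl.pytorch:pytorch-engine"
}
```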
Description
Dear DJL Team,
SAM2 fails with an error when more than one point and label are supplied.
Expected Behavior
SAM2 should accept multiple input points and labels.
Error Message
Exception in thread "main" ai.djl.translate.TranslateException: ai.djl.engine.EngineException: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript, serialized code (most recent call last):
File "code/torch.py", line 63, in forward
feat_s1 = torch.view(torch.permute(feat2, [1, 2, 0]), [1, -1, 128, 128])
feat_s0 = torch.view(torch.permute(feat, [1, 2, 0]), [1, -1, 256, 256])
_23 = (sam_prompt_encoder0).forward(point_coords, point_labels, )
~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
_24, _25, = _23
image_embeddings = torch.unsqueeze(torch.select(image_embed, 0, 0), 0)
File "code/torch/sam2/modeling/sam/prompt_encoder.py", line 78, in forward
_32 = annotate(List[Optional[Tensor]], [_31])
_33 = torch.add(torch.index(point_embedding2, _32), weight2)
_34 = torch.view(_33, [256])
~~~~~~~~~~ <--- HERE
_35 = annotate(List[Optional[Tensor]], [_31])
point_embedding3 = torch.index_put(point_embedding2, _35, _34)
Traceback of TorchScript, original code (most recent call last):
/Users/lufen/source/venv/lib/python3.11/site-packages/sam2/modeling/sam/prompt_encoder.py(98): _embed_points
/Users/lufen/source/venv/lib/python3.11/site-packages/sam2/modeling/sam/prompt_encoder.py(169): forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1562): _call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(74): predict
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(62): forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1543): _slow_forward
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1562): _call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/nn/modules/module.py(1553): _wrapped_call_impl
/Users/lufen/source/venv/lib/python3.11/site-packages/torch/jit/_trace.py(1275): trace_module
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(104): trace_model
/Users/lufen/source/ptest/p_sam2/trace_sam2_img.py(111):
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py(18): execfile
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(1535): _exec
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(1528): run
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(2218): main
/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py(2236):
RuntimeError: shape '[256]' is invalid for input of size 512
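The final RuntimeError shows the root cause: the traced graph hard-codes a `.view([256])` on the point-embedding slice, which is only valid when a single point (one 256-dim embedding) was supplied at trace time. With two points the tensor holds 2 × 256 = 512 elements, and the baked-in reshape fails. A minimal sketch of the same shape mismatch, with NumPy standing in for torch (the 256 embedding size is taken from the traceback):

```python
import numpy as np

EMBED_DIM = 256  # SAM2 prompt-encoder embedding size, per the traceback

# One point: 256 elements, so flattening to [256] succeeds
one_point = np.zeros((1, EMBED_DIM))
print(one_point.reshape(EMBED_DIM).shape)  # (256,)

# Two points: 512 elements, the hard-coded [256] view fails
two_points = np.zeros((2, EMBED_DIM))
try:
    two_points.reshape(EMBED_DIM)
except ValueError as err:
    print(err)  # cannot reshape array of size 512 into shape (256,)
```

This is a classic `torch.jit.trace` pitfall: shapes observed during tracing get frozen into the graph, so inputs with a different number of points break at runtime.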
Caused by: ai.djl.engine.EngineException: The following operation failed in the TorchScript interpreter.
(identical TorchScript traceback as above)
RuntimeError: shape '[256]' is invalid for input of size 512