Hi, I'm currently rewriting code for batch inference and have run into a problem. I already rewrote the RPN layers to accept batched images. However, when I run `net.forward()`, the results are inconsistent. For example, with batch size = 2, `net.blobs['rois'].data.shape = (2, 300, 5)`, and the boxes derived from `rois` have shape `(2, 300, 4)`, which is consistent with the single-image case. But the network output is not: after `blobs_out = net.forward(**forward_kwargs)`, `blobs_out['bbox_pred'].data.shape = (2, 8)`, so it seems to be missing a dimension. Any hints or suggestions for this situation? Thanks a lot!
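If this is built on the stock py-faster-rcnn / Caffe pipeline (an assumption, since the issue doesn't name the repo), one likely culprit is the `rois` blob layout: Caffe's `ROIPooling` layer expects a flat `(R, 5)` blob whose first column is the image index within the batch, not a `(batch, 300, 5)` blob. A `(2, 300, 5)` blob would be read as `num = 2`, so only two "ROIs" reach the detection head, which would explain a `(2, 8)` `bbox_pred`. The shapes and variable names below are hypothetical, chosen to match the numbers in the report:

```python
import numpy as np

# Hypothetical setup matching the issue: 2 images, 300 ROIs each,
# and 2 classes so bbox_pred has 4 * 2 = 8 columns per ROI.
batch_size, num_rois, num_classes = 2, 300, 2

rois_batched = np.zeros((batch_size, num_rois, 5), dtype=np.float32)
for i in range(batch_size):
    rois_batched[i, :, 0] = i  # image index goes in column 0

# Flatten to the (R, 5) layout that ROIPooling expects.
rois_flat = rois_batched.reshape(-1, 5)
assert rois_flat.shape == (batch_size * num_rois, 5)

# The head then emits one row per ROI: (R, 4 * num_classes) ...
bbox_pred_flat = np.zeros(
    (batch_size * num_rois, 4 * num_classes), dtype=np.float32
)

# ... which can be regrouped per image after the forward pass.
bbox_pred = bbox_pred_flat.reshape(batch_size, num_rois, 4 * num_classes)
assert bbox_pred.shape == (2, 300, 8)
```

In other words, the batch dimension is carried in column 0 of a flat ROI list rather than as a leading axis, and the per-image grouping is recovered by reshaping the head's output afterwards.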