Replies: 12 comments
-
Hi, please check the following demo code from mmpose/demo/top_down_img_demo_with_mmdet.py, Lines 93 to 115 in 4a3b12a.
-
Thank you for your answer. I tried your advice, but inference still doesn't work.
However, it does work in Google Colab. In short, is this a problem with my local environment? If the number of objects is 1, pose inference is OK. Is there another way to solve this problem?
-
Is there any error message when you use a detection result with multiple boxes?
-
The YOLOv4 object-detection result is OK. A good inference image is below; a bad inference image is below.
In short, the object-detection step is OK. On the other hand, the pose-estimation step result is bad.
-
Could you please check whether an image path is assigned in mmpose/demo/top_down_img_demo_with_mmdet.py, Lines 106 to 115 in 4a3b12a?
-
The argument [image_name] is an image path. I also tried passing an np.array image as [image_name]; however, the inference result doesn't change.
-
Sorry, we could not reproduce this problem in our environment. We tried the exact image and pose model, and the result is alright. Could you please provide the entire command you used? That may help us locate the problem.
-
I ran this code in Jupyter Lab in a conda environment. python: 3.8.12
input_cow.png
-
The same happens to me; it seems to be an issue on the Apple M1 chip (I am using a MacBook Pro with an M1 Pro). Running the demo on a fresh install of mmpose at the latest release:
produces different outputs on the Apple M1 than on Ubuntu 18 (where the result is correct). If there is only one bbox, the result is alright on M1; otherwise only the detected boxes are returned.
-
Quick fix: when there is more than 1 bbox in an image, run inference on the bboxes one by one and then append the results together.
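A minimal sketch of that one-box-at-a-time workaround. Here `run_pose_inference` is a hypothetical stub standing in for mmpose's `inference_top_down_pose_model`, so the pattern is self-contained and runnable; the stub's return shape is an assumption, not the library's actual API:

```python
def run_pose_inference(img, person_results):
    # Stub "pose model": pretend each input bbox yields one pose result.
    # In real code this would be inference_top_down_pose_model(...).
    return [{"bbox": p["bbox"], "keypoints": []} for p in person_results]

def pose_inference_per_box(img, person_results):
    """Call the pose model with a single bbox at a time and merge results."""
    all_results = []
    for person in person_results:
        # Batch size is always 1 here, which sidesteps the M1 batching issue.
        result = run_pose_inference(img, [person])
        all_results.extend(result)
    return all_results

boxes = [{"bbox": [10, 20, 110, 220, 0.9]},
         {"bbox": [150, 30, 260, 240, 0.8]}]
poses = pose_inference_per_box("input_cow.png", boxes)
print(len(poses))  # one pose result per input bbox: 2
```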
-
This code works even on the M1 chip.
-
Yes, we are in the same situation. My guess is that the M1 can't handle batched inputs / a list of bboxes. In any case, M1 inference speed is very slow across the board, so it is probably not the best choice as a development machine for PyTorch.
-
Hello.
Thank you for releasing this nice project.
I want to solve the question in the title.
I'm using YOLOv4 as the bbox-detection model and the pretrained hrnet_w48_animalpose_256x256 as pose_model. I then call
inference_top_down_pose_model(pose_model, img_or_path, person_results=[detected bboxes of YOLOv4], ...)
from mmpose.apis.inference, CPU-only. Here img_or_path is the image path of an image with 4 cows, so the detected YOLOv4 bboxes contain 4 boxes of the form [x1, y1, x2, y2, prob].
Unfortunately, this inference doesn't work.
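For reference, a hedged sketch of wrapping raw YOLOv4 detection rows ([x1, y1, x2, y2, prob]) into the list-of-dicts `person_results` shape (each entry keyed by `'bbox'`) that `inference_top_down_pose_model` takes in mmpose 0.x; the sample coordinates below are made up:

```python
import numpy as np

# Hypothetical YOLOv4 output for a 4-cow image: one [x1, y1, x2, y2, prob]
# row per detected animal (coordinates are illustrative only).
yolo_dets = np.array([
    [ 34.0,  50.0, 210.0, 300.0, 0.97],
    [220.0,  60.0, 400.0, 310.0, 0.93],
    [410.0,  55.0, 590.0, 305.0, 0.90],
    [600.0,  70.0, 780.0, 315.0, 0.88],
])

# Each detection becomes a dict with a 'bbox' key.
person_results = [{"bbox": det} for det in yolo_dets]

# The actual call (requires mmpose; shown for context only):
# pose_results, _ = inference_top_down_pose_model(
#     pose_model, "cows.png", person_results=person_results, format="xyxy")
print(len(person_results))  # 4
```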
So, I changed _inference_single_pose_model as below. In short, I passed Tensor[1, 3, 256, 256] as img, and with that the inference result is OK.
Do you know the cause of this problem?
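The tensor-level change described above can be sketched generically: instead of one batched forward pass over a `[N, 3, 256, 256]` input, feed `[1, 3, 256, 256]` slices one at a time and concatenate the outputs. `model_forward` below is a hypothetical stand-in for the pose network, implemented with NumPy so the pattern is runnable as-is:

```python
import numpy as np

def model_forward(batch):
    # Stub "network": reduces each [3, 256, 256] sample to a 3-vector.
    # A real pose model would return heatmaps instead.
    return batch.mean(axis=(2, 3))

def forward_one_by_one(batch):
    # Feed [1, 3, 256, 256] slices individually, then stack the outputs.
    outputs = [model_forward(sample[None, ...]) for sample in batch]
    return np.concatenate(outputs, axis=0)

batch = np.random.rand(4, 3, 256, 256).astype(np.float32)
out_batched = model_forward(batch)       # one batched pass
out_single = forward_one_by_one(batch)   # N single-sample passes
print(out_single.shape)  # (4, 3)
```

On a backend where batching works correctly, both paths should produce the same outputs; the per-sample path is simply slower.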
[My environment MEMO]
M1 mac BigSur
python: 3.8.12
pytorch: 1.9.0
mmpose: 0.20.0
gcc: Apple clang version 12.0.5