Hello! I saw your great TF implementation, and I would be thankful for your help. I'm working on a similar task with the BlazePose model in TF2. I use a different model architecture, but it is otherwise the same as yours, and I initialize it with pre-trained weights from this graph. I then train the model on our specific dataset; after augmentation we have more than 30,000 images.
But one problem appears during this process: the output is a set of points (coordinates) that is the same for any input, so all coordinates are constant. Perhaps you have some ideas or a direction of thinking for this?
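For reference, here is a minimal sketch of how I check for the symptom; the saved-model path and the 256x256 RGB input shape are assumptions about my setup, not your code:

```python
import numpy as np
import tensorflow as tf

# Load the trained model; the path and the 256x256 RGB input shape are
# assumptions about my setup described above.
model = tf.keras.models.load_model("blazepose_tf2")

# Run the model on a batch of clearly different (random) inputs.
batch = np.random.rand(4, 256, 256, 3).astype(np.float32)
preds = model.predict(batch)

# If the network has collapsed to predicting one fixed pose, the spread
# of the predictions across the batch will be (near) zero.
print("max deviation across inputs:", np.abs(preds - preds[0]).max())
```

In our runs this deviation is essentially zero, which is how we noticed that the predictions do not depend on the input at all.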
Thank you for your work and help.