As mentioned in #206, I'd like to see features from my fork brought upstream so I can get rid of the fork altogether. I don't have the time to do that myself, so I'm outlining the features here in case someone else is feeling ambitious.
The second feature worth bringing up is a different tracker algorithm that is mathematically more accurate. It is based on the geometry shown in the diagram found here. The relevant code is here.
There are a couple of benefits to this algorithm. First, the only assumed value is the camera's focal length, and that can be overridden by doing a camera calibration; the positional estimate can therefore be correct with any camera (not only the PS Eye), and the calibration is more standard than the psmoveapi distance calibration. Second, it does not assume the projection of the sphere on the sensor is a perfect circle; it is an ellipse, especially far from the centre. Third, it uses simple geometry and trigonometric identities instead of any complicated math, so it's faster than the current version and has a closed-form solution (no binary search).
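To make the geometry concrete, here's a minimal sketch of the trig (my own illustration, not the fork's verbatim code; the bulb radius and PS Eye focal length below are ballpark assumptions). The two silhouette edges along the axis from the principal point through the ellipse center each map to a viewing angle via atan; the center ray is their bisector, and the half-angle between them gives the distance in closed form:

```cpp
#include <cmath>
#include <cstdio>

struct SpherePose {
    double distance_cm;  // focal point to sphere center
    double bearing_rad;  // angle between optical axis and center ray
};

// u_near/u_far: pixel offsets (from the principal point) of the two
// silhouette edges along the axis through the ellipse center, u_far > u_near.
// f_px: focal length in pixels. radius_cm: physical bulb radius.
SpherePose estimate_sphere(double u_near, double u_far,
                           double f_px, double radius_cm)
{
    double beta_near = std::atan(u_near / f_px);    // ray to near edge
    double beta_far  = std::atan(u_far  / f_px);    // ray to far edge
    double bearing = 0.5 * (beta_near + beta_far);  // center ray bisects them
    double alpha   = 0.5 * (beta_far - beta_near);  // angular radius of sphere
    // Rays tangent to a sphere of radius R at distance d satisfy
    // sin(alpha) = R/d, so the distance falls out directly:
    double distance = radius_cm / std::sin(alpha);  // closed form, no search
    return { distance, bearing };
}

int main()
{
    // ~2.25 cm bulb radius; ~554 px is a ballpark PS Eye focal length.
    SpherePose p = estimate_sphere(40.0, 60.0, 554.0, 2.25);
    std::printf("distance = %.1f cm, bearing = %.3f rad\n",
                p.distance_cm, p.bearing_rad);
    return 0;
}
```

The bisector step is the part a naive circle fit gets wrong: the projected ellipse center is not the projection of the sphere center, and the error grows away from the image centre.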
The main drawbacks are that it can break with a poor camera calibration, and that it is quite sensitive to partially occluded bulbs, because occlusion distorts the ellipse fit.
Note that the fork also includes a couple of different positional smoothing filters that are configured with a new settings structure.
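For illustration, the settings structure plus one of the filters could look something like this (the names here are invented, not the fork's actual API):

```cpp
#include <cmath>

struct Vec3 { double x = 0, y = 0, z = 0; };

// Hypothetical settings block; the fork's real structure will differ.
struct PositionFilterSettings {
    enum Mode { NONE, LOWPASS, KALMAN };
    Mode   mode      = LOWPASS;
    double cutoff_hz = 4.0;   // low-pass cutoff frequency
    double sample_hz = 60.0;  // tracker update rate
};

// Simple exponential low-pass over the estimated 3D position.
struct LowPassPositionFilter {
    double alpha;             // blend factor from cutoff and sample rate
    Vec3   state;
    bool   primed = false;

    explicit LowPassPositionFilter(const PositionFilterSettings& s) {
        double rc = 1.0 / (2.0 * 3.14159265358979 * s.cutoff_hz);
        double dt = 1.0 / s.sample_hz;
        alpha = dt / (rc + dt);
    }

    Vec3 update(const Vec3& raw) {
        if (!primed) { state = raw; primed = true; return state; }
        state.x += alpha * (raw.x - state.x);
        state.y += alpha * (raw.y - state.y);
        state.z += alpha * (raw.z - state.z);
        return state;
    }
};
```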
There is a better algorithm in the wild based on fitting a 3D cone to the data, using the camera's focal point as the cone's tip. You can see it in action here. I haven't attempted to work out how Oliver did it, and he hasn't published his code yet. One of those two things will happen eventually.
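For anyone who wants to take a crack at it before Oliver publishes: one plausible formulation (my guess, not necessarily his) is that every silhouette pixel back-projects to a unit ray from the focal point that is tangent to the sphere, so all rays satisfy a·r_i = cos(alpha) for the unknown cone axis a and half-angle alpha. Minimizing the spread of a·r_i over unit axes makes a the smallest-eigenvalue eigenvector of the ray covariance, which would also explain the occlusion robustness: a partial arc still pins down the same cone.

```cpp
#include <array>
#include <cmath>
#include <vector>

using Vec3 = std::array<double, 3>;

static double dot(const Vec3& a, const Vec3& b)
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

// rays: unit back-projections of the silhouette pixels. Returns the cone
// axis; *out_distance receives the distance to the sphere center.
Vec3 fit_cone(const std::vector<Vec3>& rays, double radius,
              double* out_distance)
{
    // Mean ray and 3x3 covariance of the ray directions.
    Vec3 mean = {0, 0, 0};
    for (const Vec3& r : rays)
        for (int k = 0; k < 3; ++k) mean[k] += r[k] / rays.size();
    double C[3][3] = {};
    for (const Vec3& r : rays)
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                C[i][j] += (r[i] - mean[i]) * (r[j] - mean[j]);

    // Smallest eigenvector of C via power iteration on (tr(C)*I - C),
    // whose dominant eigenvector is exactly C's smallest.
    double tr = C[0][0] + C[1][1] + C[2][2];
    Vec3 a = mean;  // the axis is near the mean ray: good starting point
    for (int it = 0; it < 50; ++it) {
        Vec3 next = {0, 0, 0};
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                next[i] += ((i == j ? tr : 0.0) - C[i][j]) * a[j];
        double n = std::sqrt(dot(next, next));
        for (int k = 0; k < 3; ++k) a[k] = next[k] / n;
    }
    if (dot(a, mean) < 0)  // keep the axis pointing into the scene
        for (int k = 0; k < 3; ++k) a[k] = -a[k];

    double cos_alpha = dot(a, mean);  // = mean of a . r_i = cos(half-angle)
    double sin_alpha = std::sqrt(1.0 - cos_alpha * cos_alpha);
    *out_distance = radius / sin_alpha;  // center sits at distance * a
    return a;
}
```

The sphere center would then be (radius / sin(alpha)) · a, i.e. the returned distance along the fitted axis.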
@cboulay I'm running into situations where psmoveapi's current positional tracking could do with improvement (mainly to do with partial occlusion of the sphere).
I had a look at that reddit thread you linked and Doc_Ok's solution looks super interesting; it's stable and deals with occlusion well.
You mentioned that you'd already been in contact with him about getting his implementation in here; I was wondering whether you've heard anything more from him, or if there's anything we can do to help move this along.
I'm really interested in getting this in as it looks like it improves the tracking by quite a lot.
I haven't had time to work it out myself and I don't think he's willing to share until he publishes it somehow. I have a couple surgeries/experiments in the next few weeks and I need to finish preparing for those, but I'll get back to this when my other software is ready.