Adds evaluation on challenging WxBS and EVD datasets #52
base: main
Conversation
Thank you for the great contribution, I believe the evals on your challenging datasets are a great addition to gluefactory!
I marked some improvements in the code. When running the eval on both EVD / WxBS, I get many warnings "libpng warning: iCCP: known incorrect sRGB profile"; not sure how this could be addressed. Also, WxBS failed for me, related to epi_error. The proposed changes should fix these problems.
Please also run the formatter once such that the CI passes:
python -m black .
python -m isort .
To evaluate LightGlue on EVD, run:
```bash
python -m gluefactory.eval.evd --conf gluefactory/configs/superpoint+lightglue-official.yaml
```
You can also run this with python -m gluefactory.eval.evd --conf superpoint+lightglue-official.
To evaluate LightGlue on WxBS, run:
```bash
python -m gluefactory.eval.WxBS --conf gluefactory/configs/superpoint+lightglue-official.yaml
```
python -m gluefactory.eval.wxbs --conf superpoint+lightglue-official
{'epi_error@10px': 0.6141352941176471,
 'epi_error@1px': 0.2968,
 'epi_error@20px': 0.6937882352941176,
 'epi_error@5px': 0.5143617647058826,
 'epi_error_mAA': 0.5297713235294118,
When I run this, I get different results, close to what is reported in the table below.
I will update this
M = est["M_0to1"]
inl = est["inliers"].numpy()
n_epi_err = sym_epipolar_distance(
    data["pts_0to1"][:, :2].double(), data["pts_0to1"][:, 2:].double(),
    M.double(), squared=False,
).detach().cpu().numpy()
results["epi_error"] = n_epi_err
results should be a float per entry (here it is a list). I suggest computing the AUC over the epipolar errors here, create one entry per threshold, and then just average them over all data points in the summary. This way it is compatible with the current setup.
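For example, something along these lines (a rough sketch only, reusing the AUCMetric helper already used in the summary code of this PR; the threshold values are taken from the reported results and are otherwise assumptions):
```python
# Sketch (assumption): store one scalar per threshold for this pair instead of
# the raw error list, so the default summary logic can simply average over pairs.
auc_ths = [1, 5, 10, 20]  # px thresholds, matching the reported epi_error@Xpx keys
for th, auc in zip(auc_ths, AUCMetric(auc_ths, n_epi_err).compute()):
    results[f"epi_error@{th}px"] = float(auc)
```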
Update: I will check that this does not change the actual results.
    if not isinstance(v[0], str):
        arr = np.array([x.astype(np.float64) for x in v])
        dt = h5py.special_dtype(vlen=np.float64)
        hfile.create_dataset(k, data=arr, dtype=dt)
    else:
        arr = arr.astype("object")
        hfile.create_dataset(k, data=arr)
else:
    hfile.create_dataset(k, data=arr)
With the fixes proposed in utils this should not be required.
for th, results_i in fm_results.items():
    pair_mean = []
    for pair_results in results_i[key]:
        pair_mean.append(AUCMetric(auc_ths, pair_results).compute())
    pose_aucs[th] = np.array(pair_mean).mean(axis=0)
It would be better to compute the AUC over epipolar errors in eval_matches_epipolar_via_gt_points, and then just average the respective values here. The only thing this function needs to compute is the mAA, where you can just average all entries in the summary that start with epi_error@X.
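The summary step could then look roughly like this (a sketch, assuming the per-threshold means are already stored in summary under the epi_error@Xpx keys shown above):
```python
# Sketch (assumption): the mAA is just the mean of the per-threshold AUC entries.
epi_keys = [k for k in summary if k.startswith("epi_error@")]
summary["epi_error_mAA"] = float(np.mean([summary[k] for k in epi_keys]))
```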
Thank you for the review, I will make the changes shortly, where possible. Regarding the "libpng warning: iCCP: known incorrect sRGB profile": that is an issue with the images themselves; I will check if anybody still has access to that server to update them.
Closes #35
Compared to the original papers, I have added image resizing to be consistent with the gluefactory evals and to make things easier for learned features.
It also turned out that the big homography changes in EVD are harder than the appearance changes in WxBS.
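For context, a hypothetical sketch of what such a resize setting can look like in a gluefactory eval default configuration; the exact keys and the 1600 px value are assumptions, not necessarily what this PR uses:
```python
# Sketch (assumption): resize the long image side during preprocessing, in line
# with the other gluefactory evals; keys and value are illustrative only.
default_conf = {
    "data": {
        "preprocessing": {"side": "long", "resize": 1600},
    },
}
```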
EVD: mAA @ 1/5/10/20 px
WxBS: mAA @ 1/5/10/20 px