Pcl sptm #24

Open
wants to merge 44 commits into base: master

44 commits
8605245
update Unsupervised version
Joechencc Jul 16, 2021
290604c
update
Joechencc Jul 16, 2021
60912fe
update
Joechencc Jul 16, 2021
03fba19
minor bug
Joechencc Jul 17, 2021
7c36bf7
update
Joechencc Jul 17, 2021
eb99e55
updated
Joechencc Jul 17, 2021
87e5f0b
push
Joechencc Jul 20, 2021
72eca54
update
Joechencc Jul 20, 2021
350aa5c
remove generate at this time
Joechencc Jul 20, 2021
a8a31fa
update no generate new data version
Joechencc Jul 20, 2021
58b6be3
update code work?
Joechencc Jul 21, 2021
58ddd16
update conflict
Joechencc Jul 21, 2021
ed81ef2
workable unsupervised
Joechencc Jul 22, 2021
5e0a752
workable unsupervised
Joechencc Jul 22, 2021
db8287c
update for shannon
Jul 26, 2021
61742ce
try rotation augmentation
Jul 27, 2021
630ddcd
update CNN structure best now
Jul 29, 2021
8d3de5d
update CNN model
Aug 1, 2021
760433c
update CNN
Aug 1, 2021
70bf241
add batch update
Aug 2, 2021
2890125
update minor
Aug 10, 2021
a0e7374
small update
Aug 24, 2021
7bd53d1
Merge branch 'Unsupervised_Shannon' of https://github.com/Joechencc/U…
Aug 24, 2021
b1b4bff
add res_mode
Aug 24, 2021
314f1ba
update conflict
May 31, 2022
0522bf4
PCL_SPTM
May 31, 2022
7950fa4
Update README.md
Joechencc May 31, 2022
c0fa51b
Update README.md
Joechencc Jul 30, 2022
8589c96
Update README.md
Joechencc Jul 30, 2022
bb68b6e
Update README.md
Joechencc Jul 30, 2022
64b439c
Update README.md
Joechencc Jul 30, 2022
da0f5fc
Update README.md
Joechencc Jul 30, 2022
927107e
Update README.md
Joechencc Jul 30, 2022
df098fb
Update README.md
Joechencc Jul 30, 2022
4c4ca2b
Update README.md
Joechencc Jul 30, 2022
b88830f
Add files via upload
Joechencc Jul 30, 2022
68e09c7
Update README.md
Joechencc Jul 30, 2022
2424a36
Update README.md
Joechencc Jul 30, 2022
1708fb0
Update README.md
Joechencc Jul 30, 2022
da72a19
Update README.md
Joechencc Jul 30, 2022
642570d
Update README.md
Joechencc Jul 30, 2022
3c3d05b
Update README.md
Joechencc Jul 30, 2022
9a4d9f6
Update README.md
Joechencc Jul 30, 2022
3b60425
Update README.md
Joechencc Sep 13, 2023
Binary file added NSF_1.gif
Binary file added NSF_2.gif
Binary file added NSF_3.gif
98 changes: 85 additions & 13 deletions README.md
@@ -1,40 +1,112 @@
# PointNetVlad-Pytorch
Unofficial PyTorch implementation of PointNetVlad (https://github.com/mikacuy/pointnetvlad)
# Self-Supervised Visual Place Recognition by Mining Temporal and Feature Neighborhoods
[Chao Chen](https://scholar.google.com/citations?hl=en&user=WOBQbwQAAAAJ), [Xinhao Liu](https://gaaaavin.github.io), [Xuchu Xu](https://www.xuchuxu.com), [Li Ding](https://www.hajim.rochester.edu/ece/lding6/), [Yiming Li](https://scholar.google.com/citations?user=i_aajNoAAAAJ), [Ruoyu Wang](https://github.com/ruoyuwangeel4930), [Chen Feng](https://scholar.google.com/citations?user=YeG8ZM0AAAAJ)

**"A novel self-supervised VPR model capable of retrieving positives from various orientations."**

![PyTorch](https://img.shields.io/badge/PyTorch-%23EE4C2C.svg?logo=PyTorch&logoColor=white)
[![Linux](https://svgshare.com/i/Zhy.svg)](https://svgshare.com/i/Zhy.svg)
[![GitLab issues total](https://badgen.net/github/issues/ai4ce/V2X-Sim)](https://github.com/Joechencc/TF-VPR)
[![GitHub stars](https://img.shields.io/github/stars/ai4ce/V2X-Sim.svg?style=social&label=Star&maxAge=2592000)](https://github.com/Joechencc/TF-VPR/stargazers/)
<div align="center">
<img src="https://s2.loli.net/2022/07/30/ZldqmQGFhajCxRn.png" height="300">
</div>
<br>

## Abstract

Visual place recognition (VPR) using deep networks has achieved state-of-the-art performance. However, most related approaches require a training set with ground-truth sensor poses to obtain the positive and negative samples of each observation's spatial neighborhoods. When such knowledge is unavailable, the temporal neighborhoods from a sequentially collected data stream can be exploited for self-supervision, although with suboptimal performance. Inspired by noisy-label learning, we propose a novel self-supervised VPR framework that uses both the temporal neighborhoods and learnable feature neighborhoods to discover the unknown spatial neighborhoods. Our method follows an iterative training paradigm that alternates between (1) representation learning with data augmentation, (2) positive set expansion to include the current feature-space neighbors, and (3) positive set contraction via geometric verification. We conduct comprehensive experiments on both simulated and real datasets, with both image and point-cloud inputs. The results demonstrate that our method outperforms the baselines in recall rate, robustness, and orientation diversity, a novel metric we propose for VPR.
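The three-step alternation described above can be sketched as a toy positive-mining loop. This is a minimal sketch under stated assumptions — precomputed embeddings, Euclidean feature distance, and a caller-supplied geometric-verification callback — and none of the helper names come from the repository:

```python
import numpy as np

def tf_vpr_positive_mining(features, window=2, k=2, verify=None):
    """One toy TF-VPR mining iteration over precomputed embeddings."""
    features = np.asarray(features, dtype=float)
    n = len(features)
    # (1) Bootstrap: temporal neighbors within +/- window are assumed positives.
    positives = {i: {j for j in range(max(0, i - window), min(n, i + window + 1)) if j != i}
                 for i in range(n)}
    # (2) Expansion: add each sample's k nearest feature-space neighbors.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    for i in range(n):
        order = [int(j) for j in np.argsort(dists[i]) if j != i]
        positives[i] |= set(order[:k])
    # (3) Contraction: keep only pairs passing geometric verification.
    if verify is not None:
        positives = {i: {j for j in s if verify(i, j)} for i, s in positives.items()}
    return positives
```

In the paper's full pipeline, step (1) would be interleaved with representation learning, so the feature-space neighbors in step (2) improve every epoch.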

## Dataset

Download links:
- For point clouds: see the DeepMapping point cloud simulator, https://github.com/ai4ce/PointCloudSimulator
- For real-world panoramic RGB: https://drive.google.com/drive/u/0/folders/1ErXzIx0je5aGSRFbo5jP7oR8gPrdersO

You could find more detailed documents on our [website](https://ai4ce.github.io/TF-VPR/)!

TF-VPR follows the same file structure as the [PointNetVLAD](https://github.com/mikacuy/pointnetvlad):
```
TF-VPR
├── loss # loss function
├── models # network model
| ├── PointNetVlad.py # PointNetVLAD network model
| ├── resnet_mod.py # ResNet network model
| ...
├── generating_queries # Preprocess the data, initialize the labels, and generate pickle files
| ├── generate_test_cc_sets.py # Generate the test pickle file
| ├── generate_training_tuples_cc_baseline_batch.py # Generate the train pickle file
| ...
├── results # Results are saved here
├── config.py # Configuration file
├── evaluate.py # Evaluation script
├── loading_pointcloud.py # Point cloud loading utilities
├── train_pointnetvlad.py # Main script to train TF-VPR
| ...
```
Point cloud TF-VPR result:

![](NSF_1.gif)

RGB TF-VPR result:

![](NSF_2.gif)

Real-world RGB TF-VPR result:

![](NSF_3.gif)

# Note

I kept almost everything not related to TensorFlow the same as in the original implementation.
The main differences are:
* Multi-GPU support
* Configuration file (config.py)
* Evaluation on the eval dataset after every epoch

This implementation achieves an average top 1% recall of 84.81% on the Oxford baseline.

### Pre-Requisites
* PyTorch 0.4.0
* tensorboardX
- PyTorch 0.4.0
- tensorboardX
- open3d-python 0.4
- scipy
- matplotlib
- numpy

### Generate pickle files
```
cd generating_queries/

# For training tuples in our baseline network
python generate_training_tuples_baseline.py

# For training tuples in our refined network
python generate_training_tuples_refine.py
python generate_training_tuples_cc_baseline_batch.py

# For network evaluation
python generate_test_sets.py
python generate_test_cc_sets.py
```

### Train
```
python train_pointnetvlad.py --dataset_folder $DATASET_FOLDER
python train_pointnetvlad.py
```

### Evaluate
```
python evaluate.py --dataset_folder $DATASET_FOLDER
python evaluate.py
```

Take a look at train_pointnetvlad.py and evaluate.py for more parameters.

## Benchmark

We implement SPTM, TF-VPR, and a supervised version; please check the other branches for reference.

<!-- ## Citation

If you find TF-VPR useful in your research, please cite:

```bibtex
@article{Chen_2022_RAL,
title = {Self-Supervised Visual Place Recognition by Mining Temporal and Feature Neighborhoods},
author = {Chen, Chao and Liu, Xinhao and Xu, Xuchu and Ding, Li and Li, Yiming and Wang, Ruoyu and Feng, Chen},
journal = {IEEE Robotics and Automation Letters},
year = {2022}
}
``` -->
2 changes: 2 additions & 0 deletions config.py
100644 → 100755
@@ -19,6 +19,7 @@
MOMENTUM = 0.9
OPTIMIZER = 'ADAM'
MAX_EPOCH = 20
FOLD_NUM = 128

MARGIN_1 = 0.5
MARGIN_2 = 0.2
@@ -28,6 +29,7 @@
BN_DECAY_CLIP = 0.99

RESUME = False
ROT_NUM = 8

TRAIN_FILE = 'generating_queries/training_queries_baseline.pickle'
TEST_FILE = 'generating_queries/test_queries_baseline.pickle'
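The new `ROT_NUM = 8` entry plausibly sets the number of discrete yaw rotations used by the rotation augmentation mentioned in the commit history ("try rotation augmentation") — this interpretation is an assumption, not documented in the diff. A minimal sketch of such an augmentation for point clouds:

```python
import numpy as np

ROT_NUM = 8  # mirrors the new config entry; assumed to mean yaw-rotation bins

def yaw_rotations(points, rot_num=ROT_NUM):
    """Return rot_num copies of an (N, 3) cloud, rotated about z in equal yaw steps."""
    views = []
    for r in range(rot_num):
        theta = 2.0 * np.pi * r / rot_num
        c, s = np.cos(theta), np.sin(theta)
        rot = np.array([[c,  -s,  0.0],
                        [s,   c,  0.0],
                        [0.0, 0.0, 1.0]])
        views.append(points @ rot.T)  # rotate every point by theta about z
    return views
```

Rotating about z only is the common choice for ground vehicles, where roll and pitch are near-constant.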
181 changes: 115 additions & 66 deletions evaluate.py
100644 → 100755
Expand Up @@ -4,7 +4,7 @@
import socket
import importlib
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
import sys
import torch
import torch.nn as nn
@@ -45,10 +45,11 @@ def evaluate():
    print("ave_one_percent_recall:"+str(ave_one_percent_recall))


def evaluate_model(model,save=False):
def evaluate_model(model,optimizer,epoch,save=False):
    if save:
        torch.save({
            'state_dict': model.state_dict(),
            'optimizer': optimizer,
        }, cfg.LOG_DIR + "checkpoint.pth.tar")

    #checkpoint = torch.load(cfg.LOG_DIR + "checkpoint.pth.tar")
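The new signature passes the optimizer into `evaluate_model` so it can be checkpointed alongside the model. For reference, a hedged sketch of the usual save/restore pattern (hypothetical helper names; note the diff stores the optimizer object itself, whereas saving its `state_dict()` is the more portable convention):

```python
import torch

def save_checkpoint(model, optimizer, epoch, path):
    # state_dicts keep checkpoints portable across code changes;
    # pickling the whole optimizer object also works but is more fragile.
    torch.save({
        'epoch': epoch,
        'state_dict': model.state_dict(),
        'optimizer': optimizer.state_dict(),
    }, path)

def load_checkpoint(model, optimizer, path):
    ckpt = torch.load(path)
    model.load_state_dict(ckpt['state_dict'])
    optimizer.load_state_dict(ckpt['optimizer'])
    return ckpt['epoch']  # resume training from the saved epoch
```

Restoring the optimizer state matters for Adam (used in config.py), whose per-parameter moment estimates would otherwise reset on resume.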
@@ -57,25 +58,19 @@ def evaluate_model(model,save=False):
    DATABASE_SETS = get_sets_dict(cfg.EVAL_DATABASE_FILE)

    QUERY_SETS = get_sets_dict(cfg.EVAL_QUERY_FILE)
    '''
    QUERY_SETS = []
    for i in range(4):
        QUERY = {}
        for j in range(len(QUERY_SETS_temp)//4):
            #QUERY[len(QUERY.keys())] = {"query":QUERY_SETS_temp[i][j]['query'],
            #                            "x":float(QUERY_SETS_temp[i][j]['x']),
            #                            "y":float(QUERY_SETS_temp[i][j]['y']),
            #                            }
            QUERY[len(QUERY.keys())] = QUERY_SETS_temp[i][j]
        QUERY_SETS.append(QUERY)
    '''

    if not os.path.exists(cfg.RESULTS_FOLDER):
        os.mkdir(cfg.RESULTS_FOLDER)

    recall = np.zeros(25)
    count = 0
    similarity = []

    similarity_1 = []
    similarity_5 = []
    similarity_10 = []

    one_percent_recall = []
    five_percent_recall = []
    ten_percent_recall = []

    DATABASE_VECTORS = []
    QUERY_VECTORS = []
@@ -86,49 +81,82 @@
    for j in range(len(QUERY_SETS)):
        QUERY_VECTORS.append(get_latent_vectors(model, QUERY_SETS[j]))

    len_tr = np.array(DATABASE_VECTORS).shape[1]
    recall_1 = np.zeros(int(round(len_tr/100)))
    recall_5 = np.zeros(int(round(len_tr/20)))
    recall_10 = np.zeros(int(round(len_tr/10)))
    #############
    for m in range(len(QUERY_SETS)):
        for n in range(len(QUERY_SETS)):
            if (m == n):
                continue
            pair_recall, pair_similarity, pair_opr = get_recall(
            pair_recall_1, pair_recall_5, pair_recall_10, pair_similarity_1, pair_similarity_5, pair_similarity_10, pair_opr_1, pair_opr_5, pair_opr_10 = get_recall(
                m, n, DATABASE_VECTORS, QUERY_VECTORS, QUERY_SETS)
            recall += np.array(pair_recall)
            recall_1 += np.array(pair_recall_1)
            recall_5 += np.array(pair_recall_5)
            recall_10 += np.array(pair_recall_10)

            count += 1
            one_percent_recall.append(pair_opr)
            for x in pair_similarity:
                similarity.append(x)
            one_percent_recall.append(pair_opr_1)
            five_percent_recall.append(pair_opr_5)
            ten_percent_recall.append(pair_opr_10)

            for x in pair_similarity_1:
                similarity_1.append(x)
            for x in pair_similarity_5:
                similarity_5.append(x)
            for x in pair_similarity_10:
                similarity_10.append(x)
    #########


    ### Save Evaluate vectors
    file_name = os.path.join(cfg.RESULTS_FOLDER, "database.npy")
    ### Save Evaluate vectors

    file_name = os.path.join(cfg.RESULTS_FOLDER, "database"+str(epoch)+".npy")
    np.save(file_name, np.array(DATABASE_VECTORS))
    print("saving for DATABASE_VECTORS to "+str(file_name))

    ave_recall = recall / count

    ave_recall_1 = recall_1 / count
    ave_recall_5 = recall_5 / count
    ave_recall_10 = recall_10 / count
    # print(ave_recall)

    # print(similarity)
    average_similarity = np.mean(similarity)
    average_similarity_1 = np.mean(similarity_1)
    average_similarity_5 = np.mean(similarity_5)
    average_similarity_10 = np.mean(similarity_10)
    # print(average_similarity)

    ave_one_percent_recall = np.mean(one_percent_recall)
    ave_five_percent_recall = np.mean(five_percent_recall)
    ave_ten_percent_recall = np.mean(ten_percent_recall)
    # print(ave_one_percent_recall)

    #print("os.path.join(/home/cc/PointNet-torch2,cfg.OUTPUT_FILE,log.txt):"+str(os.path.join("/home/cc/PointNet-torch2",cfg.OUTPUT_FILE,"log.txt")))
    #assert(0)
    with open(os.path.join("/home/cc/PointNet-torch2",cfg.OUTPUT_FILE), "w") as output:
        output.write("Average Recall @N:\n")
        output.write(str(ave_recall))
    with open(os.path.join(cfg.OUTPUT_FILE), "w") as output:
        output.write("Average Recall @1:\n")
        output.write(str(ave_recall_1)+"\n")
        output.write("Average Recall @5:\n")
        output.write(str(ave_recall_5)+"\n")
        output.write("Average Recall @10:\n")
        output.write(str(ave_recall_10)+"\n")
        output.write("\n\n")
        output.write("Average Similarity:\n")
        output.write(str(average_similarity))
        output.write("Average Similarity_1:\n")
        output.write(str(average_similarity_1)+"\n")
        output.write("Average Similarity_5:\n")
        output.write(str(average_similarity_5)+"\n")
        output.write("Average Similarity_10:\n")
        output.write(str(average_similarity_10)+"\n")
        output.write("\n\n")
        output.write("Average Top 1% Recall:\n")
        output.write(str(ave_one_percent_recall))
        output.write(str(ave_one_percent_recall)+"\n")
        output.write("Average Top 5% Recall:\n")
        output.write(str(ave_five_percent_recall)+"\n")
        output.write("Average Top 10% Recall:\n")
        output.write(str(ave_ten_percent_recall)+"\n")

    return ave_one_percent_recall
    return ave_one_percent_recall, ave_five_percent_recall, ave_ten_percent_recall


def get_latent_vectors(model, dict_to_process):
@@ -192,41 +220,62 @@ def get_latent_vectors(model, dict_to_process):


def get_recall(m, n, DATABASE_VECTORS, QUERY_VECTORS, QUERY_SETS):

    database_output = DATABASE_VECTORS[m]
    queries_output = QUERY_VECTORS[n]
    # print(len(queries_output))
    database_output = DATABASE_VECTORS[m] #2048*256
    queries_output = QUERY_VECTORS[n] #10*256

    database_nbrs = KDTree(database_output)
    num_neighbors = 25
    recall = [0] * num_neighbors

    top1_similarity_score = []
    one_percent_retrieved = 0
    threshold = max(int(round(len(database_output)/100.0)), 1)

    num_evaluated = 0
    for i in range(len(queries_output)):
        true_neighbors = QUERY_SETS[n][i][m]
        if(len(true_neighbors) == 0):
            continue
        num_evaluated += 1
        distances, indices = database_nbrs.query(
            np.array([queries_output[i]]),k=num_neighbors)
        for j in range(len(indices[0])):
            if indices[0][j] in true_neighbors:
                if(j == 0):
                    similarity = np.dot(
                        queries_output[i], database_output[indices[0][j]])
                    top1_similarity_score.append(similarity)
                recall[j] += 1
                break

        if len(list(set(indices[0][0:threshold]).intersection(set(true_neighbors)))) > 0:
            one_percent_retrieved += 1

    one_percent_recall = (one_percent_retrieved/float(num_evaluated))*100
    recall = (np.cumsum(recall)/float(num_evaluated))*100
    return recall, top1_similarity_score, one_percent_recall

    recalls = []
    similarity_scores = []
    N_percent_recalls = []

    percent_array = [100, 20, 10]
    for percent in percent_array:
        threshold = max(int(round(len(database_output)/percent)), 1)

        recall_N = [0] * threshold
        topN_similarity_score = []
        N_percent_retrieved = 0

        num_evaluated = 0
        for i in range(len(queries_output)):
            true_neighbors = QUERY_SETS[n][i][m]
            if(len(true_neighbors) == 0):
                continue
            num_evaluated += 1
            distances, indices = database_nbrs.query(
                np.array([queries_output[i]]),k=threshold)

            #indices = indices + n*2048
            for j in range(len(indices[0])):
                if indices[0][j] in true_neighbors:
                    if(j == 0):
                        similarity = np.dot(
                            queries_output[i], database_output[indices[0][j]])
                        topN_similarity_score.append(similarity)
                    recall_N[j] += 1
                    break

            if len(list(set(indices[0][0:threshold]).intersection(set(true_neighbors)))) > 0:
                N_percent_retrieved += 1

        if float(num_evaluated)!=0:
            N_percent_recall = (N_percent_retrieved/float(num_evaluated))*100
            recall_N = (np.cumsum(recall_N)/float(num_evaluated))*100
        else:
            N_percent_recall = 0
            recall_N = 0
        recalls.append(recall_N)
        similarity_scores.append(topN_similarity_score)
        N_percent_recalls.append(N_percent_recall)

    recall_1, recall_5, recall_10 = recalls[0], recalls[1], recalls[2]
    top1_similarity_score, top5_similarity_score, top10_similarity_score = similarity_scores[0], similarity_scores[1], similarity_scores[2]
    one_percent_recall, five_percent_recall, ten_percent_recall = N_percent_recalls[0], N_percent_recalls[1], N_percent_recalls[2]

    return recall_1, recall_5, recall_10, top1_similarity_score, top5_similarity_score, top10_similarity_score, one_percent_recall, five_percent_recall, ten_percent_recall



if __name__ == "__main__":
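The refactored `get_recall` sweeps `percent_array = [100, 20, 10]` to report recall within the top 1%, 5%, and 10% of the database. A standalone sketch of that metric follows; `topN_percent_recall` is a hypothetical helper, and the `sklearn` KDTree is assumed because the diff uses the same class:

```python
import numpy as np
from sklearn.neighbors import KDTree  # same KDTree class as in evaluate.py

def topN_percent_recall(database, queries, true_neighbors, percent):
    """Percentage of queries whose top-(len(database)/percent) retrieval
    contains at least one true neighbor; percent=100 gives the top-1% metric."""
    tree = KDTree(database)
    threshold = max(int(round(len(database) / percent)), 1)
    hits, evaluated = 0, 0
    for q, truth in zip(queries, true_neighbors):
        if not truth:  # queries without ground-truth positives are skipped
            continue
        evaluated += 1
        _, idx = tree.query(np.array([q]), k=threshold)
        if set(idx[0]) & set(truth):
            hits += 1
    return 100.0 * hits / evaluated if evaluated else 0.0
```

Guarding on `evaluated == 0` mirrors the `if float(num_evaluated)!=0` check the diff adds, which avoids a division by zero when no query has ground-truth positives.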
Binary file removed generating_queries/__pycache__/set_path.cpython-36.pyc
Binary file not shown.
Binary file removed generating_queries/__pycache__/set_path.cpython-38.pyc
Binary file not shown.