Merge branch 'dev/231030' into main
hugoycj committed Oct 29, 2023
2 parents 38a49f3 + 143e1cf commit 6eb447e
Showing 17 changed files with 1,024 additions and 97 deletions.
3 changes: 1 addition & 2 deletions .gitignore
@@ -151,5 +151,4 @@ runs/
load/
extern/
data/
results
38 changes: 32 additions & 6 deletions README.md
@@ -22,13 +22,25 @@ To extract COLMAP data from custom images, you must first have COLMAP installed
```
-data_001
-images
-mask (optional)
-data_002
-images
-mask (optional)
-data_003
-images
-mask (optional)
```
Each data folder contains its own `images` folder (and an optional `mask` folder).
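Before running COLMAP, the expected layout can be sanity-checked with a short script. This is a minimal sketch; the `check_layout` helper is illustrative and not part of this repository:

```python
# Sketch: list the data folders under `root` that follow the expected layout,
# i.e. contain an `images` subdirectory. Purely illustrative.
from pathlib import Path

def check_layout(root: str) -> list[str]:
    """Return names of data folders under `root` that contain an `images` directory."""
    valid = []
    for folder in sorted(Path(root).iterdir()):
        if folder.is_dir() and (folder / "images").is_dir():
            valid.append(folder.name)
    return valid
```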

If you have masks, we recommend using them to filter the COLMAP sparse points before starting the reconstruction. You can run the following preprocessing scripts manually:
```
python scripts/run_colmap.py ${INPUT_DIR}
python scripts/filter_colmap.py --data ${INPUT_DIR} --output-dir ${INPUT_DIR}_filtered
```
Replace ${INPUT_DIR} with the actual path to the directory where your data is located.

The first command runs the COLMAP reconstruction on the full images. The second filters the COLMAP sparse points using the masks and saves the filtered data to a new output directory with the suffix `_filtered`.
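The mask-based filtering step can be sketched as follows: a sparse point is kept only if its projection lands on the mask foreground in the views that observe it. This is an illustrative sketch of the idea, not the actual `filter_colmap.py` implementation; the helper names and the simple pinhole model here are assumptions:

```python
# Illustrative sketch of mask-based sparse-point filtering (not the real script).
import numpy as np

def project(K, R, t, X):
    """Project 3D point X into pixel coordinates with intrinsics K and pose (R, t)."""
    x_cam = R @ X + t
    x_img = K @ x_cam
    return x_img[:2] / x_img[2]

def keep_point(X, cameras, masks):
    """A point survives if it projects onto the foreground (mask > 0) of every view."""
    for (K, R, t), mask in zip(cameras, masks):
        u, v = project(K, R, t, X)
        ui, vi = int(round(u)), int(round(v))
        h, w = mask.shape
        if not (0 <= ui < w and 0 <= vi < h) or mask[vi, ui] == 0:
            return False
    return True
```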

## Start Reconstruction!
### Run Smooth Surface Reconstruction in 20 Minutes
<details>
@@ -44,15 +56,28 @@ The smooth reconstruction mode is well-suited for the following cases:

**Information you need to know before you start:**
* The smooth reconstruction mode's reliance on curvature loss can over-smooth geometry, failing to capture flat surface structures and subtle variations on flatter regions of the original object. <img src="assets/over-smooth.png">

* This mode relies on the sparse points generated by COLMAP to guide the geometry in the early stage of training. However, SFM (Structure from Motion) can sometimes produce noisy point clouds due to repetitive textures, inaccurate poses, or incorrect point matches. One way to address this is to use more powerful SFM tools such as [hloc](https://github.com/cvg/Hierarchical-Localization) or [DetectorFreeSfM](https://github.com/zju3dv/DetectorFreeSfM). Additionally, post-processing techniques can further refine the point cloud: for example, Radius Outlier Removal in [Open3D](http://www.open3d.org/docs/latest/tutorial/Advanced/pointcloud_outlier_removal.html) or [pixsfm](https://github.com/cvg/pixel-perfect-sfm) can help eliminate outliers and improve its quality.
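To illustrate the Radius Outlier Removal idea mentioned above, here is a minimal pure-NumPy sketch: drop any point with too few neighbors inside a given radius. In practice you would use Open3D directly; this only demonstrates the principle, and the parameter values are illustrative:

```python
# Minimal sketch of Radius Outlier Removal: keep points that have at least
# `min_neighbors` other points within `radius`. O(n^2), for illustration only.
import numpy as np

def radius_outlier_removal(points, radius, min_neighbors):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    neighbor_counts = (d < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]
```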
---

Now it is time to start by running:
```
bash run_neuralangelo-colmap_sparse.sh ${INPUT_DIR}
```
This script automates the SFM process with no preparation needed beforehand: it initiates the reconstruction and exports the resulting mesh. The output files are saved in the `logs` directory.

If a mask is available and placed in the expected location under the data folder, you can instead start by running:
```
bash run_neuralangelo-colmap_sparse.sh ${INPUT_DIR}_filtered
```

Additionally, we have developed an experimental version called **SH-neuralangelo**, which uses Spherical Harmonics (SH) instead of a Multilayer Perceptron (MLP) for the radiance field. SH-neuralangelo is inspired by [Plenoxel](https://alexyu.net/plenoxels/) and [Gaussian Splatting](https://github.com/graphdeco-inria/gaussian-splatting), incorporating progressive Spherical Harmonics for faster convergence and better regularization of the coefficients.
```
bash run_SH-neuralangelo-colmap_sparse.sh ${INPUT_DIR}
```
However, SH-NeuS currently trails the original MLP-based NeuS in PSNR and reconstruction quality. We are actively working on improving it and plan to support exporting Spherical Harmonics coefficients for real-time viewers in the future, similar to Gaussian Splatting.
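To illustrate the progressive Spherical Harmonics idea, the sketch below evaluates a degree-2 real SH basis for a view direction and masks out bands above the currently active level, so low-frequency color is fitted first. The constants are the standard real SH coefficients; the masking schedule itself is an illustrative assumption, not this repository's implementation:

```python
# Sketch of progressive SH: radiance is a dot product of SH basis values with
# learned coefficients; higher-degree bands are unlocked as training progresses.
import numpy as np

def sh_basis_deg2(d):
    """Real SH basis up to degree 2 for a unit direction d = (x, y, z); 9 values."""
    x, y, z = d
    return np.array([
        0.28209479177387814,                      # l=0
        -0.48860251190291987 * y,                 # l=1
        0.48860251190291987 * z,
        -0.48860251190291987 * x,
        1.0925484305920792 * x * y,               # l=2
        -1.0925484305920792 * y * z,
        0.31539156525252005 * (3 * z * z - 1),
        -1.0925484305920792 * x * z,
        0.5462742152960396 * (x * x - y * y),
    ])

def progressive_sh_color(coeffs, d, active_level):
    """Zero out bands above `active_level` so low frequencies are learned first."""
    basis = sh_basis_deg2(d)
    n_active = (active_level + 1) ** 2  # bands 0..active_level span (l+1)^2 terms
    mask = np.arange(9) < n_active
    return float(np.dot(coeffs * mask, basis))
```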


</details>

### Run Detail Surface Reconstruction in 1 Hour
@@ -77,7 +102,7 @@ The detail reconstruction mode without additional preprocessing is optimal for s

Now it is time to start by running:
```
bash run_neuralangelo-colmap_sparse-50k.sh ${INPUT_DIR}
```
</details>

@@ -99,7 +124,7 @@ Importantly, in real-world scenarios like oblique photography and virtual tours,

Now it is time to start by running:
```
bash run_neuralangelo-colmap_dense.sh ${INPUT_DIR}
```
</details>

@@ -139,4 +164,5 @@ bash run_neuralangelo-colmap_dense.sh $YOUR_DATA_DIR
## Acknowledgements
* Thanks to bennyguo for his excellent pipeline [instant-nsr-pl](https://github.com/bennyguo/instant-nsr-pl)
* Thanks to RaduAlexandru for his implementation of improved curvature loss in [permuto_sdf](https://github.com/RaduAlexandru/permuto_sdf)
* Thanks to Alex Yu for his implementation of spherical harmonics in [svox2](https://github.com/sxyu/svox2/tree/master)
* Thanks to Zesong Yang and Chris for providing valuable insights and feedback that assisted development
4 changes: 2 additions & 2 deletions configs/neuralangelo-colmap_sparse-50k.yaml
@@ -50,7 +50,7 @@ model:
      base_resolution: 32
      per_level_scale: 1.3195079107728942
      include_xyz: true
      start_level: 8
      start_step: 5000
      update_steps: 2000
    mlp_network_config:
@@ -74,7 +74,7 @@ model:
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 4
    color_activation: sigmoid
  # background model configurations
  num_samples_per_ray_bg: 256
181 changes: 181 additions & 0 deletions configs/neuralangelo-colmap_sparse-SH.yaml
@@ -0,0 +1,181 @@
name: neuralangelo-colmap_sparse-SH-${basename:${dataset.root_dir}}
tag: ""
seed: 42

dataset:
  name: colmap
  root_dir: ???
  img_downscale: 2 # specify training image size by either img_wh or img_downscale
  up_est_method: ground # use the estimated ground plane normal as the up direction
  center_est_method: lookat
  n_test_traj_steps: 2
  apply_mask: false
  load_data_on_gpu: false
  dense_pcd_path: null

model:
  name: sh-neus
  radius: 1.5
  num_samples_per_ray: 1024
  train_num_rays: 128
  max_train_num_rays: 8192
  grid_prune: true
  grid_prune_occ_thre: 0.001
  dynamic_ray_sampling: true
  batch_image_sampling: true
  randomized: true
  ray_chunk: 2048
  cos_anneal_end: 20000
  learned_background: true
  background_color: random
  variance:
    init_val: 0.3
    modulate: false
  geometry:
    name: volume-sdf
    radius: ${model.radius}
    feature_dim: 65
    grad_type: analytic
    finite_difference_eps: progressive
    isosurface:
      method: mc
      resolution: 512
      chunk: 2097152
      threshold: 0.001
    xyz_encoding_config:
      otype: ProgressiveBandHashGrid
      n_levels: 16
      n_features_per_level: 2
      log2_hashmap_size: 19
      base_resolution: 32
      per_level_scale: 1.3195079107728942
      include_xyz: true
      start_level: 6
      start_step: 5000
      update_steps: 1000
    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 2
      sphere_init: true
      sphere_init_radius: 0.5
      weight_norm: true
  texture:
    name: volume-SH
    input_feature_dim: ${add:${model.geometry.feature_dim},3} # surface normal as additional input
    sh_level: 3
    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 2
    color_activation: sigmoid
  # background model configurations
  num_samples_per_ray_bg: 256
  geometry_bg:
    name: volume-density
    radius: ${model.radius}
    feature_dim: 8
    density_activation: trunc_exp
    density_bias: -1
    isosurface: null
    xyz_encoding_config:
      otype: ProgressiveBandHashGrid
      n_levels: 16
      n_features_per_level: 2
      log2_hashmap_size: 19
      base_resolution: 32
      per_level_scale: 1.3195079107728942
      include_xyz: true
      start_level: 8
      start_step: 5000
      update_steps: 2000
    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 1
  texture_bg:
    name: volume-radiance
    input_feature_dim: ${model.geometry_bg.feature_dim}
    dir_encoding_config:
      otype: SphericalHarmonics
      degree: 4
    mlp_network_config:
      otype: VanillaMLP
      activation: ReLU
      output_activation: none
      n_neurons: 64
      n_hidden_layers: 2
    color_activation: sigmoid

system:
  name: neus-system
  loss:
    lambda_sdf_l1: [0, 1, 0, 20000]
    lambda_normal: 0.
    lambda_rgb_mse: 10.
    lambda_rgb_l1: 0.
    lambda_sh_mse: 0.01
    lambda_mask: 0.0
    lambda_eikonal: 0.1
    lambda_curvature: [0, 0, 5.e-1, 5000]
    lambda_sparsity: 0.0
    lambda_distortion: 0.0
    lambda_distortion_bg: 0.0
    lambda_opaque: 0.0
    sparsity_scale: 1.
  optimizer:
    name: AdamW
    args:
      lr: 0.01
      betas: [0.9, 0.99]
      eps: 1.e-15
    params:
      geometry:
        lr: 0.01
      texture:
        lr: 0.01
      geometry_bg:
        lr: 0.01
      texture_bg:
        lr: 0.01
      variance:
        lr: 0.001
  warmup_steps: 500
  scheduler:
    name: SequentialLR
    interval: step
    milestones:
      - ${system.warmup_steps}
    schedulers:
      - name: LinearLR # linear warm-up in the first system.warmup_steps steps
        args:
          start_factor: 0.01
          end_factor: 1.0
          total_iters: ${system.warmup_steps}
      - name: ExponentialLR
        args:
          gamma: ${calc_exp_lr_decay_rate:0.1,${sub:${trainer.max_steps},${system.warmup_steps}}}

checkpoint:
  save_top_k: -1
  every_n_train_steps: ${trainer.max_steps}

export:
  chunk_size: 2097152
  export_vertex_color: True

trainer:
  max_steps: 20000
  log_every_n_steps: 100
  num_sanity_val_steps: 0
  val_check_interval: 5000
  limit_train_batches: 1.0
  limit_val_batches: 2
  enable_progress_bar: true
  precision: 16
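For reference, the `calc_exp_lr_decay_rate` resolver used in the scheduler section presumably solves for the per-step `gamma` that shrinks the learning rate by the given factor (0.1) over the steps remaining after warm-up. A sketch under that assumption:

```python
# Assuming `calc_exp_lr_decay_rate:factor,N` solves gamma**N == factor,
# the per-step decay for this config (max_steps=20000, warmup_steps=500) is:
def calc_exp_lr_decay_rate(factor: float, n_steps: int) -> float:
    return factor ** (1.0 / n_steps)

gamma = calc_exp_lr_decay_rate(0.1, 20000 - 500)
```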
