[Paper] [Docs] [Demo docs] [Video1] [Video2]
Click on the thumbnail image below to watch a video showcasing our Novel-View Acoustic Synthesis.
🎧 For the optimal experience, using a headset is recommended.
Welcome to the official code repository for "Novel-View Acoustic Synthesis from 3D Reconstructed Rooms". Given audio recordings from multiple microphones together with the 3D geometry and materials of a scene, this project estimates the sound at any point in the scene, even when the scene contains multiple unknown sound sources, thereby enabling novel-view acoustic synthesis.
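At a high level, the pipeline first estimates the dry (anechoic) source signals from the microphone recordings, then re-renders audio at a novel viewpoint by convolving those dry signals with room impulse responses simulated from the reconstructed geometry. The sketch below is a minimal illustration of that final rendering step only; `estimated_dry_sounds` and `rirs_at_novel_view` are hypothetical placeholder inputs, not objects produced by this repository's API.

```python
# Minimal sketch of novel-view rendering: convolve each estimated dry
# source signal with the RIR from that source to the novel viewpoint,
# then sum the contributions. Inputs are hypothetical placeholders.
import numpy as np
from scipy.signal import fftconvolve

def render_novel_view(estimated_dry_sounds, rirs_at_novel_view):
    """estimated_dry_sounds: list of 1-D arrays, one per source.
    rirs_at_novel_view: list of 1-D RIRs, one per source, same order."""
    length = max(len(s) + len(h) - 1
                 for s, h in zip(estimated_dry_sounds, rirs_at_novel_view))
    out = np.zeros(length)
    for dry, rir in zip(estimated_dry_sounds, rirs_at_novel_view):
        wet = fftconvolve(dry, rir, mode="full")  # apply room acoustics
        out[:len(wet)] += wet                     # mix sources at the view
    return out
```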
"Novel-View Acoustic Synthesis from 3D Reconstructed Rooms"
Byeongjoo Ahn,
Karren Yang,
Brian Hamilton,
Jonathan Sheaffer,
Anurag Ranjan,
Miguel Sarabia,
Oncel Tuzel,
Jen-Hao Rick Chang
.
├── demo/ # Quickstart and demo
│ ├── ...
├── nvas3d/ # Implementation of our model
│ ├── ...
└── soundspaces_nvas3d/ # SoundSpaces integration for NVAS3D
├── ...
Follow our Step-by-Step Installation Guide for rendering room impulse responses (RIRs) and images in Matterport3D rooms using SoundSpaces.
Refer to the Demo Guide for instructions on data generation, dry sound estimation using our model, and novel-view acoustic rendering.
Download our pretrained model and place it in the nvas3d/assets/saved_models/default/checkpoints/ directory.
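For example, assuming the downloaded checkpoint file is named checkpoint.pt (the actual filename may differ):

```bash
# Create the expected directory and move the downloaded checkpoint into it.
# "checkpoint.pt" is a placeholder; use the actual downloaded filename.
mkdir -p nvas3d/assets/saved_models/default/checkpoints/
mv ~/Downloads/checkpoint.pt nvas3d/assets/saved_models/default/checkpoints/
```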
To get started with the full pipeline quickly:
bash demo/run_demo.sh
After completing the Training Data Generation step (see the Demo Guide), start the training process with:
python main.py --config ./nvas3d/config/default_config.yaml --exp default_exp
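The config file controls the training setup. As a quick sanity check before launching a run, you can load and inspect it; this is a minimal sketch that assumes the file is standard YAML (the actual key names are defined by the repository):

```python
# Load the training config and print its settings before launching a run.
# Assumes standard YAML; key names depend on the repository's config schema.
import yaml

with open("./nvas3d/config/default_config.yaml") as f:
    config = yaml.safe_load(f)

for key, value in config.items():
    print(f"{key}: {value}")
```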
We thank Dirk Schroeder and David Romblom for insightful discussions and feedback, and Changan Chen for assistance with SoundSpaces.