Many-Worlds Inverse Rendering

[Teaser figure]

ACM Transactions on Graphics (TOG), 2025.
Ziyi Zhang · Nicolas Roussel · Wenzel Jakob

Paper PDF · Project Page

Overview

This repository contains an implementation of the method described in the article:

Ziyi Zhang, Nicolas Roussel, and Wenzel Jakob. 2025. Many-Worlds Inverse Rendering. ACM Transactions on Graphics 45(1).

It uses the Mitsuba 3 differentiable renderer.

Getting Started

System requirements

This code has been extensively tested on Ubuntu 22.04 with an RTX 4090 (driver version 580.95.05) and Python 3.10. Other systems supported by Mitsuba 3 should also work, although they have not been explicitly tested.

Building Mitsuba 3

This project was built against Mitsuba v3.8.0; other versions may not be supported.

pip install mitsuba==3.8.0

Alternatively, you can build Mitsuba yourself:

# Clone Mitsuba 3 at the required version tag
git clone --recursive https://github.com/mitsuba-renderer/mitsuba3.git --branch v3.8.0
cd mitsuba3

# Build Mitsuba 3 (see Mitsuba docs for detailed instructions)
mkdir build && cd build
cmake -GNinja ..
ninja

# Set up environment variables
source setpath.sh

For detailed build instructions, see the Mitsuba 3 documentation.

Installation

  1. Clone this repository:

    git clone https://github.com/rgl-epfl/many-worlds-inverse-rendering.git
    cd many-worlds-inverse-rendering
  2. Install Python dependencies:

    pip install numpy matplotlib polyscope libigl

    Note: Some dependencies are only needed for specific features (e.g., polyscope for the GUI, libigl for mesh extraction).

Repository Structure

Main Scripts

  • optim.py — Main optimization script implementing the Many-Worlds inverse rendering pipeline. Sets up the scene, initializes the integrator and storage, runs the optimization loop with gradient-based updates, and saves results (images, loss curves, field snapshots).

Core Modules

  • manyworlds/ — Core Many-Worlds integrators. Contains the base MWIntegrator class and its variants (DirectMWIntegrator, PathMWIntegrator) that implement the stochastic surface sampling and rendering.

  • occupancy/ — Instant-NGP style occupancy grid for spatial acceleration. Tracks which regions of the volume contain surfaces to skip empty space during sampling.

  • loaders/ — Efficient batched ray sampling via RayLoader. Instead of rendering full images, samples random pixels across all viewpoints.

  • marching_cubes/ — Dr.Jit implementation of marching cubes for extracting triangle meshes from the implicit field representation.

  • optim_scene/ — Scene setup and configuration. Generates Mitsuba scene descriptions for both the optimization scene (with proxy geometry) and reference scene (with target geometry). Includes camera placement, lighting setup, and material configuration.
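As an illustration, the batched sampling strategy of RayLoader can be sketched in a few lines of NumPy. The function and parameter names below are hypothetical, not the repository's actual API; the point is that each iteration draws random (viewpoint, pixel) pairs across all cameras instead of rendering full images:

```python
# Hypothetical sketch of the RayLoader idea: draw a random batch of
# (viewpoint, pixel) pairs each iteration rather than full images.
import numpy as np

def sample_pixel_batch(rng, sensor_count, img_reso, batch_size):
    """Sample `batch_size` random pixels spread across all viewpoints."""
    view = rng.integers(0, sensor_count, size=batch_size)  # which camera
    x = rng.integers(0, img_reso, size=batch_size)         # pixel column
    y = rng.integers(0, img_reso, size=batch_size)         # pixel row
    return view, x, y

rng = np.random.default_rng(0)
view, x, y = sample_pixel_batch(rng, sensor_count=16, img_reso=256,
                                batch_size=4096)
```

Each batch mixes pixels from all 16 viewpoints, which keeps per-iteration cost independent of the image resolution.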

Utilities

  • utils/ — Configuration, logging, and post-processing tools. Contains ManyWorldsConfig (the main experiment configuration class), mesh extraction utilities, and a relighting tool for rendering results in different environments.

  • resources/ — Meshes and environment maps
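The occupancy grid described under Core Modules can likewise be sketched in miniature (class and method names here are illustrative, not the repository's API): a coarse boolean grid over the unit cube marks voxels that may contain surfaces, and samples falling in unmarked voxels are skipped.

```python
# Minimal occupancy-grid sketch for empty-space skipping (illustrative only).
import numpy as np

class OccupancyGrid:
    def __init__(self, res=8):
        self.res = res
        self.occupied = np.zeros((res, res, res), dtype=bool)

    def mark(self, pts):
        """Mark voxels containing the given points in [0,1]^3 as occupied."""
        idx = np.clip((pts * self.res).astype(int), 0, self.res - 1)
        self.occupied[idx[:, 0], idx[:, 1], idx[:, 2]] = True

    def query(self, pts):
        """Return True for points whose voxel may contain a surface."""
        idx = np.clip((pts * self.res).astype(int), 0, self.res - 1)
        return self.occupied[idx[:, 0], idx[:, 1], idx[:, 2]]

grid = OccupancyGrid(res=8)
grid.mark(np.array([[0.5, 0.5, 0.5]]))             # a known surface point
keep = grid.query(np.array([[0.51, 0.52, 0.53],    # same voxel -> keep
                            [0.05, 0.05, 0.05]]))  # empty voxel -> skip
```

In the actual implementation the grid is updated during optimization (Instant-NGP style) so that sampling concentrates on regions that still contain surfaces.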

Usage

Run the main optimization script:

python optim.py --ref_target bunny --envmap skycloud --sensor_count 16 \
    --optimizer adam --loss_type l1 --learning_rate 3e-2 --iter_count 400 \
    --img_reso 256 --spp 64 --suffix2 TMP --integrator direct_mw \
    --save_field_iter 10 --save_primal_iter 10

Key parameters:

  • --ref_target: Target shape to reconstruct (e.g., chair, bunny). Shapes are defined in utils/shape_trafo.json, which specifies mesh paths and transforms to place the shape within the unit cube. Alternatively, you can modify the integrator's to_world parameter to specify a custom bounding box for reconstruction (default is [0,1]³).
  • --envmap: Environment map name for lighting (e.g., skycloud, lythwood). Maps are loaded from resources/texture/<name>.exr, or use constant for uniform lighting.
  • --material: Surface material for the target shape. Options include diffuse, conductor, roughconductor<α>. See optim_scene/shape_envmap_gen.py for all presets.
  • --sensor_count: Number of camera viewpoints
  • --iter_count: Number of optimization iterations
  • --gui: Starts an interactive GUI to view the ongoing optimization

All available parameters are documented in utils/manyworlds_config.py.

Output

Results are saved to output/<model>/<ref_target>_view<sensor_count>_<envmap>/. You can add a subdirectory with --suffix2 <name>. The output folder contains:

  • config.txt — Copy of the configuration used for this run
  • loss_history.pdf / loss_history.txt — Loss curve over iterations
  • primal_*.exr — Rendered images at saved iterations
  • ref_*.exr — Reference images (ground truth)
  • mu_* — Mean field snapshots (use utils/extract_mesh.py to convert to mesh)
  • reconstructed_*.ply — The extracted surface after the last iteration
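For scripting over results, the output path can be assembled from the run parameters. The helper below is hypothetical (not part of the repository); the "MA" model component matches the folder produced by the example command in Usage:

```python
# Hypothetical helper that assembles the documented output path
# output/<model>/<ref_target>_view<sensor_count>_<envmap>[/<suffix2>].
def output_dir(model, ref_target, sensor_count, envmap, suffix2=None):
    folder = f"output/{model}/{ref_target}_view{sensor_count}_{envmap}"
    return f"{folder}/{suffix2}" if suffix2 else folder

path = output_dir("MA", "bunny", 16, "skycloud", suffix2="TMP")
# -> "output/MA/bunny_view16_skycloud/TMP"
```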

To extract meshes from all intermediate field snapshots (assuming you ran the example command above):

python utils/extract_mesh.py --folder output/MA/bunny_view16_skycloud/TMP --extraction_res 512

License and Citation

This project is released under the BSD 3-Clause License.

If you use this code in your research, please cite:

@article{Zhang2025MW,
    author = {Zhang, Ziyi and Roussel, Nicolas and Jakob, Wenzel},
    title = {Many-Worlds Inverse Rendering},
    year = {2025},
    publisher = {Association for Computing Machinery},
    address = {New York, NY, USA},
    issn = {0730-0301},
    url = {https://doi.org/10.1145/3767318},
    doi = {10.1145/3767318},
    journal = {ACM Trans. Graph.},
    month = sep,
    keywords = {differentiable rendering}
}
