ACM Transactions on Graphics (TOG), 2025.
Ziyi Zhang · Nicolas Roussel · Wenzel Jakob
This repository contains an implementation of the method described in the article:
Ziyi Zhang, Nicolas Roussel, and Wenzel Jakob. 2025. Many-Worlds Inverse Rendering. ACM Transactions on Graphics 45(1).
It uses the Mitsuba 3 differentiable renderer.
This code has been extensively tested on Ubuntu 22.04 with an RTX 4090 (driver version 580.95.05) and Python 3.10. Although not explicitly tested, other systems supported by Mitsuba 3 should also be able to run this project.
This project was built against Mitsuba v3.8.0; other versions may not be supported.
```bash
pip install mitsuba==3.8.0
```

Alternatively, you can build Mitsuba yourself:
```bash
# Clone Mitsuba 3 at the required version
git clone --recursive https://github.com/mitsuba-renderer/mitsuba3.git --branch v3.8.0
cd mitsuba3

# Build Mitsuba 3 (see the Mitsuba docs for detailed instructions)
mkdir build && cd build
cmake -GNinja ..
ninja

# Set up environment variables
source setpath.sh
```

For detailed build instructions, see the Mitsuba 3 documentation.
- Clone this repository:

  ```bash
  git clone https://github.com/rgl-epfl/many-worlds-inverse-rendering.git
  cd many-worlds-inverse-rendering
  ```

- Install Python dependencies:

  ```bash
  pip install numpy matplotlib polyscope libigl
  ```
Note: Some dependencies are only needed for specific features (e.g., `polyscope` for the GUI, `libigl` for mesh extraction).
- `optim.py` — Main optimization script implementing the Many-Worlds inverse rendering pipeline. Sets up the scene, initializes the integrator and storage, runs the optimization loop with gradient-based updates, and saves results (images, loss curves, field snapshots).
- `manyworlds/` — Core Many-Worlds integrators. Contains the base `MWIntegrator` class and its variants (`DirectMWIntegrator`, `PathMWIntegrator`) that implement the stochastic surface sampling and rendering.
- `occupancy/` — Instant-NGP-style occupancy grid for spatial acceleration. Tracks which regions of the volume contain surfaces so that empty space can be skipped during sampling.
- `loaders/` — Efficient batched ray sampling via `RayLoader`. Instead of rendering full images, it samples random pixels across all viewpoints.
- `marching_cubes/` — Dr.Jit implementation of marching cubes for extracting triangle meshes from the implicit field representation.
- `optim_scene/` — Scene setup and configuration. Generates Mitsuba scene descriptions for both the optimization scene (with proxy geometry) and the reference scene (with target geometry). Includes camera placement, lighting setup, and material configuration.
- `utils/` — Configuration, logging, and post-processing tools. Contains `ManyWorldsConfig` (the main experiment configuration class), mesh extraction utilities, and a relighting tool for rendering results under different environments.
- `resources/` — Meshes and environment maps
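The idea behind the occupancy grid can be illustrated with a minimal NumPy sketch. The class and method names below are illustrative assumptions, not the repository's Dr.Jit implementation: a coarse boolean grid over the unit cube marks cells that may contain surfaces, and samples landing in unmarked cells can be skipped.

```python
import numpy as np

class OccupancyGrid:
    """Coarse boolean grid over the unit cube (illustrative sketch)."""

    def __init__(self, res=16):
        self.res = res
        self.occupied = np.zeros((res,) * 3, dtype=bool)

    def _cell(self, pts):
        # Map points in [0, 1)^3 to integer cell indices.
        idx = np.clip((pts * self.res).astype(int), 0, self.res - 1)
        return idx[..., 0], idx[..., 1], idx[..., 2]

    def mark(self, pts):
        # Mark cells that contain candidate surface points.
        self.occupied[self._cell(pts)] = True

    def query(self, pts):
        # True where a sample falls in a potentially occupied cell;
        # samples in empty cells can be skipped during rendering.
        return self.occupied[self._cell(pts)]

grid = OccupancyGrid(res=16)
grid.mark(np.array([[0.5, 0.5, 0.5]]))      # surface point near the center
samples = np.array([[0.5, 0.5, 0.51],        # same cell -> occupied
                    [0.01, 0.01, 0.01]])     # empty corner -> can be skipped
print(grid.query(samples))                   # [ True False]
```

The real implementation additionally updates the grid as the optimization progresses, so that regions found to be empty stop consuming samples.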
Run the main optimization script:
```bash
python optim.py --ref_target bunny --envmap skycloud --sensor_count 16 \
    --optimizer adam --loss_type l1 --learning_rate 3e-2 --iter_count 400 \
    --img_reso 256 --spp 64 --suffix2 TMP --integrator direct_mw \
    --save_field_iter 10 --save_primal_iter 10
```

Key parameters:
- `--ref_target`: Target shape to reconstruct (e.g., `chair`, `bunny`). Shapes are defined in `utils/shape_trafo.json`, which specifies mesh paths and transforms to place the shape within the unit cube. Alternatively, you can modify the integrator's `to_world` parameter to specify a custom bounding box for reconstruction (the default is [0,1]³).
- `--envmap`: Environment map name for lighting (e.g., `skycloud`, `lythwood`). Maps are loaded from `resources/texture/<name>.exr`; use `constant` for uniform lighting.
- `--material`: Surface material for the target shape. Options include `diffuse`, `conductor`, and `roughconductor<α>`. See `optim_scene/shape_envmap_gen.py` for all presets.
- `--sensor_count`: Number of camera viewpoints
- `--iter_count`: Number of optimization iterations
- `--gui`: Starts an interactive GUI to view the ongoing optimization
All available parameters are documented in `utils/manyworlds_config.py`.
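The `--sensor_count` viewpoints need to cover the target from all sides. One common way to distribute N cameras roughly uniformly on a sphere is a Fibonacci (golden-angle) spiral; the sketch below is illustrative and not necessarily the placement scheme the repository uses:

```python
import numpy as np

def fibonacci_sphere(n, radius=1.0):
    """Distribute n camera positions roughly uniformly on a sphere."""
    i = np.arange(n)
    # Golden-angle increment around the z-axis.
    phi = i * np.pi * (3.0 - np.sqrt(5.0))
    z = 1.0 - 2.0 * (i + 0.5) / n            # z strictly inside (-1, 1)
    r = np.sqrt(1.0 - z * z)                 # radius of the z-slice
    pts = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    return radius * pts

cams = fibonacci_sphere(16, radius=2.0)
# All positions lie on the sphere of the requested radius.
print(np.allclose(np.linalg.norm(cams, axis=1), 2.0))  # True
```

Each position would then be turned into a Mitsuba sensor looking at the center of the reconstruction volume.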
Results are saved to `output/<model>/<ref_target>_view<sensor_count>_<envmap>/`. You can add a subdirectory with `--suffix2 <name>`. The output folder contains:
- `config.txt` — Copy of the configuration used for this run
- `loss_history.pdf` / `loss_history.txt` — Loss curve over iterations
- `primal_*.exr` — Rendered images at saved iterations
- `ref_*.exr` — Reference images (ground truth)
- `mu_*` — Mean field snapshots (use `utils/extract_mesh.py` to convert them to meshes)
- `reconstructed_*.ply` — The extracted surface after the last iteration
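Besides the pre-rendered `loss_history.pdf`, you can inspect convergence programmatically. The sketch below assumes `loss_history.txt` contains one loss value per line; the exact file format and the `parse_loss_history` helper are assumptions for illustration:

```python
import numpy as np

def parse_loss_history(text):
    """Parse a loss history: one float per line, blank lines ignored."""
    return np.array([float(line) for line in text.splitlines() if line.strip()])

# Synthetic data standing in for the contents of loss_history.txt.
losses = parse_loss_history("0.90\n0.40\n0.10\n")
print(losses.min())  # 0.1

# To plot a real run (path from the example command above):
# import matplotlib.pyplot as plt
# with open("output/MA/bunny_view16_skycloud/TMP/loss_history.txt") as f:
#     losses = parse_loss_history(f.read())
# plt.semilogy(losses)
# plt.xlabel("iteration"); plt.ylabel("loss"); plt.show()
```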
To extract meshes from all intermediate field snapshots (assuming you ran the example command above):
```bash
python utils/extract_mesh.py --folder output/MA/bunny_view16_skycloud/TMP --extraction_res 512
```

This project is released under the BSD 3-Clause License.
If you use this code in your research, please cite:
```bibtex
@article{Zhang2025MW,
  author = {Zhang, Ziyi and Roussel, Nicolas and Jakob, Wenzel},
  title = {Many-Worlds Inverse Rendering},
  year = {2025},
  publisher = {Association for Computing Machinery},
  address = {New York, NY, USA},
  issn = {0730-0301},
  url = {https://doi.org/10.1145/3767318},
  doi = {10.1145/3767318},
  journal = {ACM Trans. Graph.},
  month = sep,
  keywords = {differentiable rendering}
}
```