Krzysztof Byrski, Grzegorz Wilczyński, Weronika Smolak-Dyżewska, Piotr Borycki, Dawid Baran, Sławomir Tadeja, Przemysław Spurek
| arXiv |
|---|
| [REdiSplats: Ray Tracing for Editable Gaussian Splatting](https://arxiv.org/pdf/2503.12284) |
- Spherical harmonics support up to degree 4.
- Interactive Windows viewer / optimizer application allowing real-time preview of the trained model state.
- Support for the PLY trained-model output format.
- Highly efficient renderer utilizing the built-in RT-core ray-triangle intersection tests (thanks to approximating the Gaussians with regular polygons).
- Highly configurable optimizer driven by a convenient text configuration file.
- Support for both Blender and COLMAP datasets (after some preprocessing with GaMeS).
- Built-in model evaluation and visualization to *.bmp files at a configurable frequency.
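As background for the spherical-harmonics feature: the degree-0 band alone encodes a view-independent base color. Below is a minimal sketch using the convention common in Gaussian-splatting codebases (`0.5 + C0 * coefficient`); this illustrates the idea only and is not taken from the REdiSplats sources:

```python
# Normalization constant of the degree-0 real spherical harmonic Y_0^0,
# i.e. 1 / (2 * sqrt(pi)).
SH_C0 = 0.28209479177387814

def sh0_to_rgb(sh0):
    """Convert degree-0 SH coefficients to an RGB color in [0, 1].

    Uses the convention common in Gaussian-splatting codebases:
    color = 0.5 + C0 * coefficient, clamped to [0, 1].
    """
    return tuple(min(max(0.5 + SH_C0 * c, 0.0), 1.0) for c in sh0)

# Zero coefficients map to mid-gray:
print(sh0_to_rgb((0.0, 0.0, 0.0)))  # (0.5, 0.5, 0.5)
```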
- Double Left Click: Toggle between static camera and free-roam mode.
- Mouse Movement: Rotate the camera in free-roam mode.
- W / S: Move forward / backward.
- A / D: Step left / right.
- Spacebar / C: Move up / down.
- [ / ]: Switch the camera to the previous / next training pose.
- Print Screen: Take a screenshot and save it as a 24-bit *.bmp file.
- Visual Studio 2019 Enterprise;
- CUDA Toolkit 12.4.1;
- NVIDIA OptiX SDK 8.0.0;
- Create a new Windows Desktop Application project and name it "REdiSplats";
- Remove the newly generated REdiSplats.cpp file containing the code template;
- In Build Dependencies -> Build Customizations... select the checkbox matching your installed CUDA version. On our test system, we had to select the following checkbox:

  ```
  CUDA 12.4(.targets, .props)
  ```

- Add all the files from the "REdiSplats" directory to the project;
- In the project's Properties, set Configuration to "Release" and Platform to "x64";
- In Properties -> Configuration Properties -> CUDA C/C++ -> Common -> Generate Relocatable Device Code select Yes (-rdc=true);
- For the file "shaders.cuh", in Properties -> Configuration Properties -> General -> Item Type select "CUDA C/C++";
- For the files "shaders.cuh", "shaders_SH0.cu", "shaders_SH1.cu", "shaders_SH2.cu", "shaders_SH3.cu" and "shaders_SH4.cu", in Properties -> Configuration Properties -> CUDA C/C++ -> Common:
  - Change the suffix of Compiler Output (obj/cubin) from ".obj" to ".ptx";
  - In Generate Relocatable Device Code select No;
  - In NVCC Compilation Type select "Generate device-only .ptx file (-ptx)";
- In Properties -> Configuration Properties -> VC++ Directories -> Include Directories add the OptiX "include" directory path. On our test system, we had to add the following path:

  ```
  C:\ProgramData\NVIDIA Corporation\OptiX SDK 8.0.0\include
  ```

- In Properties -> Configuration Properties -> CUDA C/C++ -> Device -> Code Generation type the compute capability and microarchitecture version of your GPU. On our test system with an RTX 4070 GPU we typed:

  ```
  compute_89,sm_89
  ```

- In Properties -> Configuration Properties -> Linker -> Input -> Additional Dependencies add three new lines containing:

  ```
  cuda.lib
  cudart.lib
  cufft.lib
  ```

- In each of the two blocks of code in the file InitializeOptiXRenderer.cu:

  ```cpp
  if constexpr (SH_degree == 0) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH0.cu.ptx", "rb");
  else if constexpr (SH_degree == 1) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH1.cu.ptx", "rb");
  else if constexpr (SH_degree == 2) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH2.cu.ptx", "rb");
  else if constexpr (SH_degree == 3) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH3.cu.ptx", "rb");
  else if constexpr (SH_degree == 4) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH4.cu.ptx", "rb");
  ```

  and

  ```cpp
  if constexpr (SH_degree == 0) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH0.cu.ptx", "rt");
  else if constexpr (SH_degree == 1) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH1.cu.ptx", "rt");
  else if constexpr (SH_degree == 2) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH2.cu.ptx", "rt");
  else if constexpr (SH_degree == 3) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH3.cu.ptx", "rt");
  else if constexpr (SH_degree == 4) f = fopen("C:/Users/pc/source/repos/REdiSplats/REdiSplats/x64/Release/shaders_SH4.cu.ptx", "rt");
  ```

  replace the provided path with the path to the compiled *.ptx shader files on your disk;
- Train the model with GaMeS for a small number of iterations (for example, 100) on some Blender dataset (for example, "lego" from the "NeRF synthetic" set);
- Convert all of the files in the "train" and "test" subdirectories of the dataset's main directory to the 24-bit *.bmp file format without changing their names;
- Copy the configuration file "config.txt" to the project's main directory. On our test system we copied it to the following directory:

  ```
  C:\Users\<Windows username>\source\repos\REdiSplats\REdiSplats
  ```

- In lines 4 and 5 of the configuration file specify the location of the dataset's main directory and of the output GaMeS *.ply file obtained after the short model pretraining (Important: the spherical harmonics degree used for pretraining and the target degree specified in line 7 of the config file don't have to match);
- In lines 13-15 of the configuration file specify the background color matching the background color used for pretraining, using the following formula:

  ```
  R' = (R + 0.5) / 256
  G' = (G + 0.5) / 256
  B' = (B + 0.5) / 256
  ```

  where R, G and B are the non-negative integer background color coordinates in the range 0-255;
- Run the "REdiSplats" project from the Visual Studio IDE.
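The background-color formula from the configuration step above can be sketched as a small Python helper (illustrative only, not part of the project):

```python
def normalize_background(r, g, b):
    """Map an 8-bit background color (0-255 per channel) to the
    config-file values using C' = (C + 0.5) / 256."""
    for c in (r, g, b):
        if not 0 <= c <= 255:
            raise ValueError("channel values must be in the range 0-255")
    return tuple((c + 0.5) / 256 for c in (r, g, b))

# A pure white background (255, 255, 255):
print(normalize_background(255, 255, 255))  # (0.998046875, 0.998046875, 0.998046875)
```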
The Scripts directory contains various scripts for manipulating trained models.
- First create your conda environment
- Install PyTorch with CUDA support:

  ```
  conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
  ```

- Install Nvdiffrast:

  ```
  pip install ninja imageio PyOpenGL glfw xatlas gdown
  pip install git+https://github.com/NVlabs/nvdiffrast/
  ```

- Install the other dependencies:

  ```
  pip install -r requirements.txt
  ```

This script allows you to get the MeshSplat representation of a REdiSplats model. The output is saved as a .npz file; its output path is /path/to/ply/meshsplat.npz.

```
python generate_mesh.py <ply_path> --opac_threshold <opac_thresh_val> --quant <quant_val>
```

where:

- `ply_path` - path to the input PLY file
- `opac_thresh_val` - opacity threshold for the mesh (e.g. if opac_thresh_val=0.5, only Gaussians with opacity greater than 0.5 will be used to generate the mesh); default value is 0.5
- `quant_val` - quantile value (the bigger, the larger the MeshSplat size); default value is 4.0
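The effect of `--opac_threshold` can be illustrated with a minimal NumPy sketch; the function name and data layout below are hypothetical, not taken from generate_mesh.py:

```python
import numpy as np

def filter_by_opacity(opacities, threshold=0.5):
    """Return the indices of Gaussians whose opacity exceeds the
    threshold -- only these would contribute to the generated mesh."""
    opacities = np.asarray(opacities)
    return np.nonzero(opacities > threshold)[0]

# Gaussians at exactly the threshold are excluded (strictly greater-than):
print(filter_by_opacity([0.1, 0.6, 0.5, 0.9], threshold=0.5).tolist())  # [1, 3]
```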
This script allows you to get the Gaussian representation of a REdiSplats model from a collection of per-frame .obj files. The output for each frame is saved as a .ply file.

```
python convert.py --ply_path <ply_path> --output_path <output_path> --frames_path <frames_path> --quant <quant_val>
```

where:

- `ply_path` - path to the original PLY file
- `output_path` - path where the calculated .ply files for each frame should be saved
- `frames_path` - path to the directory where the .obj files are stored
- `quant_val` - quantile value (the bigger, the larger the MeshSplat size); default value is 4.0
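Since the renderer approximates each Gaussian with a flat regular polygon, the `quant` value can be thought of as how many scale units the polygon's radius extends from the Gaussian's center. The sketch below generates such an n-gon; it is illustrative only, and the actual vertex layout used by REdiSplats may differ:

```python
import math

def regular_polygon(center, scale, quant=4.0, n_sides=8):
    """Vertices of a regular n-gon of radius quant * scale around
    center, in the local XY plane of the Gaussian."""
    cx, cy = center
    r = quant * scale
    return [
        (cx + r * math.cos(2 * math.pi * k / n_sides),
         cy + r * math.sin(2 * math.pi * k / n_sides))
        for k in range(n_sides)
    ]

verts = regular_polygon((0.0, 0.0), scale=0.1, quant=4.0, n_sides=4)
# First vertex lies on the +x axis at radius 0.4 (up to floating point).
print(verts[0])
```

A larger `quant` captures more of the Gaussian's tail but produces larger polygons, which matches the note that bigger values yield a larger MeshSplat.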
This script allows you to render the MeshSplat representation of a REdiSplats model using Blender. Note that Blender must already be installed.

```
blender --background --python path/to/render_blender.py -- --npz_path <path_to_npz_file> --cam_path <path_to_cameras_file>
```

where:

- `--npz_path` - path to the input .npz file
- `--cam_path` - path to the input cameras file
This script allows you to render the MeshSplat representation of a REdiSplats model using Nvdiffrast.

```
python render_nvdiffrast.py <npz_path> <cameras_path> --dp_layers <dp_layers_val>
```

where:

- `<npz_path>` - path to the input .npz file
- `<cameras_path>` - path to the input cameras file. If this is a path to a .json file, the script assumes a Blender dataset; if it is a path to a dataset directory, the script assumes a COLMAP dataset.
- `<dp_layers_val>` - number of depth peeling layers; default value is 50. For NeRF Synthetic datasets use 50; for real scenes use at least 100 (200 is recommended).
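The camera-path rule stated above (a .json file means a Blender-style dataset, anything else a COLMAP dataset) can be sketched as a hypothetical helper:

```python
import os

def dataset_type(cameras_path):
    """Classify the cameras argument: a .json file indicates a
    Blender-style dataset, anything else is treated as a COLMAP
    dataset directory."""
    if os.path.splitext(cameras_path)[1].lower() == ".json":
        return "blender"
    return "colmap"

print(dataset_type("transforms_train.json"))  # blender
print(dataset_type("scenes/garden"))          # colmap
```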






