Paper: The Sky's the Limit: Relightable Outdoor Scenes via a Sky-pixel Constrained Illumination Prior and Outside-In Visibility
NeuSky is a nerfstudio extension for outdoor neural scene reconstruction with sky-pixel constrained illumination priors. It depends on:
- nerfstudio (mainline, from source)
- ns_reni (RENI++ illumination fields, included as a git submodule)
- tiny-cuda-nn (hash grid encodings)
- nvdiffrast (differentiable rasterization)
- COLMAP (Structure-from-Motion)
- NVIDIA GPU with CUDA 12.x support
- Docker + NVIDIA Container Toolkit, OR
- Apptainer (for HPC clusters)
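Before building anything, it can help to confirm which of these tools are discoverable on your machine. A minimal sketch (the helper name and tool list are my own, not part of NeuSky):

```python
import shutil

# Hypothetical helper: report which prerequisite CLIs are on PATH.
REQUIRED_TOOLS = ["colmap", "docker", "apptainer", "nvidia-smi"]

def check_prerequisites(tools=REQUIRED_TOOLS):
    return {tool: shutil.which(tool) is not None for tool in tools}

if __name__ == "__main__":
    for tool, found in check_prerequisites().items():
        print(f"{tool:12s} {'found' if found else 'missing'}")
```

Only one of Docker or Apptainer is needed, so a `missing` for the other is fine.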
```bash
git clone --recurse-submodules https://github.com/JADGardner/neusky.git
cd neusky
```

NeuSky requires datasets, pretrained RENI++ checkpoints, and an output directory. Either create symlinks in the project root:

```bash
ln -s /path/to/datasets data
ln -s /path/to/pretrained-models model-storage
mkdir -p outputs
```

Or set environment variables (in your shell or a `.env` file in the project root):

```bash
# .env
DATA_PATH=/path/to/datasets
MODEL_STORAGE_PATH=/path/to/pretrained-models
OUTPUTS_PATH=/path/to/outputs
```

RENI++ checkpoints (required): NeuSky uses RENI++ as its illumination prior. Set `RENI_CKPT_PATH` to the directory containing the pretrained RENI++ models (the `checkpoints/reni_plus_plus_models/` directory from ns_reni):

```bash
export RENI_CKPT_PATH=/path/to/ns_reni/checkpoints/reni_plus_plus_models
```

This is mounted at `/workspace/model-storage/reni_plus_plus` inside the container.
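The two configuration styles above (symlinks vs. environment variables) can be reconciled with a small resolver. This is an illustrative sketch, not NeuSky's actual loading code; the function name is hypothetical:

```python
import os
from pathlib import Path

def resolve_path(env_var: str, default: str, root: Path = Path(".")) -> Path:
    """Prefer the environment variable; otherwise fall back to the
    symlink/directory in the project root (hypothetical helper)."""
    value = os.environ.get(env_var)
    return Path(value) if value else root / default

data_path = resolve_path("DATA_PATH", "data")
models_path = resolve_path("MODEL_STORAGE_PATH", "model-storage")
outputs_path = resolve_path("OUTPUTS_PATH", "outputs")
```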
```bash
# Build the image (compiles CUDA extensions - takes 20-40 min the first time)
docker compose build research

# Start an interactive shell
docker compose run research bash

# Or train directly
docker compose run research ns-train neusky --data /workspace/data/NeRF-OSR/Data/lk2
```

Inside the container, the project is mounted at `/workspace` with:

- `/workspace/data` -- datasets (NeRF-OSR at `data/NeRF-OSR/Data/`)
- `/workspace/outputs` -- training outputs
- `/workspace/model-storage` -- pretrained checkpoints
- `/workspace/model-storage/reni_plus_plus` -- RENI++ checkpoints
The container entrypoint automatically installs neusky and ns_reni (the submodule at `ns_reni/`) in editable mode.
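A quick way to sanity-check the container environment is to probe whether the key packages are importable. A minimal sketch, assuming these module names are the ones you care about:

```python
import importlib.util

# Probe importability without actually importing (avoids triggering CUDA init).
def check_imports(modules):
    return {m: importlib.util.find_spec(m) is not None for m in modules}

report = check_imports(["torch", "tinycudann", "nvdiffrast", "nerfstudio"])
for name, ok in report.items():
    print(f"{name}: {'ok' if ok else 'MISSING'}")
```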
See the .apptainer/ directory for HPC/SLURM setup.
```bash
cp .apptainer/.env.example .apptainer/.env
# Edit .apptainer/.env with your cluster paths

# Build the SIF (submit as a build job - needs ~64 GB RAM, ~3 hours)
.apptainer/apptainer.sh build

# Register local project packages (one-time)
.apptainer/apptainer.sh install

# Interactive shell
.apptainer/apptainer.sh shell

# Run a command
.apptainer/apptainer.sh exec -- ns-train neusky --vis wandb

# Verify the container
.apptainer/apptainer.sh exec -- python .apptainer/test_container.py
```

For development without containers:
```bash
conda create -n neusky python=3.12 -y
conda activate neusky
conda install -c conda-forge colmap -y

pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu128

# CUDA extensions
pip install --no-build-isolation git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch
pip install --no-build-isolation git+https://github.com/NVlabs/nvdiffrast.git

# nerfstudio
git clone --depth 1 https://github.com/nerfstudio-project/nerfstudio.git
pip install -e nerfstudio

# ns_reni (submodule)
pip install -e ns_reni

# NeuSky
pip install -e .
ns-install-cli
```

Download the pretrained RENI++ models:

```bash
python ns_reni/scripts/download_models.py output/path/for/reni_plus_plus_models/
```

Then update the NeuSky config to point to the chosen RENI++ directory:

`neusky/neusky/configs/neusky_config.py` (line 150 at commit `bdf689b`)

Download the NeRF-OSR data and segmentation masks, then train:

```bash
ns-download-data nerfosr --save-dir data --capture-name lk2
python neusky/scripts/download_and_copy_segmentation_masks.py lk2 /path/to/Data/NeRF-OSR
ns-train neusky --vis wandb
```

If you run out of GPU memory, try updating some or all of these settings in `neusky/configs/neusky_config.py`:
```python
train_num_images_to_sample_from=-1,   # set to an integer value if out of GPU memory
train_num_times_to_repeat_images=-1,  # iterations before resampling a new subset
images_on_gpu=True,                   # set to False if out of GPU memory
masks_on_gpu=True,                    # set to False if out of GPU memory
train_num_rays_per_batch=1024,        # lower to 512, 256, or 128 if out of GPU memory
eval_num_rays_per_batch=1024,         # lower to 512, 256, or 128 if out of GPU memory
```
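As a rough rule of thumb, per-step memory for the ray batch scales about linearly with rays per batch (an assumption; actual usage also depends on samples per ray and whether images/masks are cached on the GPU). A quick way to reason about the knob:

```python
# Assumed-linear scaling of ray-batch memory relative to the 1024-ray default.
def relative_ray_memory(rays_per_batch: int, baseline: int = 1024) -> float:
    return rays_per_batch / baseline

for rays in (1024, 512, 256, 128):
    print(f"train_num_rays_per_batch={rays}: ~{relative_ray_memory(rays):.2f}x baseline")
```

So halving the batch to 512 should roughly halve that component of memory, at the cost of noisier gradients per step.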