fthbng77/RadarPillar


RadarPillars: Efficient Object Detection from 4D Radar Point Clouds

OpenPCDet-based implementation for View-of-Delft (VoD) & Astyx datasets


This work is currently under review. Pre-trained model weights and full reproduction details will be released upon paper acceptance. Please do not use or redistribute without written permission from the authors.


Overview

This repository implements the RadarPillars architecture (Gillen et al., IROS 2024) for radar-only 3D object detection. Built on top of OpenPCDet, it removes LiDAR/image dependencies and adds radar-specific physics features including Doppler velocity decomposition and RCS normalization.

Supported Datasets:

| Dataset | Classes | Radar Features | Frames |
|---|---|---|---|
| View-of-Delft (VoD) | Car, Pedestrian, Cyclist | x, y, z, RCS, v_r, v_r_comp, time | 5-frame accumulation |
| Astyx HiRes2019 | Car, Pedestrian | x, y, z, RCS, v_r, v_x, v_y | Single frame |
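For reference, a KITTI-style radar `.bin` file with the feature layout above can be loaded along these lines (a minimal sketch; `load_radar_points` is an illustrative helper, assuming flat float32 storage):

```python
import numpy as np

def load_radar_points(bin_path: str, num_features: int = 7) -> np.ndarray:
    """Load a radar point cloud stored as flat float32 values.

    Assumed per-point layout for VoD: x, y, z, RCS, v_r, v_r_comp, time
    (7 features); Astyx uses x, y, z, RCS, v_r, v_x, v_y instead.
    """
    points = np.fromfile(bin_path, dtype=np.float32)
    return points.reshape(-1, num_features)
```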

Architecture

```mermaid
flowchart TD
    A["<b>Radar Point Cloud</b><br/><i>(N, 7): x, y, z, RCS, v_r, v_r_comp, time</i>"]
    B["<b>PillarVFE</b><br/><i>Voxelization + Velocity Decomposition</i><br/>vx = v_r · cos(φ), vy = v_r · sin(φ)"]
    C["<b>PillarAttention</b><br/><i>Masked Multi-Head Self-Attention</i><br/>C=32, H=1, LayerNorm + FFN"]
    D["<b>PointPillarScatter</b><br/><i>Sparse → Dense BEV Grid</i><br/>320 × 320 × 32"]
    E["<b>BaseBEVBackbone</b><br/><i>3-layer 2D CNN + Multi-scale Upsample</i><br/>Filters: [32, 32, 32], Strides: [2, 2, 2]"]
    F["<b>AnchorHeadSingle</b><br/><i>3 Classes + Direction Classifier + NMS</i><br/>Car | Pedestrian | Cyclist"]
    G["<b>3D Bounding Boxes</b><br/><i>(x, y, z, dx, dy, dz, heading, score)</i>"]

    A --> B --> C --> D --> E --> F --> G

    style A fill:#2C3E50,color:#fff,stroke:#1a252f
    style B fill:#2980B9,color:#fff,stroke:#1f6391
    style C fill:#8E44AD,color:#fff,stroke:#6c3483
    style D fill:#27AE60,color:#fff,stroke:#1e8449
    style E fill:#E67E22,color:#fff,stroke:#ba6418
    style F fill:#C0392B,color:#fff,stroke:#96281b
    style G fill:#2C3E50,color:#fff,stroke:#1a252f
```

Key Contributions

1. Doppler Velocity Decomposition

Radar measures only the radial velocity component (v_r). We decompose the ego-motion-compensated radial velocity (v_r_comp) into Cartesian components in the VFE layer to give the network directional awareness:

```
φ = atan2(y, x + 1e-6)
vx = v_r_comp · cos(φ)
vy = v_r_comp · sin(φ)
```
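The step above can be sketched in self-contained numpy (column indices follow the (N, 7) VoD layout; `decompose_radial_velocity` is an illustrative name, not the repository's function):

```python
import numpy as np

def decompose_radial_velocity(points: np.ndarray) -> np.ndarray:
    """Append Cartesian velocity components (vx, vy) derived from the
    compensated radial velocity of each radar point.

    points: (N, 7) array with assumed layout
    x, y, z, RCS, v_r, v_r_comp, time.
    """
    x, y = points[:, 0], points[:, 1]
    v_r_comp = points[:, 5]
    phi = np.arctan2(y, x + 1e-6)   # azimuth of each point
    vx = v_r_comp * np.cos(phi)     # radial velocity projected on x
    vy = v_r_comp * np.sin(phi)     # radial velocity projected on y
    return np.concatenate([points, vx[:, None], vy[:, None]], axis=1)
```

For a point straight ahead on the x-axis the radial direction coincides with x, so vx ≈ v_r_comp and vy ≈ 0, as expected.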

2. Physics-Consistent Augmentation

Fixed a critical bug in augmentor_utils.py where random_flip and global_rotation were incorrectly transforming time values instead of velocity vectors. Velocity is a physical vector and must be rotated/flipped alongside point coordinates.
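As an illustration of the fix (not the repository's exact code), a global yaw rotation must apply the same 2D rotation to both the coordinate and the velocity columns:

```python
import numpy as np

def global_rotation(points: np.ndarray, angle: float) -> np.ndarray:
    """Rotate points around the z-axis by `angle` (radians).

    Illustrative column layout: x, y, z, RCS, vx, vy.
    Both the (x, y) position and the (vx, vy) velocity vector get the
    same rotation; leaving velocity (or a time column) untouched would
    break the physical consistency the bug fix restores.
    """
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s], [s, c]])
    out = points.copy()
    out[:, 0:2] = points[:, 0:2] @ rot.T  # rotate coordinates
    out[:, 4:6] = points[:, 4:6] @ rot.T  # rotate velocity vector too
    return out
```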

3. PillarAttention for Sparse Radar

Masked multi-head self-attention that handles the inherent sparsity of radar point clouds via key padding masks, preventing empty pillar regions from corrupting attention scores.
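The masking idea can be sketched in a few lines of numpy (single head, with the input/output projections and the LayerNorm + FFN wrapper omitted for brevity; `masked_self_attention` is an illustrative name):

```python
import numpy as np

def masked_self_attention(x: np.ndarray, pad_mask: np.ndarray) -> np.ndarray:
    """Single-head self-attention over pillar features with a key
    padding mask, so empty pillars contribute nothing to the scores.

    x: (P, C) pillar features; pad_mask: (P,) bool, True = empty pillar.
    """
    scores = x @ x.T / np.sqrt(x.shape[1])  # (P, P) scaled dot products
    scores[:, pad_mask] = -np.inf           # mask out empty-pillar keys
    scores -= scores.max(axis=1, keepdims=True)  # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ x                      # masked keys have weight 0
```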

4. Dual Cyclist Anchor Strategy

VoD's Cyclist class contains diverse sub-types (bicycle, rider, motorcycle, moped). A dual-anchor approach covers the small (bicycle) and large (motorcycle/moped) sub-types with separate anchors.
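In OpenPCDet-style configs, multiple anchors per class are expressed by listing several entries under `anchor_sizes`; a sketch with illustrative dimensions (placeholders, not this repository's actual values):

```yaml
# Illustrative dual-anchor entry for the Cyclist class
- class_name: Cyclist
  anchor_sizes: [[1.76, 0.60, 1.73],   # small: bicycle + rider
                 [2.11, 0.77, 1.47]]   # large: motorcycle / moped
  anchor_rotations: [0, 1.57]
```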


Results

SOTA Comparison on VoD

Entire Annotated Area (EAA) — 3D AP (%) at IoU: Car=0.50, Ped/Cyc=0.25

| Rank | Method | Year | Car | Ped | Cyc | mAP |
|---|---|---|---|---|---|---|
| 1 | MAFF-Net | 2025 RA-L | 42.3 | 46.8 | 74.7 | 54.6 |
| 2 | SCKD | 2025 AAAI | 41.89 | 43.51 | 70.83 | 52.08 |
| 3 | RadarGaussianDet3D | 2025 | 40.7 | 42.4 | 73.0 | 52.0 |
| 5 | SMURF | 2023 TIV | 42.31 | 39.09 | 71.50 | 50.97 |
| 6 | RadarPillars (paper) | 2024 IROS | 41.1 | 38.6 | 72.6 | 50.70 |
| 10 | Ours (default, e58) | -- | 36.29 | 41.09 | 68.90 | 48.76 |
| 11 | Ours (vel. decomp, e56) | -- | 35.43 | 39.96 | 70.76 | 48.72 |
| 12 | CenterPoint (baseline) | -- | 33.87 | 39.01 | 66.85 | 46.58 |
| 13 | PointPillars (baseline) | -- | 37.92 | 31.24 | 65.66 | 44.94 |

Our Results vs. Paper

| Configuration | Car | Ped | Cyc | mAP |
|---|---|---|---|---|
| RadarPillars paper (5-frame) | 41.1 | 38.6 | 72.6 | 50.7 |
| Ours — default (e58) | 36.29 | 41.09 (+2.5) | 68.90 | 48.76 |
| Ours — vel. decomp (e56) | 35.43 | 39.96 (+1.4) | 70.76 | 48.72 |

Key observations:

  • Pedestrian detection exceeds the paper by +1.4 to +2.5 AP
  • Velocity decomposition boosts Cyclist AP: 68.90 → 70.76 (+1.86)
  • Overall mAP trails the original paper by 1.9 AP
  • Cyclist detection shows the largest per-class gap (-1.8 to -3.7 AP)

3D AP Evolution (Epoch 48-60, Default Experiment)

3D AP Evolution
Model converges around epoch 54: Car ~36, Pedestrian ~41, Cyclist ~68-69 AP


Ablation: Velocity Decomposition

The main ablation compares the default pipeline (raw v_r_comp feature) against velocity decomposition (v_r_comp → vx, vy via azimuth angle). Both experiments use the same config, training schedule, and data augmentation.

| Configuration | Car | Ped | Cyc | mAP |
|---|---|---|---|---|
| Default — no decomposition (e58) | 36.29 | 41.09 | 68.90 | 48.76 |
| Velocity decomposition (e56) | 35.43 | 39.96 | 70.76 | 48.72 |
| Delta | -0.86 | -1.13 | +1.86 | -0.04 |

Key findings:

  • Velocity decomposition significantly boosts Cyclist AP (+1.86), likely because directional velocity helps distinguish moving two-wheelers
  • Car and Pedestrian show a slight decrease, suggesting the additional features may add noise for these classes
  • Overall mAP is nearly identical (-0.04), indicating a class-level trade-off rather than a net gain
  • The original paper reports +3.8 mAP from decomposition; our smaller gain may be because we retain raw v_r, v_r_comp and time as input features alongside vx/vy, causing partial redundancy

Velocity Normalization Analysis

The decomposed vx/vy components are optionally normalized via (value - μ) / σ. Analysis revealed that the config's std values were roughly half the actual data distribution:

| Parameter | Config (old) | Actual Data | Ratio |
|---|---|---|---|
| vx std | 0.891 | 2.080 | 0.43x |
| vy std | 0.453 | 1.051 | 0.43x |

Since config std was too small, normalization was amplifying the distribution instead of compressing it:

Velocity Normalization Comparison
Config normalization increases outliers (5.8%) compared to raw (4.4%). Correct normalization reduces them to 2.3%

2D Velocity Distribution
Top: vy histogram. Bottom: vx-vy heatmap (log-scale), cyan dashed circle = 3σ boundary

| Normalization | σ (std) | Outlier ratio (\|v\|>3) |
|---|---|---|
| Raw | 2.075 | 4.4% |
| Config Norm (σ=0.89) | 2.328 | 5.8% (increased) |
| Correct Norm (σ=2.08) | 1.000 | 2.3% (decreased) |
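The effect in the table can be checked with a one-liner; a sketch (`outlier_ratio` is an illustrative helper, shown here on synthetic values):

```python
import numpy as np

def outlier_ratio(v: np.ndarray, sigma: float) -> float:
    """Fraction of samples outside the 3-sigma band after dividing by sigma.

    Normalizing with a sigma that is too small *stretches* the
    distribution, so more samples land beyond |v/sigma| > 3.
    """
    return float(np.mean(np.abs(v / sigma) > 3.0))
```

Dividing by the too-small config sigma (0.89) pushes more samples past the 3-sigma line than the correctly estimated sigma (2.08) does.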

Installation

Requirements: Python 3.8+, PyTorch 2.4+, CUDA 12.x, spconv 2.3.6

```bash
# Create virtual environment
python -m venv .venv
source .venv/bin/activate
python -m pip install -U pip

# Install OpenPCDet with CUDA extensions
python setup.py develop

# Install WandB for experiment tracking (optional)
pip install wandb
```

See docs/INSTALL.md for detailed instructions.


Dataset Preparation

View-of-Delft (VoD)

```
data/VoD/view_of_delft_PUBLIC/radar_5frames/
├── ImageSets/
│   ├── train.txt
│   ├── val.txt
│   └── test.txt
├── training/
│   ├── velodyne/          # Radar point clouds (.bin)
│   ├── label_2/           # 3D annotations
│   ├── calib/             # Calibration files
│   └── image_2/           # Camera images (optional)
└── testing/
    └── velodyne/
```

```bash
# Generate info files and GT database
python -m pcdet.datasets.vod.vod_dataset create_vod_infos \
    tools/cfgs/dataset_configs/vod_dataset_radar.yaml
```

Astyx HiRes2019

```
data/astyx/
├── ImageSets/
│   ├── train.txt
│   ├── val.txt
│   └── test.txt
├── training/
│   └── radar/             # Radar point clouds (.bin)
└── testing/
```

```bash
python -m pcdet.datasets.astyx.astyx_dataset create_astyx_infos \
    tools/cfgs/dataset_configs/astyx_dataset_radar.yaml
```

Training & Evaluation

VoD Training

```bash
CUDA_VISIBLE_DEVICES=0 python tools/train.py \
    --cfg_file tools/cfgs/vod_models/vod_radarpillar.yaml \
    --batch_size 16

# With WandB experiment tracking
CUDA_VISIBLE_DEVICES=0 python tools/train.py \
    --cfg_file tools/cfgs/vod_models/vod_radarpillar.yaml \
    --batch_size 16 --use_wandb
```

Astyx Training

```bash
CUDA_VISIBLE_DEVICES=0 python tools/train.py \
    --cfg_file tools/cfgs/astyx_models/astyx_radarpillar.yaml \
    --batch_size 4
```

Evaluation

```bash
CUDA_VISIBLE_DEVICES=0 python tools/test.py \
    --cfg_file tools/cfgs/vod_models/vod_radarpillar.yaml \
    --ckpt <checkpoint_path>
```

Key Hyperparameters

| Parameter | VoD | Astyx |
|---|---|---|
| Voxel Size | 0.16 x 0.16 x 5.0 m | 0.2 x 0.2 x 4.0 m |
| Max Points/Voxel | 16 | 32 |
| Epochs | 60 | 160 |
| Learning Rate | 0.01 | 0.003 |
| Optimizer | adam_onecycle | adam_onecycle |
| Early Stopping | 30-epoch patience | -- |
| NMS Threshold | 0.1 | 0.01 |
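As a sketch of where these values live in an OpenPCDet-style config (key names follow OpenPCDet conventions; the exact structure in this repository may differ):

```yaml
DATA_PROCESSOR:
    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.16, 0.16, 5.0]
      MAX_POINTS_PER_VOXEL: 16

OPTIMIZATION:
    BATCH_SIZE_PER_GPU: 16
    NUM_EPOCHS: 60
    OPTIMIZER: adam_onecycle
    LR: 0.01
```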

Visualization Tools

BEV (Bird's Eye View) Visualization

Visualize model predictions overlaid on radar point clouds. GT boxes are solid lines, predictions are dashed. Points are colored by RCS value.

```bash
# Generate BEV from result.pkl (recommended)
python tools/generate_readme_visuals.py

# Or from KITTI-format txt predictions
python tools/visualize_bev.py \
    --pred_dir <path_to_kitti_txt_predictions> \
    --samples 00315 00107 \
    --score_thresh 0.15 \
    --output_dir output_bev
```

BEV Sample 00315
Sample 00315 — Dense urban scene (default experiment, epoch 58)

BEV Sample 00107
Sample 00107 — Close-range cyclist cluster (default experiment, epoch 58)

Anchor Verification

Analyze dataset object size distributions and verify anchor box alignment.

```bash
python tools/visualize_anchors.py    # Dimension scatter plot with anchors
python tools/plot_cyclist_dist.py    # Cyclist length histogram
```

Anchor Verification
GT size distributions with configured anchors — Car (4.17x1.84), Pedestrian (0.65x0.64), Cyclist (1.94x0.79)

Cyclist Distribution
Cyclist length distribution (N=6685): mean=1.94m, median=1.94m — anchor aligns with data center

AP Evolution Plots

```bash
python visualize_radar_logs.py \
    --logs output/cfgs/vod_models/vod_radarpillar/<exp>/eval/epoch_*/val/default/log_eval_*.txt \
    --output output_plots
```

Velocity Normalization Analysis

```bash
python tools/generate_velocity_norm_plots.py
```

Changelog

| Date | Description |
|---|---|
| 2026-02 | Velocity decomposition: vr_comp → vx, vy in VFE layer |
| 2026-02 | Dual Cyclist anchor strategy for diverse sub-types |
| 2026-02 | Augmentor bug fix: correct velocity index handling in flip/rotation |
| 2026-02 | BEV visualization tool (tools/visualize_bev.py) |
| 2026-02 | WandB integration with --use_wandb flag |
| 2026-02 | VoD radar pipeline: dataset config, info generation |
| 2026-01 | Astyx radar pipeline: 7-feature point loader, velocity-aware augmentations |

Citation

```bibtex
@inproceedings{gillen2024radarpillars,
  title     = {RadarPillars: Efficient Object Detection from 4D Radar Point Clouds},
  author    = {Gillen, Julius and Bieder, Manuel and Stiller, Christoph},
  booktitle = {Proc. IEEE/RSJ Int. Conf. Intelligent Robots and Systems (IROS)},
  year      = {2024}
}

@misc{openpcdet2020,
  title        = {OpenPCDet: An Open-source Toolbox for 3D Object Detection from Point Clouds},
  author       = {OpenPCDet Development Team},
  year         = {2020},
  howpublished = {\url{https://github.com/open-mmlab/OpenPCDet}}
}
```

Acknowledgement

This project is built upon OpenPCDet, an open-source 3D object detection framework. We thank the OpenPCDet team for the codebase and the methods it supports.


License

OpenPCDet is released under the Apache 2.0 license.
