
MultiPICP-SLAM (Multi-Camera Visual Odometry System)

This project implements a robust visual odometry system using three synchronized cameras to track the 6-DOF pose of a robot observing a set of known 3D landmarks. The system performs both global localization (bootstrap) and continuous tracking with automatic recovery from tracking failures.

Key Features

  • Multi-camera sensor fusion (3 cameras)
  • Robust data association without known correspondences
  • PnP-based global localization with RANSAC
  • Adaptive gating for dynamic data association
  • Automatic recovery from tracking failures
  • Millimeter-level tracking accuracy (mean ATE 2.9 mm, RMSE 14.9 mm over a 47.5 m trajectory)

Project Structure

Headers (include/)

  • dataset_loader.hpp: data structures and I/O functions for loading the dataset files (see the sketch after this list)
  • data_association.hpp: data association algorithms for matching 2D-3D correspondences
  • picp_solver.hpp: Point-to-point ICP solver with robust cost functions
  • numerical_utils.hpp: utility functions for numerical validation and sanitization
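
A minimal sketch of the kind of structures the loader exposes, inferred from the dataset formats described under "Input Data Format" below (all names and fields here are assumptions, not the project's actual API):

#include <Eigen/Core>
#include <Eigen/Geometry>

// Hypothetical structures mirroring the dataset files (illustrative only).
struct Landmark {
    int id;                        // LANDMARK_ID from map.dat
    Eigen::Vector3d position;      // X Y Z in the world frame
};

struct Measurement {
    int epoch;                     // EPOCH_ID from meas.dat
    int camera_id;                 // which of the three cameras observed it
    int landmark_id;               // LANDMARK_ID column of meas.dat
    Eigen::Vector2d pixel;         // (COL, ROW) image coordinates
};

struct CameraParams {
    int id;
    Eigen::Matrix3d K;             // intrinsic matrix from param.dat
    int width, height;             // image dimensions
    double z_near, z_far;          // near/far clipping planes
    Eigen::Isometry3d T_robot_cam; // camera-to-robot transformation
};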

Source Files (src/)

Main Pipeline

  • main_integrated.cpp: Complete visual odometry pipeline with bootstrap and recovery
    • Implements PnP-based initialization
    • Adaptive tracking with multiple gate sizes
    • Automatic recovery system
    • Confidence-based state management
Alternative Implementations

  • main_track.cpp: Basic tracking without an explicit data association module
  • main_track_da.cpp: Tracking with a dedicated data association module

Utilities

  • verify_projection.cpp: Validates camera projection model correctness
  • evaluation_metrics.cpp: Computes ATE, RPE, and tracking statistics
  • picp_solver.cpp: Ceres-based pose optimization with a Cauchy robust kernel (see the sketch below)
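
For illustration, attaching a Cauchy robust kernel to reprojection residuals in Ceres looks roughly like the sketch below; the functor, parameterization, and names are assumptions, not the project's actual code:

#include <ceres/ceres.h>
#include <ceres/rotation.h>
#include <Eigen/Core>
#include <utility>
#include <vector>

// Hypothetical pinhole reprojection functor: the pose is parameterized as
// angle-axis rotation plus translation (the real solver may differ).
struct ReprojectionError {
    ReprojectionError(const Eigen::Vector2d& px, const Eigen::Vector3d& X,
                      const Eigen::Matrix3d& K) : px_(px), X_(X), K_(K) {}

    template <typename T>
    bool operator()(const T* const aa, const T* const t, T* res) const {
        T pw[3] = {T(X_.x()), T(X_.y()), T(X_.z())}, pc[3];
        ceres::AngleAxisRotatePoint(aa, pw, pc);        // world -> camera
        pc[0] += t[0]; pc[1] += t[1]; pc[2] += t[2];
        res[0] = T(K_(0, 0)) * pc[0] / pc[2] + T(K_(0, 2)) - T(px_.x());
        res[1] = T(K_(1, 1)) * pc[1] / pc[2] + T(K_(1, 2)) - T(px_.y());
        return true;
    }
    Eigen::Vector2d px_; Eigen::Vector3d X_; Eigen::Matrix3d K_;
};

// One residual block per 2D-3D association, each wrapped in a Cauchy loss
// that down-weights outlier associations instead of rejecting them outright.
void refine_pose(
    const std::vector<std::pair<Eigen::Vector2d, Eigen::Vector3d>>& assoc,
    const Eigen::Matrix3d& K, double* angle_axis, double* translation) {
    ceres::Problem problem;
    for (const auto& [px, X] : assoc) {
        auto* cost = new ceres::AutoDiffCostFunction<ReprojectionError, 2, 3, 3>(
            new ReprojectionError(px, X, K));
        problem.AddResidualBlock(cost, new ceres::CauchyLoss(1.0),
                                 angle_axis, translation);
    }
    ceres::Solver::Options opts;
    opts.linear_solver_type = ceres::DENSE_QR;
    ceres::Solver::Summary summary;
    ceres::Solve(opts, &problem, &summary);
}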

Scripts

  • run_full_pipeline.sh: Automated pipeline execution script
  • plot_results.py: Visualization of trajectories and error metrics

Building the Project

# Create build directory
mkdir build && cd build

# Configure with CMake
cmake ..

# Build all targets
make -j$(nproc)

# Or build specific targets
make main_integrated
make verify_projection
make evaluation_metrics

Running the Pipeline

Complete Pipeline

# From build directory
make run_pipeline DATA_DIR=../data

# Or manually
./run_full_pipeline.sh ../data

Individual Components

1. Verify Projection Model

./bin/verify_projection ../data

Validates that the camera parameters correctly project 3D landmarks to image measurements.
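
Conceptually, the check walks each landmark through the camera model and compares the result with meas.dat; a minimal sketch under a standard pinhole model (the transform names are assumptions):

#include <Eigen/Core>
#include <Eigen/Geometry>

// World landmark -> camera frame -> pixel (col, row).
// T_world_robot: robot pose; T_robot_cam: camera-to-robot extrinsics.
Eigen::Vector2d project(const Eigen::Matrix3d& K,
                        const Eigen::Isometry3d& T_world_robot,
                        const Eigen::Isometry3d& T_robot_cam,
                        const Eigen::Vector3d& landmark_world) {
    Eigen::Vector3d p_cam =
        (T_world_robot * T_robot_cam).inverse() * landmark_world;
    Eigen::Vector3d uvw = K * p_cam;   // homogeneous pixel coordinates
    return uvw.head<2>() / uvw.z();    // perspective division
}

// The verification then reduces to checking that
//   (project(K, T_wr, T_rc, X) - measured_pixel).norm()
// stays below a small threshold for every entry in meas.dat.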

2. Run Tracking

./bin/main_integrated ../data

Executes the complete visual odometry pipeline.

3. Evaluate Results

./bin/evaluation_metrics ../data integrated_trajectory.txt tracking_metrics.txt

Computes performance metrics comparing estimated trajectory to ground truth.
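
At its core, the reported ATE is the RMSE of per-epoch position errors against the ground truth in traj.dat; a minimal sketch (trajectory alignment and rotation errors omitted):

#include <Eigen/Core>
#include <cmath>
#include <cstddef>
#include <vector>

// ATE (translation part): RMSE of position differences, one per epoch.
double ate_rmse(const std::vector<Eigen::Vector3d>& estimated,
                const std::vector<Eigen::Vector3d>& ground_truth) {
    double sum_sq = 0.0;
    for (std::size_t i = 0; i < estimated.size(); ++i)
        sum_sq += (estimated[i] - ground_truth[i]).squaredNorm();
    return std::sqrt(sum_sq / static_cast<double>(estimated.size()));
}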

4. Visualize Results

python plot_results.py
# Or with interactive plots
python plot_results.py --show

Input Data Format

Required files in the data/ directory (a minimal loading sketch follows the list):

  • map.dat: 3D landmark positions

    LANDMARK_ID X Y Z
    
  • param.dat: Camera calibration parameters

    • Camera ID, intrinsic matrix, image dimensions
    • Near/far clipping planes
    • Camera-to-robot transformation
  • meas.dat: 2D measurements per epoch

    EPOCH_ID CAMERA_ID LANDMARK_ID COL ROW
    
  • traj.dat: Ground truth trajectory

    EPOCH TX TY TZ QX QY QZ QW
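
All four files are plain whitespace-separated text, so loading reduces to line-by-line parsing. A minimal sketch for map.dat (the Landmark struct matches the earlier header sketch; the real loader's API may differ):

#include <Eigen/Core>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

struct Landmark { int id; Eigen::Vector3d position; };

// Parse "LANDMARK_ID X Y Z" lines from map.dat; malformed lines are skipped.
std::vector<Landmark> load_map(const std::string& path) {
    std::vector<Landmark> landmarks;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream ss(line);
        Landmark lm;
        if (ss >> lm.id >> lm.position.x() >> lm.position.y() >> lm.position.z())
            landmarks.push_back(lm);
    }
    return landmarks;
}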
    

Output Files

The pipeline generates the following outputs:

Trajectory Files

  • integrated_trajectory.txt: estimated 6-DOF poses (position + quaternion)
  • tracking_metrics.txt: per-epoch tracking statistics (RMS, associations, confidence)

Evaluation Results

  • evaluation_results.txt: complete performance metrics
    • Absolute Trajectory Error (ATE)
    • Relative Pose Error (RPE)
    • Success rate and drift analysis

Visualization Data

  • ate_plot.txt: ATE error per epoch for plotting
  • trajectories_3d.txt: 3D positions for trajectory visualization

Generated Plots

  • ate_evolution.png: ATE error and confidence over time
  • trajectories_3d.png: 3D trajectory comparison (GT vs estimated)
  • error_distribution.png: Statistical error distribution
  • tracking_summary.png: Comprehensive results summary

Performance Analysis & Evaluation Metrics

Current System Performance:

  • Mean ATE: 2.899 mm
  • Median ATE: 0.058 mm
  • RMSE: 14.903 mm
  • Success Rate: 100%
  • Drift Rate: 0.000113%
  • Processing Speed: ~1 ms/epoch

Algorithm Details

Bootstrap (Global Localization)

  1. Collect measurements from all cameras at epoch 0
  2. For each camera, run PnP-RANSAC to estimate the initial pose (see the sketch after this list)
  3. Refine with all associations using robust optimization
  4. Validate with reprojection error threshold
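
Step 2 maps naturally onto OpenCV's solvePnPRansac; a sketch with illustrative thresholds (the project's actual parameters may differ):

#include <opencv2/calib3d.hpp>
#include <vector>

// PnP-RANSAC for one camera: 3D landmarks + matched pixels -> initial pose.
bool bootstrap_pose(const std::vector<cv::Point3d>& landmarks,
                    const std::vector<cv::Point2d>& pixels,
                    const cv::Mat& K, cv::Mat& rvec, cv::Mat& tvec) {
    std::vector<int> inliers;
    bool ok = cv::solvePnPRansac(landmarks, pixels, K,
                                 cv::noArray(),  // assume no lens distortion
                                 rvec, tvec,
                                 /*useExtrinsicGuess=*/false,
                                 /*iterationsCount=*/100,
                                 /*reprojectionError=*/2.0f,
                                 /*confidence=*/0.99,
                                 inliers);
    // Accept the pose only if RANSAC found a reasonable inlier set.
    return ok && inliers.size() >= 6;
}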

Tracking Pipeline

  1. Prediction: constant-velocity motion model (see the sketch after this list)
  2. Data Association: Multi-scale gating (3-20 pixels)
  3. Optimization: Ceres solver with Cauchy robust kernel
  4. Validation: motion consistency checks
  5. Recovery: PnP-based relocalization when lost
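
A sketch of steps 1 and 2 (the gate values follow the 3-20 px range above; associate() and kMinAssociations are hypothetical names):

#include <Eigen/Geometry>

// Step 1 - constant-velocity prediction: re-apply the last relative motion.
Eigen::Isometry3d predict_pose(const Eigen::Isometry3d& T_prev,
                               const Eigen::Isometry3d& T_curr) {
    Eigen::Isometry3d delta = T_prev.inverse() * T_curr;  // last epoch's motion
    return T_curr * delta;                                // extrapolate once
}

// Step 2 - multi-scale gating: widen the pixel gate until enough 2D-3D
// associations survive, e.g.:
//   for (double gate : {3.0, 5.0, 10.0, 20.0}) {
//       auto assoc = associate(predicted, measurements, gate);
//       if (assoc.size() >= kMinAssociations) break;
//   }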

Recovery System

  • Triggers after 3 consecutive tracking failures
  • Uses PnP with wider search radius
  • Verifies recovered pose with tight gate
  • Gradually rebuilds confidence (see the sketch below)
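
One plausible shape for this logic, purely as illustration (the thresholds mirror the three-failure rule above; the actual state management may differ):

#include <algorithm>

// Illustrative confidence-based state management for the recovery system.
enum class TrackState { Tracking, Lost };

struct TrackingMonitor {
    TrackState state = TrackState::Tracking;
    int consecutive_failures = 0;
    double confidence = 1.0;

    void on_epoch(bool tracked_ok) {
        if (tracked_ok) {
            consecutive_failures = 0;
            // Rebuild confidence gradually after a recovery, not in one jump.
            confidence = std::min(1.0, confidence + 0.1);
        } else if (++consecutive_failures >= 3) {
            state = TrackState::Lost;  // hand off to PnP relocalization
            confidence = 0.0;
        }
    }
};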

Dependencies

  • CMake >= 3.10
  • Eigen3 >= 3.3
  • OpenCV >= 4.0 (for PnP algorithms)
  • Ceres Solver >= 2.0 (for optimization)
  • Python >= 3.6 with matplotlib, numpy (for visualization)

Author

Gianmarco Donnesi | [email protected]

License

This project is licensed under the GPL-3.0 License. See the LICENSE file for more details.

For questions, suggestions, or bug reports, open an Issue or submit a Pull Request on GitHub.

About

This repository contains a high-precision visual odometry pipeline that uses three synchronized cameras for real-time robot localization and tracking. It features PnP-based global initialization, adaptive data association with RANSAC, and automatic recovery from tracking failures. The project was developed for the Probabilistic Robotics course.
