Examples
Visualize real-world robotic data using Foxglove.
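Most of these examples are distributed as MCAP recordings, the log format Foxglove reads natively. As a minimal sketch of how raw robot data ends up in Foxglove (assuming the official `mcap` Python package and a hypothetical `/status` topic):

```python
import json
import time

from mcap.writer import Writer  # pip install mcap

# Write a minimal MCAP file containing JSON-encoded messages on one topic.
with open("demo.mcap", "wb") as f:
    writer = Writer(f)
    writer.start()

    # Register a schema describing the message payload.
    schema_id = writer.register_schema(
        name="Status",
        encoding="jsonschema",
        data=json.dumps(
            {"type": "object", "properties": {"battery": {"type": "number"}}}
        ).encode(),
    )

    # Register a channel (topic) that carries messages of that schema.
    channel_id = writer.register_channel(
        schema_id=schema_id,
        topic="/status",  # hypothetical topic name
        message_encoding="json",
    )

    # Log a handful of timestamped messages (timestamps are nanoseconds).
    for i in range(10):
        now_ns = time.time_ns()
        writer.add_message(
            channel_id=channel_id,
            log_time=now_ns,
            publish_time=now_ns,
            data=json.dumps({"battery": 1.0 - 0.01 * i}).encode(),
        )

    writer.finish()
```

The resulting `demo.mcap` can be opened directly in Foxglove and plotted.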
Autonomous Vehicle
The field of machine learning is changing rapidly. Waymo is in a unique position to contribute to the research community by creating and sharing some of the largest and most diverse autonomous driving datasets.
Autonomous Robotic Manipulation
The DROID dataset processed with DepthAnythingV2 depth estimation models, combined with time-series data from the Franka Robotics Panda arm, visualized in Foxglove. With parameter scales from 25M to 1.3B, DepthAnythingV2 demonstrates strong generalization and broad applicability, achieving fine-grained detail and enhanced depth accuracy.
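As a rough sketch of the depth-estimation step (assuming the Hugging Face `transformers` pipeline and the `depth-anything/Depth-Anything-V2-Small-hf` checkpoint; the exact setup used for DROID may differ):

```python
from PIL import Image
from transformers import pipeline  # pip install transformers torch

# Load a monocular depth-estimation pipeline with a DepthAnythingV2 checkpoint.
# The model id below is an assumption; swap in the parameter scale you need.
depth_estimator = pipeline(
    task="depth-estimation",
    model="depth-anything/Depth-Anything-V2-Small-hf",
)

image = Image.open("frame.png")  # hypothetical RGB frame from the dataset
result = depth_estimator(image)

# `result["depth"]` is a PIL image of the predicted depth map;
# `result["predicted_depth"]` is the raw tensor.
result["depth"].save("frame_depth.png")
```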
More examples
Autonomous Vehicle
Argoverse 2 is a large-scale, open-source dataset designed to advance autonomous vehicle perception, prediction, and mapping research.
Humanoid
The Unitree LAFAN1 Retargeting Dataset uses numerical optimization based on an inverse kinematics approach, ensuring that the retargeted motions respect pose constraints and joint position/velocity limits, effectively minimizing issues like foot slippage.
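As an illustrative sketch of the underlying idea (not Unitree's actual retargeting pipeline): inverse kinematics can be posed as a bounded optimization that tracks a target pose while respecting joint limits and damping frame-to-frame motion. A toy 2-link planar arm:

```python
import numpy as np
from scipy.optimize import minimize

L1, L2 = 0.4, 0.3              # link lengths (m), toy values
target = np.array([0.5, 0.2])  # desired end-effector position

def forward_kinematics(q):
    """End-effector position of a 2-link planar arm."""
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def cost(q, q_prev):
    # Track the target while staying close to the previous frame's joints,
    # which damps velocity and reduces artifacts like foot slippage.
    tracking = np.sum((forward_kinematics(q) - target) ** 2)
    smoothness = 0.01 * np.sum((q - q_prev) ** 2)
    return tracking + smoothness

q_prev = np.zeros(2)
limits = [(-np.pi, np.pi), (0.0, 2.5)]  # joint position limits (rad)
sol = minimize(cost, q_prev, args=(q_prev,), bounds=limits)
print("joint angles:", sol.x, "reached:", forward_kinematics(sol.x))
```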
Autonomous Surface Vehicle
The MIT Sea Grant AUV Lab Prodromos marine perception dataset includes recordings from a surface vessel navigating diverse maritime scenarios such as sailboat and kayak traffic, commercial shipping vessels, and bridges. A key feature of this dataset is the integration of AIS data with radar imagery, enabling vessel identification and tracking in real-world conditions.
Autonomous Vehicle
WayveScenes101 is a dataset designed to advance novel view synthesis for autonomous driving. It comprises 101 diverse driving scenarios spanning urban, suburban, and highway environments, all captured under varying weather and lighting conditions.
Autonomous Vehicle
The nuScenes dataset (pronounced /nuːsiːnz/) is a public large-scale dataset for autonomous driving developed by the team at Motional (formerly nuTonomy).
Drone
The UZH-FPV drone racing dataset is the most aggressive visual-inertial odometry dataset to date, featuring high-speed 6DoF trajectories with large accelerations and fast rotations, with ground-truth trajectories computed via state estimation.
Quadruped
A short validation dataset for the Boston Dynamics Spot robot, combining its URDF with onboard sensor data, including velocities from onboard odometry and contact states (pressure for each foot, or paw in this case).
Mobile Robot
SLAM (Simultaneous Localization and Mapping) is crucial for autonomous robotic systems, enabling efficient path planning and effective obstacle avoidance.
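As a minimal sketch of one SLAM building block (occupancy-grid mapping from a single 2D lidar scan, with all parameters invented for illustration):

```python
import numpy as np

# Toy occupancy grid: 100x100 cells at 0.1 m resolution, robot at the center.
RES, SIZE = 0.1, 100
grid = np.zeros((SIZE, SIZE))          # log-odds of occupancy
origin = np.array([SIZE // 2, SIZE // 2])

def integrate_scan(grid, ranges, angles, l_occ=0.85, l_free=-0.4):
    """Update log-odds from one 2D scan taken at the grid origin."""
    for r, a in zip(ranges, angles):
        direction = np.array([np.cos(a), np.sin(a)])
        # Mark cells along the beam as free, the endpoint as occupied.
        for d in np.arange(0.0, r, RES):
            cx, cy = (origin + d / RES * direction).astype(int)
            if 0 <= cx < SIZE and 0 <= cy < SIZE:
                grid[cx, cy] += l_free
        ex, ey = (origin + r / RES * direction).astype(int)
        if 0 <= ex < SIZE and 0 <= ey < SIZE:
            grid[ex, ey] += l_occ
    return grid

# Fake scan: an obstacle ~2 m away across the front of the robot.
angles = np.linspace(-np.pi / 4, np.pi / 4, 50)
grid = integrate_scan(grid, np.full(50, 2.0), angles)
prob = 1.0 / (1.0 + np.exp(-grid))     # convert log-odds to probability
```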
Humanoid
The NASA Valkyrie humanoid was designed and built to be a robust, rugged, entirely electric humanoid robot with 44 DoF that can operate in degraded or damaged human-engineered environments.
Autonomous Robotic Manipulation
Simulation plays a crucial role in robotics development by allowing developers to assess their algorithms and designs without physical prototypes. This dataset simulates a Universal Robots UR5e robotic arm in the physics-based Gazebo simulator.
Unmanned Ground Vehicle
Using a Clearpath A200 Husky mobile robot equipped with a Velodyne HDL-32E lidar, an Xsens MTi-30 IMU, and wheel encoders, the research team overcomes the limitations of manual agricultural work, allowing them to cover far more ground.
Autonomous Underwater Vehicle
SubPipe is a submarine outfall pipeline dataset acquired by the LAUV (Light Autonomous Underwater Vehicle) developed by Oceanscan-MST, within the scope of Challenge Camp 1 of the H2020 REMARO project.
Mobile Robot
The Hilti SLAM Challenge robot dataset was captured with a robot-mounted sensor suite consisting of a Robosense BPearl lidar, an Xsens MTi-670 IMU, and four OAK-D cameras.
Autonomous Mobile Robot
Treescope uses lidar, IMU, GPS, and RGB-D and thermal cameras for semantic segmentation of tree trunks in agricultural settings, with the goal of automating the process.
Unmanned Ground Vehicle
A multi-cue positioning system for agricultural robots: a self-localization method capable of robust pose estimation for accurate mapping while operating in farming environments.
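As an illustrative sketch of the multi-cue idea (not the paper's actual method): independent position cues with known variances can be fused by inverse-variance weighting, the core of a Kalman measurement update:

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent position cues."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_var = 1.0 / np.sum(weights)
    return fused, fused_var

# Hypothetical cues for the robot's x position (m): wheel odometry,
# GPS, and a visual landmark, each with its own uncertainty.
x, var = fuse([10.2, 10.8, 10.4], [0.04, 0.25, 0.09])
print(f"fused x = {x:.2f} m, variance = {var:.3f}")
```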
Autonomous Vehicle
KITTI-360 is a large-scale dataset for 3D scene reconstruction, object detection, and semantic segmentation in urban environments, encompassing 360° sensor data.
Handheld
RGB-D cameras can be used to create dense 3D maps of indoor environments, which support robot navigation, manipulation, and semantic mapping.
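As a minimal sketch (assuming the `open3d` library and hypothetical image paths), a single RGB-D frame can be back-projected into a point cloud; a dense map comes from repeating this per frame and merging with estimated camera poses:

```python
import open3d as o3d  # pip install open3d

# Load one color/depth pair (paths are hypothetical).
color = o3d.io.read_image("color.png")
depth = o3d.io.read_image("depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(color, depth)

# Back-project to 3D using a default PrimeSense-style intrinsic model.
intrinsics = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault
)
cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsics)

# A dense map is built by transforming each frame's cloud into a common
# world frame using the camera poses estimated by SLAM, then merging.
o3d.visualization.draw_geometries([cloud])
```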
Unmanned Ground Vehicle
The RELLIS-3D dataset was collected on the RELLIS Campus of Texas A&M University and includes data from an Ouster OS1 64-beam lidar, a Velodyne VLP-32C 32-beam lidar, and a Nerian SceneScan stereo depth camera.
Handheld
The Hilti Handheld SLAM dataset was captured using a Sevensense Alphasense Core unit, which houses 5 cameras plus a high-end IMU, and a Hesai PandarXT-32 lidar sensor.