Vision Laboratory at Yale University

Projects

Among our many works, we highlight a selection below, including datasets, benchmarks, and projects. Feel free to browse our project pages and codebases, and to use our datasets and benchmarks to train and test your models.

Sparse-to-Dense Depth Completion Benchmarks

We compile both unsupervised/self-supervised and supervised methods published in recent conferences and journals on the VOID (Wong et al., 2020) and KITTI (Uhrig et al., 2017) depth completion benchmarks. Rankings consider four metrics: MAE (mean absolute error), RMSE (root mean squared error), and their inverse-depth counterparts iMAE and iRMSE.
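
For reference, here is a minimal sketch of how these four metrics are typically computed over pixels with valid ground truth. The function name, and the choice of millimeters for MAE/RMSE and 1/kilometers for iMAE/iRMSE (following the KITTI convention), are our own assumptions for illustration, not official evaluation code; check each benchmark's definition before comparing numbers.

import numpy as np

def depth_completion_metrics(pred, gt):
    # pred, gt: H x W depth maps in millimeters; gt == 0 marks pixels
    # without ground truth, which are excluded from every metric.
    valid = gt > 0
    pred = pred[valid].astype(np.float64)
    gt = gt[valid].astype(np.float64)

    mae = np.mean(np.abs(pred - gt))             # mm
    rmse = np.sqrt(np.mean((pred - gt) ** 2))    # mm

    # Inverse-depth errors: convert mm to km so that 1/depth is in 1/km.
    inv_pred = 1.0 / (pred * 1e-6)
    inv_gt = 1.0 / (gt * 1e-6)
    imae = np.mean(np.abs(inv_pred - inv_gt))            # 1/km
    irmse = np.sqrt(np.mean((inv_pred - inv_gt) ** 2))   # 1/km

    return {'MAE': mae, 'RMSE': rmse, 'iMAE': imae, 'iRMSE': irmse}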

VOID Dataset
VOID Dataset from "Unsupervised Depth Completion from Visual Inertial Odometry"

VOID (Visual Odometry with Inertial and Depth) contains ~47K frames of indoor and outdoor scenes with non-trivial six-degree-of-freedom motion, captured with an Intel RealSense D435i configured to produce synchronized VGA-resolution RGB images, depth maps, and accelerometer and gyroscope measurements. We provide sparse points measured by XIVO at three density levels (1500, 500, and 150 points per frame), which correspond to 0.5%, 0.15%, and 0.05% of the image space, respectively.
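
As a quick sanity check on those density levels, the sketch below counts the sparse points in one sample and reports its density. The file path is hypothetical, and the 16-bit PNG encoding with a divisor of 256 to recover meters is an assumption for illustration; consult the dataset release for the actual directory layout and encoding.

import numpy as np
from PIL import Image

def sparse_density(sparse_depth_path):
    # Assumed encoding: 16-bit PNG where value / 256 gives depth in meters
    # and 0 marks pixels with no sparse measurement (check the dataset docs).
    depth = np.asarray(Image.open(sparse_depth_path), dtype=np.float32) / 256.0
    num_points = int((depth > 0).sum())
    return num_points, num_points / depth.size

# Example usage (hypothetical path):
#   points, density = sparse_density('void_1500/sparse_depth/0001.png')
# At VGA resolution (640 x 480 = 307,200 pixels), ~1500 points yields
# ~0.49% density, matching the 0.5% level quoted above.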