Testbed
Basic formatting
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
italic text
bold text
strike-through text
Text with extra blank lines above and below
- list item a
- list item b
- list item c
1. ordered list item 1
2. ordered list item 2
3. ordered list item 3
Plain image:

Heading 1
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Heading 2
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Heading 3
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Heading 4
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
| TABLE | Game 1 | Game 2 | Game 3 | Total |
|---|---|---|---|---|
| Anna | 144 | 123 | 218 | 485 |
| Bill | 90 | 175 | 120 | 385 |
| Cara | 102 | 214 | 233 | 549 |
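As a quick sanity check on the Total column, a minimal JavaScript sketch (the `scores` object is illustrative, mirroring the table above):

const scores = { Anna: [144, 123, 218], Bill: [90, 175, 120], Cara: [102, 214, 233] };
for (const [name, games] of Object.entries(scores)) {
  // Sum each player's three games; e.g. Anna: 144 + 123 + 218 = 485.
  const total = games.reduce((sum, game) => sum + game, 0);
  console.log(`${name}: ${total}`); // Anna: 485, Bill: 385, Cara: 549
}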
It was the best of times, it was the worst of times. It was the age of wisdom, it was the age of foolishness. It was the spring of hope, it was the winter of despair.
// some code with syntax highlighting
const popup = document.querySelector("#popup");
popup.style.width = "100%";
popup.innerText =
  "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.";
This sentence has inline code, useful for making references to variables, packages, versions, etc. within a sentence.
Lorem ipsum dolor sit amet.
Consectetur adipiscing elit.
Sed do eiusmod tempor incididunt.
Jekyll Spaceship
| Stage | Direct Products | ATP Yields |
|---|---|---|
| Glycolysis | 2 ATP | |
| ^^ | 2 NADH | 3–5 ATP |
| Pyruvate oxidation | 2 NADH | 5 ATP |
| Citric acid cycle | 2 ATP | |
| ^^ | 6 NADH | 15 ATP |
| ^^ | 2 FADH2 | 3 ATP |
| Total | | 30–32 ATP |
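In the source above, `^^` is Jekyll Spaceship's rowspan marker: it merges a cell with the cell directly above it, which is how Glycolysis and the citric acid cycle each span multiple product rows (based on the plugin's documented multi-row table syntax; the final row sums the yields to 30–32 ATP).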
$ a * b = c ^ b $
$ 2^{\frac{n-1}{3}} $
$ \int_a^b f(x)\,dx. $
Components
Section
Section, background
Section, dark=true
Section, background dark=true
Section, size=wide
Section, size=full w/ figure
Figure
px width
% width
px height
px width, svg
% width, svg
px height, svg
Button
Icon
Lorem Ipsum Dolor
Feature
Title
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
List
List citations
2025
2024
2023
2022
2021
2020
2019
2017
2015
2011
List projects
We compile both unsupervised/self-supervised and supervised methods published at recent conferences and in journals on the VOID (Wong et al., 2020) and KITTI (Uhrig et al., 2017) depth completion benchmarks. Benchmark ranking considers four metrics: MAE, RMSE, iMAE, and iRMSE.
VOID (Visual Odometry with Inertial and Depth) contains ~47K frames of indoor and outdoor scenes with non-trivial 6-degrees-of-freedom motion, captured with an Intel RealSense D435i configured to produce synchronized RGB images (VGA-sized), depth maps, and accelerometer and gyroscope measurements. We provide sparse points measured by XIVO at three density levels (1500, 500, and 150 points), which correspond to 0.5%, 0.15%, and 0.05% of the image space, respectively.
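For concreteness, a minimal sketch of the four ranking metrics in JavaScript, assuming predicted and ground-truth depth are given as flat arrays of positive values at valid pixels (helper names are illustrative, not taken from the benchmark code):

// MAE/RMSE are computed on depth; iMAE/iRMSE on inverse depth (1/d),
// which weights near-range error more heavily.
const mae = (pred, gt) =>
  pred.reduce((sum, p, i) => sum + Math.abs(p - gt[i]), 0) / pred.length;
const rmse = (pred, gt) =>
  Math.sqrt(pred.reduce((sum, p, i) => sum + (p - gt[i]) ** 2, 0) / pred.length);
const inv = (xs) => xs.map((x) => 1 / x);
const imae = (pred, gt) => mae(inv(pred), inv(gt));
const irmse = (pred, gt) => rmse(inv(pred), inv(gt));
// Density sanity check: 1500 / (640 * 480) ≈ 0.49%, i.e. the ~0.5% of VGA
// image space quoted above (similarly, 500 ≈ 0.15% and 150 ≈ 0.05%).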
List team members
List blog posts
2025
It has been a busy Spring for the Vision Lab. We are happy to share that five of our papers have been published at top venues!
Our paper “ProtoDepth: Unsupervised Continual Depth Completion with Prototypes” has been accepted by CVPR 2025!
2024
Our paper “RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions” has been accepted by NeurIPS 2024!
Our paper Adaptive Correspondence Scoring for Unsupervised Medical Image Registration has been selected for oral presentation (top 3%) at ECCV 2024! Congratulations and many thanks to everyone involved!
Professor Alex Wong will be giving an invited talk titled “The Know-Hows of Multimodal Depth Perception” on August 8, 2024 at 9 PM EDT (August 9, 2024 at 10 AM KST). The event is hosted by the Korea Institute of Industrial Technology (KITECH), National Collaboration Center (NCC).
Our group has three papers accepted by ECCV 2024. Congratulations everyone, and many thanks to our collaborators!
Our paper All-day Depth Completion, led by Vadim Ezhov, an undergraduate student, has been accepted to IROS 2024. Congrats everyone!
Our paper Heteroscedastic Uncertainty Estimation Framework for Unsupervised Registration has been accepted to MICCAI 2024. Congrats everyone!
CVPR 2024 was a blast! We had four papers this year, one of which was selected as a highlight paper, on topics ranging from (1) adapting multimodal depth estimation models to novel test-time environments, (2) leveraging language to ground depth estimates to metric scale, (3) scaling the number of sensor modalities supported by multimodal models through the use of sensor tokens, and (4) mitigating distortions stemming from atmospheric turbulence (the highlight paper! 🚀 🚀 🚀 🚀).
Professor Alex Wong will be giving an invited talk at a symposium titled “Embracing Challenges and Opportunities: Perspective of Asian American Scholars” on May 4, 2024. The event is hosted by the Chinese-American Professors’ Association in Connecticut (CAPA-CT) and Asian Faculty Association at Yale University.
Our paper Sub-token ViT Embedding via Stochastic Resonance Transformers has been accepted by ICML 2024!
Our group has four papers accepted by CVPR 2024. Congratulations everyone, and many thanks to our collaborators!
Hosted by Lindi Liao and the FHWA, Professor Alex Wong will be visiting the Turner Fairbank Highway Research Center on Wednesday, February 7, 2024. Professor Wong will also be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at George Mason University on Friday, February 9, 2024.
2023
Hosted by the Yale Institute for Foundations of Data Science (FDS) and Google, Professor Alex Wong will be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at the Theory and Practice of Foundation Models Workshop on Friday, October 27, 2023.
Professor Alex Wong will be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at the Northeast Robotics Colloquium (NERC) 2023 on November 4th, 2023. Stay tuned!
Hosted by Professor Paul Huang, Professor Alex Wong will be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at the AI Symposium of the University of Delaware. The event will take place in Salon C of the Courtyard Marriott hotel at the University of Delaware on September 25th, 2023.
Professor Alex Wong will be joining Scott Aaronson, Arman Cohan, Tesca Fitzgerald, and Denny Zhou as a panelist on the Advances in Artificial Intelligence panel, moderated by Marynel Vázquez. The event will be held at the grand opening of Kline Tower on September 22nd, 2023.
Excited to be co-hosting the UG2+ workshop on bridging the gap between computational photography and visual recognition at CVPR 2023. The workshop will be held at West 107-108 from 8:30 AM to 5:00 PM PDT on June 19, 2023. Be sure to come by! This year we have three competition tracks tackling challenging scenarios, i.e., object detection in haze, atmospheric turbulence mitigation, and diverse rain removal, with many exciting results. Additionally, we have invited a fantastic list of speakers: Jong Chul Ye, Sabine Süsstrunk, Vishal M. Patel, Tianfan Xue, Nianyi Li, Jinwei Gu, Emma Alexander, and Kevin J. Miller.
We will be presenting both of our works at CVPR 2023 on June 21, 2023. Be sure to swing by our posters: WED-AM-100 for Depth Estimation from Camera Image and mmWave Radar Point Cloud, and WED-PM-109 for WeatherStream: Light Transport Automation of Single Image Deweathering!
Two of our papers (Depth Estimation from Camera Image and mmWave Radar Point Cloud, and WeatherStream: Light Transport Automation of Single Image Deweathering) have been accepted by CVPR 2023! Both of these works were made possible by cross-lab collaborations between the Yale Vision Laboratory, the Visual Machines Group (VMG), the UCLA Vision Lab, and the Networked and Embedded Systems Laboratory (NESL).
Our workshop UG2+: Bridging the Gap Between Computational Photography and Visual Recognition has been accepted by CVPR 2023! Many thanks to the advisory committee (Stanley Chan, Atlas Wang, Achuta Kadambi, Jiaying Liu, Walter Scheirer, and Wenqi Ren) and organizing committee (Zhiyuan Mao, Wuyang Chen, Abdullah Al-Shabili, Zhenyu Wu, Xingguang Zhang, Ajay Jaiswal, Yunhao Ba, and Howard Zhang) for all their hard work!
Our paper Spatial Mapping of Mitochondrial Networks and Bioenergetics in Lung Cancer has been accepted to Nature! This is a multi-lab effort across several institutes. Many thanks to everyone involved for their hard work!
2022
Hosted by Achuta Kadambi (UCLA) and Katie Bouman (Caltech), Professor Alex Wong will be giving an invited talk titled “Rethinking Supervision for Some Vision Tasks” as part of the Grundfest Memorial Lecture Series on Friday, October 7, 2022.
Two of our papers (Monitored Distillation for Positive Congruent Depth Completion, and Not Just Streaks: Towards Ground Truth for Single Image Deraining) have been accepted by ECCV 2022! Monitored Distillation for Positive Congruent Depth Completion is led by three undergraduate students (Tian Yu Liu, Parth Agrawal, and Allison Chen) from UCLA.
Great news! I’ve accepted an offer to join Yale University as a tenure-track Assistant Professor of Computer Science. Yale Vision Laboratory will be opening its doors this Fall!
Our paper Stereoscopic Universal Perturbations (SUPs) across Different Architectures and Datasets has been accepted to CVPR 2022! The lead authors on this paper are undergraduate students (Zachary Berger and Parth Agrawal) whom I have been advising at UCLA.
Citation
Card
Portrait
Post Excerpt
Alert
Tip Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Help Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Info Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Success Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Warning Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Error Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Tags
Float
Figures
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.
Code
const test = "Lorem ipsum dolor sit amet, consectetur adipiscing elit.";
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Nulla facilisi etiam dignissim diam quis. Id aliquet lectus proin nibh nisl condimentum id venenatis a. Tristique magna sit amet purus gravida quis blandit turpis cursus. Ultrices eros in cursus turpis massa tincidunt dui ut ornare. A cras semper auctor neque vitae tempus quam pellentesque nec. At tellus at urna condimentum mattis pellentesque. Ipsum consequat nisl vel pretium. Ultrices mi tempus imperdiet nulla malesuada pellentesque elit eget gravida. Integer vitae justo eget magna fermentum iaculis eu non diam. Mus mauris vitae ultricies leo integer malesuada nunc vel. Leo integer malesuada nunc vel risus. Ornare arcu odio ut sem nulla pharetra. Purus semper eget duis at tellus at urna condimentum. Enim neque volutpat ac tincidunt vitae semper quis lectus.
Grid
Regular
With Markdown images


![]()
![]()
![]()
Square
With figure components
Grid of citations
2025
2024
2023
2022
2021
2020
2019
2017
2015
2011
Grid of blog posts
2025
It has been a busy Spring for the Vision Lab. We are happy to share that five of our papers have been published at top venues!
Our paper “ProtoDepth: Unsupervised Continual Depth Completion with Prototypes” has been accepted by CVPR 2025!
2024
Our paper “RSA: Resolving Scale Ambiguities in Monocular Depth Estimators through Language Descriptions” has been accepted by NeurIPS 2024!
Our paper Adaptive Correspondence Scoring for Unsupervised Medical Image Registration has been selected for oral presentation (top 3%) at ECCV 2024! Congratulations and many thanks to everyone involved!
Professor Alex Wong will be giving an invited talk titled “The Know-Hows of Multimodal Depth Perception” on August 8, 2024 at 9 PM EDT (August 9, 2024 at 10 AM KST). The event is hosted by the Korea Institute of Industrial Technology (KITECH), National Collaboration Center (NCC).
Our group has three papers accepted by ECCV 2024. Congratulations everyone, and many thanks to our collaborators!
Our paper All-day Depth Completion, led by Vadim Ezhov, an undergraduate student, has been accepted to IROS 2024. Congrats everyone!
Our paper Heteroscedastic Uncertainty Estimation Framework for Unsupervised Registration has been accepted to MICCAI 2024. Congrats everyone!
CVPR 2024 was a blast! We had four papers this year, one of which was selected as a highlight paper, on topics ranging from (1) adapting multimodal depth estimation models to novel test-time environments, (2) leveraging language to ground depth estimates to metric scale, (3) scaling the number of sensor modalities supported by multimodal models through the use of sensor tokens, and (4) mitigating distortions stemming from atmospheric turbulence (the highlight paper! 🚀 🚀 🚀 🚀).
Professor Alex Wong will be giving an invited talk at a symposium titled “Embracing Challenges and Opportunities: Perspective of Asian American Scholars” on May 4, 2024. The event is hosted by the Chinese-American Professors’ Association in Connecticut (CAPA-CT) and Asian Faculty Association at Yale University.
Our paper Sub-token ViT Embedding via Stochastic Resonance Transformers has been accepted by ICML 2024!
Our group has four papers accepted by CVPR 2024. Congratulations everyone, and many thanks to our collaborators!
Hosted by Lindi Liao and the FHWA, Professor Alex Wong will be visiting the Turner Fairbank Highway Research Center on Wednesday, February 7, 2024. Professor Wong will also be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at George Mason University on Friday, February 9, 2024.
2023
Hosted by the Yale Institute for Foundations of Data Science (FDS) and Google, Professor Alex Wong will be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at the Theory and Practice of Foundation Models Workshop on Friday, October 27, 2023.
Professor Alex Wong will be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at the Northeast Robotics Colloquium (NERC) 2023 on November 4th, 2023. Stay tuned!
Hosted by Professor Paul Huang, Professor Alex Wong will be giving an invited talk titled “Unsupervised Learning of Depth Perception and Beyond” at the AI Symposium of the University of Delaware. The event will take place in Salon C of the Courtyard Marriott hotel at the University of Delaware on September 25th, 2023.
Professor Alex Wong will be joining Scott Aaronson, Arman Cohan, Tesca Fitzgerald, and Denny Zhou as a panelist on the Advances in Artificial Intelligence panel, moderated by Marynel Vázquez. The event will be held at the grand opening of Kline Tower on September 22nd, 2023.
Excited to be co-hosting the UG2+ workshop on bridging the gap between computational photography and visual recognition at CVPR 2023. The workshop will be held at West 107-108 from 8:30 AM to 5:00 PM PDT on June 19, 2023. Be sure to come by! This year we have three competition tracks tackling challenging scenarios, i.e., object detection in haze, atmospheric turbulence mitigation, and diverse rain removal, with many exciting results. Additionally, we have invited a fantastic list of speakers: Jong Chul Ye, Sabine Süsstrunk, Vishal M. Patel, Tianfan Xue, Nianyi Li, Jinwei Gu, Emma Alexander, and Kevin J. Miller.
We will be presenting both of our works at CVPR 2023 on June 21, 2023. Be sure to swing by our posters: WED-AM-100 for Depth Estimation from Camera Image and mmWave Radar Point Cloud, and WED-PM-109 for WeatherStream: Light Transport Automation of Single Image Deweathering!
Two of our papers (Depth Estimation from Camera Image and mmWave Radar Point Cloud, and WeatherStream: Light Transport Automation of Single Image Deweathering) have been accepted by CVPR 2023! Both of these works were made possible by cross-lab collaborations between the Yale Vision Laboratory, the Visual Machines Group (VMG), the UCLA Vision Lab, and the Networked and Embedded Systems Laboratory (NESL).
Our workshop UG2+: Bridging the Gap Between Computational Photography and Visual Recognition has been accepted by CVPR 2023! Many thanks to the advisory committee (Stanley Chan, Atlas Wang, Achuta Kadambi, Jiaying Liu, Walter Scheirer, and Wenqi Ren) and organizing committee (Zhiyuan Mao, Wuyang Chen, Abdullah Al-Shabili, Zhenyu Wu, Xingguang Zhang, Ajay Jaiswal, Yunhao Ba, and Howard Zhang) for all their hard work!
Our paper Spatial Mapping of Mitochondrial Networks and Bioenergetics in Lung Cancer has been accepted to Nature! This is a multi-lab effort across several institutes. Many thanks to everyone involved for their hard work!
2022
Hosted by Achuta Kadambi (UCLA) and Katie Bouman (Caltech), Professor Alex Wong will be giving an invited talk titled “Rethinking Supervision for Some Vision Tasks” as part of the Grundfest Memorial Lecture Series on Friday, October 7, 2022.
Two of our papers (Monitored Distillation for Positive Congruent Depth Completion, and Not Just Streaks: Towards Ground Truth for Single Image Deraining) have been accepted by ECCV 2022! Monitored Distillation for Positive Congruent Depth Completion is led by three undergraduate students (Tian Yu Liu, Parth Agrawal, and Allison Chen) from UCLA.
Great news! I’ve accepted an offer to join Yale University as a tenure-track Assistant Professor of Computer Science. Yale Vision Laboratory will be opening its doors this Fall!
Our paper Stereoscopic Universal Perturbations (SUPs) across Different Architectures and Datasets has been accepted to CVPR 2022! The lead authors on this paper are undergraduate students (Zachary Berger and Parth Agrawal) whom I have been advising at UCLA.
Cols
Text
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Nulla facilisi etiam dignissim diam quis. Id aliquet lectus proin nibh nisl condimentum id venenatis a.
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Nulla facilisi etiam dignissim diam quis. Id aliquet lectus proin nibh nisl condimentum id venenatis a. Tristique magna sit amet purus gravida quis blandit turpis cursus. Ultrices eros in cursus turpis massa tincidunt dui ut ornare. A cras semper auctor neque vitae tempus quam pellentesque nec. At tellus at urna condimentum mattis pellentesque. Ipsum consequat nisl vel pretium. Ultrices mi tempus imperdiet nulla malesuada pellentesque elit eget gravida. Integer vitae justo eget magna fermentum iaculis eu non diam. Mus mauris vitae ultricies leo integer malesuada nunc vel. Leo integer malesuada nunc vel risus. Ornare arcu odio ut sem nulla pharetra. Purus semper eget duis at tellus at urna condimentum. Enim neque volutpat ac tincidunt vitae semper quis lectus.
Images
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
Code
const test = "Lorem ipsum dolor sit amet";
const test = "Lorem ipsum dolor sit amet";
const test = "Lorem ipsum dolor sit amet";