Vision Laboratory at Yale University

CVPR 2024

CVPR 2024 was a blast! This year we had four papers accepted, one of which was selected as a highlight paper, on topics including (1) adapting multimodal depth estimation models to novel test-time environments, (2) leveraging language to ground depth estimates to metric scale, (3) scaling the number of sensor modalities supported by multimodal models through the use of sensor tokens, and (4) mitigating distortions stemming from atmospheric turbulence (highlight paper! 🚀 🚀 🚀 🚀)

For a summary, please see: https://www.linkedin.com/posts/alexklwong_cvpr2024-highlight-multimodal-activity-7210037642803965952-_6mP?utm_source=share&utm_medium=member_desktop

This would not have been possible without collaborations with Stefano Soatto, Andrew Owens, and Byung-Woo Hong, and a team spanning several universities, including Yale, UCLA, University of Michigan, Chung-Ang University, and UC Berkeley.

Credit, of course, goes to our students, including:

Undergraduates: Anjali Gupta, Suchisrit Gangopadhyay, Xien Chen

Graduates: Alfred Wu, Chao Feng, Daniel Wang, Fengyu Yang, Hyoungseob Park, Ziyang Chen, Yiming Dou, Ziyao Zeng

Postdocs: Dong Lao, Congli Wang
