Date: 04/08/2025
Time: 1:00 pm - 2:00 pm
Description
The rapid advancement of imaging techniques and artificial intelligence has revolutionized research and applications in visual intelligence (VI). In this talk, I will present our studies covering a broad range of topics in VI, including visual recognition, video understanding, visual enhancement, and relevant machine learning techniques, with applications in virtual/augmented reality, biomedical research, and more.
I will then present our recent work applying AI to projector systems for spatial augmented reality tasks. In particular, image-based relighting, projector compensation, and depth/normal reconstruction are three important tasks of projector-camera systems (ProCams) and spatial augmented reality (SAR). Although they share a similar pipeline of finding projector-camera image mappings, traditionally they have been addressed independently, sometimes with different prerequisites, devices, and sampling images. In practice, addressing them one by one can be cumbersome for SAR applications. In this talk, I will introduce DeProCams, a novel end-to-end trainable model that explicitly learns the photometric and geometric mappings of ProCams; once trained, DeProCams can be applied simultaneously to all three tasks. DeProCams explicitly decomposes the projector-camera image mappings into three subprocesses: shading attribute estimation, rough direct-light estimation, and photorealistic neural rendering. In our experiments, DeProCams shows clear advantages over prior methods, achieving promising quality while being fully differentiable. Moreover, by solving the three tasks in a unified model, DeProCams waives the need for additional optical devices, radiometric calibrations, and structured light patterns. This is joint work with Bingyao Huang.
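For readers curious how such a three-stage decomposition might look in practice, the sketch below is a minimal, hypothetical PyTorch illustration of the idea, not the authors' released code: three small sub-networks (shading attributes, rough direct light, neural rendering) composed into a single differentiable pipeline. All module names, tensor shapes, and layer sizes are illustrative assumptions.

```python
# Hypothetical sketch of a three-stage, end-to-end differentiable
# projector-camera pipeline (illustrative only; not the DeProCams release).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Small conv stack used by each sub-network (illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class ProCamSketch(nn.Module):
    """Projector input + camera image of the surface
    -> shading attributes -> rough direct light -> rendered camera image."""
    def __init__(self):
        super().__init__()
        self.shading_net = ConvBlock(3, 8)            # per-pixel shading attributes
        self.direct_light_net = ConvBlock(3 + 8, 3)   # rough direct-light estimate
        self.render_net = ConvBlock(3 + 8 + 3, 3)     # photorealistic refinement

    def forward(self, projector_img, surface_img):
        shading = self.shading_net(surface_img)
        direct = self.direct_light_net(torch.cat([projector_img, shading], dim=1))
        return self.render_net(torch.cat([projector_img, shading, direct], dim=1))

# Usage: predict the camera-captured appearance of a projected image,
# then train with a simple reconstruction loss against the real capture.
model = ProCamSketch()
proj = torch.rand(1, 3, 240, 320)     # projector input image (placeholder)
surf = torch.rand(1, 3, 240, 320)     # camera view of the surface (placeholder)
target = torch.rand(1, 3, 240, 320)   # real camera capture (placeholder)
loss = nn.functional.l1_loss(model(proj, surf), target)
loss.backward()
```

Because every stage is differentiable, one reconstruction loss on the rendered camera image can drive all three sub-networks jointly, which is the property that lets a single trained model serve relighting, compensation, and depth/normal reconstruction.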
Haibin Ling received the B.S. and M.S. degrees from Peking University in 1997 and 2000, respectively, and the Ph.D. degree from the University of Maryland, College Park, in 2006. From 2000 to 2001, he was an assistant researcher at Microsoft Research Asia. From 2006 to 2007, he was a postdoctoral scientist at the University of California, Los Angeles. In 2007, he joined Siemens Corporate Research as a research scientist; then, from 2008 to 2019, he was an Assistant Professor and later Associate Professor at Temple University. In fall 2019, he joined Stony Brook University as a SUNY Empire Innovation Professor in the Department of Computer Science. His research interests include computer vision, augmented reality, medical image analysis, machine learning, and human-computer interaction. He received the Best Student Paper Award at ACM UIST (2003), the Best Journal Paper Award at IEEE VR (2021), the NSF CAREER Award (2014), the Yahoo Faculty Research and Engagement Award (2019), and the Amazon Machine Learning Research Award (2019). He serves or has served as an Associate Editor for IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), IEEE Transactions on Visualization and Computer Graphics (TVCG), Computer Vision and Image Understanding (CVIU), and Pattern Recognition (PR), and has served multiple times as an Area Chair for CVPR, ICCV, ECCV, and WACV. He is a Fellow of the IEEE.
Register for the event at this Zoom link.
Clara Tran
Email: clara.tran@stonybrook.edu