Presenter: Yuxiong Wang, Department of Computer Science, University of Illinois
Wednesday, December 7, 2022
8 AM Pacific / 9 AM Mountain / 10 AM Central / 11 AM Eastern
The visual world in which artificially intelligent agents live and perceive is intrinsically open, streaming, and dynamic. However, despite impressive advances in visual learning and perception, state-of-the-art systems remain narrowly applicable, operating within a closed, static world of fixed datasets. In this talk, I will discuss our efforts toward developing generalizable and adaptive open-world perception and learning systems. Our key insight is to introduce a mental model with hallucination ability – creating internal imaginations of scenes, objects, and their variations and dynamics not actually present to the senses. I will focus on how to integrate such an intrinsic mental model with extrinsic task-oriented models and construct a corresponding closed-loop feedback system. I will demonstrate the potential of this framework for scaling up open-world, in-the-wild perception in application domains such as transportation, robotics, geospatial intelligence, and healthcare.
Yuxiong Wang is an Assistant Professor in the Department of Computer Science at the University of Illinois Urbana-Champaign. He is also affiliated with the National Center for Supercomputing Applications (NCSA). He received a Ph.D. in robotics from Carnegie Mellon University. His research interests lie in computer vision, machine learning, and robotics, with a particular focus on few-shot learning, meta-learning, open-world learning, and streaming perception. He is a recipient of awards including the Amazon Faculty Research Award, the European Conference on Computer Vision (ECCV) Best Paper Honorable Mention Award, and recognition as an IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Best Paper Award Finalist. For details: https://yxw.cs.illinois.edu/.