Qi Sun is an assistant professor at New York University. Before joining NYU, he was a research scientist at Adobe Research. He received his PhD from Stony Brook University. His research interests lie in VR/AR, perceptual computer graphics, displays, and computational cognition. He is a recipient of the IEEE Virtual Reality Best Dissertation Award, and his research has been recognized with several best paper and honorable mention awards at ACM SIGGRAPH, IEEE ISMAR, and IEEE VIS.
Virtual and Augmented Reality enable unprecedented possibilities for displaying virtual content, sensing physical surroundings, and tracking human behaviors with high fidelity. However, we have yet to create "superhumans" who can outperform their capabilities in physical reality, nor a "perfect" XR system that delivers infinite battery life or fully realistic sensation. In this talk, he will discuss recent research on leveraging eye/muscular sensing and learning to model human perception, reaction, and sensation in virtual environments. Building on this knowledge, his team creates just-in-time visual content that jointly optimizes human performance (such as reaction speed to events) and system performance (such as reduced display power consumption) in XR.
Date: Wednesday, April 17, 2024
Time: 2:00-3:15 p.m. (EDT)
Location: Studio X - Carlson Library, 1st Floor & Zoom
Register to attend.
The Voices of XR speaker series is made possible by Kathy McMorran Murray and the National Science Foundation (NSF) Research Traineeship (NRT) program as part of the Interdisciplinary Graduate Training in the Science, Technology, and Applications of Augmented and Virtual Reality at the University of Rochester (#1922591).