Visual Perception and Plasticity Lab
Research

Our lab uses psychophysical and neuroimaging techniques to investigate the mechanisms of visual plasticity and adaptation, as well as other aspects of visual perception such as multisensory integration and the social factors that modulate human vision. We have also developed various altered reality systems to study long-term adaptation (or training) that is difficult to realize with traditional psychophysical methods.

Ocular Dominance Plasticity Using Altered Reality

A traditional way to improve vision is perceptual training, which can produce large benefits for a wide variety of tasks in adults. However, frequent training sessions can be difficult to integrate into patients' lives and work, which limits compliance. Our group has adopted a complementary approach: allowing observers to perform everyday tasks in an environment that places demands upon mechanisms of plasticity. This long-term adaptation approach has been enhanced by the development and use of augmented reality (AR). Recent AR-based visual plasticity studies have used a video see-through method based on a head-mounted display (HMD). Unlike typical AR, in which digital content is overlaid upon the camera video of the world, in our visual plasticity studies global image processing is applied in real time to the entire video to alter some feature of the images. We term this type of manipulation altered reality.
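To make the pipeline concrete, the following is a minimal Python sketch of such a real-time, whole-frame manipulation, assuming a standard webcam and OpenCV; the hue-rotation filter is a hypothetical stand-in for whatever feature a given study alters, and the HMD rendering and latency handling of a real system are omitted.

import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # default webcam; an HMD pipeline would read its own cameras instead

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Global processing applied to the entire frame, rather than an overlay:
    # here a hue rotation stands in for the altered image feature.
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = (hsv[..., 0].astype(np.int16) + 90) % 180  # OpenCV hue spans 0-179
    altered = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    cv2.imshow("altered reality", altered)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()

Any global transform can be swapped in at the marked step; applying it to every pixel of the live video, rather than overlaying digital content, is what distinguishes altered reality from overlay-style AR.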

Using the altered reality approach, we can present designated dissimilar video images to each eye and test the resulting changes in visual function both psychophysically and neurophysiologically after observers adapt to this altered environment for a few hours. We have also applied this technique to the treatment of visual impairments such as amblyopia.
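A dichoptic version of the same loop might look like the sketch below, again assuming OpenCV and a side-by-side stereo layout on the HMD; blurring one eye's image is a hypothetical choice that stands in for whatever dissimilar per-eye manipulation an experiment requires.

import cv2

cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    left = frame                                  # one eye: the unaltered video
    right = cv2.GaussianBlur(frame, (31, 31), 0)  # other eye: a degraded version (illustrative)
    stereo = cv2.hconcat([left, right])           # side-by-side frame; the HMD maps each half to one eye
    cv2.imshow("dichoptic view", stereo)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()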

Visual Awareness and Attention Shaped by Reward

Perception can be biased in favor of reward-associated stimuli. From the perspective of visual plasticity, reward can be considered a motivational factor that drives experience-dependent changes in the cortex. Since neuromodulatory signals for rewards and punishments are released diffusely throughout the brain, one hypothesis is that the effects of reward learning are free from the constraints of consciousness. Our recent study supports this hypothesis by revealing an eye-based unconscious reward learning effect. Beyond the perceptual level, we have also explored how reward modulates the distribution of attentional allocation in a multiple object tracking task while recording event-related potentials to auditory distractors. Our new findings complement Lavie's load theory of attention, presenting a more complex picture of how attentional load and reward affect selective attention.

Visual-Vestibular Interactions Using Virtual Reality

Our recent studies on visual-vestibular interactions mostly center on one main topic: the cross-modal bias hypothesis. Living in a multisensory environment leads the brain to develop associations between signals from the visual and vestibular pathways. We hypothesize that over one's lifetime these associations become so strong that signals from one modality can produce a bias signal in a congruent direction for the other modality. Given the intrinsic noise in neural responses, when the visual input is sufficiently weak and/or uncertain, such a bias signal can readily manifest in the perceptual outcome.
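One standard way to make this reasoning concrete is reliability-weighted cue combination; the formulation below is an illustrative textbook model, not one taken from our own papers. If the visual estimate \hat{s}_v has variance \sigma_v^2 and the vestibular-driven bias signal \hat{s}_b has variance \sigma_b^2, the combined estimate weights each source by its reliability:

\hat{s} = w_v \hat{s}_v + w_b \hat{s}_b, \qquad w_v = \frac{1/\sigma_v^2}{1/\sigma_v^2 + 1/\sigma_b^2}, \qquad w_b = \frac{1/\sigma_b^2}{1/\sigma_v^2 + 1/\sigma_b^2}

As the visual variance \sigma_v^2 grows (a weak or uncertain visual input), w_b approaches 1 and the cross-modal bias dominates the percept; with a reliable visual signal, w_v approaches 1 and the bias becomes hard to detect.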

This hypothesis is well supported by our findings of vestibular modulation of the motion aftereffect and of a head-rotation-induced flash-lag effect. Besides studying the influence of head movement on vision, we have also found that visual-vestibular interactions can occur even during the preparation of a head rotation.