Abstract: Background: Previous studies have demonstrated the plasticity of perceptual sensitivity and the compensatory mechanisms of audiovisual integration (AVI) in older adults. However, the impact of perceptual training on audiovisual integrative abilities remains unclear. Methods: This study randomly assigned 40 older adults to either a training group or a control group. The training group underwent a five-day audiovisual perceptual training program, while the control group received no training. Participants completed simultaneity judgment (SJ) and audiovisual detection tasks before and after training. Results: Findings indicated improved perceptual sensitivity to audiovisual synchrony in the training group, with AVI significantly higher at post-test than at pre-test (9.95% vs. 13.87%). No significant change was observed in the control group (9.61% vs. 10.77%). Conclusion: These results suggest that cross-modal perceptual training may be an effective candidate cognitive intervention for easing unimodal sensory dysfunction.
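The abstract reports AVI as a percentage but does not define the metric. A common choice in detection paradigms is multisensory response enhancement, the percentage gain of the audiovisual hit rate over the better unisensory hit rate; the sketch below illustrates that convention only, not necessarily the study's actual computation, and the numbers are invented for illustration.

```python
def avi_enhancement(hit_a: float, hit_v: float, hit_av: float) -> float:
    """Percentage enhancement of the audiovisual hit rate over the
    better of the two unisensory hit rates (illustrative metric)."""
    best_unisensory = max(hit_a, hit_v)
    return 100.0 * (hit_av - best_unisensory) / best_unisensory

# Illustrative values only (not the study's data):
print(round(avi_enhancement(0.70, 0.75, 0.85), 2))  # 13.33
```

Under this convention, a pre- to post-test increase in the index (as reported for the training group) reflects a larger audiovisual gain relative to the best single modality.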
Funding: Supported by the German Research Foundation (DFG) (No. GK 1247/1)
Abstract: The present study examined whether audiovisual integration of temporal stimulus features in humans can be predicted by the maximum likelihood estimation (MLE) model, which is based on weighting unisensory cues by their relative reliabilities. In an audiovisual temporal order judgment paradigm, the reliability of the auditory signal was manipulated with Gaussian volume envelopes, introducing varying degrees of temporal uncertainty. While statistically optimal weighting according to the MLE rule was found in half of the participants, the other half consistently overweighted the auditory signal. The results are discussed in terms of a general auditory bias in time perception, interindividual differences, and the conditions and limits of statistically optimal multisensory integration.
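The MLE rule referenced above has a standard closed form: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal sketch of this standard formulation (symbols and example values are illustrative, not taken from the paper):

```python
def mle_combine(est_a: float, var_a: float, est_v: float, var_v: float):
    """Reliability-weighted (MLE-optimal) combination of auditory and
    visual estimates of the same stimulus feature."""
    r_a, r_v = 1.0 / var_a, 1.0 / var_v      # reliabilities = inverse variances
    w_a = r_a / (r_a + r_v)                  # auditory weight
    w_v = 1.0 - w_a                          # visual weight
    combined_est = w_a * est_a + w_v * est_v
    combined_var = 1.0 / (r_a + r_v)         # always <= min(var_a, var_v)
    return combined_est, combined_var

# A less reliable auditory cue (larger variance) receives a smaller weight:
est, var = mle_combine(est_a=100.0, var_a=4.0, est_v=90.0, var_v=1.0)
print(round(est, 6), round(var, 6))  # 92.0 0.8
```

Overweighting the auditory signal, as half of the participants did, corresponds to an empirical auditory weight exceeding the optimal `w_a` predicted by this rule.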
Funding: Supported by the National Natural Science Foundation of China (U20A2017 and 31830037); the Guangdong Basic and Applied Basic Research Foundation (2020A1515010785, 2020A1515111118, and 2022A1515010134); the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2017120); the Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions (NYKFKT2019009); the Shenzhen Technological Research Center for Primate Translational Medicine (F-2021-Z99-504979); the Strategic Research Program of the Chinese Academy of Sciences (XDBS01030100 and XDB32010300); Scientific and Technological Innovation 2030 (2021ZD0204300); and the Fundamental Research Funds for the Central Universities.
Abstract: Integrating multisensory inputs to generate accurate perception and guide behavior is among the most critical functions of the brain. Subcortical regions such as the amygdala are involved in sensory processing, including vision and audition, yet their roles in multisensory integration remain unclear. In this study, we systematically investigated how neurons in the amygdala and adjacent regions integrate audiovisual sensory inputs, using a semi-chronic multi-electrode array and multiple combinations of audiovisual stimuli. From a sample of 332 neurons, we showed diverse response patterns to audiovisual stimuli and the neural characteristics of bimodal over unimodal modulation, which could be classified into four types with distinct regional origins. Using hierarchical clustering, neurons were further grouped into five clusters associated with different integrative functions and sub-regions. Finally, we identified the regions that distinguish congruent from incongruent bimodal sensory inputs. Overall, visual processing dominates audiovisual integration in the amygdala and adjacent regions. Our findings shed new light on the neural mechanisms of multisensory integration in the primate brain.
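The abstract classifies bimodal-over-unimodal modulation into four types without stating the criteria. One common scheme in the multisensory literature compares a neuron's audiovisual (AV) firing rate against the sum, the best, and the worst of its unisensory rates; the thresholds, labels, and values below are assumptions for illustration, not the paper's actual classification.

```python
def classify_modulation(rate_a: float, rate_v: float, rate_av: float) -> str:
    """Classify bimodal modulation by comparing the AV firing rate with the
    unisensory rates (hypothetical four-type scheme, for illustration only)."""
    best, worst = max(rate_a, rate_v), min(rate_a, rate_v)
    if rate_av > rate_a + rate_v:
        return "supra-additive enhancement"   # AV exceeds the sum of A and V
    if rate_av > best:
        return "enhancement"                  # AV exceeds the best unisensory
    if rate_av >= worst:
        return "intermediate"                 # AV falls between A and V
    return "suppression"                      # AV below the worst unisensory

# Firing rates in spikes/s (invented values):
print(classify_modulation(10.0, 12.0, 30.0))  # supra-additive enhancement
print(classify_modulation(10.0, 12.0, 15.0))  # enhancement
print(classify_modulation(10.0, 12.0, 11.0))  # intermediate
print(classify_modulation(10.0, 12.0, 6.0))   # suppression
```

In a scheme like this, per-neuron type labels (or per-stimulus response vectors) could then feed a hierarchical clustering step of the kind the abstract describes.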