Funding: The study was supported by the National Natural Science Foundation of China (Grant Nos. 62171300, 61727807).
Abstract: Top-down attention mechanisms require the selection of specific objects or locations; however, the brain mechanism involved when attention is allocated across different modalities is not well understood. The aim of this study was to use functional magnetic resonance imaging to define the neural mechanisms underlying divided and selective spatial attention. A concurrent audiovisual stimulus was used, and subjects were prompted to focus on a visual, auditory, or audiovisual stimulus in a Posner paradigm. Our behavioral results confirmed the better performance of selective attention compared to divided attention. We found differences in the activation level of the frontoparietal network, the visual/auditory cortex, the putamen, and the salience network under different attention conditions. We further used Granger causality (GC) to explore effective connectivity differences between tasks. Differences in GC connectivity between the visual and auditory selective tasks reflected the visual dominance effect under spatial attention. In addition, our results supported the role of the putamen in redistributing attention and the functional separation of the salience network. In summary, we explored the audiovisual top-down allocation of attention and observed the differences in neural mechanisms under endogenous attention modes, which revealed the differences in cross-modal expression in visual and auditory attention under attentional modulation.
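Granger causality, as used above for effective connectivity, asks whether the past of one signal improves prediction of another beyond that signal's own history. A minimal NumPy sketch of the underlying F-test (illustrative only; real fMRI GC analyses involve preprocessing, model-order selection, and group statistics far beyond this, and the function and variable names here are our own):

```python
import numpy as np

def granger_f(x, y, lag=2):
    """F-statistic for 'y Granger-causes x': does adding lagged y
    reduce the residual variance of an autoregressive model of x?"""
    n = len(x)
    target = x[lag:]
    # Restricted model: x[t] predicted from x[t-1], ..., x[t-lag]
    Xr = np.column_stack([x[lag - k : n - k] for k in range(1, lag + 1)])
    # Full model: additionally include y[t-1], ..., y[t-lag]
    Xf = np.column_stack([Xr] + [y[lag - k : n - k] for k in range(1, lag + 1)])
    Xr = np.column_stack([np.ones(len(target)), Xr])  # intercept terms
    Xf = np.column_stack([np.ones(len(target)), Xf])
    rss_r = np.sum((target - Xr @ np.linalg.lstsq(Xr, target, rcond=None)[0]) ** 2)
    rss_f = np.sum((target - Xf @ np.linalg.lstsq(Xf, target, rcond=None)[0]) ** 2)
    df_full = len(target) - Xf.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / df_full)

# Toy pair of signals: y drives x with a one-sample delay
rng = np.random.default_rng(0)
y = rng.standard_normal(500)
x = np.empty(500)
x[0] = 0.0
x[1:] = 0.6 * y[:-1] + 0.1 * rng.standard_normal(499)

f_y_to_x = granger_f(x, y)  # large: y's past predicts x
f_x_to_y = granger_f(y, x)  # near 1: x's past does not predict y
```

A large F in one direction and not the other is what supports a directed (effective, rather than merely correlational) connectivity claim between regions.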
Funding: Supported by the National Natural Science Foundation of China (Nos. 61771027, 61071139, 61471019, 61671035); in part under the Royal Society of Edinburgh–National Natural Science Foundation of China (RSE-NNSFC) Joint Project (2017–2019) (No. 6161101383) with China University of Petroleum (Huadong); and in part by the UK Engineering and Physical Sciences Research Council (EPSRC) (Nos. EP/I009310/1, EP/M026981/1).
Abstract: Synthetic Aperture Radar (SAR) imaging systems have been widely used in civil and military fields owing to their all-weather, all-day operation and various other advantages. However, because image data are growing exponentially, novel automatic target detection and recognition technologies are needed. In recent years, the visual attention mechanism in the human visual system has helped humans deal effectively with complex visual signals; in particular, biologically inspired top-down attention models have garnered much attention recently. This paper presents a visual attention model for SAR target detection comprising a bottom-up stage and a top-down process. In the bottom-up stage, the Itti model is improved based on the differences between SAR and optical images. The top-down stage fully utilizes prior information to further detect targets. Extensive detection experiments carried out on the benchmark Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset show that, compared with typical visual models and other popular detection methods, our model has increased ability and robustness for SAR target detection under a range of Signal-to-Clutter Ratio (SCR) conditions and scenes. In addition, results obtained using only the bottom-up stage are inferior to those of the proposed method, further demonstrating the effectiveness and rationality of the top-down strategy. In summary, the proposed visual attention method can be considered a potential benchmark resource for the SAR research community.
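The bottom-up stage builds on the Itti architecture, whose core operation is a center-surround contrast computed at several scales. As a rough illustration of that idea only (not the paper's improved SAR-specific model, whose modifications are not detailed here; all names below are our own), a minimal multi-scale center-surround saliency sketch in NumPy:

```python
import numpy as np

def center_surround_saliency(img, radii=(2, 4, 8)):
    """Itti-style center-surround contrast: |pixel - local mean|
    at several surround scales, summed into one saliency map."""
    h, w = img.shape
    sal = np.zeros((h, w), dtype=float)
    for r in radii:
        surround = np.empty_like(sal)
        for i in range(h):
            for j in range(w):
                # Local mean over a (2r+1)x(2r+1) window, clipped at borders
                i0, i1 = max(0, i - r), min(h, i + r + 1)
                j0, j1 = max(0, j - r), min(w, j + r + 1)
                surround[i, j] = img[i0:i1, j0:j1].mean()
        sal += np.abs(img - surround)
    return sal / sal.max()  # normalize to [0, 1]

# Toy "SAR chip": low-amplitude clutter with one bright 3x3 target
rng = np.random.default_rng(1)
scene = 0.05 * np.abs(rng.standard_normal((32, 32)))
scene[20:23, 10:13] = 1.0

sal = center_surround_saliency(scene)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # falls inside the target
```

A bright target standing out from clutter at every surround scale accumulates a large response, which is why such maps make plausible target-candidate detectors; a top-down stage would then re-rank or filter these candidates using prior knowledge.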