Funding: Supported in part by the National Key Research and Development Program of China under Grant 2018YFE0206900, in part by the National Natural Science Foundation of China under Grant 61871440, and in part by the CAAI-Huawei MindSpore Open Fund. We gratefully acknowledge the support of MindSpore for this research.
Abstract: Multi-modal brain image registration has been widely applied to functional localisation, neurosurgery and computational anatomy. Existing registration methods based on dense deformation fields involve too many parameters, which hinders the search for the correct spatial correspondence between the float and reference images. Meanwhile, unidirectional registration may produce deformation folding, which changes the topology during registration. To address these issues, this work presents an unsupervised image registration method using free-form deformation (FFD) and symmetry constraint-based generative adversarial networks (FSGAN). The FSGAN takes principal component analysis network-based structural representations of the reference and float images as inputs and uses the generator to learn the FFD model parameters, thereby producing two deformation fields. Meanwhile, the FSGAN uses two discriminators to decide whether bilateral registration has been realised simultaneously. Besides, the symmetry constraint is incorporated into the loss function, thereby avoiding deformation folding. Experiments on BrainWeb, high-grade gliomas, IXI and LPBA40 show that, compared with state-of-the-art methods, the FSGAN provides superior performance in terms of visual comparisons and quantitative indexes such as Dice value, target registration error and computational efficiency.
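The symmetry constraint above can be read as an inverse-consistency penalty: composing the forward (float-to-reference) and backward (reference-to-float) deformations should approximately return the identity. The following is a minimal sketch of such a term, not the authors' implementation; the dense-field warping, the L1 penalty and all function names are assumptions made for illustration.

```python
# Sketch only: an inverse-consistency (symmetry) penalty for bidirectional registration,
# assuming the generator outputs two dense displacement fields in normalised coordinates.
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp `image` (N, C, H, W) with a dense displacement field `flow` (N, 2, H, W)."""
    n, _, h, w = image.shape
    # Identity sampling grid in [-1, 1], as expected by grid_sample.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1).to(image)
    new_grid = grid + flow.permute(0, 2, 3, 1)  # displace the grid, then resample
    return F.grid_sample(image, new_grid, align_corners=True)

def symmetry_loss(float_img, ref_img, flow_fwd, flow_bwd):
    """Warping forward then backward (and vice versa) should approximately restore the input."""
    cycle_float = warp(warp(float_img, flow_fwd), flow_bwd)
    cycle_ref = warp(warp(ref_img, flow_bwd), flow_fwd)
    return F.l1_loss(cycle_float, float_img) + F.l1_loss(cycle_ref, ref_img)
```

Warping the image twice is a crude stand-in for composing the two deformations, but it captures the idea that the forward and backward fields must be mutually consistent, which discourages folding.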
Abstract: In telerobotic systems for remote welding, the human-machine interface is one of the most important factors for enhancing capability and efficiency. This paper presents an architecture design of a human-machine interface for a welding telerobotic system: a welding multi-modal human-machine interface. The interface integrates several control modes, namely shared control, teleteaching, supervisory control and local autonomous control. A space mouse, a panoramic vision camera and a graphics simulation system are also integrated into the interface for welding teleoperation. Finally, weld seam tracing and welding experiments on a U-shaped seam are performed under each of these control modes. The results show that the system offers good human-machine interaction and welding performance in complex environments.
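As a toy illustration of how the four control modes could sit behind one interface, the sketch below dispatches an operator command (e.g. from the space mouse) and an autonomous seam-tracking correction according to the active mode. The mode names follow the abstract; the blending rule and all identifiers are hypothetical and not taken from the paper.

```python
# Illustrative sketch only (not the paper's implementation): a minimal dispatcher
# over the four control modes named in the abstract above.
from enum import Enum, auto

class ControlMode(Enum):
    SHARED = auto()            # operator motion blended with autonomous correction
    TELETEACHING = auto()      # operator-taught motion followed directly by the robot
    SUPERVISORY = auto()       # operator issues goals, on-site controller executes
    LOCAL_AUTONOMOUS = auto()  # on-site seam tracking without operator motion

def command(mode, operator_delta, tracker_correction):
    """Return the Cartesian velocity command sent to the welding robot (hypothetical rule)."""
    if mode is ControlMode.SHARED:
        return [o + c for o, c in zip(operator_delta, tracker_correction)]
    if mode is ControlMode.TELETEACHING:
        return operator_delta          # robot simply follows the taught motion
    # In supervisory and local autonomous modes the operator's motion input is ignored.
    return tracker_correction

print(command(ControlMode.SHARED, [1.0, 0.0, 0.0], [0.0, 0.2, 0.0]))
```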
Funding: National Natural Science Foundation of China (61976209, 62020106015, U21A20388); in part by the CAS International Collaboration Key Project (173211KYSB20190024); and in part by the Strategic Priority Research Program of CAS (XDB32040000).
Abstract: Traditional electroencephalograph (EEG)-based emotion recognition requires a large number of calibration samples to build a model for a specific subject, which restricts the application of the affective brain-computer interface (BCI) in practice. We attempt to use multi-modal data from past sessions to realize emotion recognition with only a small number of calibration samples. To solve this problem, we propose a multi-modal domain adaptive variational autoencoder (MMDA-VAE) method, which learns shared cross-domain latent representations of the multi-modal data. Our method builds a multi-modal variational autoencoder (MVAE) to project the data of multiple modalities into a common space. Through adversarial learning and cycle-consistency regularization, our method reduces the distribution difference of each domain on the shared latent representation layer and realizes the transfer of knowledge. Extensive experiments are conducted on two public datasets, SEED and SEED-IV, and the results show the superiority of the proposed method. Our work effectively improves the performance of emotion recognition with a small amount of labelled multi-modal data.
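The core of the MVAE component is a pair of modality-specific encoders that map each modality into one shared latent space via the standard VAE reparameterisation, so that either modality can be reconstructed from that space. The sketch below illustrates this idea only; the choice of modalities (EEG and eye-movement features), the feature dimensions and the layer sizes are assumptions, and the adversarial and cycle-consistency terms of MMDA-VAE are omitted.

```python
# Minimal sketch (assumptions, not the authors' implementation): two modality-specific
# encoders projecting into one shared latent space with the usual VAE reparameterisation.
import torch
import torch.nn as nn

class SharedLatentMVAE(nn.Module):
    def __init__(self, eeg_dim=310, eye_dim=33, latent_dim=64):  # dimensions are illustrative
        super().__init__()
        self.enc_eeg = nn.Sequential(nn.Linear(eeg_dim, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
        self.enc_eye = nn.Sequential(nn.Linear(eye_dim, 128), nn.ReLU(), nn.Linear(128, 2 * latent_dim))
        self.dec_eeg = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, eeg_dim))
        self.dec_eye = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, eye_dim))

    def encode(self, x, encoder):
        mu, logvar = encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return z, mu, logvar

    def forward(self, eeg, eye):
        z_eeg, mu_e, lv_e = self.encode(eeg, self.enc_eeg)
        z_eye, mu_y, lv_y = self.encode(eye, self.enc_eye)
        # Each modality is reconstructed from its latent code; a shared space is encouraged
        # by additional alignment losses (adversarial / cycle-consistency), not shown here.
        return self.dec_eeg(z_eeg), self.dec_eye(z_eye), (mu_e, lv_e), (mu_y, lv_y)
```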
Funding: the National Natural Science Foundation of China (No. 62076035).
Abstract: Leveraging deep learning-based techniques to classify diseases has attracted extensive research interest in recent years. Nevertheless, most current studies only consider single-modal medical images, and the number of ophthalmic diseases that can be classified is relatively small. Moreover, the imbalanced data distribution across different ophthalmic diseases is not taken into consideration, which limits the application of deep learning techniques in realistic clinical scenes. In this paper, we propose a Multimodal Multi-disease Long-tailed Classification Network (M^(2)LC-Net) in response to the challenges mentioned above. M^(2)LC-Net leverages ResNet18-CBAM to extract features from fundus images and Optical Coherence Tomography (OCT) images, respectively, and conducts feature fusion to classify 11 common ophthalmic diseases. Moreover, Class Activation Mapping (CAM) is employed to visualize each modality to improve the interpretability of M^(2)LC-Net. We conduct comprehensive experiments on a real-world dataset collected from a Grade III Level A ophthalmology hospital in China, comprising 34,396 images with 11 disease labels. Experimental results demonstrate the effectiveness of our proposed model M^(2)LC-Net. Compared with the state-of-the-art, various performance metrics have been improved significantly; specifically, Cohen's kappa coefficient κ has been improved by 3.21%, which is a remarkable improvement.
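The dual-branch layout described above can be sketched as two image backbones whose pooled features are concatenated before a shared classifier. The code below uses plain torchvision ResNet18 branches (the CBAM attention blocks and the long-tail handling are omitted), and the fusion rule and layer choices are assumptions for illustration rather than the authors' exact architecture.

```python
# Hedged sketch: two-branch fundus + OCT classifier with late (concatenation) fusion.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class DualBranchClassifier(nn.Module):
    def __init__(self, num_classes=11):
        super().__init__()
        self.fundus_branch = resnet18(weights=None)   # CBAM modules omitted in this sketch
        self.oct_branch = resnet18(weights=None)
        feat_dim = self.fundus_branch.fc.in_features  # 512 for ResNet18
        self.fundus_branch.fc = nn.Identity()         # keep pooled features only
        self.oct_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, fundus, oct_img):
        f = self.fundus_branch(fundus)
        o = self.oct_branch(oct_img)
        return self.classifier(torch.cat([f, o], dim=1))

logits = DualBranchClassifier()(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 11])
```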