Funding: Supported in part by the National Key Research Project of China under Grant No. 2023YFA1009402 and the General Science and Technology Plan Items in Zhejiang Province under Grant No. ZJKJT-2023-02.
Abstract: With the remarkable advancements in machine vision research and its ever-expanding applications, scholars have increasingly focused on harnessing various vision methodologies within the industrial realm. Detecting vehicle floor welding points in particular poses unique challenges, including high operational costs and limited portability in practical settings. To address these challenges, this paper integrates template matching and the Faster RCNN algorithm, presenting an industrial cascaded solder joint detection algorithm that blends template matching with deep learning. The algorithm weights and fuses the optimized features of both methodologies, enhancing overall detection capability. It further introduces an optimized multi-scale, multi-template matching approach that leverages a diverse array of templates and image pyramid algorithms to improve the accuracy and resilience of object detection. By integrating deep learning with this multi-scale, multi-template matching strategy, the cascaded target matching algorithm accurately identifies solder joint types and positions. A comprehensive welding point dataset, labeled by experts specifically for vehicle detection, was constructed from images of authentic industrial environments to validate the algorithm's performance. Experiments demonstrate compelling performance in industrial scenarios: the algorithm outperforms the single-template matching algorithm by 21.3%, the multi-scale multi-template matching algorithm by 3.4%, the Faster RCNN algorithm by 19.7%, and the YOLOv9 algorithm by 17.3% in solder joint detection accuracy. The optimized algorithm exhibits strong robustness and portability, making it well suited for detecting solder joints across diverse vehicle workpieces. Notably, this study's dataset and feature fusion approach can serve as a valuable resource for other algorithms seeking to improve solder joint detection. This work thus not only presents a novel and effective solution for industrial solder joint detection but also lays the groundwork for future advancements in this critical area.
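The multi-scale, multi-template matching stage can be illustrated with a short sketch. The following is a minimal illustration, not the authors' implementation: it runs OpenCV's matchTemplate over several templates at several scales and keeps the best normalized correlation score; the scale range and acceptance threshold are hypothetical.

```python
import cv2
import numpy as np

def multiscale_multitemplate_match(gray, templates,
                                   scales=np.linspace(0.6, 1.4, 9),
                                   threshold=0.8):
    """Best normalized-correlation match over all templates and scales.

    gray: grayscale search image; templates: list of grayscale templates.
    Returns (score, (x, y, w, h)) or None if nothing clears the threshold.
    """
    best = None
    for tpl in templates:
        for s in scales:
            # Rescaling the template approximates searching an image pyramid.
            tw, th = int(tpl.shape[1] * s), int(tpl.shape[0] * s)
            if th < 8 or tw < 8 or th > gray.shape[0] or tw > gray.shape[1]:
                continue
            resized = cv2.resize(tpl, (tw, th))
            res = cv2.matchTemplate(gray, resized, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(res)
            if best is None or max_val > best[0]:
                best = (max_val, (max_loc[0], max_loc[1], tw, th))
    # Reject weak matches below the (hypothetical) acceptance threshold.
    return best if best and best[0] >= threshold else None
```

In the full cascaded algorithm, the matching score would then be weighted and fused with the Faster RCNN detection confidence; the sketch covers only the matching stage.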
Abstract: Medical image fusion is a synthesis technology for fusing multi-modal medical information using mathematical procedures to generate better visualization of the image content and high-quality image output. Medical image fusion plays an indispensable role in solving complicated medical problems, yet while recent research has improved the preservation of medical image details, color distortion and halo artifacts remain unaddressed. This paper proposes a novel method of fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) images using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). This model satisfies the need for precise integration of medical images of different modalities, an essential requirement in diagnosis and subsequent patient treatment. In the proposed model, the medical image is decomposed using NSCT, an efficient shift-invariant decomposition transform. JSR is then applied to extract the common features of the medical images for the fusion process. The performance analysis of the proposed system shows that the proposed image fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. The comparative analysis shows that the proposed technique exhibits better quality than existing medical image fusion practices.
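NSCT implementations are not part of mainstream Python libraries, so the following sketch substitutes a stationary (undecimated) wavelet transform from PyWavelets as a stand-in shift-invariant decomposition, and uses a common baseline fusion rule (average the low-pass bands, max-abs selection for the high-pass bands) rather than the paper's JSR rule. It shows the decompose-fuse-reconstruct pattern only.

```python
import numpy as np
import pywt

def fuse_ct_mri(ct, mri, wavelet="db2", level=2):
    """Baseline multi-scale fusion: average approximation bands, max-abs details.

    Note: swt2 requires both image dimensions divisible by 2**level.
    """
    # swt2 is shift-invariant (no subsampling), loosely analogous to NSCT here.
    c_ct = pywt.swt2(ct.astype(float), wavelet, level=level)
    c_mr = pywt.swt2(mri.astype(float), wavelet, level=level)
    fused = []
    for (a1, (h1, v1, d1)), (a2, (h2, v2, d2)) in zip(c_ct, c_mr):
        approx = (a1 + a2) / 2.0  # low-frequency content: average
        # High-frequency content: keep the coefficient with larger magnitude.
        details = tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                        for x, y in zip((h1, v1, d1), (h2, v2, d2)))
        fused.append((approx, details))
    return pywt.iswt2(fused, wavelet)
```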
Funding: Supported by the National Natural Science Foundation of China (No. 51876114) and the Shanghai Engineering Research Center of Marine Renewable Energy (Grant No. 19DZ2254800).
Abstract: To address missed detections in water surface target detection when unmanned surface vehicle (USV) perception relies on visual algorithms alone, this paper proposes a water surface target detection method based on the fusion of visual data and LiDAR point-cloud projection. First, the visual recognition component employs an improved YOLOv7 algorithm, trained on a self-built dataset, for the detection of water surface targets. The improved algorithm replaces the original YOLOv7 neck with a Slim-Neck structure, addressing the excessive redundant information produced during feature extraction in the original YOLOv7 network model; this modification also reduces the detector's computational burden and inference time while maintaining accuracy. Second, to tackle sample imbalance in the self-built dataset, the Slide loss function is introduced. Third, the original Complete Intersection over Union (CIoU) loss function in YOLOv7 is replaced with the Minimum Point Distance Intersection over Union (MPDIoU) loss function, which accelerates model learning and enhances robustness. Finally, to mitigate missed recognitions caused by complex water surface conditions in purely visual algorithms, the method fuses LiDAR and camera data, projecting the three-dimensional LiDAR point cloud onto the two-dimensional pixel plane. This significantly reduces the missed detection rate for water surface targets.
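The MPDIoU loss has a compact closed form: it penalizes the ordinary IoU by the squared distances between the two boxes' top-left and bottom-right corners, normalized by the squared image diagonal. A minimal plain-Python sketch of the published definition, assuming boxes given as (x1, y1, x2, y2) pixel coordinates (a baseline illustration, not the authors' YOLOv7 integration):

```python
def mpdiou_loss(pred, gt, img_w, img_h, eps=1e-9):
    """MPDIoU loss for two boxes in (x1, y1, x2, y2) form; returns 1 - MPDIoU."""
    # Intersection area and plain IoU.
    iw = max(0.0, min(pred[2], gt[2]) - max(pred[0], gt[0]))
    ih = max(0.0, min(pred[3], gt[3]) - max(pred[1], gt[1]))
    inter = iw * ih
    area_p = (pred[2] - pred[0]) * (pred[3] - pred[1])
    area_g = (gt[2] - gt[0]) * (gt[3] - gt[1])
    iou = inter / (area_p + area_g - inter + eps)
    # Squared distances between matching top-left and bottom-right corners,
    # normalized by the squared image diagonal.
    d1 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    d2 = (pred[2] - gt[2]) ** 2 + (pred[3] - gt[3]) ** 2
    diag2 = img_w ** 2 + img_h ** 2
    return 1.0 - (iou - d1 / diag2 - d2 / diag2)
```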
Abstract: The fractal dimension of the fusion line in different dissimilar welded joints is measured with the box-counting method. The scale-free region over which the fusion line exhibits fractal character is calculated, and the fusion line in the dissimilar welded joint is shown to be a fractal structure. The changes in the fractal dimension of the fusion line and the factors influencing it are studied.
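Box counting itself is straightforward to reproduce. A minimal sketch under the usual assumptions: a binary image in which nonzero pixels trace the fusion line, with the dimension estimated as the slope of log N(ε) versus log (1/ε) over the scaling region (the box sizes below are illustrative).

```python
import numpy as np

def box_counting_dimension(binary_img, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate the fractal dimension of a binary curve image by box counting."""
    counts = []
    for size in box_sizes:
        h, w = binary_img.shape
        # Pad so the image tiles evenly into size x size boxes.
        ph, pw = (-h) % size, (-w) % size
        img = np.pad(binary_img, ((0, ph), (0, pw)))
        # Count boxes containing at least one curve pixel.
        blocks = img.reshape(img.shape[0] // size, size,
                             img.shape[1] // size, size)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # Slope of log N(eps) vs log(1/eps) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope
```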
Abstract: Multimodal Sentiment Analysis (SA) is gaining popularity due to its broad application potential. Existing studies have focused on the SA of single modalities, such as texts or photos, and therefore struggle to handle social media data that combines multiple modalities. Moreover, most multimodal research has merely combined the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture the essential information in, and the intrinsic relationship between, the visual and textual content. The proposed model comprises three deep neural networks. Two different neural networks extract the most emotionally relevant aspects of the image and text data, so that more discriminative features are gathered for accurate sentiment classification. A multichannel joint fusion model with a self-attention technique then exploits the intrinsic correlation between visual and textual characteristics to obtain emotionally rich information for joint sentiment classification. Finally, the results of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model. An interpretable visual-textual sentiment classification model is further developed using Local Interpretable Model-agnostic Explanations (LIME) to ensure the model's explainability and resilience. The proposed MMF model has been tested on four real-world sentiment datasets, achieving 99.78% accuracy on Binary_Getty (BG), 99.12% on Binary_iStock (BIS), 95.70% on Twitter, and 79.06% on the Multi-View Sentiment Analysis (MVSA) dataset. These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques under standard model evaluation criteria.
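The final decision fusion stage can be sketched simply. The following is a hypothetical weighted soft-voting example, not the paper's exact scheme: it averages the class-probability outputs of the three classifiers with per-classifier weights and picks the arg-max class.

```python
import numpy as np

def decision_fusion(prob_text, prob_image, prob_joint, weights=(0.3, 0.3, 0.4)):
    """Weighted soft voting over three classifiers' class-probability vectors."""
    probs = np.stack([np.asarray(prob_text),
                      np.asarray(prob_image),
                      np.asarray(prob_joint)])          # shape (3, n_classes)
    fused = np.average(probs, axis=0, weights=np.asarray(weights))
    return int(np.argmax(fused)), fused

# Example: three classifiers scoring (negative, positive).
label, scores = decision_fusion([0.2, 0.8], [0.4, 0.6], [0.1, 0.9])
```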
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 61671064, 61732005; National Key Research & Development Program, Grant/Award Number: 2018YFC0831700.
Abstract: The emotion cause extraction (ECE) task, which aims at extracting the potential trigger events of certain emotions, has attracted extensive attention recently. However, current work neglects implicit emotion, i.e., emotion expressed without any explicit emotional keywords, which appears more frequently in application scenarios. The lack of explicit emotion information makes it extremely hard to extract emotion causes from the local context alone. Moreover, an entire event usually spans multiple clauses, while existing work merely extracts cause events at the clause level and cannot effectively capture complete cause event information. To address these issues, events are first redefined at the tuple level, and a span-based tuple-level algorithm is proposed to extract events from different clauses. On this basis, a corpus for implicit emotion cause extraction is constructed. The authors propose a knowledge-enriched joint-learning model of implicit emotion recognition and implicit emotion cause extraction (KJ-IECE), which leverages commonsense knowledge from ConceptNet and NRC_VAD to better capture connections between emotions and their corresponding cause events. Experiments on both implicit and explicit emotion cause extraction datasets demonstrate the effectiveness of the proposed model.
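The span-based, tuple-level extraction can be sketched at the decoding stage: given per-token start and end probabilities produced by a trained tagger, spans are paired across clause boundaries. The threshold and the nearest-end pairing rule below are hypothetical simplifications, not the paper's exact decoder.

```python
def decode_event_spans(start_probs, end_probs, threshold=0.5, max_len=30):
    """Pair start/end pointer probabilities into (start, end) event spans."""
    starts = [i for i, p in enumerate(start_probs) if p >= threshold]
    ends = [i for i, p in enumerate(end_probs) if p >= threshold]
    spans = []
    for s in starts:
        # Greedy rule: match each start with the nearest valid end position.
        candidates = [e for e in ends if s <= e < s + max_len]
        if candidates:
            spans.append((s, candidates[0]))
    return spans

# Example: tokens 2..5 form one event span, tokens 9..11 another.
spans = decode_event_spans([0, 0, .9, 0, 0, 0, 0, 0, 0, .8, 0, 0],
                           [0, 0, 0, 0, 0, .9, 0, 0, 0, 0, 0, .7])
```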
Abstract: A new view of the characteristic zone classification of the fusion welding joint is put forward on the basis of a number of metallographic observations and studies. The characteristic zones of the joint include (1) the homogeneous mixture zone, (2) the heterogeneous mixture zone, (3) the partially melted zone, and (4) the heat-affected zone. Zones (1) and (2) constitute the weld metal; zones (2) and (3) compose the bond, and the boundary between (2) and (3) is the fusion line. Four kinds of characteristic appearances in the heterogeneous mixture zone are identified, and the formation process of the characteristic zones is discussed in detail. The differences between the authors' classification and W. F. Savage's are compared, in the hope that the formation essence and compositional features of the fusion welding joint can be reasonably reflected.
Abstract: Objective: To evaluate the clinical application of atlantoaxial joint fusion using anterior transarticular screw fixation and bone grafting for atlantoaxial joint instability. Methods: Twenty-three cases of atlantoaxial joint instability were
Abstract: Entity-relation extraction from Chinese electronic medical records (EMRs) is an important foundation for building medical knowledge graphs and serving downstream subtasks. At present, entity-relation extraction from Chinese EMRs still suffers from inaccurate recognition of medical terms caused by the complex relations and high entity density of medical texts. To address this problem, AMFRel (adversarial learning and multi-feature fusion for relation triple extraction), a joint entity-relation extraction model for Chinese EMRs based on adversarial learning and multi-feature fusion, is proposed. The model extracts textual and part-of-speech features from EMRs to obtain encoding vectors fused with part-of-speech information; it generates adversarial samples by combining the encoding vectors with perturbations produced by adversarial training, and extracts sentence subjects; it then uses an information fusion module to enrich textual structure features and extracts the corresponding objects according to specific relation information, yielding the triples of the medical text. Experiments on the CHIP2020 relation extraction dataset and a diabetes dataset show that AMFRel achieves a Precision of 63.922%, Recall of 57.279%, and F1 of 60.418% on CHIP2020, and a Precision, Recall, and F1 of 83.914%, 67.021%, and 74.522% respectively on the diabetes dataset, demonstrating that its triple extraction performance is superior to the other baseline models.
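The adversarial-sample step, perturbing shared embeddings and training on the perturbed inputs, is commonly implemented with the Fast Gradient Method (FGM). A minimal PyTorch sketch of that general technique, assuming a model whose word-embedding parameters contain "embedding" in their names and a standard epsilon; this illustrates FGM itself, not AMFRel's exact configuration:

```python
import torch

class FGM:
    """Fast Gradient Method: add an epsilon-scaled gradient perturbation
    to the embedding parameters, train on it, then restore the weights."""

    def __init__(self, model, epsilon=1.0, emb_name="embedding"):
        self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
        self.backup = {}

    def attack(self):
        for name, param in self.model.named_parameters():
            if param.requires_grad and self.emb_name in name and param.grad is not None:
                self.backup[name] = param.data.clone()
                norm = torch.norm(param.grad)
                if norm != 0:
                    # Perturb along the gradient direction (worst case locally).
                    param.data.add_(self.epsilon * param.grad / norm)

    def restore(self):
        for name, param in self.model.named_parameters():
            if name in self.backup:
                param.data = self.backup[name]
        self.backup = {}

# Typical use per batch:
#   loss.backward(); fgm.attack()
#   loss_adv = compute_loss(model, batch); loss_adv.backward()
#   fgm.restore(); optimizer.step(); optimizer.zero_grad()
```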
Abstract: To obtain structured phenotypic and genetic descriptions of wheat varieties, and to address the problems of fuzzy entity boundaries and overlapping relations in unstructured wheat germplasm data, WGIE-DCWF (wheat germplasm information extraction model based on deep character and word fusion), a joint entity-relation extraction model for wheat germplasm information based on deep character-word fusion, is proposed. The encoding layer of the model improves the recognition of dense entity features through deep character-word fusion and contextual semantic feature fusion; the triple extraction layer builds a cascaded pointer network to improve the extraction of overlapping relations. A series of comparative experiments on a wheat germplasm dataset and public datasets shows that the WGIE-DCWF model effectively improves joint entity-relation extraction from wheat germplasm data and generalizes well, providing technical support for building a wheat germplasm information knowledge base.
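A cascaded pointer network resolves overlapping triples by tagging subject spans first and then, conditioned on each subject, tagging object start/end positions under every relation, so one subject can participate in several triples. A schematic decoding sketch (the array shapes, the `obj_fn` callback, and the 0.5 threshold are assumptions; a trained model would supply these probability tensors):

```python
import numpy as np

def decode_triples(sub_heads, sub_tails, obj_fn, relations, threshold=0.5):
    """sub_heads/sub_tails: (seq_len,) probabilities; obj_fn(subject) returns
    an (n_relations, seq_len, 2) tensor of object head/tail probabilities."""
    triples = []
    for h in np.where(sub_heads >= threshold)[0]:
        tails = np.where(sub_tails[h:] >= threshold)[0]
        if tails.size == 0:
            continue
        subject = (h, h + tails[0])
        obj_probs = obj_fn(subject)  # conditioned on this subject span
        for r, rel in enumerate(relations):
            for oh in np.where(obj_probs[r, :, 0] >= threshold)[0]:
                ot = np.where(obj_probs[r, oh:, 1] >= threshold)[0]
                if ot.size:
                    triples.append((subject, rel, (oh, oh + ot[0])))
    return triples
```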
Abstract: Accurate identification and localization of bolt-support positions in coal mine roadways is a key technology that drilling-and-bolting robots must master to achieve intelligent permanent support. This paper proposes an intelligent identification and localization method for roadway bolt-support holes based on the fusion of visual images and LiDAR point clouds, comprising three steps: image target recognition, point cloud-image feature fusion, and localization coordinate extraction. (1) To counter the blurred imaging of bolt-hole contours caused by underground conditions such as low illumination, water mist, and dust, an IA (Image-Adaptive)-SimAM-YOLOv7-tiny network is used to visually identify the holes awaiting bolting; the network adaptively enhances image brightness and contrast, restores high-frequency information at the hole edges, and focuses the model on bolt-hole features, improving the detection success rate. (2) The extrinsic matrix jointly calibrating the LiDAR and the industrial camera is solved, the image-detected bolt-hole bounding boxes are projected through the perspective relationship into conical regions of interest (ROI), and the corresponding target point cloud clusters are obtained. (3) Point cloud processing algorithms extract the boundary point cloud of each bolt hole, yielding the hole center coordinates and normal vector, and the correctness of hole identification is judged by comparing coordinate depth differences. A drilling localization system for a bolting rig manipulator was built to verify the accuracy and precision of the autonomous localization algorithm. Experimental results show that the IA-SimAM-YOLOv7-tiny model achieves a mean average precision (mAP) of 87.3%, 4.6% higher than the YOLOv7-tiny model; the proposed fusion algorithm has a localization error of 3 mm, and the average recognition time for a single bolt hole is 0.77 s. Compared with a purely visual method, multi-source laser-vision fusion not only reduces the influence of the environment and small-sample training on localization performance, but also yields the normal vector of each bolt hole, providing the basis for the manipulator to adjust the drilling pose and achieve precise anchoring.
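Step (2), projecting LiDAR points into the image to carve out per-detection point clusters, follows the standard pinhole projection. A minimal sketch assuming a known 3x3 camera intrinsic matrix K and a LiDAR-to-camera extrinsic rotation R and translation t (all hypothetical here; in the paper they come from joint calibration):

```python
import numpy as np

def project_lidar_to_image(points, K, R, t):
    """Project (N, 3) LiDAR points into pixel coordinates.

    Returns (M, 2) pixel coordinates and (M,) depths for points
    in front of the camera.
    """
    cam = points @ R.T + t        # LiDAR frame -> camera frame
    in_front = cam[:, 2] > 0      # keep points in front of the camera
    cam = cam[in_front]
    uv = cam @ K.T                # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]   # perspective divide
    return uv, cam[:, 2]

def points_in_box(uv, depths, box):
    """Select projected points falling inside a detection box (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    mask = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
            (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
    return uv[mask], depths[mask]
```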
Abstract: For the classification and recognition of small underwater targets with multistatic sonar, this paper proposes a recognition method based on kernel-space joint sparse representation and exponential smoothing. Six typical features with complementary and correlated information are extracted from the multi-angle scattered signals of underwater targets, and a feature selection method combining Random Forest (RF) and minimum Redundancy Maximum Relevance (mRMR), RF-mRMR, is proposed to produce a comprehensive ranking of feature importance. Experiments determine the optimal feature subset required by the classification model, reducing data processing complexity and improving target classification results. To capture the higher-order structure in the data, a kernel function is used, on top of the joint sparse representation model, to map the linearly inseparable feature data into a high-dimensional kernel feature space. To fully exploit the useful information contained in the residual bands after sparse reconstruction, an exponential smoothing formula is applied to reuse the meaningful residual information, and the target class is finally decided by the minimum-error criterion in the kernel feature space. Applying the proposed method to sea trial data of four target classes shows that, compared with seven other algorithms, the proposed improvement achieves better classification performance; moreover, in most cases the proposed algorithm attains higher recognition accuracy and a lower false alarm rate in the bistatic sonar mode than in the monostatic mode.
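The residual-reuse step can be illustrated with simple exponential smoothing followed by minimum-residual classification. A sketch under stated assumptions: per-class reconstruction-residual sequences are already available from the sparse model, and the smoothing factor alpha is hypothetical.

```python
import numpy as np

def exponential_smoothing(residuals, alpha=0.3):
    """Smooth a residual sequence: s_t = alpha * r_t + (1 - alpha) * s_{t-1}."""
    smoothed = np.empty(len(residuals), dtype=float)
    smoothed[0] = residuals[0]
    for t in range(1, len(residuals)):
        smoothed[t] = alpha * residuals[t] + (1 - alpha) * smoothed[t - 1]
    return smoothed

def classify_min_residual(class_residuals, alpha=0.3):
    """Assign the class whose smoothed reconstruction residual ends lowest.

    class_residuals: dict mapping class label -> residual sequence.
    """
    finals = {c: exponential_smoothing(np.asarray(r), alpha)[-1]
              for c, r in class_residuals.items()}
    return min(finals, key=finals.get)
```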