Funding: Supported by the National Key R&D Program of China (2018AAA0102100), the National Natural Science Foundation of China (No. 62376287), the International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province (2021CB1013), the Key Research and Development Program of Hunan Province (2022SK2054), the Natural Science Foundation of Hunan Province (Nos. 2022JJ30762 and 2023JJ70016), and the 111 Project (No. B18059).
Abstract: Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, current integrations of CNN and Transformer technology are limited in two key aspects. First, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Second, methods that combine CNN and Transformer often disregard the value of integrating the multiscale encoder features of the dual-branch network to enhance the decoding features. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) aggregates prominent dual-branch feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significant channel features between the up-sampled features and the features generated by the FCF module to enhance decoding details. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and is highly competitive with other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making early diagnoses of lesion areas.
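As a rough illustration of the channel-reweighting idea behind the CAB, the following is a minimal PyTorch sketch of a squeeze-and-excitation-style channel attention block; the paper's exact CAB design is not given here, so the concatenation of the two decoder inputs, the reduction ratio, and all layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class ChannelAttentionBlock(nn.Module):
        """SE-style channel attention: reweight the channels of the
        concatenated (up-sampled + FCF-fused) decoder features."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # global average pool -> (B, C, 1, 1)
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),                    # per-channel weights in (0, 1)
            )

        def forward(self, up_feat: torch.Tensor, fcf_feat: torch.Tensor) -> torch.Tensor:
            x = torch.cat([up_feat, fcf_feat], dim=1)  # fuse the two decoder inputs
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                                # emphasize informative channels

    # Usage: cab = ChannelAttentionBlock(128)
    # out = cab(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))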
Funding: Supported by the National Natural Science Foundation of China (60905012, 60572058).
Abstract: To improve the quality of infrared images and enhance object information, a dual-band infrared image fusion method based on feature extraction and a novel multiple pulse coupled neural network (multi-PCNN) is proposed. In this multi-PCNN fusion scheme, an auxiliary PCNN, which captures the characteristics of a feature image extracted from the infrared image, is used to modulate the main PCNN, whose input is the original infrared image. Meanwhile, to make the PCNN fusion effect consistent with the human visual system, Laplacian energy is adopted to obtain an adaptive linking strength in the PCNN. The original dual-band infrared images are then reconstructed using a weighted fusion rule with the fire mapping images generated by the main PCNNs to obtain the fused image. Compared to wavelet transforms, Laplacian pyramids, and traditional multi-PCNNs, fused images produced by our method contain more information, richer details, and clearer edges.
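The use of Laplacian energy to set an adaptive linking strength can be sketched as below; this is a generic formulation, assuming a per-pixel windowed sum of squared Laplacian responses normalized to [0, 1], not the paper's exact definition.

    import numpy as np
    from scipy.ndimage import convolve

    def laplacian_energy(img: np.ndarray, win: int = 3) -> np.ndarray:
        """Local Laplacian energy of an image, usable as an adaptive
        per-pixel linking strength (beta) in a PCNN."""
        lap_kernel = np.array([[0,  1, 0],
                               [1, -4, 1],
                               [0,  1, 0]], dtype=float)
        lap = convolve(img.astype(float), lap_kernel, mode="reflect")
        # sum of squared Laplacian responses over a win x win neighborhood
        energy = convolve(lap ** 2, np.ones((win, win)), mode="reflect")
        return energy / (energy.max() + 1e-12)   # normalize to [0, 1]

High Laplacian energy marks edge-rich regions, so using it as the linking strength makes neighboring neurons couple more strongly where the human visual system is most sensitive to detail.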
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2018-0-01426), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); by the Princess Nourah bint Abdulrahman University Researchers Supporting Project (No. PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by the Deanship of Scientific Research at Najran University under the Research Group Funding Program (Grant Code NU/RG/SERC/12/6).
Abstract: Object segmentation and recognition is an important area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines their classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to fragment the intended parts of the images into different regions and label them based on their characteristics. Two distinct kinds of features are then extracted from the segmented images to help identify the objects of interest, and an ANN recognizes the objects from these features. Experiments were carried out on three standard datasets widely used in object recognition research, MSRC, MS COCO, and Caltech 101, to measure the performance of the suggested approach. The findings support the system's validity: it achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
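A minimal sketch of the k-means clustering-based segmentation step, assuming plain per-pixel clustering in RGB space (the paper's exact feature space, number of clusters, and post-processing are not specified here):

    import numpy as np
    from sklearn.cluster import KMeans

    def kmeans_segment(rgb: np.ndarray, k: int = 4) -> np.ndarray:
        """Cluster pixels by RGB value and return a per-pixel region label map."""
        h, w, _ = rgb.shape
        pixels = rgb.reshape(-1, 3).astype(float)          # one sample per pixel
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
        return labels.reshape(h, w)                         # label map, values 0..k-1

    # Usage: label_map = kmeans_segment(np.random.randint(0, 255, (64, 64, 3)))

Each labeled region can then be cropped out and passed to the feature extractors and the ANN classifier.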
Funding: A grant from the National Natural Science Foundation of China (Nos. 11905239, 12005248 and 12105303).
Abstract: With the rapid development of mobile communication and the Internet, previous web anomaly detection and identification models were built on security experts' empirical knowledge and attack features. Although this approach can achieve high detection performance, it requires enormous human labor and resources to maintain the feature library. In contrast, semantic feature engineering can dynamically discover new semantic features and optimize feature selection by automatically analyzing the semantic information contained in the data itself, thus reducing dependence on prior knowledge. However, current semantic features still suffer from singularity of semantic expression, as they are extracted from a single semantic mode such as word segmentation, character segmentation, or arbitrary semantic feature extraction. This paper extracts features of web requests at dual semantic granularity and proposes a semantic feature fusion method to solve the above problems. The method first preprocesses web requests, then extracts word-level and character-level semantic features of URLs via convolutional neural networks (CNNs). Three loss functions are constructed to reduce the losses between features, labels, and categories. Experiments on the HTTP CSIC 2010, Malicious URLs, and HttpParams datasets verify the proposed method. Results show that, compared with machine learning methods, deep learning methods, and the BERT model, the proposed method has better detection performance, achieving the best detection rate of 99.16% on the HttpParams dataset.
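The dual-granularity extraction can be sketched as below, assuming simple URL tokenizations and one small 1-D convolutional branch per granularity; the tokenization rules, vocabulary handling, and layer sizes are illustrative, not the paper's settings.

    import re
    import torch
    import torch.nn as nn

    def char_ids(url: str, vocab_size: int = 128, max_len: int = 200) -> torch.Tensor:
        """Character-level token ids (ASCII-clipped, zero-padded)."""
        ids = [min(ord(c), vocab_size - 1) for c in url[:max_len]]
        return torch.tensor(ids + [0] * (max_len - len(ids)))

    def word_ids(url: str, vocab: dict, max_len: int = 40) -> torch.Tensor:
        """Word-level ids after splitting on common URL delimiters."""
        words = re.split(r"[/?&=._-]+", url.lower())
        ids = [vocab.get(w, 1) for w in words if w][:max_len]  # 1 = <unk>
        return torch.tensor(ids + [0] * (max_len - len(ids)))

    class CNNBranch(nn.Module):
        """One granularity branch: embedding -> 1-D conv -> global max pool."""
        def __init__(self, vocab_size: int, emb: int = 32, feat: int = 64):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
            self.conv = nn.Conv1d(emb, feat, kernel_size=3, padding=1)

        def forward(self, ids: torch.Tensor) -> torch.Tensor:   # ids: (B, L)
            x = self.emb(ids).transpose(1, 2)                    # (B, emb, L)
            return torch.relu(self.conv(x)).max(dim=2).values    # (B, feat)

The two branch outputs would then be fused and trained jointly under the three losses described above.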
Funding: Fundamental Research Funds for the Central Universities of Ministry of Education of China (No. 19D111201).
Abstract: Faced with the massive number of online shopping clothing images, classifying them quickly and accurately is a challenging image classification task. In this paper, we propose a novel method, named Multi_XMNet, to solve the clothing image classification problem. The proposed method mainly consists of two convolutional neural network (CNN) branches. One branch extracts multiscale features from the whole input image using Multi_X, which is designed by improving the Xception network, while the other extracts attention-mechanism features from the whole input image using the MobileNetV3-small network. Both multiscale and attention-mechanism features are aggregated before classification. Additionally, in the training stage, global average pooling (GAP), convolutional layers, and a softmax classifier are used instead of a fully connected layer to classify the final features, which speeds up model training and alleviates the overfitting caused by too many parameters. Experimental comparisons are made on the public DeepFashion dataset. The results show that the classification accuracy of this method is 95.38%, which is better than InceptionV3, Xception, and InceptionV3_Xception by 5.58%, 3.32%, and 2.22%, respectively. The proposed Multi_XMNet image classification model can help enterprises and researchers in clothing e-commerce to automatically, efficiently, and accurately classify massive numbers of clothing images.
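A minimal sketch of the GAP-based classification head that replaces the fully connected layer, assuming a 1x1 convolution maps feature channels to class scores before pooling; the paper's exact head layout may differ.

    import torch
    import torch.nn as nn

    class GAPHead(nn.Module):
        """Classification head: 1x1 conv + global average pooling + softmax,
        instead of a parameter-heavy fully connected layer."""
        def __init__(self, in_channels: int, num_classes: int):
            super().__init__()
            self.conv = nn.Conv2d(in_channels, num_classes, kernel_size=1)
            self.gap = nn.AdaptiveAvgPool2d(1)

        def forward(self, feat: torch.Tensor) -> torch.Tensor:  # feat: (B, C, H, W)
            logits = self.gap(self.conv(feat)).flatten(1)         # (B, num_classes)
            return torch.softmax(logits, dim=1)

The 1x1 conv needs only C x num_classes weights, whereas an FC layer over the flattened H x W x C features would need H x W x C x num_classes, which is why this head trains faster and overfits less.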
Funding: Supported by the General Project of the Natural Science Foundation of Hebei Province of China (H2019201378), the Foundation of the President of Hebei University (XZJJ201917), and the Special Project for Cultivating Scientific and Technological Innovation Ability of University and Middle School Students of Hebei Province (2021H060306).
Abstract: The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images provide more diagnostic information to help doctors better diagnose the disease, so studying super-resolution (SR) algorithms that improve the resolution of CT images is of clinical importance. However, most existing SR algorithms are designed for natural images and are not suitable for medical images, and most improve reconstruction quality by increasing network depth, which is unsuitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that extracts CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to exploit the high-frequency detail information extracted by the CFEB as much as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which utilizes hierarchical features more effectively than dense concatenation by progressively aggregating feature information at various levels. Numerous experiments show that our method outperforms most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the suboptimal method, while the numbers of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
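For reference, the PSNR figures quoted above follow the standard definition, which a short sketch makes concrete (the dataset's pixel range is assumed to be 0-255):

    import numpy as np

    def psnr(ref: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
        """Peak signal-to-noise ratio between a reference image and a
        super-resolved image: 10 * log10(MAX^2 / MSE), in dB."""
        mse = np.mean((ref.astype(float) - sr.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

Because PSNR is logarithmic in the mean squared error, even a 0.11 dB gain corresponds to a measurable reduction in reconstruction error.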
Abstract: In pedestrian re-recognition, traditional methods are affected by changes in background, veils, clothing, and so on, which degrade recognition performance. To reduce the impact of such changes, this paper proposes a pedestrian re-recognition method based on a cycle-consistent generative adversarial network and multi-feature fusion; re-recognition is accomplished by comparing the measured distance between two pedestrians. First, CycleGAN is used to transform and expand the dataset, reducing the influence of pedestrian posture changes as much as possible. The method consists of two branches, global feature extraction and local feature extraction, and the global and local features are then fused. The fused features are used for comparative metric learning, and similarity scores are calculated to rank the samples. Extensive experimental results on the large datasets CUHK03 and VIPER show that the new method reduces the influence of background, veil, clothing, and other changes on the recognition performance.
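The comparison-measurement step, ranking gallery pedestrians by distance to a query's fused feature, can be sketched as follows; Euclidean distance is assumed here, while the paper's learned metric may differ.

    import numpy as np

    def rank_gallery(query_feat: np.ndarray, gallery_feats: np.ndarray) -> np.ndarray:
        """Rank gallery samples by Euclidean distance to the query's fused
        feature vector (smaller distance = more likely the same pedestrian)."""
        dists = np.linalg.norm(gallery_feats - query_feat, axis=1)  # (N,)
        return np.argsort(dists)   # gallery indices, most similar first

    # Usage: order = rank_gallery(np.random.rand(256), np.random.rand(100, 256))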
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 32071905 and 61771224), the National Key Research and Development Plan of China (Grant No. 2018YFF0213601), the Jiangsu Demonstration Project of Modern Agricultural Machinery Equipment and Technology (Grant No. NJ2019-19), and the China Agriculture Research System (CARS-23-C03).
Abstract: To solve the problem of low recognition rates of weeds from a single feature, this study proposes a method to identify weeds in asparagus (Asparagus officinalis L.) fields using multi-feature fusion and a backpropagation neural network (BPNN). A total of 382 images of weeds competing with asparagus growth were collected, including 135 of Cirsium arvense (L.) Scop., 138 of Conyza sumatrensis (Retz.) E. Walker, and 109 of Calystegia hederacea Wall. Grayscale images were extracted from the RGB weed images using the 2G-R-B factor, and threshold segmentation was applied using the Otsu method. The internal holes of the leaves were then filled through dilation and erosion morphological operations, and other interfering targets were removed to obtain the binary image. The foreground image was obtained by masking the RGB image with the binary image. The color moment algorithm was used to extract the weeds' color features; the gray-level co-occurrence matrix and the Local Binary Pattern (LBP) algorithm were used to extract their texture features; and seven Hu invariant moments together with the roundness and slenderness ratio were extracted as their shape features. Weed identification models were built from the shape, color, texture, and fusion features of the test samples. On Cirsium arvense (L.) Scop., Calystegia hederacea Wall., and Conyza sumatrensis (Retz.) E. Walker, the tests showed recognition rates of 82.72% (color feature), 72.41% (shape feature), 86.73% (texture feature), and 93.51% (fusion feature). This method can therefore serve as a reference for studying weed identification in asparagus fields.
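The preprocessing pipeline described above (2G-R-B greenness index, Otsu thresholding, then dilation and erosion to fill leaf holes) maps to a short OpenCV sketch; the kernel size and the RGB channel order are assumptions.

    import cv2
    import numpy as np

    def weed_mask(rgb: np.ndarray) -> np.ndarray:
        """Binary plant mask: 2G-R-B grayscale -> Otsu threshold ->
        dilation/erosion to fill internal leaf holes."""
        r = rgb[..., 0].astype(np.int16)
        g = rgb[..., 1].astype(np.int16)
        b = rgb[..., 2].astype(np.int16)
        exg = np.clip(2 * g - r - b, 0, 255).astype(np.uint8)   # 2G-R-B factor
        _, mask = cv2.threshold(exg, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        kernel = np.ones((5, 5), np.uint8)
        mask = cv2.dilate(mask, kernel)   # close small holes inside leaves
        mask = cv2.erode(mask, kernel)    # restore the leaf boundary
        return mask

    # Foreground for feature extraction:
    # fg = cv2.bitwise_and(rgb, rgb, mask=weed_mask(rgb))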
Abstract: To address artery-vein misclassification caused by the strong inter-class similarity of retinal vessels, a new multi-scale retinal artery and vein classification network fusing contextual information (MCFNet) is proposed. The network combines a multi-scale feature (MSF) extraction module and an efficient global contextual information aggregation (EGCA) module with a U-shaped segmentation network for artery-vein classification, suppressing background-biased features while enhancing vessel edge, crossing, and endpoint features, thereby resolving within-segment artery-vein misclassification. In addition, three layers of deep supervision are added to the decoder of the U-shaped network so that shallow features are fully trained, avoiding vanishing gradients and improving the training process. Compared with three existing networks on two public fundus image datasets (DRIVE-AV, LES-AV), the model improves the F1 score by 2.86, 1.92, and 0.81 percentage points and the sensitivity by 4.27, 2.43, and 1.21 percentage points, respectively. The results show that the proposed model effectively resolves artery-vein misclassification.
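The three-layer deep supervision in the decoder can be sketched as a weighted sum of per-level losses, assuming each side output is bilinearly up-sampled to the label resolution and scored with cross-entropy; the loss weights here are illustrative, not the paper's values.

    import torch
    import torch.nn.functional as F

    def deep_supervision_loss(side_outputs, target, weights=(0.25, 0.25, 0.5)):
        """Weighted sum of cross-entropy losses over three decoder side outputs,
        so shallow layers receive a direct gradient signal during training."""
        total = 0.0
        for out, w in zip(side_outputs, weights):   # out: (B, C, h, w) logits
            out = F.interpolate(out, size=target.shape[-2:],
                                mode="bilinear", align_corners=False)
            total = total + w * F.cross_entropy(out, target)  # target: (B, H, W) long
        return total

    # Usage: loss = deep_supervision_loss(
    #     [torch.randn(2, 3, 16, 16) for _ in range(3)],
    #     torch.randint(0, 3, (2, 64, 64)))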