Journal Articles
158 articles found
Bridge Crack Segmentation Method Based on Parallel Attention Mechanism and Multi-Scale Features Fusion
1
Authors: Jianwei Yuan, Xinli Song, Huaijian Pu, Zhixiong Zheng, Ziyang Niu. Computers, Materials & Continua (SCIE/EI), 2023, No. 3, pp. 6485-6503 (19 pages)
Regular inspection of bridge cracks is crucial to bridge maintenance and repair. Traditional manual crack detection methods are time-consuming, dangerous and subjective. At the same time, for the existing mainstream vision-based automatic crack detection algorithms, it is challenging to detect fine cracks and to balance detection accuracy and speed. Therefore, this paper proposes a new bridge crack segmentation method based on a parallel attention mechanism and multi-scale feature fusion on top of the DeeplabV3+ network framework. First, the improved lightweight MobileNetv2 network and dilated separable convolution are integrated into the original DeeplabV3+ network to improve the original backbone network Xception and the atrous spatial pyramid pooling (ASPP) module, respectively, dramatically reducing the number of parameters in the network and accelerating the training and prediction speed of the model. Moreover, we introduce the parallel attention mechanism into the encoding and decoding stages. The attention to crack regions can be enhanced from both the channel and spatial aspects, significantly suppressing the interference of various noises. Finally, we further improve the detection performance of the model for fine cracks by introducing a multi-scale feature fusion module. Our results are validated on a self-made dataset. The experiments show that our method is more accurate than other methods: its intersection over union (IoU) and F1-score (F1) are increased to 77.96% and 87.57%, respectively. In addition, the number of parameters is only 4.10 M, which is much smaller than in the original network, and the frame rate is increased to 15 frames/s. The results prove that the proposed method fits well the requirements of rapid and accurate detection of bridge cracks and is superior to other methods.
Keywords: crack detection, DeeplabV3+, parallel attention mechanism, feature fusion
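The parallel channel-and-spatial attention described in this abstract is not specified in detail here; the snippet below is a minimal PyTorch sketch of one plausible form, running a channel-attention branch and a spatial-attention branch in parallel over the same feature map. The module name, layer sizes and combination rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Channel and spatial attention applied in parallel (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        # Channel branch: squeeze spatial dims, re-weight channels
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial branch: compress channels, produce an H x W attention map
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        channel_att = self.channel_fc(x)    # (B, C, 1, 1)
        spatial_att = self.spatial_conv(x)  # (B, 1, H, W)
        # Parallel combination: both maps re-weight the same input features
        return x * channel_att + x * spatial_att

# Example: attend over a batch of crack feature maps
feats = torch.randn(2, 64, 128, 128)
out = ParallelAttention(64)(feats)
print(out.shape)  # torch.Size([2, 64, 128, 128])
```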
Multi-Layered Deep Learning Features Fusion for Human Action Recognition
2
Authors: Sadia Kiran, Muhammad Attique Khan, Muhammad Younus Javed, Majed Alhaisoni, Usman Tariq, Yunyoung Nam, Robertas Damaševicius, Muhammad Sharif. Computers, Materials & Continua (SCIE/EI), 2021, No. 12, pp. 4061-4075 (15 pages)
Human Action Recognition (HAR) has been an active research topic in machine learning for the last few decades. Visual surveillance, robotics, and pedestrian detection are the main applications of action recognition. Computer vision researchers have introduced many HAR techniques, but they still face challenges such as redundant features and the cost of computing. In this article, we propose a new deep-learning-based method for HAR. In the proposed method, video frames are first pre-processed using a global contrast approach and then used to train a deep learning model through domain transfer learning. The pre-trained ResNet-50 model is used as the deep learning model in this work. Features are extracted from two layers: the Global Average Pool (GAP) layer and a Fully Connected (FC) layer. The features of both layers are fused by Canonical Correlation Analysis (CCA). Features are then selected using a Shannon-entropy-based threshold function. The selected features are finally passed to multiple classifiers for final classification. Experiments are conducted on five publicly available datasets: IXMAS, UCF Sports, YouTube, UT-Interaction, and KTH. The accuracies on these datasets were 89.6%, 99.7%, 100%, 96.7% and 96.6%, respectively. Comparison with existing techniques has shown that the proposed method provides improved accuracy for HAR. The proposed method is also computationally fast in terms of execution time.
Keywords: action recognition, transfer learning, features fusion, features selection, classification
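As a rough illustration of the CCA-based fusion step described above, the sketch below projects two feature sets (standing in for the GAP and FC descriptors) into a shared canonical space with scikit-learn and concatenates the projections. The dimensions and the simple concatenation strategy are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Hypothetical per-video features from two network layers
rng = np.random.default_rng(0)
gap_feats = rng.normal(size=(300, 128))  # stand-in for Global Average Pool features
fc_feats = rng.normal(size=(300, 64))    # stand-in for Fully Connected features

# Learn correlated projections and fuse by concatenating the canonical variates
cca = CCA(n_components=32)
gap_c, fc_c = cca.fit_transform(gap_feats, fc_feats)
fused = np.concatenate([gap_c, fc_c], axis=1)  # (300, 64) fused descriptor
print(fused.shape)
```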
Driver Fatigue Detection System Based on Colored and Infrared Eye Features Fusion
3
Authors: Yuyang Sun, Peizhou Yan, Zhengzheng Li, Jiancheng Zou, Don Hong. Computers, Materials & Continua (SCIE/EI), 2020, No. 6, pp. 1563-1574 (12 pages)
Real-time detection of driver fatigue status is of great significance for road traffic safety. In this paper, a novel driver fatigue detection method is proposed that can detect the driver's fatigue status around the clock. The driver's face images are captured by a camera with a colored lens and an infrared lens mounted above the dashboard. The landmarks of the driver's face are labeled and the eye area is segmented. By calculating the aspect ratios of the eyes, the duration of eye closure, the frequency of blinks and the PERCLOS of both the colored and infrared channels, fatigue can be detected. Based on the change of light intensity detected by a photosensitive device, the weight matrix of the colored features and the infrared features is adjusted adaptively to reduce the impact of lighting on fatigue detection. Video samples of the driver's face were recorded in a test vehicle. After training the classification model, the results showed that our method has high accuracy for driver fatigue detection in both daytime and nighttime.
Keywords: driver fatigue detection, feature fusion, colored and infrared eye features
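A minimal sketch of the eye-aspect-ratio and PERCLOS computations referenced in this abstract is given below, assuming the common six-point eye landmark layout; the closure threshold and window length are illustrative values rather than the paper's calibrated parameters.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: (6, 2) array of landmarks ordered around the eye contour."""
    v1 = np.linalg.norm(eye[1] - eye[5])  # first vertical distance
    v2 = np.linalg.norm(eye[2] - eye[4])  # second vertical distance
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal distance
    return (v1 + v2) / (2.0 * h)

def perclos(ear_series, closed_threshold=0.2):
    """Fraction of frames in the window with the eye considered closed."""
    ear_series = np.asarray(ear_series)
    return float(np.mean(ear_series < closed_threshold))

# Example: a hypothetical 30-frame window of EAR values
ears = [0.31, 0.30, 0.12, 0.11, 0.10] * 6
print(perclos(ears))  # 0.6 -> likely fatigued under a typical PERCLOS rule
```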
One-Class Arabic Signature Verification: A Progressive Fusion of Optimal Features
4
Authors: Ansam A. Abdulhussien, Mohammad F. Nasrudin, Saad M. Darwish, Zaid A. Alyasseri. Computers, Materials & Continua (SCIE/EI), 2023, No. 4, pp. 219-242 (24 pages)
Signature verification is regarded as the most beneficial behavioral-characteristic-based biometric feature in security and fraud protection. It is also a popular biometric authentication technology in forensic and commercial transactions due to its various advantages, including noninvasiveness, user-friendliness, and social and legal acceptability. According to the literature, extensive research has been conducted on signature verification systems in a variety of languages, including English, Hindi, Bangla, and Chinese. However, Arabic Offline Signature Verification (OSV) is still a challenging issue that has not been investigated as much by researchers, because the Arabic script is distinguished by changing letter shapes, diacritics, ligatures, and overlapping, making verification more difficult. Recently, signature verification systems have shown promising results for recognizing signatures as genuine or forged; however, performance on skilled forgery detection is still unsatisfactory. Most existing methods require many learning samples to improve verification accuracy, which is a major drawback because the number of available signature samples is often limited in the practical application of signature verification systems. This study addresses these issues by presenting an OSV system based on multi-feature fusion and discriminant feature selection using a genetic algorithm (GA). In contrast to existing methods, which use multiclass learning approaches, this study uses a one-class learning strategy to address imbalanced signature data in the practical application of a signature verification system. The proposed approach is tested on three signature databases (SID): Arabic handwriting signatures, CEDAR (Center of Excellence for Document Analysis and Recognition), and UTSIG (University of Tehran Persian Signature). Experimental results show that the proposed system outperforms existing systems in terms of reducing the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER), achieving a 5% improvement.
Keywords: offline signature verification, biometric system, feature fusion, one-class classifier
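The one-class learning strategy mentioned above can be illustrated with the short sketch below: a One-Class SVM is fitted only on genuine-signature feature vectors and then flags forgeries as outliers. The feature dimensionality and the nu/gamma settings are placeholder assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
genuine = rng.normal(loc=0.0, scale=1.0, size=(40, 64))  # genuine-signature features
forged = rng.normal(loc=2.5, scale=1.0, size=(20, 64))   # skilled-forgery features

# Train on genuine samples only; forgeries are never seen during training
model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(genuine)

frr = np.mean(model.predict(genuine) == -1)  # genuine rejected -> False Rejection Rate
far = np.mean(model.predict(forged) == 1)    # forgery accepted -> False Acceptance Rate
print(f"FRR={frr:.2f}, FAR={far:.2f}")
```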
Attention Guided Multi Scale Feature Fusion Network for Automatic Prostate Segmentation
5
Authors: Yuchun Li, Mengxing Huang, Yu Zhang, Zhiming Bai. Computers, Materials & Continua (SCIE/EI), 2024, No. 2, pp. 1649-1668 (20 pages)
The precise and automatic segmentation of prostate magnetic resonance imaging (MRI) images is vital for assisting doctors in diagnosing prostate diseases. In recent years, many advanced methods have been applied to prostate segmentation, but due to the variability caused by prostate diseases, automatic segmentation of the prostate presents significant challenges. In this paper, we propose an attention-guided multi-scale feature fusion network (AGMSF-Net) to segment prostate MRI images. We propose an attention mechanism for extracting multi-scale features, and introduce a 3D transformer module to enhance global feature representation by adding it during the transition phase from encoder to decoder. In the decoder stage, a feature fusion module is proposed to obtain global context information. We evaluate our model on prostate MRI images acquired from a local hospital. The relative volume difference (RVD) and Dice similarity coefficient (DSC) between the automatic prostate segmentation results and the ground truth were 1.21% and 93.68%, respectively. To quantitatively evaluate prostate volume on MRI, which is of considerable clinical significance, we propose the unique AGMSF-Net. The performance evaluation and validation experiments have demonstrated the effectiveness of our method for automatic prostate segmentation.
Keywords: prostate segmentation, multi-scale attention, 3D Transformer, feature fusion, MRI
Olive Leaf Disease Detection via Wavelet Transform and Feature Fusion of Pre-Trained Deep Learning Models
6
Authors: Mahmood A. Mahmood, Khalaf Alsalem. Computers, Materials & Continua (SCIE/EI), 2024, No. 3, pp. 3431-3448 (18 pages)
Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses. Early detection of these diseases is essential for effective management. We propose a novel transformed-wavelet, feature-fused, pre-trained deep learning model for detecting olive leaf diseases. The proposed model combines wavelet transforms with pre-trained deep learning models to extract discriminative features from olive leaf images. The model has four main phases: preprocessing using data augmentation, three-level wavelet transformation, learning using pre-trained deep learning models, and a fused deep learning model. In the preprocessing phase, the image dataset is augmented using techniques such as resizing, rescaling, flipping, rotation, zooming, and contrasting. In the wavelet transformation phase, the augmented images are decomposed into three frequency levels. Three pre-trained deep learning models, EfficientNet-B7, DenseNet-201, and ResNet-152-V2, are used in the learning phase. The models were trained using the approximation images of the third-level sub-band of the wavelet transform. In the fused phase, the fused model consists of a merge layer, three dense layers, and two dropout layers. The proposed model was evaluated using a dataset of images of healthy and infected olive leaves. It achieved an accuracy of 99.72% in the diagnosis of olive leaf diseases, which exceeds the accuracy of other methods reported in the literature. This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
Keywords: olive leaf diseases, wavelet transform, deep learning, feature fusion
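The three-level wavelet step described above can be sketched with PyWavelets as below: the image is decomposed to level 3 and the level-3 approximation sub-band is kept as the input for the pre-trained networks. The wavelet family ('haar') and grayscale input are assumptions made for illustration only.

```python
import numpy as np
import pywt

# Hypothetical grayscale olive-leaf image (H x W)
leaf = np.random.rand(224, 224)

# Three-level 2D discrete wavelet transform
coeffs = pywt.wavedec2(leaf, wavelet="haar", level=3)
approx_level3 = coeffs[0]   # approximation sub-band at the deepest level
print(approx_level3.shape)  # (28, 28) for a 224x224 input with 'haar'

# The approximation image would then be resized and fed to the
# pre-trained CNNs (EfficientNet-B7, DenseNet-201, ResNet-152-V2).
```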
Cross-Dimension Attentive Feature Fusion Network for Unsupervised Time-Series Anomaly Detection
7
Authors: Rui Wang, Yao Zhou, Guangchun Luo, Peng Chen, Dezhong Peng. Computer Modeling in Engineering & Sciences (SCIE/EI), 2024, No. 6, pp. 3011-3027 (17 pages)
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
Keywords: time series anomaly detection, unsupervised feature learning, feature fusion
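The abstract mentions using a fast Fourier transform to move the series into 2D space. One common way to do this, used here purely as an illustrative assumption and not necessarily CAFFN's exact mechanism, is to estimate the dominant period from the FFT amplitude spectrum and fold the 1D series into a (periods × period-length) 2D array.

```python
import numpy as np

def series_to_2d(x):
    """Fold a 1D series into 2D using its dominant FFT period (illustrative)."""
    spectrum = np.abs(np.fft.rfft(x))
    spectrum[0] = 0.0                        # ignore the DC component
    freq = np.argmax(spectrum)               # dominant frequency index
    period = max(2, len(x) // max(freq, 1))  # corresponding period length
    n_rows = len(x) // period
    return x[: n_rows * period].reshape(n_rows, period)

# Example: noisy periodic signal with period 50
t = np.arange(1000)
signal = np.sin(2 * np.pi * t / 50) + 0.1 * np.random.randn(1000)
grid = series_to_2d(signal)
print(grid.shape)  # (20, 50): ready for 2D feature extraction
```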
DCFNet:An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
8
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences (SCIE/EI), 2024, No. 7, pp. 1103-1128 (26 pages)
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has limitations in two key aspects. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded in methods that combine CNNs and Transformers. To address these issues, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNNs to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) serves to aggregate multi-scale features, and the Feature Fusion Module (FFM) is employed to effectively aggregate the prominent dual-branch feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet exhibits enhanced segmentation accuracy. Compared with other state-of-the-art (SOTA) methods, our segmentation framework is highly competitive. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial diagnoses of lesion areas in advance.
Keywords: convolutional neural networks, Swin Transformer, dual branch, medical image segmentation, feature cross fusion
A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
9
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract its multi-scale feature information. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet, image classification, lightweight convolutional neural network, depthwise dilated separable convolution, hierarchical multi-scale feature fusion
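A minimal PyTorch sketch of a depthwise dilated separable convolution of the kind described above is shown below: a dilated depthwise 3x3 convolution followed by a 1x1 pointwise convolution. The channel counts and dilation rate are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Depthwise dilated 3x3 conv + pointwise 1x1 conv (illustrative sketch)."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        self.depthwise = nn.Conv2d(
            in_ch, in_ch, kernel_size=3, padding=dilation,
            dilation=dilation, groups=in_ch, bias=False)  # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
y = DepthwiseDilatedSeparableConv(32, 64, dilation=2)(x)
print(y.shape)  # torch.Size([1, 64, 56, 56]) -- spatial size preserved
```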
GaitDONet: Gait Recognition Using Deep Features Optimization and Neural Network
10
Authors: Muhammad Attique Khan, Awais Khan, Majed Alhaisoni, Abdullah Alqahtani, Ammar Armghan, Sara A. Althubiti, Fayadh Alenezi, Senghour Mey, Yunyoung Nam. Computers, Materials & Continua (SCIE/EI), 2023, No. 6, pp. 5087-5103 (17 pages)
Human gait recognition (HGR) is the process of identifying a subject (human) based on their walking pattern. Each subject has a unique walking pattern that cannot be simulated by other subjects. However, gait recognition is not easy, and recognition becomes more difficult if the subject carries an object such as a bag or coat. This article proposes an automated architecture based on deep feature optimization for HGR. To our knowledge, it is the first architecture in which features are fused using multiset canonical correlation analysis (MCCA). In the proposed method, original video frames are processed for all 11 selected angles of the CASIA B dataset and used to train two fine-tuned deep learning models, SqueezeNet and EfficientNet. Deep transfer learning was used to train both fine-tuned models on the selected angles, yielding two new targeted models that were later used for feature engineering. Features are extracted from the deep layer of both fine-tuned models and fused into one vector using MCCA. An improved manta ray foraging optimization algorithm is also proposed to select the best features from the fused feature matrix, which are then classified using a narrow neural network classifier. The experimental process was conducted on all 11 angles of the large multi-view gait dataset (CASIA B) and obtained improved accuracy compared with state-of-the-art techniques. Moreover, a detailed confidence-interval-based analysis shows the effectiveness of the proposed architecture for HGR.
Keywords: human gait recognition, biometric, deep learning, features fusion, optimization, neural network
A Framework of Deep Optimal Features Selection for Apple Leaf Diseases Recognition
11
Authors: Samra Rehman, Muhammad Attique Khan, Majed Alhaisoni, Ammar Armghan, Usman Tariq, Fayadh Alenezi, Ye Jin Kim, Byoungchol Chang. Computers, Materials & Continua (SCIE/EI), 2023, No. 4, pp. 697-714 (18 pages)
Identifying fruit diseases manually is time-consuming, requires expertise, and is expensive; thus, a computer-based automated system is widely required. Fruit diseases affect not only the quality but also the quantity of the crop. As a result, it is possible to detect the disease early on and cure the fruits using computer-based techniques. However, computer-based methods face several challenges, including low contrast, a lack of datasets for training a model, and inappropriate feature extraction for final classification. In this paper, we propose an automated framework for detecting apple fruit leaf diseases using a CNN and a hybrid optimization algorithm. Data augmentation is performed initially to balance the selected apple dataset. After that, two pre-trained deep models are fine-tuned and trained using transfer learning. Then, a fusion technique named Parallel Correlation Threshold (PCT) is proposed. The fused feature vector is optimized in the next step using a hybrid optimization algorithm. The selected features are finally classified using machine learning algorithms. Four different experiments have been carried out on the augmented Plant Village dataset and yielded a best accuracy of 99.8%. The accuracy of the proposed framework is also compared to that of several neural networks, and it outperforms them all.
Keywords: convolutional neural networks, deep learning, features fusion, features optimization, classification
HRNetO:Human Action Recognition Using Unified Deep Features Optimization Framework
12
Authors: Tehseen Ahsan, Sohail Khalid, Shaheryar Najam, Muhammad Attique Khan, Ye Jin Kim, Byoungchol Chang. Computers, Materials & Continua (SCIE/EI), 2023, No. 4, pp. 1089-1105 (17 pages)
Human action recognition (HAR) attempts to understand a subject's behavior and assign a label to each action performed. It is appealing because it has a wide range of applications in computer vision, such as video surveillance and smart cities. Many attempts have been made in the literature to develop an effective and robust framework for HAR. Still, the process remains difficult and may result in reduced accuracy due to several challenges, such as similarity among actions, extraction of essential features, and reduction of irrelevant features. In this work, we propose an end-to-end framework using deep learning and an improved tree seed optimization algorithm for accurate HAR. The proposed design consists of a few significant steps. In the first step, frame preprocessing is performed. In the second step, two pre-trained deep learning models are fine-tuned and trained through deep transfer learning using the preprocessed video frames. In the next step, the deep learning features of both fine-tuned models are fused using a new Parallel Standard Deviation Padding Max Value approach. The fused features are further optimized using an improved tree seed algorithm, and the selected best features are finally classified using machine learning classifiers. The experiments were carried out on five publicly available datasets, including UT-Interaction, Weizmann, KTH, Hollywood, and IXMAS, and achieved higher accuracy than previous techniques.
Keywords: action recognition, features fusion, deep learning, features selection
Image Classification Based on the Fusion of Complementary Features (Cited by 3)
13
Authors: Huilin Gao, Wenjie Chen. Journal of Beijing Institute of Technology (EI/CAS), 2017, No. 2, pp. 197-205 (9 pages)
Image classification based on bag-of-words (BOW) has broad application prospects in the pattern recognition field, but shortcomings such as reliance on a single feature and low classification accuracy are apparent. To deal with this problem, this paper proposes to combine two ingredients: (i) three mutually complementary features are adopted to describe the images, including the pyramid histogram of words (PHOW), the pyramid histogram of color (PHOC) and the pyramid histogram of orientated gradients (PHOG); (ii) an adaptive feature-weight-adjusted image categorization algorithm based on the SVM and decision-level fusion of multiple features is employed. Experiments are carried out on the Caltech101 database, which confirms the validity of the proposed approach. The experimental results show that the classification accuracy of the proposed method is 7%-14% higher than that of traditional BOW methods. With full utilization of global, local and spatial information, the algorithm describes the feature information of the image much more completely and flexibly through multi-feature fusion and the pyramid structure formed by spatial multi-resolution decomposition of the image. Significant improvements to the classification accuracy are achieved as a result.
Keywords: image classification, complementary features, bag-of-words (BOW), feature fusion
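The decision-level fusion of multiple features mentioned above can be sketched as below: one SVM is trained per feature type (PHOW, PHOC, PHOG) and their class-probability outputs are combined with per-feature weights. The weights here are fixed placeholders; the paper adjusts them adaptively.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n, classes = 300, 5
y = rng.integers(0, classes, size=n)
# Hypothetical per-image descriptors for three feature types
features = {"PHOW": rng.normal(size=(n, 200)),
            "PHOC": rng.normal(size=(n, 120)),
            "PHOG": rng.normal(size=(n, 80))}
weights = {"PHOW": 0.5, "PHOC": 0.25, "PHOG": 0.25}  # placeholder weights

# One probabilistic SVM per feature, fused at the decision level
probs = np.zeros((n, classes))
for name, X in features.items():
    clf = SVC(kernel="rbf", probability=True).fit(X, y)
    probs += weights[name] * clf.predict_proba(X)

pred = probs.argmax(axis=1)
print("fused training accuracy:", np.mean(pred == y))
```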
Three dimensional apple tree organs classification and yield estimation algorithm based on multifeatures fusion and support vector machine (Cited by 3)
14
Authors: Luzhen Ge, Kunlin Zou, Hang Zhou, Xiaowei Yu, Yuzhi Tan, Chunlong Zhang, Wei Li. Information Processing in Agriculture (EI), 2022, No. 3, pp. 431-442 (12 pages)
The automatic classification of apple tree organs is of great significance for automatic pruning of apple trees, automatic picking of apple fruits, and estimation of fruit yield. However, dense foliage, partial occlusion and clustering of apple fruits all contribute to the difficulty of organ classification and yield estimation for apple trees. In this paper, a method based on color and shape multi-feature fusion and a Support Vector Machine (SVM) for 3D apple tree organ classification and yield estimation is proposed. The method was designed for dwarf, densely planted apple trees at the early and late maturity stages. First, 196-dimensional feature vectors composed of Red Green Blue (RGB), Hue Saturation Value (HSV), curvature, Fast Point Feature Histogram (FPFH), and Spin Image descriptors were extracted. Then an SVM with a linear kernel was trained and used for apple tree organ classification. A position-weighted smoothing algorithm was then applied to smooth the classified organs, and an agglomerative hierarchical clustering algorithm was used to recognize individual apple fruits for yield estimation. On the same training and test sets, the experimental results showed that the linear-kernel SVM outperformed the KNN and ensemble algorithms. The recall, precision and F1 score of the proposed method for yield estimation were 93.75%, 96.15% and 94.93%, respectively. In summary, to solve the problems of apple tree organ classification and yield estimation in natural apple orchards, a novel method based on multi-feature fusion and SVM was proposed and achieved good performance. Moreover, the proposed method can provide technical support for automatic apple picking, automatic pruning of fruit trees, and automatic information acquisition and management in orchards.
Keywords: 3D point cloud, organs classification, yield estimation, feature fusion, SVM
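A rough sketch of the classification-then-counting pipeline described above is given below: a linear SVM labels each 3D point from its fused feature vector, and agglomerative clustering groups the points labeled as fruit so that the cluster count approximates the number of apples. The distance threshold, label encoding and random features are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(7)
# Hypothetical fused per-point features (RGB + HSV + curvature + FPFH + spin image)
X_train = rng.normal(size=(600, 196))
y_train = rng.integers(0, 3, size=600)        # 0=branch, 1=leaf, 2=fruit (placeholder labels)
svm = SVC(kernel="linear").fit(X_train, y_train)

# Classify a new point cloud, then cluster the fruit points spatially
X_new = rng.normal(size=(400, 196))
xyz_new = rng.uniform(0, 2.0, size=(400, 3))  # 3D coordinates of the same points
labels = svm.predict(X_new)
fruit_xyz = xyz_new[labels == 2]

if len(fruit_xyz) >= 2:
    clusters = AgglomerativeClustering(n_clusters=None,
                                       distance_threshold=0.08).fit(fruit_xyz)
    print("estimated apple count:", clusters.n_clusters_)
```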
Residual Feature Attentional Fusion Network for Lightweight Chest CT Image Super-Resolution (Cited by 1)
15
Authors: Kun Yang, Lei Zhao, Xianghui Wang, Mingyang Zhang, Linyan Xue, Shuang Liu, Kun Liu. Computers, Materials & Continua (SCIE/EI), 2023, No. 6, pp. 5159-5176 (18 pages)
The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms applied to CT images to improve their resolution. However, most existing SR algorithms are studied on natural images, which are not suitable for medical images, and most of these algorithms improve the reconstruction quality by increasing the network depth, which is not suitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that can extract CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to exploit the high-frequency detail information extracted by the CFEB as much as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which can utilize the hierarchical features more effectively than dense concatenation by progressively aggregating the feature information at various levels. Numerous experiments show that our method performs better than most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the suboptimal method, while the number of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
Keywords: super-resolution, COVID-19, chest CT, lightweight network, contextual feature extraction, attentional feature fusion
Siamese Dense Pixel-Level Fusion Network for Real-Time UAV Tracking (Cited by 1)
16
Authors: Zhenyu Huang, Gun Li, Xudong Sun, Yong Chen, Jie Sun, Zhangsong Ni, Yang Yang. Computers, Materials & Continua (SCIE/EI), 2023, No. 9, pp. 3219-3238 (20 pages)
Onboard visual object tracking in unmanned aerial vehicles (UAVs) has attracted much interest due to its versatility. Meanwhile, due to their high precision, Siamese networks are becoming hot spots in visual object tracking. However, most Siamese trackers fail to balance tracking accuracy and time within the limited onboard computational resources of UAVs. To meet the tracking precision and real-time requirements, this paper proposes a Siamese dense pixel-level network for UAV object tracking named SiamDPL. Specifically, the Siamese network extracts features of the search region and the template region through a parameter-shared backbone network, then performs correlation matching to obtain the candidate region with high similarity. To improve the matching of template and search features, this paper designs a dense pixel-level feature fusion module to enhance the matching ability by pixel-wise correlation and enrich the feature diversity by dense connection. An attention module composed of self-attention and channel attention is introduced to learn global context information and selectively emphasize the target feature region in the spatial and channel dimensions. In addition, a target localization module is designed to improve target location accuracy. Compared with other advanced trackers, experiments on two public benchmarks, UAV123@10fps and UAV20L from the unmanned air vehicle 123 (UAV123) dataset, show that SiamDPL can achieve superior performance and low complexity with a running speed of 100.1 fps on an NVIDIA TITAN RTX.
Keywords: Siamese network, UAV object tracking, dense pixel-level feature fusion, attention module, target localization
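The pixel-wise correlation used in the matching step described above can be sketched in PyTorch as below: each spatial position of the template feature map is treated as a 1x1 kernel and correlated with the search-region features, giving one response channel per template position. Tensor sizes are illustrative, not SiamDPL's actual configuration.

```python
import torch

def pixelwise_correlation(template, search):
    """template: (B, C, Ht, Wt), search: (B, C, Hs, Ws).
    Returns (B, Ht*Wt, Hs, Ws): one response map per template pixel."""
    b, c, ht, wt = template.shape
    kernels = template.reshape(b, c, ht * wt).permute(0, 2, 1)  # (B, Ht*Wt, C)
    feats = search.flatten(2)                                   # (B, C, Hs*Ws)
    resp = torch.bmm(kernels, feats)                            # (B, Ht*Wt, Hs*Ws)
    return resp.reshape(b, ht * wt, *search.shape[2:])

z = torch.randn(1, 256, 6, 6)    # template features
x = torch.randn(1, 256, 22, 22)  # search-region features
print(pixelwise_correlation(z, x).shape)  # torch.Size([1, 36, 22, 22])
```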
A Credit Card Fraud Detection Model Based on Multi-Feature Fusion and Generative Adversarial Network (Cited by 1)
17
Authors: Yalong Xie, Aiping Li, Biyin Hu, Liqun Gao, Hongkui Tu. Computers, Materials & Continua (SCIE/EI), 2023, No. 9, pp. 2707-2726 (20 pages)
Credit Card Fraud Detection (CCFD) is an essential technology for banking institutions to control fraud risks and safeguard their reputation. Class imbalance and insufficient representation of feature data relating to credit card transactions are two prevalent issues in the current field of CCFD research, and they significantly impact the performance of classification models. To address these issues, this research proposes a novel CCFD model based on Multi-feature Fusion and Generative Adversarial Networks (MFGAN). The MFGAN model consists of two modules: a multi-feature fusion module for integrating static and dynamic behavior data of cardholders into a unified high-dimensional feature space, and a balance module based on a generative adversarial network to decrease the class imbalance ratio. The effectiveness of the MFGAN model is validated on two real credit card datasets. The impacts of different class balance ratios on the performance of four resampling models are analyzed, and the contribution of the two modules to the performance of the MFGAN model is investigated via ablation experiments. Experimental results demonstrate that the proposed model performs better than state-of-the-art models in terms of recall, F1, and Area Under the Curve (AUC) metrics, which means that the MFGAN model can help banks find more fraudulent transactions and reduce fraud losses.
Keywords: credit card fraud detection, imbalanced classification, feature fusion, generative adversarial networks, anti-fraud systems
Behavior Recognition of the Elderly in Indoor Environment Based on Feature Fusion of Wi-Fi Perception and Videos (Cited by 1)
18
Authors: Yuebin Song, Chunling Fan. Journal of Beijing Institute of Technology (EI/CAS), 2023, No. 2, pp. 142-155 (14 pages)
With the intensifying aging of the population, the phenomenon of the elderly living alone is also increasing. Therefore, using modern Internet of Things technology to monitor the daily indoor behavior of the elderly is a meaningful study. Video-based action recognition tasks are easily affected by object occlusion and weak ambient light, resulting in poor recognition performance. Therefore, this paper proposes an indoor human behavior recognition method based on the fusion of wireless fidelity (Wi-Fi) perception and video features, utilizing the ability of Wi-Fi signals to carry environmental information during propagation. This paper uses the public Wi-Fi-based activity recognition dataset (WIAR), which contains Wi-Fi channel state information and essential action videos, and extracts video feature vectors and Wi-Fi signal feature vectors from the dataset through a two-stream convolutional neural network and standard statistical algorithms, respectively. The two sets of feature vectors are then fused, and finally the action classification and recognition are performed by a support vector machine (SVM). The experiments compare the two-stream network model with the method in this paper under three different environments, and the accuracy of action recognition after adding Wi-Fi signal feature fusion is improved by 10% on average.
Keywords: human behavior recognition, two-stream convolutional neural network, channel state information, feature fusion, support vector machine (SVM)
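A minimal sketch of the fusion-then-classify step described above: a video feature vector (standing in for the two-stream network output) and a vector of simple statistics computed from Wi-Fi CSI amplitudes are concatenated and fed to an SVM. The feature dimensions and the statistics chosen here are assumptions for illustration, not the paper's exact feature set.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
n_samples, n_classes = 200, 6

video_feats = rng.normal(size=(n_samples, 512))  # hypothetical two-stream features
csi = rng.normal(size=(n_samples, 90, 500))      # CSI: (samples, subcarriers, time)

# Simple per-subcarrier statistics as the Wi-Fi feature vector
wifi_feats = np.concatenate([csi.mean(axis=2),
                             csi.std(axis=2),
                             csi.max(axis=2) - csi.min(axis=2)], axis=1)

fused = np.concatenate([video_feats, wifi_feats], axis=1)  # (200, 512 + 270)
labels = rng.integers(0, n_classes, size=n_samples)
clf = SVC(kernel="rbf").fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```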
Robust Visual Tracking with Hierarchical Deep Features Weighted Fusion
19
Authors: Dianwei Wang, Chunxiang Xu, Daxiang Li, Ying Liu, Zhijie Xu, Jing Wang. Journal of Beijing Institute of Technology (EI/CAS), 2019, No. 4, pp. 770-776 (7 pages)
To solve the problem of the low robustness of trackers under significant appearance changes in complex backgrounds, a novel moving target tracking method based on hierarchical deep features weighted fusion and a correlation filter is proposed. Firstly, multi-layer features are extracted by a deep model pre-trained on massive object recognition datasets. The linearly separable features of the Relu3-1, Relu4-1 and Relu5-4 layers of VGG-Net-19 are especially suitable for target tracking. Then, correlation filters over the hierarchical convolutional features are learned to generate their correlation response maps. Finally, a novel weight adjustment approach is presented to fuse the response maps; the position of the maximum value of the final response map gives the location of the target. Extensive experiments on object tracking benchmark datasets demonstrate high robustness and recognition precision compared with several state-of-the-art trackers under different conditions.
Keywords: visual tracking, convolutional neural network, correlation filter, feature fusion
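The weighted fusion of per-layer correlation response maps described above can be sketched as below: each layer's response map is normalized and combined with a layer weight, and the peak of the fused map gives the predicted target position. The weights are placeholders rather than the adaptively adjusted values from the paper.

```python
import numpy as np

def fuse_response_maps(responses, weights):
    """responses: dict layer -> (H, W) correlation response map."""
    fused = np.zeros_like(next(iter(responses.values())))
    for layer, resp in responses.items():
        # Normalize each map to [0, 1] before weighting
        r = (resp - resp.min()) / (resp.max() - resp.min() + 1e-12)
        fused += weights[layer] * r
    return fused

rng = np.random.default_rng(5)
responses = {"relu3_1": rng.random((50, 50)),
             "relu4_1": rng.random((50, 50)),
             "relu5_4": rng.random((50, 50))}
weights = {"relu3_1": 0.25, "relu4_1": 0.5, "relu5_4": 1.0}  # placeholder weights

fused = fuse_response_maps(responses, weights)
target_pos = np.unravel_index(np.argmax(fused), fused.shape)
print("predicted target position (row, col):", target_pos)
```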
Adaptive Multi-Feature Fusion for Vehicle Micro-Motor Noise Recognition Considering Auditory Perception (Cited by 1)
20
Authors: Ting Zhao, Weiping Ding, Haibo Huang, Yudong Wu. Sound & Vibration (EI), 2023, No. 1, pp. 133-153 (21 pages)
The deployment of vehicle micro-motors has expanded with the progress of electrification and intelligent technologies. However, some micro-motors may exhibit design deficiencies, component wear, assembly errors, and other imperfections arising during the design or manufacturing phases. Consequently, these micro-motors may generate abnormal noises during operation, which substantially degrades the overall comfort of drivers and passengers. Automobile micro-motors exhibit a diverse array of structural variations, leading to a multitude of distinctive auditory irregularities. To identify these diverse forms of abnormal noise, this research presents a novel approach based on a vibro-acoustic fusion convolutional neural network (VAF-CNN). The method deploys distinct network branches to capture different features from the multi-sensor data while considering the auditory perception traits of the human auditory system. The intermediate layer adopts adaptive weighting of multi-sensor features, providing a calibration mechanism for the features from multiple sensors and enabling further refinement of features within each branch network. For optimal model efficacy, a feature fusion mechanism is implemented in the final layer. To substantiate the efficacy of the proposed approach, this paper first applies a data augmentation methodology inspired by a modified SpecAugment to the dataset of abnormal noise samples, covering scenarios both with and without in-vehicle interior noise; this mitigates the issue of limited sample availability. Comparative evaluations are then performed, contrasting the model built on single-sensor data against other feature fusion models that rely on multi-sensor data. The experimental results show that the proposed methodology yields higher recognition accuracy and greater resilience against interference. Moreover, it holds notable practical significance in the engineering domain, as it provides valuable support for the targeted management of noise emanating from vehicle micro-motors.
Keywords: auditory perception, multi-sensor, feature adaptive fusion, abnormal noise recognition, vehicle interior noise
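A minimal PyTorch sketch of adaptive weighting over multi-sensor feature branches, in the spirit of the intermediate layer described above, is shown below: a learnable weight per sensor is passed through a softmax and used to scale each branch's feature vector before fusion. The branch dimensions, sensor count and fusion layer are illustrative assumptions, not the VAF-CNN architecture.

```python
import torch
import torch.nn as nn

class AdaptiveSensorWeighting(nn.Module):
    """Learnable softmax weights over per-sensor feature vectors (illustrative)."""
    def __init__(self, n_sensors, feat_dim):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_sensors))  # one weight per sensor
        self.fuse = nn.Linear(n_sensors * feat_dim, feat_dim)

    def forward(self, sensor_feats):
        # sensor_feats: (B, n_sensors, feat_dim), e.g. vibration + microphone branches
        w = torch.softmax(self.logits, dim=0)      # (n_sensors,)
        weighted = sensor_feats * w.view(1, -1, 1)  # re-weight each branch
        return self.fuse(weighted.flatten(1))       # fused representation

feats = torch.randn(4, 2, 128)  # batch of 4, two sensors, 128-dim features each
fused = AdaptiveSensorWeighting(n_sensors=2, feat_dim=128)(feats)
print(fused.shape)  # torch.Size([4, 128])
```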