Journal Articles
8,445 articles found
1. A CNN-LSTM-PSO tool wear prediction method based on multi-channel feature fusion
Authors: Shuo WANG, Zhenliang YU, Yongqi GUO, Xu LIU 《Mechanical Engineering Science》 2022, No. 2, pp. 39-48, I0006 (11 pages)
In order to achieve predictive maintenance of CNC machining tools and to change tools intelligently before tool wear reaches a critical threshold, a CNN-LSTM tool wear prediction model with multi-channel feature fusion, optimized by the particle swarm optimization (PSO) algorithm, is proposed. First, the raw signals of seven channels of the machining process are collected using sensor technology and processed for noise reduction; second, the time-domain, frequency-domain and time-frequency-domain features of each channel signal are extracted, and a spatio-temporally correlated sample data set is constructed through dimensionality reduction and information fusion of these features; finally, the data set is input to the CNN-LSTM-PSO model for training and testing. The results show that the CNN-LSTM-PSO model can effectively predict tool wear, with a mean absolute error (MAE) of 0.5848, a root mean square error (RMSE) of 0.7281, and a coefficient of determination (R²) of 0.9964; compared with the BP, CNN, LSTM and CNN-LSTM models, its tool wear prediction accuracy improved by 7.56%, 2.60%, 2.98%, and 1.63%, respectively.
Keywords: feature fusion; CNN-LSTM; tool wear; life prediction
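The per-channel feature extraction and fusion step the abstract describes can be sketched with stdlib Python; the three time-domain statistics below are illustrative choices, since the abstract does not list the paper's exact feature set:

```python
import math

def time_domain_features(signal):
    """Extract simple time-domain features from one sensor channel.

    A minimal sketch of the per-channel feature-extraction step; the
    statistics here are illustrative stand-ins for the paper's features.
    """
    n = len(signal)
    mean = sum(signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)   # root mean square
    peak = max(abs(x) for x in signal)                # peak amplitude
    return {"mean": mean, "rms": rms, "crest_factor": peak / rms}

def fuse_channels(channels):
    """Concatenate per-channel feature dicts into one flat feature vector."""
    fused = []
    for ch in channels:
        feats = time_domain_features(ch)
        fused.extend([feats["mean"], feats["rms"], feats["crest_factor"]])
    return fused

# Seven channels as in the abstract (here: toy sinusoidal signals)
channels = [[math.sin(2 * math.pi * k / 32) for k in range(256)] for _ in range(7)]
vec = fuse_channels(channels)
print(len(vec))  # 7 channels x 3 features = 21
```

Frequency-domain and time-frequency features would be appended to the same vector before the dimensionality-reduction and fusion steps described in the abstract.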
2. Cross-Dimension Attentive Feature Fusion Network for Unsupervised Time-Series Anomaly Detection (Cited 1 time)
Authors: Rui Wang, Yao Zhou, Guangchun Luo, Peng Chen, Dezhong Peng 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 6, pp. 3011-3027 (17 pages)
Time series anomaly detection is crucial in various industrial applications to identify unusual behaviors within time series data. Due to the challenges associated with annotating anomaly events, time series reconstruction has become a prevalent approach for unsupervised anomaly detection. However, effectively learning representations and achieving accurate detection results remain challenging due to the intricate temporal patterns and dependencies in real-world time series. In this paper, we propose a cross-dimension attentive feature fusion network for time series anomaly detection, referred to as CAFFN. Specifically, a series and feature mixing block is introduced to learn representations in 1D space. Additionally, a fast Fourier transform is employed to convert the time series into 2D space, providing the capability for 2D feature extraction. Finally, a cross-dimension attentive feature fusion mechanism is designed that adaptively integrates features across different dimensions for anomaly detection. Experimental results on real-world time series datasets demonstrate that CAFFN performs better than other competing methods in time series anomaly detection.
Keywords: time series anomaly detection; unsupervised feature learning; feature fusion
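FFT-based 1D-to-2D conversion generally means folding a series at its dominant period so that periodic structure lines up as rows of an "image". The stdlib-only sketch below uses a naive DFT in place of an FFT, and the folding rule is an assumption for illustration, not CAFFN's exact design:

```python
import cmath
import math

def dominant_period(series):
    """Locate the strongest frequency with a naive DFT and return its period."""
    n = len(series)
    mean = sum(series) / n
    x = [v - mean for v in series]
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):  # skip the DC component
        coeff = sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return n // best_k

def to_2d(series):
    """Fold the series into rows of one period each, yielding a 2D 'image'."""
    p = dominant_period(series)
    rows = len(series) // p
    return [series[r * p:(r + 1) * p] for r in range(rows)]

series = [math.sin(2 * math.pi * t / 16) for t in range(128)]
grid = to_2d(series)
print(len(grid), len(grid[0]))  # 8 rows of 16 samples
```

Once folded, any 2D feature extractor (e.g. a small CNN) can operate on the grid, which is the capability the abstract refers to.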
3. FusionNN: A Semantic Feature Fusion Model Based on Multimodal for Web Anomaly Detection
Authors: Li Wang, Mingshan Xia, Hao Hu, Jianfang Li, Fengyao Hou, Gang Chen 《Computers, Materials & Continua》 SCIE EI, 2024, No. 5, pp. 2991-3006 (16 pages)
With the rapid development of mobile communication and the Internet, previous web anomaly detection and identification models were built relying on security experts' empirical knowledge and attack features. Although this approach can achieve higher detection performance, it requires huge human labor and resources to maintain the feature library. In contrast, semantic feature engineering can dynamically discover new semantic features and optimize feature selection by automatically analyzing the semantic information contained in the data itself, thus reducing dependence on prior knowledge. However, current semantic features still suffer from singularity of semantic expression, as they are extracted from a single semantic mode such as word segmentation, character segmentation, or arbitrary semantic feature extraction. This paper extracts features of web requests at dual semantic granularity and proposes a semantic feature fusion method to solve the above problems. The method first preprocesses web requests and extracts word-level and character-level semantic features of URLs via a convolutional neural network (CNN), respectively; three loss functions are then constructed to reduce the losses between features, labels and categories. Experiments on the HTTP CSIC 2010, Malicious URLs and HttpParams datasets verify the proposed method. Results show that, compared with machine learning, deep learning methods and the BERT model, the proposed method has better detection performance, achieving the best detection rate of 99.16% on the HttpParams dataset.
Keywords: feature fusion; web anomaly detection; multimodal; convolutional neural network (CNN); semantic feature extraction
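The dual-granularity idea, the same URL viewed both as characters and as words, can be illustrated with stdlib tokenizers; the delimiter set below is a hypothetical segmentation rule, not the one used in the paper:

```python
import re

def char_tokens(url):
    """Character-level view: every character is one token."""
    return list(url)

def word_tokens(url):
    """Word-level view: split on common URL delimiters (illustrative rule)."""
    return [t for t in re.split(r"[/?=&.:_\-]+", url) if t]

# A toy request containing an SQL-injection-style payload
url = "http://example.com/login?user=admin&id=1%27%20OR%201=1"
print(word_tokens(url))
```

Each view would feed its own CNN branch; the character view catches encoded payload fragments that word segmentation splits apart, which is why the paper fuses both.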
4. Hierarchical Optimization Method for Federated Learning with Feature Alignment and Decision Fusion
Authors: Ke Li, Xiaofeng Wang, Hu Wang 《Computers, Materials & Continua》 SCIE EI, 2024, No. 10, pp. 1391-1407 (17 pages)
In the realm of data privacy protection, federated learning aims to collaboratively train a global model. However, heterogeneous data between clients presents challenges, often resulting in slow convergence and inadequate accuracy of the global model. Utilizing shared feature representations alongside customized classifiers for individual clients emerges as a promising personalized solution. Nonetheless, previous research has frequently neglected the integration of global knowledge into local representation learning and the synergy between global and local classifiers, thereby limiting model performance. To tackle these issues, this study proposes a hierarchical optimization method for federated learning with feature alignment and the fusion of classification decisions (FedFCD). FedFCD regularizes the relationship between global and local feature representations to achieve alignment and incorporates decision information from the global classifier, facilitating the late fusion of decision outputs from both global and local classifiers. Additionally, FedFCD employs a hierarchical optimization strategy to flexibly optimize model parameters. Through experiments on the Fashion-MNIST, CIFAR-10 and CIFAR-100 datasets, we demonstrate the effectiveness and superiority of FedFCD. For instance, on the CIFAR-100 dataset, FedFCD exhibited a significant improvement in average test accuracy of 6.83% compared to four outstanding personalized federated learning approaches. Furthermore, extended experiments confirm the robustness of FedFCD across various hyperparameter values.
Keywords: federated learning; data heterogeneity; feature alignment; decision fusion; hierarchical optimization
5. Advancements in Remote Sensing Image Dehazing: Introducing URA-Net with Multi-Scale Dense Feature Fusion Clusters and Gated Jump Connection
Authors: Hongchi Liu, Xing Deng, Haijian Shao 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 9, pp. 2397-2424 (28 pages)
The degradation of optical remote sensing images due to atmospheric haze poses a significant obstacle, profoundly impeding their effective utilization across various domains. Dehazing methodologies have emerged as pivotal components of image preprocessing, fostering an improvement in the quality of remote sensing imagery. This enhancement renders remote sensing data more indispensable, thereby enhancing the accuracy of target identification. Conventional defogging techniques based on simplistic atmospheric degradation models have proven inadequate for mitigating non-uniform haze within remotely sensed images. In response to this challenge, a novel UNet Residual Attention Network (URA-Net) is proposed. This paradigmatic approach materializes as an end-to-end convolutional neural network distinguished by its utilization of multi-scale dense feature fusion clusters and gated jump connections. The essence of our methodology lies in local feature fusion within dense residual clusters, enabling the extraction of pertinent features from both preceding and current local data, depending on contextual demands. The intelligently orchestrated gated structures facilitate the propagation of these features to the decoder, resulting in superior outcomes in haze removal. Empirical validation through a plethora of experiments substantiates the efficacy of URA-Net, demonstrating its superior performance compared to existing methods when applied to established datasets for remote sensing image defogging. On the RICE-1 dataset, URA-Net achieves a Peak Signal-to-Noise Ratio (PSNR) of 29.07 dB, surpassing the Dark Channel Prior (DCP) by 11.17 dB, the All-in-One Network for Dehazing (AOD) by 7.82 dB, the Optimal Transmission Map and Adaptive Atmospheric Light for Dehazing (OTM-AAL) by 5.37 dB, Unsupervised Single Image Dehazing (USID) by 8.0 dB, and Superpixel-based Remote Sensing Image Dehazing (SRD) by 8.5 dB. Particularly noteworthy, on the SateHaze1k dataset, URA-Net attains preeminence in overall performance, yielding defogged images characterized by consistent visual quality. This underscores the contribution of the research to the advancement of remote sensing technology, providing a robust and efficient solution for alleviating the adverse effects of haze on image quality.
Keywords: remote sensing image; image dehazing; deep learning; feature fusion
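The PSNR figures quoted above map directly to per-pixel error; a quick stdlib check, assuming 8-bit pixels (peak value 255):

```python
import math

def psnr_db(mse, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB for a given mean squared error."""
    return 10.0 * math.log10(peak * peak / mse)

def mse_from_psnr(psnr, peak=255.0):
    """Invert the formula: the per-pixel MSE implied by a PSNR value."""
    return peak * peak / (10.0 ** (psnr / 10.0))

# URA-Net's reported 29.07 dB on RICE-1 corresponds to an average
# squared pixel error of roughly 80 on a 0-255 scale.
mse = mse_from_psnr(29.07)
assert abs(psnr_db(mse) - 29.07) < 1e-9  # round trip
```

Because PSNR is logarithmic, the quoted 11.17 dB gap over DCP means URA-Net's mean squared error is more than an order of magnitude smaller.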
6. Research on Multi-Scale Feature Fusion Network Algorithm Based on Brain Tumor Medical Image Classification
Authors: Yuting Zhou, Xuemei Yang, Junping Yin, Shiqi Liu 《Computers, Materials & Continua》 SCIE EI, 2024, No. 6, pp. 5313-5333 (21 pages)
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net performs best in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that the proposed HMAC-Net generalizes well.
Keywords: medical image classification; feature fusion; Transformer
7. Attention Guided Multi Scale Feature Fusion Network for Automatic Prostate Segmentation
Authors: Yuchun Li, Mengxing Huang, Yu Zhang, Zhiming Bai 《Computers, Materials & Continua》 SCIE EI, 2024, No. 2, pp. 1649-1668 (20 pages)
The precise and automatic segmentation of prostate magnetic resonance imaging (MRI) images is vital for assisting doctors in diagnosing prostate diseases. In recent years, many advanced methods have been applied to prostate segmentation, but due to the variability caused by prostate diseases, automatic segmentation of the prostate presents significant challenges. In this paper, we propose an attention-guided multi-scale feature fusion network (AGMSF-Net) to segment prostate MRI images. We propose an attention mechanism for extracting multi-scale features, and introduce a 3D transformer module during the transition phase from encoder to decoder to enhance global feature representation. In the decoder stage, a feature fusion module is proposed to obtain global context information. We evaluate our model on prostate MRI images acquired from a local hospital. The relative volume difference (RVD) and Dice similarity coefficient (DSC) between the automatic prostate segmentation results and the ground truth were 1.21% and 93.68%, respectively. To quantitatively evaluate prostate volume on MRI, which is of significant clinical value, we propose this AGMSF-Net. The performance evaluation and validation experiments have demonstrated the effectiveness of our method in automatic prostate segmentation.
Keywords: prostate segmentation; multi-scale attention; 3D Transformer; feature fusion; MRI
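The two metrics reported above (RVD 1.21%, DSC 93.68%) are simple volume and overlap statistics; for binary masks they reduce to:

```python
def dice(pred, truth):
    """Dice similarity coefficient (DSC) between two binary masks."""
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    return 2.0 * inter / (sum(pred) + sum(truth))

def rvd(pred, truth):
    """Relative volume difference: signed volume error of the prediction."""
    return (sum(pred) - sum(truth)) / sum(truth)

# Toy 1D masks; real use flattens the 3D voxel masks the same way.
pred  = [1, 1, 1, 0, 0]
truth = [1, 1, 0, 1, 0]
print(dice(pred, truth), rvd(pred, truth))  # 0.666... and 0.0
```

Note that RVD only compares volumes, so it can be near zero even for poorly overlapping masks, which is why it is reported alongside DSC.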
8. Olive Leaf Disease Detection via Wavelet Transform and Feature Fusion of Pre-Trained Deep Learning Models
Authors: Mahmood A. Mahmood, Khalaf Alsalem 《Computers, Materials & Continua》 SCIE EI, 2024, No. 3, pp. 3431-3448 (18 pages)
Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses. Early detection of these diseases is essential for effective management. We propose a novel transformed-wavelet, feature-fused, pre-trained deep learning model for detecting olive leaf diseases. The proposed model combines wavelet transforms with pre-trained deep-learning models to extract discriminative features from olive leaf images. The model has four main phases: preprocessing using data augmentation, three-level wavelet transformation, learning using pre-trained deep learning models, and a fused deep learning model. In the preprocessing phase, the image dataset is augmented using techniques such as resizing, rescaling, flipping, rotation, zooming, and contrasting. In wavelet transformation, the augmented images are decomposed into three frequency levels. Three pre-trained deep learning models, EfficientNet-B7, DenseNet-201, and ResNet-152-V2, are used in the learning phase. The models were trained using the approximation images of the third-level sub-band of the wavelet transform. In the fusion phase, the fused model consists of a merge layer, three dense layers, and two dropout layers. The proposed model was evaluated using a dataset of images of healthy and infected olive leaves. It achieved an accuracy of 99.72% in the diagnosis of olive leaf diseases, which exceeds the accuracy of other methods reported in the literature. This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
Keywords: olive leaf diseases; wavelet transform; deep learning; feature fusion
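The three-level decomposition whose third-level approximation sub-band feeds the pre-trained models can be sketched in 1D with the Haar wavelet (the abstract does not name the wavelet family, so Haar is an assumption; for images the same filters are applied along rows and then columns):

```python
import math

def haar_step(signal):
    """One level of the 1D Haar wavelet transform: returns the
    approximation (low-pass) and detail (high-pass) coefficients."""
    s2 = math.sqrt(2)
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s2
              for i in range(len(signal) // 2)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s2
              for i in range(len(signal) // 2)]
    return approx, detail

def third_level_approx(signal):
    """Apply three decomposition levels and keep only the approximation,
    mirroring the sub-band the paper trains its CNNs on."""
    a = signal
    for _ in range(3):
        a, _ = haar_step(a)
    return a

x = list(range(16))                 # toy 'image row' of length 16
print(len(third_level_approx(x)))   # length: 16 -> 8 -> 4 -> 2
```

Each level halves the resolution, so the third-level approximation is a smoothed, 8x-downsampled summary that the three backbones then embed.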
9. A Fusion Localization Method Based on Target Measurement Error Feature Complementarity and Its Application
Authors: Xin Yang, Hongming Liu, Xiaoke Wang, Wen Yu, Jingqiu Liu, Sipei Zhang 《Journal of Beijing Institute of Technology》 EI CAS, 2024, No. 1, pp. 75-88 (14 pages)
In multi-radar networking systems, aiming at the difficulty and low accuracy of cooperatively locating long-distance targets, a dual-station joint positioning method based on target measurement error feature complementarity is proposed. For dual-station joint positioning, by constructing the target positioning error distribution model and using the complementarity of spatial measurement errors of the same long-distance target, the area with a high probability of target existence can be obtained. Then, based on the target distance information, the midpoint of the intersection between the target positioning sphere and the positioning tangent plane can be solved to acquire the target's optimal positioning result. The simulation demonstrates that this method greatly improves the positioning accuracy of the target in the azimuth direction. Compared with the traditional dynamic weighted fusion (DWF) algorithm and the filter-based dynamic weighted fusion (FBDWF) algorithm, it not only effectively eliminates the influence of systematic error in the azimuth direction, but also has low computational complexity. Furthermore, for application scenarios of multi-radar collaborative positioning and multi-sensor data compression filtering in centralized information fusion, it is recommended to use radars with higher ranging accuracy and baselines of 20-100 km between radars.
Keywords: dual-station positioning; feature complementarity; information fusion; engineering applicability
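The DWF baseline that the abstract compares against is commonly implemented as inverse-variance weighting of independent estimates; the sketch below assumes that form (shown for a single coordinate axis), since the abstract does not define DWF precisely:

```python
def inverse_variance_fusion(estimates):
    """Fuse independent scalar position estimates by inverse-variance
    weighting, a common form of dynamic weighted fusion (assumed form;
    the paper's exact DWF variant is not given in the abstract).

    estimates: list of (value, variance) pairs, one per radar station.
    """
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * x for (x, _), w in zip(estimates, weights)) / total
    fused_var = 1.0 / total          # fused variance never exceeds the best input
    return fused, fused_var

# Station A is noisy (var 4.0), station B is precise (var 1.0):
pos, var = inverse_variance_fusion([(10.2, 4.0), (9.8, 1.0)])
```

The proposed geometric method instead exploits the *complementary* error directions of the two stations, rather than averaging them, which is why it outperforms DWF in azimuth.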
10. A Power Data Anomaly Detection Model Based on Deep Learning with Adaptive Feature Fusion
Authors: Xiu Liu, Liang Gu, Xin Gong, Long An, Xurui Gao, Juying Wu 《Computers, Materials & Continua》 SCIE EI, 2024, No. 6, pp. 4045-4061 (17 pages)
With the popularisation of intelligent power, power devices have different shapes, numbers and specifications. This means that power data has distributional variability, and the model learning process cannot achieve sufficient extraction of data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. Aiming at the distributional variability of power data, a sliding window-based data adjustment method is developed for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the model's anomaly detection accuracy. To verify the effectiveness of the proposed method, we conducted comparative and ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy, but also reduces the model's parameter computation during feature matching and improves detection speed.
Keywords: data alignment; dimension reduction; feature fusion; data anomaly detection; deep learning
11. Attention Guided Food Recognition via Multi-Stage Local Feature Fusion
Authors: Gonghui Deng, Dunzhi Wu, Weizhen Chen 《Computers, Materials & Continua》 SCIE EI, 2024, No. 8, pp. 1985-2003 (19 pages)
The task of food image recognition, a nuanced subset of fine-grained image recognition, grapples with substantial intra-class variation and minimal inter-class differences. These challenges are compounded by the irregular and multi-scale nature of food images. Addressing these complexities, our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion, grounded in the ConvNeXt architecture. Our model employs hybrid attention (HA) mechanisms to pinpoint critical discriminative regions within images, substantially mitigating the influence of background noise. Furthermore, it introduces a multi-stage local fusion (MSLF) module, fostering long-distance dependencies between feature maps at varying stages. This approach facilitates the assimilation of complementary features across scales, significantly bolstering the model's capacity for feature extraction. We also constructed a dataset named Roushi60, which consists of 60 categories of common meat dishes. Empirical evaluation on the ETH Food-101, ChineseFoodNet, and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%, 82.86%, and 92.50%, respectively. These figures not only mark improvements of 1.04%, 3.42%, and 1.36% over the foundational ConvNeXt network but also surpass the performance of most contemporary food image recognition methods. Such advancements underscore the efficacy of our proposed model in navigating the intricate landscape of food image recognition, setting a new benchmark for the field.
Keywords: fine-grained image recognition; food image recognition; attention mechanism; local feature fusion
12. Source Camera Identification Algorithm Based on Multi-Scale Feature Fusion
Authors: Jianfeng Lu, Caijin Li, Xiangye Huang, Chen Cui, Mahmoud Emam 《Computers, Materials & Continua》 SCIE EI, 2024, No. 8, pp. 3047-3065 (19 pages)
The widespread availability of digital multimedia data has led to a new challenge in digital forensics. Traditional source camera identification algorithms usually rely on various traces left by the capturing process. However, these traces have become increasingly difficult to extract due to the wide availability of various image processing algorithms. Convolutional Neural Network (CNN)-based algorithms have demonstrated good discriminative capabilities for different brands and even different models of camera devices. However, their performance is not ideal when distinguishing between individual devices of the same model, because cameras of the same model typically use the same optical lens, image sensor, and image processing algorithms, resulting in minimal overall differences. In this paper, we propose a camera forensics algorithm based on multi-scale feature fusion to address these issues. The proposed algorithm extracts different local features from feature maps of different scales and then fuses them to obtain a comprehensive feature representation. This representation is then fed into a subsequent camera fingerprint classification network. Building upon the Swin-T network, we utilize Transformer blocks and Graph Convolutional Network (GCN) modules to fuse multi-scale features from different stages of the backbone network. Furthermore, we conduct experiments on established datasets to demonstrate the feasibility and effectiveness of the proposed approach.
Keywords: source camera identification; camera forensics; convolutional neural network; feature fusion; Transformer block; graph convolutional network
13. Multiscale Feature Fusion for Gesture Recognition Using Commodity Millimeter-Wave Radar
Authors: Lingsheng Li, Weiqing Bai, Chong Han 《Computers, Materials & Continua》 SCIE EI, 2024, No. 10, pp. 1613-1640 (28 pages)
Gestures are one of the most natural and intuitive approaches for human-computer interaction. Compared with traditional camera-based or wearable sensor-based solutions, gesture recognition using millimeter-wave radar has attracted growing attention for being contact-free, privacy-preserving and less environment-dependent. Although there have been many recent studies on hand gesture recognition, existing methods still have shortcomings in recognition accuracy and generalization ability in short-range applications. In this paper, we present a hand gesture recognition method named multiscale feature fusion (MSFF) to accurately identify micro hand gestures. In MSFF, not only the overall motion of the palm but also the subtle movements of the fingers are taken into account. Specifically, we fuse multi-angle Doppler-time maps and gesture-trajectory range-angle maps to comprehensively extract hand gesture features, and employ high-level deep neural networks to make the model pay more attention to subtle finger movements. We evaluate the proposed method using data collected from 10 users, and our solution achieves an average recognition accuracy of 99.7%. Extensive experiments on a public mmWave gesture dataset demonstrate the superior effectiveness of the proposed system.
Keywords: gesture recognition; millimeter-wave (mmWave) radar; radio frequency (RF) sensing; human-computer interaction; multiscale feature fusion
14. HOG-VGG: VGG Network with HOG Feature Fusion for High-Precision PolSAR Terrain Classification
Authors: Jiewen Li, Zhicheng Zhao, Yanlan Wu, Jiaqiu Ai, Jun Shi 《Journal of Harbin Institute of Technology (New Series)》 CAS, 2024, No. 5, pp. 1-15 (15 pages)
This article proposes a VGG network with histogram of oriented gradients (HOG) feature fusion (HOG-VGG) for polarimetric synthetic aperture radar (PolSAR) image terrain classification. VGG-Net has a strong ability for deep feature extraction and can fully extract the global deep features of different terrains in PolSAR images, so it is widely used in PolSAR terrain classification. However, VGG-Net ignores local edge and shape features, resulting in incomplete feature representation of PolSAR terrains; as a consequence, the terrain classification accuracy is not promising. In fact, edge and shape features play an important role in PolSAR terrain classification. To solve this problem, a new VGG network with HOG feature fusion is proposed for high-precision PolSAR terrain classification. HOG-VGG extracts both the global deep semantic features and the local edge and shape features of PolSAR terrains, so the completeness of the terrain feature representation is greatly elevated. Moreover, HOG-VGG optimally fuses the global deep features and the local edge and shape features to achieve the best classification results. The superiority of HOG-VGG is verified on the Flevoland, San Francisco and Oberpfaffenhofen datasets. Experiments show that the proposed HOG-VGG achieves much better PolSAR terrain classification performance, with overall accuracies of 97.54%, 94.63%, and 96.07%, respectively.
Keywords: PolSAR terrain classification; high precision; HOG-VGG; feature representation completeness elevation; multi-level feature fusion
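At its simplest, fusing global deep features with local HOG features is a normalise-and-concatenate step; the L2 normalisation below is an assumed detail, since the abstract only states that the two feature sets are optimally fused:

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (no-op on the zero vector)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def fuse(deep_feats, hog_feats):
    """Concatenate L2-normalised global deep features with local HOG
    features; normalisation keeps either branch from dominating purely
    by magnitude (an illustrative choice, not the paper's exact scheme)."""
    return l2_normalize(deep_feats) + l2_normalize(hog_feats)

# Toy vectors standing in for a VGG embedding and a HOG descriptor
fused = fuse([3.0, 4.0], [0.0, 5.0, 12.0])
```

The fused vector then feeds the classification head, so both the global semantics and the local edge/shape cues reach the classifier.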
15. A Progressive Feature Fusion-Based Manhole Cover Defect Recognition Method
Authors: Tingting Hu, Xiangyu Ren, Wanfa Sun, Shengying Yang, Boyang Feng 《Journal of Computer and Communications》 2024, No. 8, pp. 307-316 (10 pages)
Manhole cover defect recognition is of significant practical importance, as it can accurately identify damaged or missing covers, enabling timely replacement and maintenance. Traditional manhole cover detection techniques primarily focus on detecting the presence of covers rather than classifying the types of defects. However, manhole cover defects exhibit small inter-class feature differences and large intra-class feature variations, which makes their recognition challenging. To improve the classification of manhole cover defect types, we propose a Progressive Dual-Branch Feature Fusion Network (PDBFFN). The baseline backbone network adopts a multi-stage hierarchical architecture using ResNet-50 as the visual feature extractor, from which both local and global information is obtained. Additionally, a Feature Enhancement Module (FEM) and a Fusion Module (FM) are introduced to enhance the network's ability to learn critical features. Experimental results demonstrate that our model achieves a classification accuracy of 82.6% on a manhole cover defect dataset, outperforming several state-of-the-art fine-grained image classification models.
Keywords: feature enhancement; progressive; dual-branch; feature fusion
16. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube 《Journal of Computer and Communications》 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer effectively expands the filters' field of view, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
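The abstract above attributes large parameter savings to the DDSC layer. As a back-of-the-envelope check (not code from the paper; the function names are ours), the parameter counts of a standard convolution and a depthwise separable convolution can be compared directly, and dilation enlarges the receptive field at no parameter cost:

```python
def conv_params(k, c_in, c_out):
    """Parameter count of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def ddsc_params(k, c_in, c_out):
    """Depthwise (one k x k filter per input channel) + pointwise (1 x 1)
    convolution, as in MobileNet-style separable layers."""
    return k * k * c_in + c_in * c_out

def effective_kernel(k, dilation):
    """Effective receptive-field size of a dilated k x k kernel;
    dilation inserts gaps between taps and adds no parameters."""
    return k + (k - 1) * (dilation - 1)

# A typical mid-network layer: 3x3 kernel, 128 -> 128 channels
standard = conv_params(3, 128, 128)    # 147456 parameters
separable = ddsc_params(3, 128, 128)   # 17536 parameters (~8.4x fewer)
print(standard, separable, effective_kernel(3, 2))
```

For this one layer the separable form needs roughly 12% of the standard parameters, which is the kind of per-layer saving that accumulates into the reported network-level reduction.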
A Model for Detecting Fake News by Integrating Domain-Specific Emotional and Semantic Features
17
Authors: Wen Jiang, Mingshu Zhang, Xu’an Wang, Wei Bin, Xiong Zhang, Kelan Ren, Facheng Yan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2161-2179
With the rapid spread of information on the Internet, fake news circulates widely and its detection becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. The method makes full use of the attention mechanism to capture the correlation between different features, effectively improving feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. Domain features are then used as an enhancement, with an attention mechanism fully capturing the finer-grained emotional features associated with each domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and classified with MLP and Softmax layers. Experimental results show that on the Chinese dataset Weibo21, the F1 score of this model is 0.958, 4.9% higher than that of the second-best model; on the English dataset FakeNewsNet, the F1 score is 0.845, 1.8% higher than that of the second-best model, demonstrating that the approach is both effective and feasible.
Keywords: fake news detection; domain-related emotional features; semantic features; feature fusion
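The paper's attention mechanism is only described at a high level; a minimal sketch of attention-weighted fusion, assuming simple dot-product scoring of each feature vector against a domain vector (the scoring rule, names, and dimensions are our assumptions, not the paper's):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attentive_fusion(semantic, emotion, domain):
    """Score each feature vector against the domain vector (dot product),
    then combine the vectors with softmax attention weights."""
    feats = [semantic, emotion]
    scores = [sum(f_i * d_i for f_i, d_i in zip(f, domain)) for f in feats]
    weights = softmax(scores)
    return [sum(w * f[i] for w, f in zip(weights, feats))
            for i in range(len(domain))]

# Toy 3-dimensional vectors: the domain vector up-weights the emotion features
fused = attentive_fusion([0.2, 0.8, 0.1], [0.9, 0.1, 0.4], [1.0, 0.0, 0.5])
print(fused)
```

The point of the sketch is the mechanism: the domain vector decides, per example, how much each feature stream contributes to the fused representation.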
FIBTNet:Building Change Detection for Remote Sensing Images Using Feature Interactive Bi-Temporal Network
18
Authors: Jing Wang, Tianwen Lin, Chen Zhang, Jun Peng. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4621-4641
In this paper, a feature interactive bi-temporal change detection network (FIBTNet) is designed to solve the problem of pseudo change in remote sensing image building change detection. The network improves the accuracy of change detection through bi-temporal feature interaction. FIBTNet comprises a bi-temporal feature exchange architecture (EXA) and a bi-temporal difference extraction architecture (DFA). EXA improves the feature exchange ability of the encoding process through multiple spatial, channel, or hybrid feature exchange methods, while DFA uses the change residual (CR) module to improve the decoding process's ability to extract difference features at multiple scales. Additionally, at the junction of encoder and decoder, channel exchange is combined with the CR module to achieve adaptive channel exchange, which further improves the decision-making performance of model feature fusion. Experimental results on the LEVIR-CD and S2Looking datasets demonstrate that FIBTNet achieves superior F1 scores, Intersection over Union (IoU), and Recall compared to mainstream building change detection models, confirming its effectiveness and superiority in the field of remote sensing image change detection.
Keywords: change detection; change residual module; feature exchange mechanism; feature fusion
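As a toy illustration of the channel-exchange idea in EXA (not the paper's implementation; the swap pattern and data layout are our assumptions), alternate channels of two bi-temporal feature maps can be swapped so each branch sees information from the other epoch:

```python
def channel_exchange(feat_a, feat_b, step=2):
    """Swap every `step`-th channel between two feature maps, each
    represented as a list of channels. A toy stand-in for bi-temporal
    feature exchange between the two encoder branches."""
    a, b = list(feat_a), list(feat_b)
    for c in range(0, len(a), step):
        a[c], b[c] = b[c], a[c]
    return a, b

t1 = [[1, 1], [2, 2], [3, 3], [4, 4]]   # 4 "channels" at time 1
t2 = [[9, 9], [8, 8], [7, 7], [6, 6]]   # 4 "channels" at time 2
x1, x2 = channel_exchange(t1, t2)
print(x1)  # [[9, 9], [2, 2], [7, 7], [4, 4]]
```

After the exchange, each branch carries a mix of both epochs' features, which is what lets the decoder separate genuine building changes from pseudo changes.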
Automatic detection method of bladder tumor cells based on color and shape features
19
Authors: Zitong Zhao, Yanbo Wang, Jiaqi Chen, Mingjia Wang, Shulong Feng, Jin Yang, Nan Song, Jinyu Wang, Ci Sun. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, Issue 6, pp. 1-13
Bladder urothelial carcinoma is the most common malignant tumor of the urinary system, and its incidence ranks ninth in the world. In recent years, the continuous development of hyperspectral imaging technology has provided a new tool for the auxiliary diagnosis of bladder cancer. In this study, based on microscopic hyperspectral data, an automatic detection algorithm for bladder tumor cells combining color features and shape features is proposed. A support vector machine (SVM) is used to build classification models and compare, on the same classifier, the performance of spectral features alone, fused spectral and shape features, and the fusion feature proposed in this paper. The results show that the sensitivity, specificity, and accuracy of the classification algorithm based on fused shape and color features are 0.952, 0.897, and 0.920, respectively, better than those of the algorithm using only spectral features. The proposed method can therefore effectively extract cell features from bladder urothelial carcinoma smears, enabling automatic, real-time, and noninvasive detection of bladder tumor cells, helping doctors improve the efficiency of pathological diagnosis, and providing a reliable basis for choosing treatment plans and judging prognosis.
Keywords: bladder tumor cells; microscopic hyperspectral; fusion feature; support vector machine; automatic detection
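The abstract does not spell out which color and shape features are fused; a minimal sketch of building one fused feature vector from a coarse intensity histogram plus common shape descriptors (area, perimeter, circularity) — all feature choices here are ours, for illustration only:

```python
import math

def color_histogram(pixels, bins=4):
    """Normalized coarse histogram of intensities in [0, 256)."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def shape_descriptor(area, perimeter):
    """Area, perimeter, and circularity (1.0 for a perfect circle,
    smaller for irregular cell outlines)."""
    return [area, perimeter, 4 * math.pi * area / perimeter ** 2]

# Fused feature vector for one hypothetical candidate cell region
pixels = [30, 200, 220, 240, 100, 90]
fused = color_histogram(pixels) + shape_descriptor(area=50.0, perimeter=30.0)
print(len(fused))  # 4 color bins + 3 shape features = 7
```

A vector like this (concatenated color and shape features) is what would then be passed to the SVM classifier for tumor/non-tumor decisions.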
Residual Feature Attentional Fusion Network for Lightweight Chest CT Image Super-Resolution (Cited by 1)
20
Authors: Kun Yang, Lei Zhao, Xianghui Wang, Mingyang Zhang, Linyan Xue, Shuang Liu, Kun Liu. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5159-5176
The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms applied to CT images. However, most existing SR algorithms are developed for natural images and are not suitable for medical images; moreover, most of them improve reconstruction quality by increasing network depth, which is not suitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that extracts CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to exploit the high-frequency detail information extracted by CFEB by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which utilizes hierarchical features more effectively than dense concatenation by progressively aggregating feature information at various levels. Numerous experiments show that our method performs better than most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the suboptimal method, while the number of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
Keywords: super-resolution; COVID-19; chest CT; lightweight network; contextual feature extraction; attentional feature fusion
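PSNR, the metric quoted in the abstract, is a standard measure and easy to compute; a minimal implementation over flattened pixel lists (the sample pixel values below are arbitrary):

```python
import math

def psnr(reference, reconstruction, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-size images,
    given as flattened pixel lists. Higher is better; identical images
    give infinity."""
    mse = sum((r - x) ** 2
              for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / mse)

ref = [52, 60, 61, 204, 230]   # ground-truth pixels
rec = [50, 60, 63, 200, 231]   # super-resolved pixels
print(round(psnr(ref, rec), 2))
```

On this scale, the paper's reported gains of 0.11 dB and 0.47 dB are improvements in exactly this quantity, averaged over the test images.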