Journal Articles
1,550 articles found
1. Bridge Crack Segmentation Method Based on Parallel Attention Mechanism and Multi-Scale Features Fusion (cited 1 time)
Authors: Jianwei Yuan, Xinli Song, Huaijian Pu, Zhixiong Zheng, Ziyang Niu. Computers, Materials & Continua (SCIE, EI), 2023, No. 3, pp. 6485-6503 (19 pages)
Regular inspection of bridge cracks is crucial to bridge maintenance and repair. Traditional manual crack detection methods are time-consuming, dangerous, and subjective. At the same time, existing mainstream vision-based automatic crack detection algorithms struggle to detect fine cracks and to balance detection accuracy and speed. Therefore, this paper proposes a new bridge crack segmentation method based on a parallel attention mechanism and multi-scale feature fusion, built on top of the DeeplabV3+ network framework. First, the improved lightweight MobileNetv2 network and dilated separable convolution are integrated into the original DeeplabV3+ network to improve the original backbone network Xception and the atrous spatial pyramid pooling (ASPP) module, respectively, dramatically reducing the number of parameters in the network and accelerating the training and prediction speed of the model. Moreover, we introduce the parallel attention mechanism into the encoding and decoding stages. Attention to the crack regions is enhanced from both the channel and spatial aspects, significantly suppressing the interference of various noises. Finally, we further improve the detection performance of the model for fine cracks by introducing a multi-scale feature fusion module. Our results are validated on a self-made dataset. The experiments show that our method is more accurate than other methods: its intersection over union (IoU) and F1-score (F1) are increased to 77.96% and 87.57%, respectively. In addition, the number of parameters is only 4.10M, much smaller than the original network, and the frames per second (FPS) is increased to 15 frames/s. The results prove that the proposed method meets the requirements for rapid and accurate detection of bridge cracks and is superior to other methods.
Keywords: crack detection, DeeplabV3+, parallel attention mechanism, feature fusion
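The parallel channel-plus-spatial attention described in the abstract above can be sketched generically in PyTorch as follows. This is an illustrative block, not the paper's exact module; the reduction ratio, the 7×7 spatial kernel, and the additive combination of the two branches are assumptions.

```python
# Illustrative sketch only: a generic parallel channel/spatial attention block in PyTorch.
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: squeeze spatial dims, then produce per-channel weights.
        self.channel_fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: compress channels, then produce an H x W attention map.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ca = self.channel_fc(x)                                   # (B, C, 1, 1)
        avg_map = x.mean(dim=1, keepdim=True)                     # (B, 1, H, W)
        max_map = x.amax(dim=1, keepdim=True)                     # (B, 1, H, W)
        sa = self.spatial_conv(torch.cat([avg_map, max_map], 1))  # (B, 1, H, W)
        # Parallel combination: both branches re-weight the same input feature map.
        return x * ca + x * sa

feat = torch.randn(2, 64, 32, 32)
print(ParallelAttention(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```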
2. Attention Guided Multi Scale Feature Fusion Network for Automatic Prostate Segmentation
Authors: Yuchun Li, Mengxing Huang, Yu Zhang, Zhiming Bai. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 1649-1668 (20 pages)
The precise and automatic segmentation of prostate magnetic resonance imaging (MRI) images is vital for assisting doctors in diagnosing prostate diseases. In recent years, many advanced methods have been applied to prostate segmentation, but due to the variability caused by prostate diseases, automatic segmentation of the prostate presents significant challenges. In this paper, we propose an attention-guided multi-scale feature fusion network (AGMSF-Net) to segment prostate MRI images. We propose an attention mechanism for extracting multi-scale features and introduce a 3D transformer module, added during the transition from encoder to decoder, to enhance global feature representation. In the decoder stage, a feature fusion module is proposed to obtain global context information. We evaluate our model on prostate MRI images acquired from a local hospital. The relative volume difference (RVD) and Dice similarity coefficient (DSC) between the automatic segmentation results and the ground truth were 1.21% and 93.68%, respectively. To quantitatively evaluate prostate volume on MRI, which is of significant clinical importance, we propose the unique AGMSF-Net; the performance evaluation and validation experiments have demonstrated the effectiveness of our method in automatic prostate segmentation.
Keywords: prostate segmentation, multi-scale attention, 3D Transformer, feature fusion, MRI
3. Attention Guided Food Recognition via Multi-Stage Local Feature Fusion
Authors: Gonghui Deng, Dunzhi Wu, Weizhen Chen. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 1985-2003 (19 pages)
The task of food image recognition, a nuanced subset of fine-grained image recognition, grapples with substantial intra-class variation and minimal inter-class differences. These challenges are compounded by the irregular and multi-scale nature of food images. Addressing these complexities, our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion, grounded in the ConvNeXt architecture. Our model employs hybrid attention (HA) mechanisms to pinpoint critical discriminative regions within images, substantially mitigating the influence of background noise. It also introduces a multi-stage local fusion (MSLF) module, fostering long-distance dependencies between feature maps at varying stages. This approach facilitates the assimilation of complementary features across scales, significantly bolstering the model's capacity for feature extraction. In addition, we constructed a dataset named Roushi60, which consists of 60 categories of common meat dishes. Empirical evaluation on the ETH Food-101, ChineseFoodNet, and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%, 82.86%, and 92.50%, respectively. These figures not only mark improvements of 1.04%, 3.42%, and 1.36% over the foundational ConvNeXt network but also surpass the performance of most contemporary food image recognition methods. Such advancements underscore the efficacy of our proposed model in navigating the intricate landscape of food image recognition, setting a new benchmark for the field.
Keywords: fine-grained image recognition, food image recognition, attention mechanism, local feature fusion
4. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, No. 2, pp. 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network; the depthwise dilated convolution in the DDSC layer also effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract its multi-scale feature information. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet, image classification, lightweight convolutional neural network, depthwise dilated separable convolution, hierarchical multi-scale feature fusion
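A depthwise dilated separable convolution of the kind the DDSC layer builds on can be written compactly in PyTorch. The sketch below is a minimal illustration under assumed settings (3×3 depthwise kernel, dilation 2, BatchNorm + ReLU), not the paper's exact layer; its point is that the parameter count drops from roughly in·out·9 for a standard 3×3 convolution to in·9 + in·out.

```python
# Minimal sketch of a depthwise dilated separable convolution (DDSC-style) layer.
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilation: int = 2):
        super().__init__()
        # Depthwise 3x3 convolution with dilation: one filter per input channel;
        # padding = dilation keeps the spatial size unchanged.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=dilation,
                                   dilation=dilation, groups=in_ch, bias=False)
        # Pointwise 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(DepthwiseDilatedSeparableConv(32, 64)(x).shape)  # torch.Size([1, 64, 56, 56])
```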
5. Multi-Scale Mixed Attention Tea Shoot Instance Segmentation Model
Authors: Dongmei Chen, Peipei Cao, Lijie Yan, Huidong Chen, Jia Lin, Xin Li, Lin Yuan, Kaihua Wu. Phyton-International Journal of Experimental Botany (SCIE), 2024, No. 2, pp. 261-275 (15 pages)
Tea leaf picking is a crucial stage in tea production that directly influences the quality and value of the tea. Traditional tea-picking machines may compromise the quality of the tea leaves; high-quality teas are often handpicked and require more delicate operations from intelligent picking machines. Compared with traditional image processing techniques, deep learning models have stronger feature extraction capabilities and better generalization, making them more suitable for practical tea shoot harvesting. However, current research mostly focuses on shoot detection and cannot directly accomplish end-to-end shoot segmentation tasks. We propose a tea shoot instance segmentation model based on multi-scale mixed attention (Mask2FusionNet) using a dataset from a tea garden in Hangzhou. We further analyzed the characteristics of the tea shoot dataset, in which the proportion of small to medium-sized targets is 89.9%. Our algorithm is compared with several mainstream object segmentation algorithms, and the results demonstrate that our model achieves an accuracy of 82% in recognizing tea shoots, a better performance than the other models. Through ablation experiments, we found that ResNet50, the PointRend strategy, and the Feature Pyramid Network (FPN) architecture improve performance by 1.6%, 1.4%, and 2.4%, respectively. These experiments demonstrate that our proposed multi-scale and point selection strategy optimizes the feature extraction capability for overlapping small targets. The results indicate that the proposed Mask2FusionNet model can perform shoot segmentation in unstructured environments, realizing the individual distinction of tea shoots and complete extraction of shoot edge contours with a segmentation accuracy of 82.0%. The research results can provide algorithmic support for the segmentation and intelligent harvesting of premium tea shoots at different scales.
Keywords: tea shoots, attention mechanism, multi-scale feature extraction, instance segmentation, deep learning
6. Few-shot image recognition based on multi-scale features prototypical network
Authors: LIU Jiatong, DUAN Yong. High Technology Letters (EI, CAS), 2024, No. 3, pp. 280-289 (10 pages)
In order to improve the model's capability to express features during few-shot learning, a multi-scale features prototypical network (MS-PN) algorithm is proposed. A metric learning algorithm is employed to extract image features and project them into a feature space, evaluating the similarity between samples based on their relative distances within the metric space. To sufficiently extract feature information from limited sample data and mitigate the impact of the constrained data volume, a multi-scale feature extraction network is presented to capture data features at various scales during image feature extraction. Additionally, the position of the prototype is fine-tuned by assigning weights to data points to mitigate the influence of outliers. The loss function integrates contrastive loss and label smoothing to bring similar data points closer and separate dissimilar data points within the metric space. Experimental evaluations are conducted on the small-sample datasets mini-ImageNet and CUB200-2011, where the method achieves higher classification accuracy. Specifically, in the 5-way 1-shot experiment, classification accuracy reaches 50.13% and 66.79% on these two datasets, respectively; in the 5-way 5-shot experiment, accuracies of 66.79% and 85.91% are observed, respectively.
Keywords: few-shot learning, multi-scale feature, prototypical network, channel attention, label smoothing
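The prototypical-network classification rule that MS-PN extends is simple to state: each class prototype is the mean of its support embeddings, and queries are scored by negative squared Euclidean distance. The minimal PyTorch sketch below shows only this rule; the multi-scale embedding network, prototype re-weighting, and label-smoothed contrastive loss described in the abstract are omitted.

```python
# Hedged sketch of prototypical-network scoring on precomputed embeddings.
import torch
import torch.nn.functional as F

def prototypical_logits(support: torch.Tensor, support_labels: torch.Tensor,
                        query: torch.Tensor, n_way: int) -> torch.Tensor:
    # support: (n_way * k_shot, D) embeddings, query: (n_query, D) embeddings
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )                                             # (n_way, D)
    dists = torch.cdist(query, prototypes) ** 2   # squared Euclidean distances
    return -dists                                 # higher = closer = more likely

# 5-way 1-shot toy example with random 64-d embeddings
support = torch.randn(5, 64)
labels = torch.arange(5)
query = torch.randn(10, 64)
probs = F.softmax(prototypical_logits(support, labels, query, n_way=5), dim=1)
print(probs.shape)  # torch.Size([10, 5])
```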
7. Residual Feature Attentional Fusion Network for Lightweight Chest CT Image Super-Resolution (cited 1 time)
Authors: Kun Yang, Lei Zhao, Xianghui Wang, Mingyang Zhang, Linyan Xue, Shuang Liu, Kun Liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 5159-5176 (18 pages)
The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms applied to CT images to improve their resolution. However, most existing SR algorithms are studied on natural images, which are not suitable for medical images, and most of these algorithms improve reconstruction quality by increasing network depth, which is not suitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that can extract CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to utilize the high-frequency detail information extracted by CFEB as much as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which can utilize hierarchical features more effectively than dense concatenation by progressively aggregating feature information at various levels. Numerous experiments show that our method performs better than most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the suboptimal method, while the number of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
Keywords: super-resolution, COVID-19, chest CT, lightweight network, contextual feature extraction, attentional feature fusion
8. AF-Net: A Medical Image Segmentation Network Based on Attention Mechanism and Feature Fusion (cited 4 times)
Authors: Guimin Hou, Jiaohua Qin, Xuyu Xiang, Yun Tan, Neal N. Xiong. Computers, Materials & Continua (SCIE, EI), 2021, No. 11, pp. 1877-1891 (15 pages)
Medical image segmentation is an important application of computer vision in medical image processing. Due to the close locations and high similarity of different organs in medical images, current segmentation algorithms suffer from mis-segmentation and poor edge segmentation. To address these challenges, we propose a medical image segmentation network (AF-Net) based on an attention mechanism and feature fusion, which can effectively capture global information while focusing the network on the object area. In this approach, we first add dual attention blocks (DA-block), comprising parallel channel and spatial attention branches, to the backbone network to adaptively calibrate and weight features. Second, a multi-scale feature fusion block (MFF-block) is proposed to obtain feature maps of different receptive fields and capture multi-scale information with less computational consumption. Finally, to restore the locations and shapes of organs, we adopt global feature fusion blocks (GFF-block) to fuse high-level and low-level information, which yields accurate pixel positioning. We evaluate our method on multiple datasets (the aorta and lung datasets), and the experimental results achieve 94.0% mIoU and 96.3% Dice, showing that our approach performs better than U-Net and other state-of-the-art methods.
Keywords: deep learning, medical image segmentation, feature fusion, attention mechanism
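A multi-scale feature fusion block of the general kind described for the MFF-block (parallel branches with different receptive fields, concatenated and fused by a 1×1 convolution) might look like the following PyTorch sketch; the branch count, kernel sizes, and dilation are assumptions, not AF-Net's exact design.

```python
# Illustrative multi-scale fusion block: three parallel branches with different
# receptive fields, concatenated along channels and mixed by a 1x1 convolution.
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.branch1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch3d = nn.Conv2d(channels, channels, kernel_size=3, padding=2, dilation=2)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([self.branch1(x), self.branch3(x), self.branch3d(x)], dim=1)
        return self.fuse(multi)

print(MultiScaleFusionBlock(64)(torch.randn(1, 64, 40, 40)).shape)  # torch.Size([1, 64, 40, 40])
```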
9. Series Arc Fault Detection Method Based on a CNN-BiLSTM-Attention Network Optimized by an Improved Dung Beetle Optimizer
Author: Li Haibo. 《电器与能效管理技术》 (Electrical Appliances and Energy Efficiency Management Technology), 2024, No. 8, pp. 57-68 (12 pages)
To address the problems of insufficient arc fault feature extraction and low detection accuracy, a series arc fault detection method is proposed that fuses multiple features and uses an improved dung beetle optimizer (IDBO) to optimize a convolutional neural network (CNN) and bidirectional long short-term memory (BiLSTM) network combined with an attention mechanism. Time-domain, frequency-domain, time-frequency-domain, and autoregressive parametric model features of the current are extracted on an experimental platform. Kernel principal component analysis (KPCA) is then used to reduce the dimensionality of and fuse these features, and the resulting feature vectors are used as the input of the CNN-BiLSTM-Attention model. Cubic chaotic mapping, a spiral search strategy, dynamic weight coefficients, and a Gaussian-Cauchy mutation strategy are introduced to improve the dung beetle optimizer, which is then used to optimize the hyperparameters of the CNN-BiLSTM-Attention model and realize series arc fault diagnosis. The results show that the proposed method achieves an arc fault detection accuracy of 97.92% and can efficiently identify series arc faults.
Keywords: arc fault, improved dung beetle optimizer, multi-feature fusion, CNN-BiLSTM-Attention
10. A Network Intrusion Detection Method Based on Attention-BiTCN (cited 4 times)
Authors: Sun Hongzhe, Wang Jian, Wang Peng, An Yulong. 《信息网络安全》 (Netinfo Security), CSCD, PKU Core, 2024, No. 2, pp. 309-318 (10 pages)
To address the low multi-class accuracy in network intrusion detection, and considering that network traffic data have temporal characteristics, this paper proposes a network intrusion detection model based on an attention mechanism and a Bidirectional Temporal Convolutional Network (BiTCN). First, the model preprocesses the dataset with one-hot encoding and normalization to handle the strong discreteness and inconsistent scales of network traffic data. Second, the preprocessed data are converted into bidirectional sequences by a bidirectional sliding-window method and fed into the Attention-BiTCN model in parallel. Bidirectional temporal features are then extracted and fused additively to obtain fused features with enhanced temporal information. Finally, a Softmax function is applied to the fused features to detect and identify multiple types of attack behavior. The proposed model is experimentally validated on the NSL-KDD and UNSW-NB15 datasets, achieving multi-class accuracies of 99.70% and 84.07%, respectively, outperforming traditional network intrusion detection algorithms and delivering significantly better detection performance than other deep learning models.
Keywords: intrusion detection, attention mechanism, BiTCN, bidirectional sliding-window method, fused features
11. Image Inpainting Technique Incorporating Edge Prior and Attention Mechanism
Authors: Jinxian Bai, Yao Fan, Zhiwei Zhao, Lizhi Zheng. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 999-1025 (27 pages)
Recently, deep learning-based image inpainting methods have made great strides in reconstructing damaged regions. However, these methods often struggle to produce satisfactory results when dealing with missing images with large holes, leading to distortions in structure and blurring of textures. To address these problems, we combine the advantages of transformers and convolutions to propose an image inpainting method that incorporates edge priors and attention mechanisms. The proposed method aims to improve the inpainting of large holes by enhancing the accuracy of structure restoration and the ability to recover texture details. It divides the inpainting task into two phases: edge prediction and image inpainting. Specifically, in the edge prediction phase, a transformer architecture is designed to combine axial attention with standard self-attention. This design enhances the extraction of global structural features and location awareness while balancing the complexity of self-attention operations, resulting in accurate prediction of the edge structure in the defective region. In the image inpainting phase, a multi-scale fusion attention module is introduced. This module makes full use of multi-level distant features and enhances local pixel continuity, thereby significantly improving the quality of image inpainting. To evaluate the performance of our method, comparative experiments are conducted on several datasets, including CelebA, Places2, and Facade. Quantitative experiments show that our method outperforms the other mainstream methods: it improves Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) by 1.141-3.234 dB and 0.083-0.235, respectively, and reduces Learned Perceptual Image Patch Similarity (LPIPS) and Mean Absolute Error (MAE) by 0.0347-0.1753 and 0.0104-0.0402, respectively. Qualitative experiments reveal that our method excels at reconstructing images with complete structural information and clear texture details. Furthermore, our model exhibits impressive performance in terms of the number of parameters, memory cost, and testing time.
Keywords: image inpainting, transformer, edge prior, axial attention, multi-scale fusion attention
12. Ship recognition based on HRRP via multi-scale sparse preserving method
Authors: YANG Xueling, ZHANG Gong, SONG Hu. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 3, pp. 599-608 (10 pages)
In order to extract richer feature information of ship targets from sea clutter and address the high-dimensional data problem, a method termed multi-scale fusion kernel sparse preserving projection (MSFKSPP), based on the maximum margin criterion (MMC), is proposed for recognizing the class of ship targets using the high-resolution range profile (HRRP). Multi-scale fusion is introduced to capture the local and detailed information in small-scale features and the global and contour information in large-scale features, helping to extract the edge information from sea clutter and further improving target recognition accuracy. The proposed method maximally preserves the multi-scale fusion sparsity of the data and maximizes class separability in the reduced dimensionality via the reproducing kernel Hilbert space. Experimental results on measured radar data show that the proposed method can effectively extract ship target features from sea clutter, further reduce the feature dimensionality, and improve target recognition performance.
Keywords: ship target recognition, high-resolution range profile (HRRP), multi-scale fusion kernel sparse preserving projection (MSFKSPP), feature extraction, dimensionality reduction
下载PDF
13. Improving VQA via Dual-Level Feature Embedding Network
Authors: Yaru Song, Huahu Xu, Dikai Fang. Intelligent Automation & Soft Computing, 2024, No. 3, pp. 397-416 (20 pages)
Visual Question Answering (VQA) has sparked widespread interest as a crucial task in integrating vision and language. VQA primarily uses attention mechanisms to associate relevant visual regions with input questions and answer them effectively. The detection-based features extracted by an object detection network capture the visual attention distribution over predetermined detection boxes and provide object-level insights that answer questions about foreground objects more effectively. However, they cannot answer questions about background content lacking detection boxes, due to a lack of fine-grained details, which is the advantage of grid-based features. In this paper, we propose a Dual-Level Feature Embedding (DLFE) network, which effectively integrates grid-based and detection-based image features in a unified architecture to realize the complementary advantages of both. Specifically, in DLFE, a novel Dual-Level Self-Attention (DLSA) module is first proposed to mine the intrinsic properties of the two features, with Positional Relation Attention (PRA) designed to model position information. Then, we propose Feature Fusion Attention (FFA) to address the semantic noise caused by fusing the two features and construct an alignment graph to enhance and align the grid and detection features. Finally, we use co-attention to learn the interactive features of the image and question and answer questions more accurately. Our method significantly improves over the baseline, increasing accuracy from 66.01% to 70.63% on the test-std set of VQA 1.0 and from 66.24% to 70.91% on the test-std set of VQA 2.0.
Keywords: visual question answering, multi-modal feature processing, attention mechanisms, cross-modal fusion
14. A Tobacco Question Intent Recognition Method Based on SBERT-Attention-LDA and ML-LSTM Feature Fusion
Authors: Zhu Bo, Li Kui, Qiu Lan, Li Bo. 《农业机械学报》 (Transactions of the Chinese Society for Agricultural Machinery), EI, CAS, CSCD, PKU Core, 2024, No. 5, pp. 273-281 (9 pages)
To address the problems of sparse features, abundant domain terminology, and difficulty in capturing semantic associations within text that affect question intent recognition in the tobacco domain, a question intent recognition method based on the feature fusion of SBERT-Attention-LDA (Sentence-BERT, attention mechanism, and latent Dirichlet allocation) and ML-LSTM (multi-layer long short-term memory) is proposed. The method first encodes tobacco questions dynamically using the pretrained SBERT model and an attention mechanism, converting them into feature vectors rich in semantic information, while an LDA model is used to model the topic vector of each question and capture its topical information. A modified model-level feature fusion method, ML-LSTM, then produces a joint feature representation with more complete and accurate question semantics. A three-channel convolutional neural network (CNN) extracts the hidden features in this mixed semantic representation, which are fed into a fully connected layer and a Softmax function to classify the question intent. Experiments on a dataset collected from authoritative tobacco industry websites show that the proposed method significantly improves precision, recall, and F1 over several other deep learning methods combined with attention mechanisms; compared with the BERT and ERNIE (enhanced representation through knowledge integration and embedding)-CNN models, the F1 score improves by 2.07 and 2.88 percentage points, respectively.
Keywords: tobacco question classification, natural language processing, feature fusion, self-attention mechanism
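The core feature-fusion step described above, concatenating a dense SBERT sentence embedding with an LDA topic vector, can be sketched with common Python libraries. The model name, topic count, and example questions below are assumptions for illustration; the attention layer, ML-LSTM fusion, and three-channel CNN classifier from the abstract are omitted, and the SentenceTransformer call downloads a pretrained model at runtime.

```python
# Minimal sketch: fuse a dense SBERT embedding with an LDA topic vector per question.
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

questions = ["How should tobacco seedlings be fertilized?",
             "What causes tobacco mosaic disease?"]

# Dense semantic embedding from a (hypothetically chosen) multilingual SBERT model.
sbert = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
dense = sbert.encode(questions)                       # (n_questions, 384)

# Topic features from LDA over a bag-of-words representation (5 topics assumed).
bow = CountVectorizer().fit_transform(questions)
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(bow)

fused = np.hstack([dense, topics])                    # joint semantic + topic representation
print(fused.shape)
```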
15. Feature Fusion-Based Deep Learning Network to Recognize Table Tennis Actions
Authors: Chih-Ta Yen, Tz-Yun Chen, Un-Hung Chen, Guo-Chang Wang, Zong-Xian Chen. Computers, Materials & Continua (SCIE, EI), 2023, No. 1, pp. 83-99 (17 pages)
A system for classifying four basic table tennis strokes using wearable devices and deep learning networks is proposed in this study. The wearable device consisted of a six-axis sensor, a Raspberry Pi 3, and a power bank. Multiple kernel sizes were used in a convolutional neural network (CNN) to evaluate their performance for extracting features. Moreover, a multi-scale CNN with two kernel sizes was used to perform feature fusion at different scales in a concatenated manner, and this CNN achieved recognition of the four table tennis strokes. Experimental data were obtained from 20 research participants who wore sensors on the backs of their hands while performing the four table tennis strokes in a laboratory environment. The data were collected to verify the performance of the proposed models for wearable devices. Finally, the sensor and multi-scale CNN designed in this study achieved accuracy and F1 scores of 99.58% and 99.16%, respectively, for the four strokes. The accuracy under five-fold cross-validation was 99.87%, which also shows that the multi-scale convolutional neural network has better robustness after five-fold cross-validation.
Keywords: wearable devices, deep learning, six-axis sensor, feature fusion, multi-scale convolutional neural networks, action recognition
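The two-kernel-size, concatenation-based fusion described above can be illustrated with a small 1D-CNN sketch over six-axis sensor windows. Kernel sizes, channel widths, and the 128-sample window length are assumptions; this is not the paper's trained network.

```python
# Illustrative two-scale 1D CNN: parallel branches with different kernel sizes over
# six-axis IMU windows, concatenated before classification into four stroke classes.
import torch
import torch.nn as nn

class TwoScaleStrokeNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.small = nn.Sequential(nn.Conv1d(6, 32, kernel_size=3, padding=1), nn.ReLU())
        self.large = nn.Sequential(nn.Conv1d(6, 32, kernel_size=7, padding=3), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 6 sensor axes, time steps); fuse the two scales by concatenation.
        fused = torch.cat([self.small(x), self.large(x)], dim=1)
        return self.head(fused)

window = torch.randn(8, 6, 128)   # 8 windows of 128 samples from the six-axis sensor
print(TwoScaleStrokeNet()(window).shape)  # torch.Size([8, 4])
```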
16. Grasp Detection with Hierarchical Multi-Scale Feature Fusion and Inverted Shuffle Residual
Authors: Wenjie Geng, Zhiqiang Cao, Peiyu Guan, Fengshui Jing, Min Tan, Junzhi Yu. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 244-256 (13 pages)
Grasp detection plays a critical role in robot manipulation. Mainstream pixel-wise grasp detection networks with an encoder-decoder structure receive much attention due to their good accuracy and efficiency. However, they usually transmit only the high-level feature in the encoder to the decoder, and low-level features are neglected. It is noted that low-level features contain abundant detail information, and how to fully exploit them remains unsolved. Meanwhile, the channel information in the high-level feature is also not well mined. Inevitably, the performance of grasp detection is degraded. To solve these problems, we propose a grasp detection network with hierarchical multi-scale feature fusion and an inverted shuffle residual. Both low-level and high-level features in the encoder are first fused by the designed skip connections with an attention module, and the fused information is then propagated to the corresponding layers of the decoder for in-depth feature fusion. Such hierarchical fusion guarantees the quality of grasp prediction. Furthermore, an inverted shuffle residual module is created, where the high-level feature from the encoder is split along the channel dimension and the resultant splits are processed in their respective branches. This differentiated processing keeps more high-dimensional channel information, which enhances the representation ability of the network. Besides, an information enhancement module is added before the encoder to reinforce the input information. The proposed method attains 98.9% and 97.8% image-wise and object-wise accuracy on the Cornell grasping dataset, respectively, and the experimental results verify the effectiveness of the method.
Keywords: grasp detection, hierarchical multi-scale feature fusion, skip connections with attention, inverted shuffle residual
17. Multiscale feature learning and attention mechanism for infrared and visible image fusion
Authors: GAO Li, LUO DeLin, WANG Song. Science China (Technological Sciences) (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 408-422 (15 pages)
Current fusion methods for infrared and visible images tend to extract features at a single scale, which results in insufficient detail and incomplete feature preservation. To address these issues, we propose an infrared and visible image fusion network based on multiscale feature learning and an attention mechanism (MsAFusion). A multiscale dilated convolution framework is employed to capture image features across various scales and broaden the perceptual scope. Furthermore, an attention network is introduced to enhance the focus on salient targets in infrared images and detailed textures in visible images. To compensate for information loss during convolution, skip connections are utilized during the image reconstruction phase. The fusion process uses a combined loss function consisting of pixel loss and gradient loss for unsupervised fusion of infrared and visible images. Extensive experiments on a dataset of electricity facilities demonstrate that our proposed method outperforms nine state-of-the-art methods in terms of visual perception and four objective evaluation metrics.
Keywords: infrared and visible images, image fusion, attention mechanism, CNN, feature extraction
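The combined pixel-plus-gradient loss mentioned for MsAFusion can be illustrated as follows; the Sobel gradient operator, the element-wise maximum over the two sources as the target, and the weighting factor are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of an unsupervised fusion loss: pixel intensity term + gradient term.
import torch
import torch.nn.functional as F

def sobel_gradient(img: torch.Tensor) -> torch.Tensor:
    # img: (B, 1, H, W); returns |dI/dx| + |dI/dy| via fixed Sobel kernels.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    return F.conv2d(img, kx, padding=1).abs() + F.conv2d(img, ky, padding=1).abs()

def fusion_loss(fused, infrared, visible, lam: float = 10.0):
    # Pixel term: keep the brighter (more salient) intensities of the two sources.
    pixel = F.l1_loss(fused, torch.maximum(infrared, visible))
    # Gradient term: keep the sharper texture of the two sources.
    grad = F.l1_loss(sobel_gradient(fused),
                     torch.maximum(sobel_gradient(infrared), sobel_gradient(visible)))
    return pixel + lam * grad

ir, vis = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
print(fusion_loss((ir + vis) / 2, ir, vis).item())
```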
18. Low-light enhancement method with dual branch feature fusion and learnable regularized attention
Authors: Yixiang Sun, Mengyao Ni, Ming Zhao, Zhenyu Yang, Yuanlong Peng, Danhua Cao. Frontiers of Optoelectronics (EI, CSCD), 2024, No. 3, pp. 93-111 (19 pages)
Restricted by lighting conditions, images captured at night tend to suffer from color aberration, noise, and other unfavorable factors, making them difficult to use for subsequent vision-based applications. To solve this problem, we propose a two-stage size-controllable low-light enhancement method, named Dual Fusion Enhancement Net (DFEN). The whole algorithm is built on a double U-Net structure, implementing brightness adjustment and detail revision respectively. A dual branch feature fusion module is adopted to enhance its feature extraction and aggregation ability. We also design a learnable regularized attention module to balance the enhancement effect across different regions. Besides, we introduce a cosine training strategy to smooth the transition of the training target from the brightness adjustment stage to the detail revision stage during training. The proposed DFEN is tested on several low-light datasets, and the experimental results demonstrate that the algorithm achieves superior enhancement results with similar parameters. It is worth noting that the lightest DFEN model reaches 11 FPS for an image size of 1224×1024 on an RTX 3090 GPU.
Keywords: power inspection, low-light enhancement, feature fusion, learnable regularized attention
19. Multi-Scale Feature Fusion Model for Bridge Appearance Defect Detection
Authors: Rong Pang, Yan Yang, Aiguo Huang, Yan Liu, Peng Zhang, Guangwu Tang. Big Data Mining and Analytics (EI, CSCD), 2024, No. 1, pp. 1-11 (11 pages)
Although the Faster Region-based Convolutional Neural Network (Faster R-CNN) model has obvious advantages in defect recognition, it still cannot overcome challenging problems such as time-consuming detection, small targets, irregular shapes, and strong noise interference in bridge defect detection. To deal with these issues, this paper proposes a novel Multi-scale Feature Fusion (MFF) model for bridge appearance defect detection. First, the Faster R-CNN model adopts Region Of Interest (ROI) pooling, which omits the edge information of the target area, resulting in some missed detections and inaccuracies in both detecting and localizing bridge defects. Therefore, this paper proposes an MFF based on regional feature aggregation (MFF-A), which reduces the missed detection rate of bridge defect detection and improves the positioning accuracy of the target area. Second, the Faster R-CNN model is insensitive to small targets, irregular shapes, and strong noise in bridge defect detection, which results in a long training time and low recognition accuracy. Accordingly, a novel lightweight MFF model (MFF-L) for bridge appearance defect detection is proposed, using the lightweight network EfficientNetV2 and a feature pyramid network to fuse multi-scale features, shortening training and improving recognition accuracy. Finally, the effectiveness of the proposed method is evaluated on the bridge defect dataset and a public computational fluid dynamics dataset.
Keywords: defect detection, multi-scale feature fusion (MFF), Region Of Interest (ROI) alignment, lightweight network
20. Airport Runway Crack Segmentation Combining Self-Attention and Axial-Attention (cited 1 time)
Authors: Li Haifeng, Fan Tianxiao, Huang Rui, Hou Jinyi, Gui Zhongcheng. 《郑州大学学报(理学版)》 (Journal of Zhengzhou University, Natural Science Edition), CAS, PKU Core, 2023, No. 4, pp. 30-38 (9 pages)
Airport runway cracks vary in shape, direction, length, and width, and usually exhibit no statistical regularity, so existing crack segmentation algorithms are difficult to deploy in such complex scenes. To address this problem, an airport runway crack segmentation network combining self-attention and axial-attention (CSA-net) is proposed. By introducing a self-attention module, an axial-attention module, and a deformable convolution module, it extracts both local features and global semantic features of cracks. A transformer decoder restores the feature map to its original size, and the segmentation results at different scales are fused to preserve as much detail as possible, giving CSA-net better segmentation accuracy. Tests on a dataset of real airport runway images show that the pixel-level crack segmentation F1-score reaches 78.91%, higher than that of existing crack segmentation algorithms.
Keywords: artificial intelligence, CSA-net, self-attention, airport runway crack segmentation, axial attention, feature fusion