Journal Articles
808 articles found
1. An improved deep dilated convolutional neural network for seismic facies interpretation
Authors: Na-Xia Yang, Guo-Fa Li, Ting-Hui Li, Dong-Feng Zhao, Wei-Wei Gu. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1569-1583 (15 pages).
With the successful application and breakthrough of deep learning technology in image segmentation, there has been continuous development in the field of seismic facies interpretation using convolutional neural networks. These intelligent and automated methods significantly reduce manual labor, particularly in the laborious task of manually labeling seismic facies. However, the extensive demand for training data imposes limitations on their wider application. To overcome this challenge, we adopt the UNet architecture as the foundational network structure for seismic facies classification, which has demonstrated effective segmentation results even with small-sample training data. Additionally, we integrate spatial pyramid pooling and dilated convolution modules into the network architecture to enhance the perception of spatial information across a broader range. The seismic facies classification test on the public data from the F3 block verifies the superior performance of our proposed improved network structure in delineating seismic facies boundaries. Comparative analysis against the traditional UNet model reveals that our method achieves more accurate predictive classification results, as evidenced by various evaluation metrics for image segmentation; the classification accuracy reaches 96%. Furthermore, the results of seismic facies classification in the seismic slice dimension provide further confirmation of the superior performance of our proposed method, which accurately defines the range of different seismic facies. This approach holds significant potential for analyzing geological patterns and extracting valuable depositional information.
Keywords: seismic facies interpretation; dilated convolution; spatial pyramid pooling; internal feature maps; compound loss function
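The entry above pairs spatial pyramid pooling with dilated convolution inside a UNet backbone. The abstract does not give the module layout, so the following is only a minimal PyTorch sketch of an ASPP-style block with parallel dilated branches and an image-level pooling branch; the dilation rates, channel counts, and input size are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedSpatialPyramid(nn.Module):
    """Parallel dilated-convolution branches plus a global-pooling branch,
    fused by a 1x1 convolution (ASPP-like). Dilation rates are assumptions."""
    def __init__(self, in_ch, out_ch, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        # Image-level context branch: global average pooling + 1x1 conv.
        self.global_branch = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.ReLU(inplace=True),
        )
        self.fuse = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, 1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [b(x) for b in self.branches]
        g = F.interpolate(self.global_branch(x), size=(h, w), mode="bilinear",
                          align_corners=False)
        return self.fuse(torch.cat(feats + [g], dim=1))

# Example: a toy 4-channel seismic feature map of size 128x128.
x = torch.randn(1, 4, 128, 128)
print(DilatedSpatialPyramid(4, 16)(x).shape)  # torch.Size([1, 16, 128, 128])
```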
2. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. Journal of Computer and Communications, 2024, Issue 2, pp. 173-200 (28 pages).
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet; image classification; lightweight convolutional neural network; depthwise dilated separable convolution; hierarchical multi-scale feature fusion
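The DDSC layer above is described only at a high level; a minimal sketch of a depthwise dilated separable convolution (a per-channel dilated convolution followed by a pointwise 1x1 convolution) might look like the following in PyTorch. The kernel size, dilation rate, and channel counts are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class DepthwiseDilatedSeparableConv(nn.Module):
    """Depthwise dilated convolution (one filter per channel, enlarged
    receptive field) followed by a pointwise 1x1 convolution mixing channels."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2  # keep spatial size
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Parameter comparison against a standard 3x3 convolution (32 -> 64 channels):
standard = nn.Conv2d(32, 64, 3, padding=1, bias=False)
ddsc = DepthwiseDilatedSeparableConv(32, 64)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(standard), count(ddsc))  # the separable version is much smaller
```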
3. Two Stages Segmentation Algorithm of Breast Tumor in DCE-MRI Based on Multi-Scale Feature and Boundary Attention Mechanism
Authors: Bing Li, Liangyu Wang, Xia Liu, Hongbin Fan, Bo Wang, Shoudi Tong. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1543-1561 (19 pages).
Nuclear magnetic resonance imaging of breasts often presents complex backgrounds. Breast tumors exhibit varying sizes, uneven intensity, and indistinct boundaries. These characteristics can lead to challenges such as low accuracy and incorrect segmentation during tumor segmentation. Thus, we propose a two-stage breast tumor segmentation method leveraging multi-scale features and boundary attention mechanisms. Initially, the breast region of interest is extracted to isolate the breast area from surrounding tissues and organs. Subsequently, we devise a fusion network incorporating multi-scale features and boundary attention mechanisms for breast tumor segmentation. We incorporate multi-scale parallel dilated convolution modules into the network, enhancing its capability to segment tumors of various sizes through multi-scale convolution and novel fusion techniques. Additionally, attention and boundary detection modules are included to augment the network's capacity to locate tumors by capturing nonlocal dependencies in both spatial and channel domains. Furthermore, a hybrid loss function with boundary weight is employed to address sample class imbalance issues and enhance the network's boundary maintenance capability through additional loss. The method was evaluated using breast data from 207 patients at Ruijin Hospital, resulting in a 6.64% increase in Dice similarity coefficient compared to the benchmark U-Net. Experimental results demonstrate the superiority of the method over other segmentation techniques, with fewer model parameters.
Keywords: dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI); breast tumor segmentation; multi-scale dilated convolution; boundary attention; hybrid loss function with boundary weight
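The abstract mentions a hybrid loss with boundary weight for class imbalance and boundary maintenance but does not give its form. The snippet below is only one plausible reading, assuming a Dice term plus a pixel-wise BCE term whose weights are raised near the ground-truth boundary; the weighting scheme and the `boundary_w` coefficient are assumptions, not the published formulation.

```python
import torch
import torch.nn.functional as F

def boundary_weight_map(mask, boundary_w=5.0):
    """Assumed scheme: weight 1 everywhere, boundary_w on pixels whose 3x3
    neighborhood is not uniform (i.e., near the ground-truth boundary)."""
    m = mask.float()
    local_max = F.max_pool2d(m, 3, stride=1, padding=1)
    local_min = -F.max_pool2d(-m, 3, stride=1, padding=1)
    boundary = (local_max != local_min).float()
    return 1.0 + (boundary_w - 1.0) * boundary

def hybrid_loss(logits, mask, boundary_w=5.0, eps=1e-6):
    prob = torch.sigmoid(logits)
    # Dice term handles foreground/background imbalance.
    inter = (prob * mask).sum()
    dice = 1.0 - (2 * inter + eps) / (prob.sum() + mask.sum() + eps)
    # Boundary-weighted BCE term emphasizes pixels near the tumor contour.
    w = boundary_weight_map(mask, boundary_w)
    bce = F.binary_cross_entropy_with_logits(logits, mask.float(), weight=w)
    return dice + bce

logits = torch.randn(2, 1, 64, 64)               # raw network outputs
mask = (torch.rand(2, 1, 64, 64) > 0.8).float()  # toy ground-truth masks
print(hybrid_loss(logits, mask).item())
```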
4. MSSTNet: Multi-scale facial videos pulse extraction network based on separable spatiotemporal convolution and dimension separable attention
Authors: Changchen ZHAO, Hongsheng WANG, Yuanjing FENG. Virtual Reality & Intelligent Hardware, 2023, Issue 2, pp. 124-141 (18 pages).
Background: The use of remote photoplethysmography (rPPG) to estimate blood volume pulse in a noncontact manner has been an active research topic in recent years. Existing methods are primarily based on a single-scale region of interest (ROI). However, some noise signals that are not easily separated in a single-scale space can be easily separated in a multi-scale space. Also, existing spatiotemporal networks mainly focus on local spatiotemporal information and do not emphasize temporal information, which is crucial in pulse extraction problems, resulting in insufficient spatiotemporal feature modeling. Methods: Here, we propose a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution (SSTC) and dimension separable attention (DSAT). First, to solve the problem of a single-scale ROI, we constructed a multi-scale feature space for initial signal separation. Second, SSTC and DSAT were designed for efficient spatiotemporal correlation modeling, which increased the information interaction between the long-span time and space dimensions; this placed more emphasis on temporal features. Results: The signal-to-noise ratio (SNR) of the proposed network reached 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, outperforming state-of-the-art algorithms. Conclusions: The results showed that fusing multi-scale signals yielded better results than methods based on only single-scale signals. The proposed SSTC and dimension-separable attention mechanism will contribute to more accurate pulse signal extraction.
Keywords: remote photoplethysmography; heart rate; separable spatiotemporal convolution; dimension separable attention; multi-scale; neural network
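Separable spatiotemporal convolution (SSTC) is described only in outline above, so the block below is a generic (2+1)D-style factorization: a spatial 1 x k x k convolution followed by a temporal k x 1 x 1 convolution over a video-clip tensor. Kernel sizes and channel widths are assumptions, not the MSSTNet configuration.

```python
import torch
import torch.nn as nn

class SeparableSpatiotemporalConv(nn.Module):
    """Factorize a 3D convolution into a spatial pass (1 x k x k) and a
    temporal pass (k x 1 x 1), similar in spirit to (2+1)D convolutions."""
    def __init__(self, in_ch, out_ch, k_spatial=3, k_temporal=3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch,
                                 kernel_size=(1, k_spatial, k_spatial),
                                 padding=(0, k_spatial // 2, k_spatial // 2))
        self.temporal = nn.Conv3d(out_ch, out_ch,
                                  kernel_size=(k_temporal, 1, 1),
                                  padding=(k_temporal // 2, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):          # x: (batch, channels, time, height, width)
        return self.act(self.temporal(self.act(self.spatial(x))))

# A toy facial video clip: 3-channel RGB, 30 frames, 64x64 pixels.
clip = torch.randn(1, 3, 30, 64, 64)
print(SeparableSpatiotemporalConv(3, 16)(clip).shape)  # (1, 16, 30, 64, 64)
```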
5. Long Text Classification Algorithm Using a Hybrid Model of Bidirectional Encoder Representation from Transformers-Hierarchical Attention Networks-Dilated Convolutions Network (Cited by 1)
Authors: ZHAO Yuanyuan, GAO Shining, LIU Yang, GONG Xiaohui. Journal of Donghua University (English Edition) (CAS), 2021, Issue 4, pp. 341-350 (10 pages).
Text accounts for most of the resources on the Internet, which places ever higher requirements on the accuracy of text classification. Therefore, in this manuscript, we firstly design a hybrid model combining bidirectional encoder representations from transformers, hierarchical attention networks, and dilated convolutional networks (BERT_HAN_DCN), built on a pre-trained BERT model with a strong ability to extract features. The advantages of the HAN and DCN models are exploited to obtain rich semantic information, fusing contextual semantic features and hierarchical characteristics. Secondly, the traditional softmax increases the learning difficulty for samples of the same class, making similar features harder to distinguish; based on this, AM-softmax is introduced to replace the traditional softmax. Finally, the fused model is validated: the hybrid model achieves superior accuracy and F1-score on two datasets compared with general single models such as HAN and DCN built on the pre-trained BERT model, and the improved AM-softmax network model is superior to the general softmax network model.
Keywords: long text classification; dilated convolution; BERT; fusing context semantic features; hierarchical characteristics; BERT_HAN_DCN; AM-softmax
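AM-softmax is a published additive-margin variant of softmax, so the head below follows the standard recipe: cosine similarities between L2-normalized features and class weights, a margin m subtracted from the target-class similarity, and a scale s before cross-entropy. The margin and scale values here are common defaults, not necessarily those used in BERT_HAN_DCN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AMSoftmaxHead(nn.Module):
    """Additive-margin softmax: logits are scaled cosine similarities, with a
    margin m subtracted from the true-class similarity before cross-entropy."""
    def __init__(self, feat_dim, num_classes, s=30.0, m=0.35):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.s, self.m = s, m

    def forward(self, features, labels):
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.s * (cos - self.m * onehot)   # margin only on true class
        return F.cross_entropy(logits, labels)

# Toy usage: 768-dim document embeddings (e.g., from BERT) over 4 classes.
feats = torch.randn(8, 768)
labels = torch.randint(0, 4, (8,))
print(AMSoftmaxHead(768, 4)(feats, labels).item())
```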
6. A multi-scale convolutional auto-encoder and its application in fault diagnosis of rolling bearings (Cited by 10)
Authors: Ding Yunhao, Jia Minping. Journal of Southeast University (English Edition) (EI, CAS), 2019, Issue 4, pp. 417-423 (7 pages).
Aiming at the difficulty of fault identification caused by manual extraction of fault features of rotating machinery, a one-dimensional multi-scale convolutional auto-encoder fault diagnosis model is proposed, based on the standard convolutional auto-encoder. In this model, parallel convolutional and deconvolutional kernels of different scales are used to extract features from the input signal and reconstruct the input signal; then the feature map extracted by the multi-scale convolutional kernels is used as the input of the classifier; and finally the parameters of the whole model are fine-tuned using labeled data. Experiments on one set of simulated fault data and two sets of rolling bearing fault data are conducted to validate the proposed method. The results show that the model can achieve 99.75%, 99.3%, and 100% diagnostic accuracy, respectively. In addition, the diagnostic accuracy and reconstruction error of the one-dimensional multi-scale convolutional auto-encoder are compared with those of traditional machine learning, convolutional neural networks, and a traditional convolutional auto-encoder. The final results show that the proposed model has a better recognition effect on rolling bearing fault data.
Keywords: fault diagnosis; deep learning; convolutional auto-encoder; multi-scale convolutional kernel; feature extraction
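The model above applies parallel convolutional and deconvolutional kernels of different scales to one-dimensional signals. The sketch below mirrors that idea with three parallel Conv1d encoders and matching ConvTranspose1d decoders; all kernel sizes, strides, and channel counts are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class MultiScaleConvAutoEncoder(nn.Module):
    """Parallel 1D convolutional encoders with different kernel sizes; the
    concatenated multi-scale features are decoded back with transposed convs."""
    def __init__(self, kernels=(8, 16, 32), ch=16, stride=4):
        super().__init__()
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv1d(1, ch, k, stride=stride, padding=k // 2),
                          nn.ReLU(inplace=True))
            for k in kernels
        ])
        self.decoders = nn.ModuleList([
            nn.ConvTranspose1d(ch, 1, k, stride=stride, padding=k // 2)
            for k in kernels
        ])

    def forward(self, x):                       # x: (batch, 1, signal_length)
        feats = [enc(x) for enc in self.encoders]
        recons = [dec(f, output_size=x.shape) for dec, f in
                  zip(self.decoders, feats)]
        recon = torch.stack(recons).mean(0)     # average the reconstructions
        return torch.cat(feats, dim=1), recon   # features for the classifier

signal = torch.randn(4, 1, 2048)                # toy vibration segments
features, reconstruction = MultiScaleConvAutoEncoder()(signal)
print(features.shape, reconstruction.shape)
```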
7. Advanced Face Mask Detection Model Using Hybrid Dilation Convolution Based Method (Cited by 1)
Authors: Shaohan Wang, Xiangyu Wang, Xin Guo. Journal of Software Engineering and Applications, 2023, Issue 1, pp. 1-19 (19 pages).
A face-mask object detection model incorporating a hybrid dilation convolutional network, termed ResNet Hybrid-dilation-convolution Face-mask-detector (RHF), is proposed in this paper. Furthermore, a lightweight face-mask dataset named Light Masked Face Dataset (LMFD) and a medium-sized face-mask dataset named Masked Face Dataset (MFD), with data augmentation methods applied, are also constructed in this paper. The hybrid dilation convolutional network is able to expand the receptive field of the convolutional kernel without losing the continuity of image information during the convolution process. On the two datasets constructed above, the trained models are significantly improved in terms of detection performance, training time, and other related metrics. Using the MFD dataset of 55,905 images, the RHF model requires roughly 10 hours less training time than ResNet50 while achieving better detection results, with an mAP of 93.45%.
Keywords: face mask detection; object detection; hybrid dilation convolution; computer vision
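Hybrid dilation convolution widens the receptive field while avoiding the gridding effect by stacking dilated convolutions whose rates share no common factor (a common choice is 1, 2, 5). The block below is a generic sketch of that idea with a residual connection; the rates and channel width are assumptions, not the RHF configuration.

```python
import torch
import torch.nn as nn

class HybridDilationBlock(nn.Module):
    """Stack 3x3 dilated convolutions whose rates share no common factor
    (e.g., 1, 2, 5), so the stacked receptive field has no gridding holes."""
    def __init__(self, channels, rates=(1, 2, 5)):
        super().__init__()
        layers = []
        for r in rates:
            layers += [nn.Conv2d(channels, channels, 3, padding=r, dilation=r,
                                 bias=False),
                       nn.BatchNorm2d(channels),
                       nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x) + x     # residual connection keeps training stable

# Receptive field of the stack: each 3x3 conv with dilation r adds 2*r pixels.
rates = (1, 2, 5)
rf = 1 + sum(2 * r for r in rates)
print("receptive field:", rf)       # 17x17 before any downsampling
x = torch.randn(1, 64, 56, 56)
print(HybridDilationBlock(64)(x).shape)
```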
8. DcNet: Dilated Convolutional Neural Networks for Side-Scan Sonar Image Semantic Segmentation (Cited by 2)
Authors: ZHAO Xiaohong, QIN Rixia, ZHANG Qilei, YU Fei, WANG Qi, HE Bo. Journal of Ocean University of China (SCIE, CAS, CSCD), 2021, Issue 5, pp. 1089-1096 (8 pages).
In ocean explorations, side-scan sonar (SSS) plays a very important role and can quickly depict seabed topography. Assembling the SSS to an autonomous underwater vehicle (AUV) and performing semantic segmentation of an SSS image in real time can realize online submarine geomorphology or target recognition, which is conducive to submarine detection. However, because of the complexity of the marine environment, various noises in the ocean pollute the sonar image, which also encounters the intensity inhomogeneity problem. In this paper, we propose a novel neural network architecture named dilated convolutional neural network (DcNet) that can run in real time while addressing the above-mentioned issues and providing accurate semantic segmentation. The proposed architecture presents an encoder-decoder network to gradually reduce the spatial dimension of the input image and recover the details of the target, respectively. The core of our network is a novel block connection named DCblock, which mainly uses dilated convolution and depthwise separable convolution between the encoder and decoder to attain more context while still retaining high accuracy. Furthermore, our proposed method performs a super-resolution reconstruction to enlarge the dataset with high-quality images. We compared our network to other common semantic segmentation networks performed on an NVIDIA Jetson TX2 using our sonar image datasets. Experimental results show that while the inference speed of the proposed network significantly outperforms state-of-the-art architectures, the accuracy of our method is still comparable, which indicates its potential applications not only in AUVs equipped with SSS but also in marine exploration.
Keywords: side-scan sonar (SSS); semantic segmentation; dilated convolutions; super-resolution
9. Multi-Scale Convolutional Gated Recurrent Unit Networks for Tool Wear Prediction in Smart Manufacturing (Cited by 2)
Authors: Weixin Xu, Huihui Miao, Zhibin Zhao, Jinxin Liu, Chuang Sun, Ruqiang Yan. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2021, Issue 3, pp. 130-145 (16 pages).
As an integrated application of modern information technologies and artificial intelligence, Prognostic and Health Management (PHM) is important for machine health monitoring. Prediction of tool wear is one of the symbolic applications of PHM technology in modern manufacturing systems and industry. In this paper, a multi-scale Convolutional Gated Recurrent Unit network (MCGRU) is proposed to address raw sensory data for tool wear prediction. At the bottom of MCGRU, six parallel and independent branches with different kernel sizes are designed to form a multi-scale convolutional neural network, which augments the adaptability to features of different time scales. These features of different scales extracted from raw data are then fed into a Deep Gated Recurrent Unit network to capture long-term dependencies and learn significant representations. At the top of the MCGRU, a fully connected layer and a regression layer are built for cutting tool wear prediction. Two case studies are performed to verify the capability and effectiveness of the proposed MCGRU network, and results show that MCGRU outperforms several state-of-the-art baseline models.
Keywords: tool wear prediction; multi-scale convolutional neural networks; gated recurrent unit
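MCGRU feeds multi-scale convolutional features into a deep GRU. The sketch below reproduces that structure in reduced form, assuming three branches instead of six, a single GRU layer, and placeholder kernel sizes and channel widths.

```python
import torch
import torch.nn as nn

class MiniMCGRU(nn.Module):
    """Parallel 1D conv branches with different kernel sizes extract
    multi-scale features from raw sensor signals; a GRU models temporal
    dependencies and a regression head predicts tool wear."""
    def __init__(self, in_ch=1, branch_ch=8, kernels=(3, 7, 15), hidden=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv1d(in_ch, branch_ch, k, padding=k // 2),
                          nn.ReLU(inplace=True),
                          nn.MaxPool1d(4))
            for k in kernels
        ])
        self.gru = nn.GRU(branch_ch * len(kernels), hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, channels, length)
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        seq = feats.transpose(1, 2)            # (batch, time, features)
        out, _ = self.gru(seq)
        return self.head(out[:, -1])           # wear value from the last step

signal = torch.randn(2, 1, 1024)               # toy force/vibration segments
print(MiniMCGRU()(signal).shape)               # torch.Size([2, 1])
```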
10. Clothing Parsing Based on Multi-Scale Fusion and Improved Self-Attention Mechanism
Authors: 陈诺, 王绍宇, 陆然, 李文萱, 覃志东, 石秀金. Journal of Donghua University (English Edition) (CAS), 2023, Issue 6, pp. 661-666 (6 pages).
Due to the lack of long-range association and spatial location information, fine details and accurate boundaries of complex clothing images cannot always be obtained by using the existing deep learning-based methods. This paper presents a convolutional structure with multi-scale fusion to optimize the step of clothing feature extraction and a self-attention module to capture long-range association information. The structure enables the self-attention mechanism to directly participate in the process of information exchange through the down-scaling projection operation of the multi-scale framework. In addition, the improved self-attention module introduces the extraction of 2-dimensional relative position information to make up for its lack of ability to extract spatial position features from clothing images. The experimental results based on the colorful fashion parsing dataset (CFPD) show that the proposed network structure achieves 53.68% mean intersection over union (mIoU) and has better performance on the clothing parsing task.
Keywords: clothing parsing; convolutional neural network; multi-scale fusion; self-attention mechanism; vision Transformer
11. Convolution-Transformer for Image Feature Extraction
Authors: Lirong Yin, Lei Wang, Siyu Lu, Ruiyang Wang, Youshuai Yang, Bo Yang, Shan Liu, Ahmed AlSanad, Salman A. AlQahtani, Zhengtong Yin, Xiaolu Li, Xiaobing Chen, Wenfeng Zheng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 87-106 (20 pages).
This study addresses the limitations of Transformer models in image feature extraction, particularly their lack of inductive bias for visual structures. Compared with convolutional neural networks (CNNs), Transformers are more sensitive to the hyperparameters of optimizers, which leads to a lack of stability and slow convergence. To tackle these challenges, we propose the Convolution-based Efficient Transformer Image Feature Extraction Network (CEFormer) as an enhancement of the Transformer architecture. Our model incorporates E-Attention, depthwise separable convolution, and dilated convolution to introduce crucial inductive biases, such as translation invariance, locality, and scale invariance, into the Transformer framework. Additionally, we implement a lightweight convolution module to process the input images, resulting in faster convergence and improved stability. The result is an efficient image feature extraction network that combines convolution with the Transformer. Experimental results on the ImageNet1k dataset demonstrate that the proposed network achieves better Top-1 accuracy while maintaining high computational speed. It achieves up to 85.0% accuracy across various model sizes on image classification, outperforming various baseline models. When integrated into the Mask Region-based Convolutional Neural Network (R-CNN) framework as a backbone network, CEFormer outperforms other models and achieves the highest mean Average Precision (mAP) scores. This research presents a significant advancement in Transformer-based image feature extraction, balancing performance and computational efficiency.
Keywords: Transformer; E-Attention; depthwise convolution; dilated convolution; CEFormer
12. TSCND: Temporal Subsequence-Based Convolutional Network with Difference for Time Series Forecasting
Authors: Haoran Huang, Weiting Chen, Zheming Fan. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3665-3681 (17 pages).
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, whereas recent input information will be severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and the vanilla TCN.
Keywords: difference; data prediction; time series; temporal convolutional network; dilated convolution
13. Pedestrian attribute classification with multi-scale and multi-label convolutional neural networks
Authors: 朱建清, Zeng Huanqiang, Zhang Yuzhao, Zheng Lixin, Cai Canhui. High Technology Letters (EI, CAS), 2018, Issue 1, pp. 53-61 (9 pages).
Pedestrian attribute classification from a pedestrian image captured in surveillance scenarios is challenging due to diverse clothing appearances, varied poses, and different camera views. A multi-scale and multi-label convolutional neural network (MSMLCNN) is proposed to predict multiple pedestrian attributes simultaneously. The pedestrian attribute classification problem is first transformed into a multi-label problem including multiple binary attributes to be classified. Then, the multi-label problem is solved by fully connecting all binary attributes to multi-scale features with logistic regression functions. Moreover, the multi-scale features are obtained by concatenating the feature maps produced by multiple pooling layers of the MSMLCNN at different scales. Extensive experimental results show that the proposed MSMLCNN outperforms state-of-the-art pedestrian attribute classification methods by a large margin.
Keywords: pedestrian attribute classification; multi-scale features; multi-label classification; convolutional neural network (CNN)
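The MSMLCNN above casts attribute prediction as a multi-label problem by attaching independent logistic outputs to concatenated multi-scale features. A minimal head of that kind is sketched below; the backbone stages, channel counts, and attribute count are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiLabelAttributeHead(nn.Module):
    """Concatenate globally pooled feature maps taken at several scales and
    predict each binary pedestrian attribute with its own logistic output."""
    def __init__(self, channels_per_scale=(64, 128, 256), num_attributes=10):
        super().__init__()
        self.fc = nn.Linear(sum(channels_per_scale), num_attributes)

    def forward(self, feature_maps):            # list of (B, C_i, H_i, W_i)
        pooled = [F.adaptive_avg_pool2d(f, 1).flatten(1) for f in feature_maps]
        logits = self.fc(torch.cat(pooled, dim=1))
        return torch.sigmoid(logits)            # one probability per attribute

# Toy multi-scale maps from three stages of a CNN backbone.
maps = [torch.randn(2, 64, 32, 16), torch.randn(2, 128, 16, 8),
        torch.randn(2, 256, 8, 4)]
probs = MultiLabelAttributeHead()(maps)
print(probs.shape)                              # torch.Size([2, 10])
# Training would use a per-attribute binary loss, e.g.:
targets = torch.randint(0, 2, (2, 10)).float()
print(F.binary_cross_entropy(probs, targets).item())
```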
14. Multi-Scale Dilated Convolutional Neural Network for Hyperspectral Image Classification
Authors: Shanshan Zheng, Wen Liu, Rui Shan, Jingyi Zhao, Guoqian Jiang, Zhi Zhang. Journal of Harbin Institute of Technology (New Series) (CAS), 2021, Issue 4, pp. 25-32 (8 pages).
Aiming at the problem of image information loss, dilated convolution is introduced and a novel multi-scale dilated convolutional neural network (MDCNN) is proposed. Dilated convolution can aggregate multi-scale image information without reducing the resolution. The first layer of the network uses a spectral convolution step to reduce dimensionality. Then, multi-scale aggregation extracts multi-scale features by applying dilated convolutions and shortcut connections. The extracted features, which represent properties of the data, are fed through Softmax to predict the samples. MDCNN achieved overall accuracies of 99.58% and 99.92% on two public datasets, Indian Pines and Pavia University. Compared with four other existing models, the results illustrate that MDCNN can extract more discriminative features and achieve higher classification performance.
Keywords: multi-scale aggregation; dilated convolution; hyperspectral image classification (HSIC); shortcut connection
15. Lightweight Image Super-Resolution via Weighted Multi-Scale Residual Network (Cited by 6)
Authors: Long Sun, Zhenbing Liu, Xiyan Sun, Licheng Liu, Rushi Lan, Xiaonan Luo. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, Issue 7, pp. 1271-1280 (10 pages).
The tradeoff between efficiency and model size of the convolutional neural network (CNN) is an essential issue for applications of CNN-based algorithms to diverse real-world tasks. Although deep learning-based methods have achieved significant improvements in image super-resolution (SR), current CNN-based techniques mainly contain massive parameters and a high computational complexity, limiting their practical applications. In this paper, we present a fast and lightweight framework, named weighted multi-scale residual network (WMRN), for a better tradeoff between SR performance and computational efficiency. With the modified residual structure, depthwise separable convolutions (DS Convs) are employed to improve the efficiency of convolutional operations. Furthermore, several weighted multi-scale residual blocks (WMRBs) are stacked to enhance the multi-scale representation capability. In the reconstruction subnetwork, a group of Conv layers are introduced to filter feature maps to reconstruct the final high-quality image. Extensive experiments were conducted to evaluate the proposed model, and the comparative results with several state-of-the-art algorithms demonstrate the effectiveness of WMRN.
Keywords: convolutional neural network (CNN); lightweight framework; multi-scale; super-resolution
16. 1D-CNN: Speech Emotion Recognition System Using a Stacked Network with Dilated CNN Features (Cited by 6)
Authors: Mustaqeem, Soonil Kwon. Computers, Materials & Continua (SCIE, EI), 2021, Issue 6, pp. 4039-4059 (21 pages).
Emotion recognition from speech data is an active and emerging area of research that plays an important role in numerous applications, such as robotics, virtual reality, behavior assessments, and emergency call centers. Recently, researchers have developed many techniques in this field to improve accuracy by utilizing several deep learning approaches, but the recognition rate is still not convincing. Our main aim is to develop a new technique that increases the recognition rate at a reasonable computational cost. In this paper, we propose a one-dimensional dilated convolutional neural network (1D-DCNN) for speech emotion recognition (SER) that utilizes hierarchical feature learning blocks (HFLBs) with a bi-directional gated recurrent unit (BiGRU). We designed a one-dimensional CNN to enhance the speech signals using spectral analysis and to extract hidden patterns from the speech signals, which are fed into a stacked one-dimensional dilated network called HFLBs. Each HFLB contains one dilated convolution layer (DCL), one batch normalization (BN) layer, and one leaky ReLU layer in order to extract the emotional features using a hierarchical correlation strategy. Furthermore, the learned emotional features are fed into a BiGRU to adjust the global weights and recognize the temporal cues. The final state of the deep BiGRU is passed through a softmax classifier to produce the probabilities of the emotions. The proposed model was evaluated on three benchmark datasets, IEMOCAP, EMO-DB, and RAVDESS, achieving 72.75%, 91.14%, and 78.01% accuracy, respectively.
Keywords: affective computing; one-dimensional dilated convolutional neural network; emotion recognition; gated recurrent unit; raw audio clips
17. A Multi-Scale Network with the Encoder-Decoder Structure for CMR Segmentation (Cited by 1)
Authors: Chaoyang Xia, Jing Peng, Zongqing Ma, Xiaojie Li. Journal of Information Hiding and Privacy Protection, 2019, Issue 3, pp. 109-117 (9 pages).
Cardiomyopathy is one of the most serious public health threats. Precise structural and functional cardiac measurement is an essential step for clinical diagnosis and follow-up treatment planning. Cardiologists are often required to draw endocardial and epicardial contours of the left ventricle (LV) manually during routine clinical diagnosis or treatment planning. This task is time-consuming and error-prone. Therefore, it is necessary to develop a fully automated end-to-end semantic segmentation method for cardiac magnetic resonance (CMR) imaging datasets. However, due to the low image quality and the deformation caused by the heartbeat, there is no effective tool for the fully automated end-to-end cardiac segmentation task. In this work, we propose a multi-scale segmentation network (MSSN) for left ventricle segmentation. It can effectively learn myocardium and blood pool structure representations from 2D short-axis CMR image slices in a multi-scale way. Specifically, our method employs dilated convolution layers with different dilation rates, both in parallel and in series, to capture multi-scale semantic features. Moreover, we design graduated up-sampling layers with subpixel layers as the decoder to reconstruct lost spatial information and produce accurate segmentation masks. We validated our method using 164 T1 Mapping CMR images and showed that it outperforms advanced convolutional neural network (CNN) models, achieving a Dice Similarity Coefficient (DSC) of 78.96%.
Keywords: cardiac magnetic resonance imaging; multi-scale semantic segmentation; convolutional neural networks
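The MSSN decoder above recovers spatial detail with graduated up-sampling built on subpixel layers; a single subpixel (PixelShuffle) upsampling step in PyTorch might look like the following. The upscale factor and channel counts are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

class SubPixelUpsample(nn.Module):
    """Expand channels with a convolution, then rearrange them into space with
    PixelShuffle to upsample without transposed-convolution artifacts."""
    def __init__(self, in_ch, out_ch, upscale=2):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch * upscale ** 2, 3, padding=1)
        self.shuffle = nn.PixelShuffle(upscale)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.shuffle(self.conv(x)))

# A 64-channel 28x28 decoder feature map upsampled to 56x56.
x = torch.randn(1, 64, 28, 28)
print(SubPixelUpsample(64, 32)(x).shape)        # torch.Size([1, 32, 56, 56])
```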
18. Multi-Classification of Polyps in Colonoscopy Images Based on an Improved Deep Convolutional Neural Network (Cited by 1)
Authors: Shuang Liu, Xiao Liu, Shilong Chang, Yufeng Sun, Kaiyuan Li, Ya Hou, Shiwei Wang, Jie Meng, Qingliang Zhao, Sibei Wu, Kun Yang, Linyan Xue. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 5837-5852 (16 pages).
Achieving accurate classification of colorectal polyps during colonoscopy can avoid unnecessary endoscopic biopsy or resection. This study aimed to develop a deep learning model that can automatically classify colorectal polyps histologically on white-light and narrow-band imaging (NBI) colonoscopy images based on the World Health Organization (WHO) and Workgroup serrAted polypS and Polyposis (WASP) classification criteria for colorectal polyps. White-light and NBI colonoscopy images of colorectal polyps with pathological results were first collected and classified into four categories: conventional adenoma, hyperplastic polyp, sessile serrated adenoma/polyp (SSAP), and normal, among which conventional adenoma could be further divided into three sub-categories of tubular adenoma, villous adenoma, and villous-tubular adenoma; subsequently, the images were re-classified into six categories. In this paper, we proposed a novel convolutional neural network termed Polyp-DedNet for the four- and six-category classification tasks of colorectal polyps. Based on the existing classification network ResNet50, Polyp-DedNet adopted dilated convolution to retain more high-dimensional spatial information and an Efficient Channel Attention (ECA) module to further improve the classification performance. To eliminate gridding artifacts caused by dilated convolutions, traditional convolutional layers were used instead of the max pooling layer, and two convolutional layers with progressively decreasing dilation were added at the end of the network. Due to the inevitable imbalance of medical image data, a regularization method, DropBlock, and a Class-Balanced (CB) loss were applied to prevent network overfitting. Furthermore, 5-fold cross-validation was adopted to estimate the performance of Polyp-DedNet for the multi-classification task of colorectal polyps. Mean accuracies of the proposed Polyp-DedNet for the four- and six-category classifications of colorectal polyps were 89.91% ± 0.92% and 85.13% ± 1.10%, respectively. The metrics of precision, recall, and F1-score were also improved by 1%-2% compared with the baseline ResNet50. The proposed Polyp-DedNet presented state-of-the-art performance for colorectal polyp classification on white-light and NBI colonoscopy images, highlighting its considerable potential as an AI-assisted system for accurate colorectal polyp diagnosis in colonoscopy.
Keywords: colorectal polyps; four- and six-category classifications; convolutional neural network; dilated residual network
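Polyp-DedNet adds an Efficient Channel Attention (ECA) module to ResNet50. ECA's published form applies global average pooling, a 1D convolution across the channel descriptors, and a sigmoid gate; the sketch below follows that recipe with a fixed kernel size rather than the adaptive one, so treat it as illustrative.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pooling, a 1D convolution
    across channels, and a sigmoid gate reweighting each channel."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                         # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                    # (B, C) channel descriptors
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # 1D conv across channels
        w = torch.sigmoid(y).unsqueeze(-1).unsqueeze(-1)
        return x * w                              # reweight feature channels

feat = torch.randn(2, 256, 14, 14)                # e.g., a ResNet stage output
print(ECA()(feat).shape)                          # torch.Size([2, 256, 14, 14])
```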
19. Research on a data-driven crack identification model for semi-infinite media (Cited by 1)
Authors: 江守燕, 邓王涛, 孙立国, 杜成斌. 力学学报 (Chinese Journal of Theoretical and Applied Mechanics; EI, CAS, CSCD, Peking University Core), 2024, Issue 6, pp. 1727-1739 (13 pages).
Defect identification is an important research topic in structural health monitoring and provides important guidance for assessing the safety of engineering structures; however, accurately determining the size of structural defects is very difficult. This paper proposes an innovative data-driven algorithm that combines the scaled boundary finite element method (SBFEM) with an autoencoder (AE) and a causal dilated convolutional neural network (CDCNN) for crack identification in semi-infinite media. In this model, SBFEM is used to simulate wave propagation in semi-infinite media containing different crack-like defects; for different crack-like defects, only the scaling center at the crack tip and the positions of the nodes at the crack opening need to be changed, which avoids complex remeshing and allows sufficient training data to be generated efficiently. When simulating wave propagation in the semi-infinite medium, an absorbing boundary model based on Rayleigh damping is established, which avoids computing a full-domain model of the structure. The CDCNN is constructed to preserve the ordering of the time-series data and to obtain a larger receptive field without increasing the complexity of the neural network, so that more historical information can be captured. The AE has strong nonlinear feature extraction capability and can map the high-dimensional raw input feature space to a low-dimensional latent feature space; the low-dimensional latent features are then used for network training, which effectively improves the learning efficiency of the network model. Numerical examples show that the proposed model can efficiently and accurately identify quantitative information about cracks in semi-infinite media, and the identification efficiency of the AE-CDCNN model is about 2.7 times higher than that of a single CDCNN model.
Keywords: data-driven; scaled boundary finite element method; autoencoder; causal dilated convolutional neural network; crack identification
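A causal dilated convolution keeps each output dependent only on current and past samples while the dilation enlarges the receptive field without adding depth or width. Since the AE-CDCNN layer sizes are not given above, the block below is only a generic TCN-style sketch with assumed channel widths and dilation rates.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalDilatedConv1d(nn.Module):
    """Left-pad the input so each output sample sees only past and present
    samples; dilation spaces the taps to widen the receptive field."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # pad only on the left
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                         # x: (batch, channels, time)
        x = F.pad(x, (self.pad, 0))
        return self.conv(x)

# Stacking dilations 1, 2, 4, 8 gives a receptive field of 1 + 2*(1+2+4+8) = 31.
layers = nn.Sequential(*[
    nn.Sequential(CausalDilatedConv1d(1 if d == 1 else 16, 16, dilation=d),
                  nn.ReLU(inplace=True))
    for d in (1, 2, 4, 8)
])
wave = torch.randn(1, 1, 500)                     # a simulated wave signal
print(layers(wave).shape)                         # torch.Size([1, 16, 500])
```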
20. A variable-scale VS-UNet model for road crack detection (Cited by 1)
Authors: 赵志宏, 何朋, 郝子晔. 湖南大学学报(自然科学版) (Journal of Hunan University, Natural Sciences; EI, CAS, CSCD, Peking University Core), 2024, Issue 6, pp. 63-72 (10 pages).
To address the low detection accuracy of existing image segmentation algorithms and their lack of specialization for crack detection, a multi-scale feature fusion approach is adopted and an extended LG Block module, Extend-LG Block, is proposed, which consists of multiple parallel dilated convolutions with different dilation rates. The number of branches and the dilation rates can be adjusted via parameters, which changes the receptive field size and thereby extracts and fuses crack features at different scales. After comparing the strengths and weaknesses of networks that apply multi-scale feature fusion modules in deep layers and networks that perform multi-scale feature fusion with a fixed-scale structure, a UNet model with a variable-scale structure, VS-UNet, is proposed, in which multiple Extend-LG Blocks with different parameters replace the basic convolution blocks of UNet. This structure performs multi-scale feature fusion in the shallow layers of the network, and the number of scales extracted by the multi-scale feature fusion modules gradually decreases as the network deepens. The structure strengthens the extraction of fine image details while preserving the original abstract feature extraction capability and avoids increasing the number of network parameters. Experiments on the DeepCrack and CFD datasets show that, compared with the other two structures and methods, the proposed variable-scale network achieves higher detection accuracy and, in visual comparisons, better segmentation of cracks of various sizes. Finally, comparison with other image segmentation algorithms shows improvements over UNet on all metrics, demonstrating the effectiveness of the network improvements. The results provide a reference for further improving road crack detection.
Keywords: U-Net; multi-scale; crack detection; dilated convolution; deep learning