Journal Articles
6,743 articles found
1. Chinese Clinical Named Entity Recognition Using Multi-Feature Fusion and Multi-Scale Local Context Enhancement
Authors: Meijing Li, Runqing Huang, Xianxian Qi. 《Computers, Materials & Continua》, SCIE EI, 2024, Issue 8: 2283-2299 (17 pages)
Chinese Clinical Named Entity Recognition (CNER) is a crucial step in extracting medical information and is of great significance in promoting medical informatization. However, CNER poses challenges due to the specificity of clinical terminology, the complexity of Chinese text semantics, and the uncertainty of Chinese entity boundaries. To address these issues, we propose an improved CNER model based on multi-feature fusion and multi-scale local context enhancement. The model fuses multi-feature representations of pinyin, radical, part of speech (POS), and word boundary with BERT deep contextual representations to enhance the semantic representation of the text for more effective entity recognition. Furthermore, to address the model's limitation of focusing only on global features, we incorporate convolutional neural networks (CNNs) with various kernel sizes to capture multi-scale local features of the text and improve the model's comprehension of it. Finally, we integrate the global and local features and employ a multi-head attention (MHA) mechanism to strengthen the model's focus on characters associated with medical entities, thereby boosting performance. We obtained F1 scores of 92.74% and 87.80% on the two CNER benchmark datasets, CCKS2017 and CCKS2019, respectively. The results demonstrate that our model outperforms the latest CNER models, showcasing its outstanding overall performance. The proposed CNER model thus has important application value in constructing clinical medical knowledge graphs and intelligent Q&A systems.
Keywords: CNER multi-feature fusion BiLSTM CNN MHA
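The abstract describes multi-scale CNN branches combined with multi-head attention on top of fused character representations; the following is a minimal PyTorch sketch of that idea only (not the authors' code), with hidden sizes, kernel sizes, and the fusion order as illustrative assumptions.

```python
# Minimal sketch of the multi-scale local-context idea: parallel 1-D convolutions of
# different kernel sizes plus multi-head attention over fused character features.
import torch
import torch.nn as nn

class MultiScaleLocalContext(nn.Module):
    def __init__(self, d_model=768, kernel_sizes=(3, 5, 7), n_heads=8):
        super().__init__()
        # One 1-D convolution per kernel size captures local context at a different scale.
        self.convs = nn.ModuleList(
            [nn.Conv1d(d_model, d_model, k, padding=k // 2) for k in kernel_sizes]
        )
        self.proj = nn.Linear(d_model * (len(kernel_sizes) + 1), d_model)
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):   # x: (batch, seq_len, d_model), e.g. fused BERT + pinyin/radical/POS features
        local = [conv(x.transpose(1, 2)).transpose(1, 2) for conv in self.convs]
        fused = self.proj(torch.cat([x] + local, dim=-1))    # combine global and multi-scale local features
        out, _ = self.mha(fused, fused, fused)               # attend to entity-relevant characters
        return out

feats = torch.randn(2, 64, 768)               # stand-in for the fused character representations
print(MultiScaleLocalContext()(feats).shape)  # torch.Size([2, 64, 768])
```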
2. A Credit Card Fraud Detection Model Based on Multi-Feature Fusion and Generative Adversarial Network [Cited by: 1]
Authors: Yalong Xie, Aiping Li, Biyin Hu, Liqun Gao, Hongkui Tu. 《Computers, Materials & Continua》, SCIE EI, 2023, Issue 9: 2707-2726 (20 pages)
Credit Card Fraud Detection (CCFD) is an essential technology for banking institutions to control fraud risks and safeguard their reputation. Class imbalance and insufficient representation of credit card transaction features are two prevalent issues in current CCFD research, and both significantly impact the performance of classification models. To address these issues, this research proposes a novel CCFD model based on Multi-feature Fusion and Generative Adversarial Networks (MFGAN). The MFGAN model consists of two modules: a multi-feature fusion module that integrates static and dynamic behavior data of cardholders into a unified high-dimensional feature space, and a balance module based on a generative adversarial network that decreases the class imbalance ratio. The effectiveness of the MFGAN model is validated on two real credit card datasets. The impact of different class balance ratios on the performance of four resampling models is analyzed, and the contribution of the two modules to the performance of the MFGAN model is investigated via ablation experiments. Experimental results demonstrate that the proposed model outperforms state-of-the-art models in terms of recall, F1, and Area Under the Curve (AUC), which means that the MFGAN model can help banks find more fraudulent transactions and reduce fraud losses.
Keywords: Credit card fraud detection imbalanced classification feature fusion generative adversarial networks anti-fraud systems
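The balance module is described only at a high level; below is a hypothetical sketch of GAN-based minority-class oversampling in PyTorch. Network sizes, learning rates, iteration counts, and the feature dimension are assumptions, not the paper's configuration.

```python
# Hypothetical sketch of GAN-based oversampling for the minority (fraud) class.
import torch
import torch.nn as nn

n_features, latent_dim = 30, 16
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, n_features))
D = nn.Sequential(nn.Linear(n_features, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))  # outputs logits

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
fraud_rows = torch.randn(256, n_features)          # stand-in for real minority-class samples

for step in range(200):
    # discriminator: real minority rows vs. generated rows
    real = fraud_rows[torch.randint(0, len(fraud_rows), (64,))]
    fake = G(torch.randn(64, latent_dim)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: try to make D label generated rows as real
    g_loss = bce(D(G(torch.randn(64, latent_dim))), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

synthetic_fraud = G(torch.randn(1000, latent_dim)).detach()  # append to training data to lower the imbalance ratio
print(synthetic_fraud.shape)
```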
3. Multi-Feature Fusion Book Recommendation Model Based on Deep Neural Network
Authors: Zhaomin Liang, Tingting Liang. 《Computer Systems Science & Engineering》, SCIE EI, 2023, Issue 10: 205-219 (15 pages)
The traditional recommendation algorithm, represented by collaborative filtering, is the most classical and most widely deployed algorithm in industry, and most book recommendation systems also use it. However, collaborative filtering cannot handle data sparsity well: it relies only on shallow features of the interaction between readers and books, so it fails to learn high-level abstractions of the attribute features of readers and books, leading to a decline in recommendation performance. Given these problems, this study uses deep learning to model readers' book-borrowing probability. It builds a recommendation model from a multi-layer neural network, feeds the features extracted from readers and books into the network, and deeply integrates the reader and book features through the multi-layer network, thereby uncovering the hidden deep interactions between readers and books and significantly improving book recommendation quality. In the experiments, the HR@10, MRR, and NDCG scores of the deep neural network recommendation model constructed in this paper are higher than those of the traditional recommendation algorithm, which verifies the effectiveness of the model for book recommendation.
Keywords: Book recommendation deep learning neural network multi-feature fusion personalized prediction
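As a rough illustration of the approach described above, the sketch below embeds reader and book IDs, concatenates the embeddings, and lets an MLP predict a borrowing probability. Embedding and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch: concatenate reader and book representations and predict borrow probability with an MLP.
import torch
import torch.nn as nn

class BorrowProbabilityModel(nn.Module):
    def __init__(self, n_readers, n_books, dim=32):
        super().__init__()
        self.reader_emb = nn.Embedding(n_readers, dim)
        self.book_emb = nn.Embedding(n_books, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, reader_ids, book_ids):
        x = torch.cat([self.reader_emb(reader_ids), self.book_emb(book_ids)], dim=-1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)   # probability that the reader borrows the book

model = BorrowProbabilityModel(n_readers=1000, n_books=5000)
print(model(torch.tensor([3, 7]), torch.tensor([42, 99])))   # two reader-book pairs
```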
4. Fake News Detection Based on Cross-Modal Message Aggregation and Gated Fusion Network
Authors: Fangfang Shan, Mengyao Liu, Menghan Zhang, Zhenyu Wang. 《Computers, Materials & Continua》, SCIE EI, 2024, Issue 7: 1521-1542 (22 pages)
Social media has become increasingly significant in modern society, but it has also turned into a breeding ground for the propagation of misleading information, potentially causing a detrimental impact on public opinion and daily life. Compared to pure text content, multimodal content significantly increases the visibility and shareability of posts. This has made the search for efficient modality representations and cross-modal information interaction methods a key focus in the field of multimodal fake news detection. To effectively address the critical challenge of accurately detecting fake news on social media, this paper proposes a fake news detection model based on cross-modal message aggregation and a gated fusion network (MAGF). MAGF first uses BERT to extract cumulative textual feature representations and word-level features, applies Faster Region-based Convolutional Neural Network (Faster R-CNN) to obtain image objects, and leverages ResNet-50 and Visual Geometry Group-19 (VGG-19) to obtain image region features and global features. The image region features and word-level text features are then projected into a low-dimensional space to calculate a text-image affinity matrix for cross-modal message aggregation. The gated fusion network combines text and image region features to obtain adaptively aggregated features. The interaction matrix is derived through an attention mechanism and further integrated with global image features using a co-attention mechanism to produce multimodal representations. Finally, these fused features are fed into a classifier for news categorization. Experiments were conducted on two public datasets, Twitter and Weibo. Results show that the proposed model achieves accuracy rates of 91.8% and 88.7% on the two datasets, respectively, significantly outperforming traditional unimodal and existing multimodal models.
Keywords: Fake news detection cross-modal message aggregation gated fusion network co-attention mechanism multi-modal representation
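Two of the pieces named above lend themselves to a compact illustration: the text-image affinity matrix used for cross-modal aggregation, and a gated fusion of text and image features. The sketch below is not the authors' code; the dimensions and projection layers are assumptions.

```python
# Illustrative sketch of a text-image affinity matrix plus gated fusion.
import torch
import torch.nn as nn

d_text, d_img, d = 768, 2048, 256
proj_t = nn.Linear(d_text, d)
proj_v = nn.Linear(d_img, d)
gate = nn.Linear(2 * d, d)

text = torch.randn(1, 20, d_text)        # word-level text features (e.g., from BERT)
regions = torch.randn(1, 36, d_img)      # image region features (e.g., from Faster R-CNN)

t, v = proj_t(text), proj_v(regions)
affinity = torch.softmax(t @ v.transpose(1, 2) / d ** 0.5, dim=-1)   # (1, 20, 36) text-image affinity
v_for_text = affinity @ v                                            # aggregate image messages per word

g = torch.sigmoid(gate(torch.cat([t, v_for_text], dim=-1)))          # gated fusion weights
fused = g * t + (1 - g) * v_for_text                                 # adaptively aggregated features
print(fused.shape)                                                   # torch.Size([1, 20, 256])
```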
5. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. 《Computer Modeling in Engineering & Sciences》, SCIE EI, 2024, Issue 7: 1103-1128 (26 pages)
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has two key limitations. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address these issues, we present a groundbreaking dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and the Feature Fusion Module (FFM) aggregates the prominent feature regions of the two branches from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and is highly competitive compared to other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
Keywords: Convolutional neural networks Swin Transformer dual branch medical image segmentation feature cross fusion
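The abstract does not give the internals of the Channel Attention Block, so the sketch below shows a generic squeeze-and-excitation-style channel attention of the kind such decoders often use; it is an assumption about the general mechanism, not DCFNet's exact block.

```python
# Generic channel-attention sketch (squeeze-and-excitation style) for re-weighting decoder channels.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                         # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))           # squeeze: global average pool, then excite
        return x * w[:, :, None, None]            # re-weight each channel

up = torch.randn(2, 64, 56, 56)                   # e.g., up-sampled decoder features joined with FCF output
print(ChannelAttention(64)(up).shape)             # torch.Size([2, 64, 56, 56])
```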
6. A Lightweight Convolutional Neural Network with Hierarchical Multi-Scale Feature Fusion for Image Classification
Authors: Adama Dembele, Ronald Waweru Mwangi, Ananda Omutokoh Kube. 《Journal of Computer and Communications》, 2024, Issue 2: 173-200 (28 pages)
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network; moreover, the depthwise dilated convolution in the DDSC layer effectively expands the field of view of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract its multi-scale feature information. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
Keywords: MobileNet Image Classification Lightweight Convolutional Neural Network Depthwise Dilated Separable Convolution Hierarchical Multi-Scale Feature Fusion
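A depthwise dilated separable convolution can be sketched as a dilated depthwise convolution followed by a 1x1 pointwise convolution, as below; the channel counts, dilation rate, and normalization/activation choices are illustrative assumptions rather than the paper's exact layer.

```python
# Sketch of a depthwise dilated separable convolution (DDSC) layer.
import torch
import torch.nn as nn

class DDSC(nn.Module):
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise; dilation widens its field of view.
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=dilation, dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 56, 56)
print(DDSC(32, 64)(x).shape)    # torch.Size([1, 64, 56, 56]) -- spatial size preserved
```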
7. Research on Facial Fatigue Detection of Drivers with Multi-feature Fusion [Cited by: 1]
Authors: YE Yuxuan, ZHOU Xianchun, WANG Wenyan, YANG Chuanbin, ZOU Qingyu. 《Instrumentation》, 2023, Issue 1: 23-31 (9 pages)
To overcome the shortcomings of current fatigue detection methods, such as low accuracy or poor real-time performance, a fatigue detection method based on multi-feature fusion is proposed. Firstly, the HOG face detection algorithm and the KCF target tracking algorithm are integrated, and a deformable convolutional neural network is introduced to identify the state of the extracted eyes and mouth, quickly track the detected faces, and extract continuous and stable target faces for more efficient feature extraction. Then a head pose algorithm is introduced to detect the driver's head in real time and obtain head state information. Finally, a multi-feature fusion fatigue detection method is proposed based on the state of the eyes, mouth, and head. Experimental results show that, compared with current fatigue detection algorithms, the proposed method detects the driver's fatigue state in real time with high accuracy and good robustness.
Keywords: HOG Face Posture Detection Deformable Convolution multi-feature fusion Fatigue Detection
8. SA-Model: Multi-Feature Fusion Poetic Sentiment Analysis Based on a Hybrid Word Vector Model
Authors: Lingli Zhang, Yadong Wu, Qikai Chu, Pan Li, Guijuan Wang, Weihan Zhang, Yu Qiu, Yi Li. 《Computer Modeling in Engineering & Sciences》, SCIE EI, 2023, Issue 10: 631-645 (15 pages)
Sentiment analysis of Chinese classical poetry has become a prominent topic in historical and cultural tracing, ancient literature research, and related fields. However, existing research on this kind of sentiment analysis is relatively limited, and it does not effectively solve problems such as the weak feature-extraction ability of poetry text, which leads to low model performance on sentiment analysis for Chinese classical poetry. In this research, we offer SA-Model, a poetic sentiment analysis model. SA-Model first extracts text vector information and fuses it through Bidirectional Encoder Representations from Transformers with Whole Word Masking, extended (BERT-wwm-ext), and Enhanced Representation through Knowledge Integration (ERNIE) to enrich the text vector information; secondly, it incorporates numerous encoders to extract text features at multiple levels, thereby increasing the text feature information, improving the accuracy of text semantics, and enhancing the model's learning and generalization capabilities; finally, a multi-feature fusion poetry sentiment analysis model is constructed. The feasibility and accuracy of the model are validated on an ancient poetry sentiment corpus. Compared with other baseline models, the experimental findings indicate that SA-Model increases the accuracy of text semantics and hence improves the capability of poetry sentiment analysis.
Keywords: Sentiment analysis Chinese classical poetry natural language processing BERT-wwm-ext ERNIE multi-feature fusion
9. Multi-Feature Fusion-Guided Multiscale Bidirectional Attention Networks for Logistics Pallet Segmentation [Cited by: 1]
Authors: Weiwei Cai, Yaping Song, Huan Duan, Zhenwei Xia, Zhanguo Wei. 《Computer Modeling in Engineering & Sciences》, SCIE EI, 2022, Issue 6: 1539-1555 (17 pages)
In the smart logistics industry, unmanned forklifts that intelligently identify logistics pallets can improve work efficiency in warehousing and transportation and are better than traditional manual forklifts driven by humans. They therefore play a critical role in smart warehousing, and semantic segmentation is an effective method for realizing the intelligent identification of logistics pallets. However, most current recognition algorithms are ineffective due to the diverse types of pallets, their complex shapes, frequent blockades in production environments, and changing lighting conditions. This paper proposes a novel multi-feature fusion-guided multiscale bidirectional attention (MFMBA) neural network for logistics pallet segmentation. To better predict the foreground category (the pallet) and the background category (the cargo) of a pallet image, our approach extracts three types of features (grayscale, texture, and Hue/Saturation/Value features) and fuses them. The multiscale architecture deals with the problem that the size and shape of a pallet may appear different in the image in the actual, complex environment, which usually makes feature extraction difficult, and it can also extract additional semantic features. In addition, since a traditional attention mechanism only assigns attention weights from a single direction, we designed a bidirectional attention mechanism that assigns cross-attention weights to each feature from two directions, horizontally and vertically, significantly improving segmentation. Finally, comparative experimental results show that the precision of the proposed algorithm is 0.53%-8.77% better than that of the other methods we compared.
Keywords: Logistics pallet segmentation image segmentation multi-feature fusion multiscale network bidirectional attention mechanism HSV neural networks deep learning
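The three feature types named above (grayscale, texture, HSV) can be extracted and stacked per pixel as in the sketch below. The texture measure used here (local standard deviation of the grayscale image) is an illustrative stand-in, not necessarily the paper's descriptor, and the input image is synthetic.

```python
# Minimal sketch: build a per-pixel grayscale + texture + HSV feature stack with OpenCV/NumPy.
import cv2
import numpy as np

img = np.random.randint(0, 256, (480, 640, 3), np.uint8)   # stand-in for a pallet image (BGR)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# crude texture cue: local standard deviation of the grayscale image in a 5x5 window
g = gray.astype(np.float32)
mean = cv2.blur(g, (5, 5))
texture = np.sqrt(np.maximum(cv2.blur(g * g, (5, 5)) - mean * mean, 0))

features = np.dstack([g, texture, hsv.astype(np.float32)])
print(features.shape)                                       # (480, 640, 5): gray + texture + H, S, V
```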
10. Medical image fusion based on pulse coupled neural networks and multi-feature fuzzy clustering [Cited by: 1]
Authors: Xiaoqing Luo, Xiaojun Wu. 《Journal of Biomedical Science and Engineering》, 2012, Issue 12: 878-883 (6 pages)
Medical image fusion plays an important role in clinical applications such as image-guided surgery, image-guided radiotherapy, noninvasive diagnosis, and treatment planning. In order to retain useful information and obtain more reliable results, a novel medical image fusion algorithm based on pulse coupled neural networks (PCNN) and multi-feature fuzzy clustering is proposed, which makes use of multiple image features and combines the advantages of PCNN driven by local entropy and by the variance of local entropy. Experimental results indicate that, compared with other fusion methods, the proposed method better preserves image details, is more robust, and significantly improves the visual effect of the fused image with less information distortion.
Keywords: PCNN multi-feature medical image image fusion local entropy
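The local-entropy feature mentioned above can be illustrated as the entropy of gray-level counts in a small window around each pixel, as in the sketch below; the window size and the synthetic input are illustrative assumptions.

```python
# Sketch of local entropy (and its local variance) computed with scipy's generic_filter.
import numpy as np
from scipy.ndimage import generic_filter

def window_entropy(values):
    counts = np.bincount(values.astype(np.int64), minlength=256)
    p = counts[counts > 0] / values.size
    return -np.sum(p * np.log2(p))

img = np.random.randint(0, 256, (64, 64)).astype(np.float64)   # stand-in for a medical image
local_entropy = generic_filter(img, window_entropy, size=7)     # 7x7 neighborhood per pixel
variance_of_entropy = generic_filter(local_entropy, np.var, size=7)
print(local_entropy.shape, variance_of_entropy.shape)
```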
11. Residual Feature Attentional Fusion Network for Lightweight Chest CT Image Super-Resolution [Cited by: 1]
Authors: Kun Yang, Lei Zhao, Xianghui Wang, Mingyang Zhang, Linyan Xue, Shuang Liu, Kun Liu. 《Computers, Materials & Continua》, SCIE EI, 2023, Issue 6: 5159-5176 (18 pages)
The diagnosis of COVID-19 requires chest computed tomography (CT). High-resolution CT images can provide more diagnostic information to help doctors better diagnose the disease, so it is of clinical importance to study super-resolution (SR) algorithms applied to CT images to improve their resolution. However, most existing SR algorithms are studied on natural images, which makes them unsuitable for medical images, and most of them improve reconstruction quality by increasing network depth, which is unsuitable for machines with limited resources. To alleviate these issues, we propose a residual feature attentional fusion network for lightweight chest CT image super-resolution (RFAFN). Specifically, we design a contextual feature extraction block (CFEB) that can extract CT image features more efficiently and accurately than ordinary residual blocks. In addition, we propose a feature-weighted cascading strategy (FWCS) based on attentional feature fusion blocks (AFFB) to exploit the high-frequency detail information extracted by the CFEB as much as possible by selectively fusing adjacent-level feature information. Finally, we suggest a global hierarchical feature fusion strategy (GHFFS), which can utilize the hierarchical features more effectively than dense concatenation by progressively aggregating the feature information at various levels. Numerous experiments show that our method performs better than most state-of-the-art (SOTA) methods on the COVID-19 chest CT dataset. In detail, the peak signal-to-noise ratio (PSNR) is 0.11 dB and 0.47 dB higher on CTtest1 and CTtest2 at ×3 SR compared to the suboptimal method, while the number of parameters and multi-adds are reduced by 22K and 0.43G, respectively. Our method can better recover chest CT image quality with fewer computational resources and effectively assist in COVID-19 diagnosis.
Keywords: super-resolution COVID-19 chest CT lightweight network contextual feature extraction attentional feature fusion
12. Siamese Dense Pixel-Level Fusion Network for Real-Time UAV Tracking [Cited by: 1]
Authors: Zhenyu Huang, Gun Li, Xudong Sun, Yong Chen, Jie Sun, Zhangsong Ni, Yang Yang. 《Computers, Materials & Continua》, SCIE EI, 2023, Issue 9: 3219-3238 (20 pages)
Onboard visual object tracking for unmanned aerial vehicles (UAVs) has attracted much interest due to its versatility. Meanwhile, owing to their high precision, Siamese networks are becoming a hot spot in visual object tracking. However, most Siamese trackers fail to balance tracking accuracy and speed within the limited onboard computational resources of UAVs. To meet the tracking precision and real-time requirements, this paper proposes a Siamese dense pixel-level network for UAV object tracking named SiamDPL. Specifically, the Siamese network extracts features of the search region and the template region through a parameter-shared backbone network, then performs correlation matching to obtain the candidate region with high similarity. To improve the matching of template and search features, this paper designs a dense pixel-level feature fusion module that enhances the matching ability through pixel-wise correlation and enriches feature diversity through dense connections. An attention module composed of self-attention and channel attention is introduced to learn global context information and selectively emphasize the target feature region in the spatial and channel dimensions. In addition, a target localization module is designed to improve target location accuracy. Compared with other advanced trackers, experiments on two public benchmarks from the unmanned air vehicle 123 (UAV123) dataset, UAV123@10fps and UAV20L, show that SiamDPL achieves superior performance and low complexity with a running speed of 100.1 fps on an NVIDIA TITAN RTX.
Keywords: Siamese network UAV object tracking dense pixel-level feature fusion attention module target localization
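Pixel-wise correlation between a template feature map and a search feature map can be illustrated by treating each spatial position of the template as a 1x1 kernel over the search features, as below; the shapes are illustrative assumptions and the code is not the SiamDPL implementation.

```python
# Sketch of pixel-wise correlation via grouped 1x1 convolution.
import torch
import torch.nn.functional as F

def pixelwise_correlation(template, search):
    # template: (B, C, Ht, Wt), search: (B, C, Hs, Ws)
    b, c, ht, wt = template.shape
    kernels = template.flatten(2).transpose(1, 2).reshape(b * ht * wt, c, 1, 1)
    out = F.conv2d(search.reshape(1, b * c, *search.shape[2:]), kernels, groups=b)
    return out.reshape(b, ht * wt, *search.shape[2:])   # one response map per template pixel

template = torch.randn(2, 64, 8, 8)
search = torch.randn(2, 64, 16, 16)
print(pixelwise_correlation(template, search).shape)    # torch.Size([2, 64, 16, 16]); 64 = 8*8 response maps
```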
13. Identification Method of Gas-Liquid Two-phase Flow Regime Based on Image Multi-feature Fusion and Support Vector Machine [Cited by: 6]
Authors: 周云龙, 陈飞, 孙斌. 《Chinese Journal of Chemical Engineering》, SCIE EI CAS CSCD, 2008, Issue 6: 832-840 (9 pages)
Knowledge of the flow regime is very important for quantifying the pressure drop and the stability and safety of two-phase flow systems. Based on image multi-feature fusion and a support vector machine, a new method to identify the flow regime in two-phase flow was presented. Firstly, gas-liquid two-phase flow images, including bubbly flow, plug flow, slug flow, stratified flow, wavy flow, annular flow, and mist flow, were captured by digital high-speed video systems in a horizontal tube. The image moment invariants and gray level co-occurrence matrix texture features were extracted using image processing techniques. To improve the performance of the multiple classifier system, rough sets theory was used to remove inessential factors. Furthermore, the support vector machine was trained with these dimension-reduced eigenvectors as flow regime samples, and intelligent flow regime identification was realized. The test results showed that the image features reduced with rough sets theory could excellently reflect the differences between the seven typical flow regimes, and the trained support vector machine could quickly and accurately identify the seven typical flow regimes of gas-liquid two-phase flow in the horizontal tube. The image multi-feature fusion method provided a new way to identify gas-liquid two-phase flow and achieved higher identification ability than any single characteristic. The overall identification accuracy was 100%, and the estimated image processing time was 8 ms, suitable for online flow regime identification.
Keywords: flow regime identification gas-liquid two-phase flow image processing multi-feature fusion support vector machine
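As a rough illustration of the feature pipeline described above, the sketch below combines gray-level co-occurrence matrix (GLCM) texture properties with Hu moment invariants and feeds them to an SVM. The GLCM parameters, the chosen properties, the synthetic data, and the assumption of scikit-image >= 0.19 (for the graycomatrix/graycoprops names) are all illustrative choices, and the rough-set reduction step is omitted.

```python
# Illustrative GLCM texture + Hu moments + SVM pipeline on synthetic "flow images".
import numpy as np
import cv2
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def flow_image_features(gray):                            # gray: uint8 flow image
    glcm = graycomatrix(gray, distances=[1], angles=[0], levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy", "correlation")]
    hu = cv2.HuMoments(cv2.moments(gray)).ravel()          # 7 moment invariants
    return np.concatenate([texture, hu])

# stand-in data: 40 random "images" split across two of the seven regimes
X = np.array([flow_image_features(np.random.randint(0, 256, (64, 64), np.uint8)) for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))
```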
14. The detection method of low-rate DoS attack based on multi-feature fusion [Cited by: 3]
Authors: Liang Liu, Huaiyuan Wang, Zhijun Wu, Meng Yue. 《Digital Communications and Networks》, SCIE, 2020, Issue 4: 504-513 (10 pages)
As a new type of Denial of Service (DoS) attack, Low-rate Denial of Service (LDoS) attacks render the traditional methods for detecting Distributed Denial of Service (DDoS) attacks useless because of their low average rate and concealment. Using features extracted from network traffic, a new detection approach based on multi-feature fusion is proposed in this paper to solve this problem. An attack feature set containing the Acknowledgment (ACK) sequence number, the packet size, and the queue length is used to classify normal and LDoS attack traffic. Each feature is digitized and preprocessed to fit the input of a K-Nearest Neighbor (KNN) classifier separately, and the decision contour matrix is obtained. The posterior probabilities in the matrix are then fused, and the fused decision index D is used as the basis for detecting LDoS attacks. Experiments show that the detection rate of the multi-feature fusion algorithm is higher than those of single-feature detection methods and other algorithms.
Keywords: Low-rate denial of service attacks Attack features KNN classifier multi-feature fusion
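The fusion idea sketched below trains one KNN per feature and averages the per-feature posterior probabilities into a single decision index; the averaging rule, threshold, and synthetic traffic data are illustrative assumptions, not the paper's exact fusion formula.

```python
# Minimal sketch: per-feature KNN posteriors fused into a decision index D.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 400
# stand-in traffic features: ACK sequence number statistic, packet size, queue length
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.5 * X[:, 2] > 0.8).astype(int)         # 1 = LDoS attack, 0 = normal (synthetic labels)

posteriors = []
for j in range(3):                                       # one classifier per feature
    knn = KNeighborsClassifier(n_neighbors=5).fit(X[:, [j]], y)
    posteriors.append(knn.predict_proba(X[:, [j]])[:, 1])

D = np.mean(posteriors, axis=0)                          # fused decision index
detected = D > 0.5
print("detection rate on attack samples:", detected[y == 1].mean())
```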
15. Smoke root detection from video sequences based on multi-feature fusion [Cited by: 1]
Authors: Liming Lou, Feng Chen, Pengle Cheng, Ying Huang. 《Journal of Forestry Research》, SCIE CAS CSCD, 2022, Issue 6: 1841-1856 (16 pages)
Smoke detection is the most commonly used method for early warning of fire and is widely used in forest monitoring. In most existing smoke detection methods, empty spaces and obstacles interfere with detection and cause false smoke roots to be extracted. This study developed a new smoke root search algorithm based on a multi-feature fusion dynamic extraction strategy, which determines candidate smoke origin points and regions based on a multi-frame discrete confidence level. The results show that the new method provides a more complete smoke contour with no background interference compared to existing methods. Unlike video-based methods that rely on continuous frames, an adaptive threshold method was developed to build a judgment image set composed of non-consecutive frames. The smoke root origin search algorithm increased the detection rate and significantly reduced the false detection rate compared to existing methods.
Keywords: Smoke detection multi-feature fusion Search strategy ViBe Choquet
16. The deep spatiotemporal network with dual-flow fusion for video-oriented facial expression recognition
Authors: Chenquan Gan, Jinhui Yao, Shuaiying Ma, Zufan Zhang, Lianxiang Zhu. 《Digital Communications and Networks》, SCIE CSCD, 2023, Issue 6: 1441-1447 (7 pages)
Video-oriented facial expression recognition has always been an important issue in emotion perception. At present, the key challenge in most existing methods is how to effectively extract robust features to characterize the facial appearance and geometry changes caused by facial motions. On this basis, the video in this paper is divided into multiple segments, each of which is simultaneously described by optical flow and a facial landmark trajectory. To deeply mine the emotional information in these two representations, we propose a Deep Spatiotemporal Network with Dual-flow Fusion (DSN-DF), which highlights the region and strength of expressions through spatiotemporal appearance features and the speed of change through spatiotemporal geometry features. Finally, experiments on the CK+ and MMI datasets demonstrate the superiority of the proposed method.
Keywords: Facial expression recognition Deep spatiotemporal network Optical flow Facial landmark trajectory Dual-flow fusion
17. MFF-Net: Multimodal Feature Fusion Network for 3D Object Detection
Authors: Peicheng Shi, Zhiqiang Liu, Heng Qi, Aixi Yang. 《Computers, Materials & Continua》, SCIE EI, 2023, Issue 6: 5615-5637 (23 pages)
In complex traffic scenarios, it is very important for autonomous vehicles to accurately perceive, in advance, the dynamic information of the other vehicles around them. The accuracy of 3D object detection is affected by problems such as illumination changes, object occlusion, and detection distance. To address these challenges, we propose a multimodal feature fusion network for 3D object detection (MFF-Net). This paper first uses a spatial transformation projection algorithm to map image features into the feature space, so that the image features share the same spatial dimension as the point cloud features when fused. Then, feature channel weighting is performed using an adaptive expression augmentation fusion network to enhance important features, suppress useless features, and make the network more responsive to informative features. Finally, this paper reduces false and missed detections in the non-maximum suppression algorithm by raising the one-dimensional threshold. In this way, a complete 3D object detection network based on multimodal feature fusion is constructed. Experimental results show that the proposed network achieves an average accuracy of 82.60% on the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, outperforming previous state-of-the-art multimodal fusion networks. On the Easy, Moderate, and Hard evaluation settings, the accuracy reaches 90.96%, 81.46%, and 75.39%, respectively. This shows that the MFF-Net network performs well in 3D object detection.
Keywords: 3D object detection multimodal fusion neural network autonomous driving attention mechanism
18. Feature Fusion-Based Deep Learning Network to Recognize Table Tennis Actions
Authors: Chih-Ta Yen, Tz-Yun Chen, Un-Hung Chen, Guo-Chang Wang, Zong-Xian Chen. 《Computers, Materials & Continua》, SCIE EI, 2023, Issue 1: 83-99 (17 pages)
A system for classifying four basic table tennis strokes using wearable devices and deep learning networks is proposed in this study. The wearable device consists of a six-axis sensor, a Raspberry Pi 3, and a power bank. Multiple kernel sizes were used in a convolutional neural network (CNN) to evaluate their performance for feature extraction. Moreover, a multi-scale CNN with two kernel sizes was used to perform feature fusion at different scales in a concatenated manner, and this CNN achieved recognition of the four table tennis strokes. Experimental data were obtained from 20 research participants who wore the sensors on the backs of their hands while performing the four table tennis strokes in a laboratory environment; the data were collected to verify the performance of the proposed models for wearable devices. Finally, the sensor and multi-scale CNN designed in this study achieved an accuracy of 99.58% and an F1 score of 99.16% for the four strokes. The accuracy under five-fold cross-validation was 99.87%, which also shows that the multi-scale convolutional neural network has good robustness.
Keywords: Wearable devices deep learning six-axis sensor feature fusion multi-scale convolutional neural networks action recognition
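The multi-scale idea described above can be sketched as two 1-D convolution branches with different kernel sizes over six-axis sensor windows, concatenated before classification into the four strokes; kernel sizes, channel counts, and window length below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a two-scale 1-D CNN over six-axis sensor windows.
import torch
import torch.nn as nn

class TwoScaleStrokeCNN(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.branch_small = nn.Sequential(nn.Conv1d(6, 32, kernel_size=3, padding=1), nn.ReLU())
        self.branch_large = nn.Sequential(nn.Conv1d(6, 32, kernel_size=7, padding=3), nn.ReLU())
        self.head = nn.Sequential(nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(64, n_classes))

    def forward(self, x):                                    # x: (batch, 6 axes, window length)
        fused = torch.cat([self.branch_small(x), self.branch_large(x)], dim=1)   # scale fusion
        return self.head(fused)

window = torch.randn(8, 6, 128)                              # 8 sensor windows of 128 samples
print(TwoScaleStrokeCNN()(window).shape)                     # torch.Size([8, 4])
```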
19. Improved Weather Radar Echo Extrapolation Through Wind Speed Data Fusion Using a New Spatiotemporal Neural Network Model
Authors: 耿焕同, 谢博洋, 葛晓燕, 闵锦忠, 庄潇然. 《Journal of Tropical Meteorology》, SCIE, 2023, Issue 4: 482-492 (11 pages)
Weather radar echo extrapolation plays a crucial role in weather forecasting. However, traditional weather radar echo extrapolation methods are not very accurate and do not make full use of historical data, while deep learning algorithms based on recurrent neural networks suffer from error accumulation. Moreover, it is difficult to obtain higher accuracy by relying on a single historical radar echo observation. Therefore, in this study, we constructed a Fusion GRU module, which leverages a cascade structure to effectively combine radar echo data and mean wind data. We also designed a Top Connection so that the model can capture the global spatial relationship and constrain its predictions. Based on the Jiangsu Province dataset, we compared several models. The results show that our proposed model, the Cascade Fusion Spatiotemporal Network (CFSN), improved the critical success index (CSI) by 10.7% over the baseline at a threshold of 30 dBZ. Ablation experiments further validated the effectiveness of our model: the CSI of the complete CFSN was 0.004 higher than that of the suboptimal variant without the cross-attention module at the 30 dBZ threshold.
Keywords: deep learning spatiotemporal prediction radar echo extrapolation recurrent neural network multimodal fusion
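The critical success index cited above is the standard verification score hits / (hits + misses + false alarms), computed after thresholding predicted and observed reflectivity; the sketch below uses synthetic dBZ fields and the 30 dBZ threshold mentioned in the abstract.

```python
# Sketch of the critical success index (CSI) at a 30 dBZ reflectivity threshold.
import numpy as np

def csi(pred_dbz, obs_dbz, threshold=30.0):
    pred_event = pred_dbz >= threshold
    obs_event = obs_dbz >= threshold
    hits = np.sum(pred_event & obs_event)
    misses = np.sum(~pred_event & obs_event)
    false_alarms = np.sum(pred_event & ~obs_event)
    denom = hits + misses + false_alarms
    return hits / denom if denom else np.nan

pred = np.random.uniform(0, 60, (100, 100))   # stand-in extrapolated echo field (dBZ)
obs = np.random.uniform(0, 60, (100, 100))    # stand-in observed echo field (dBZ)
print(round(csi(pred, obs), 3))
```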
20. Efficacy and safety of different anti-osteoporotic drugs for the spinal fusion surgery: A network meta-analysis
Authors: Xiao-Yuan He, Huan-Xiong Chen, Zhi-Rong Zhao. 《World Journal of Clinical Cases》, SCIE, 2023, Issue 30: 7350-7362 (13 pages)
BACKGROUND: Administering anti-osteoporotic agents to patients perioperatively is a widely accepted approach for improving bone fusion rates and reducing the risk of complications, but the best anti-osteoporotic agents for spinal fusion surgery remain unclear. AIM: To investigate the efficacy and safety of different anti-osteoporotic agents in spinal fusion surgery via network meta-analysis. METHODS: Searches were conducted in the PubMed, EMBASE, Web of Science, Cochrane Library, and China National Knowledge Infrastructure (CNKI) electronic databases from inception to November 2022. Any study that compared anti-osteoporotic agents versus placebo for spinal fusion surgery was included in this network meta-analysis. Outcomes included fusion rate, Oswestry Disability Index (ODI), and adverse events. The network meta-analysis was performed in R with the gemtc package. RESULTS: In total, 13 randomized controlled trials were included. Only teriparatide (OR 3.2, 95%CI: 1.4 to 7.8) was more effective than placebo in increasing the fusion rate. The surface under the cumulative ranking curve (SUCRA) was highest for teriparatide combined with denosumab (SUCRA, 90.9%), followed by teriparatide (SUCRA, 74.0%), zoledronic acid (SUCRA, 43.7%), alendronate (SUCRA, 41.1%), and risedronate (SUCRA, 35.0%). Teriparatide (MD -15, 95%CI: -28 to -2.7) and teriparatide combined with denosumab (MD -20, 95%CI: -40 to -0.43) were more effective than placebo in decreasing the ODI. For ODI, the SUCRA of teriparatide combined with denosumab was highest (SUCRA, 90.8%), followed by teriparatide (SUCRA, 74.5%), alendronate (SUCRA, 52.7%), risedronate (SUCRA, 52.1%), zoledronic acid (SUCRA, 24.2%), and placebo (SUCRA, 5.6%). Adverse events did not differ between groups. CONCLUSION: This network meta-analysis suggests that teriparatide combined with denosumab and teriparatide alone significantly increase the fusion rate and decrease the ODI without increasing adverse events. Based on current evidence, teriparatide combined with denosumab or teriparatide alone is recommended to increase the fusion rate and reduce the ODI in spinal fusion patients.
Keywords: Anti-osteoporotic agents Spinal fusion procedure network meta-analysis Systematic review Denosumab