Journal Articles
4 articles found
1. TGNet: Intelligent Identification of Thunderstorm Wind Gusts Using Multimodal Fusion
Authors: Xiaowen ZHANG, Yongguang ZHENG, Hengde ZHANG, Jie SHENG, Bingjian LU, Shuo FENG. Advances in Atmospheric Sciences, 2025, No. 1, pp. 146-164 (19 pages).
Thunderstorm wind gusts are small in scale, typically occurring within a range of a few kilometers. It is extremely challenging to monitor and forecast thunderstorm wind gusts using only automatic weather stations. Therefore, it is necessary to establish thunderstorm wind gust identification techniques based on multisource high-resolution observations. This paper introduces a new algorithm, called the thunderstorm wind gust identification network (TGNet). It leverages multimodal feature fusion to fuse the temporal and spatial features of thunderstorm wind gust events. The shapelet transform is first used to extract the temporal features of wind speeds from automatic weather stations, which is aimed at distinguishing thunderstorm wind gusts from those caused by synoptic-scale systems or typhoons. Then, the encoder, structured upon the U-shaped network (U-Net) and incorporating recurrent residual convolutional blocks (R2U-Net), is employed to extract the corresponding spatial convective characteristics of satellite, radar, and lightning observations. Finally, by using the multimodal deep fusion module based on multi-head cross-attention, the temporal features of wind speed at each automatic weather station are incorporated into the spatial features to obtain 10-minute classifications of thunderstorm wind gusts. TGNet products have high accuracy, with a critical success index reaching 0.77. Compared with those of U-Net and R2U-Net, the false alarm rate of TGNet products decreases by 31.28% and 24.15%, respectively. The new algorithm provides grid products of thunderstorm wind gusts with a spatial resolution of 0.01°, updated every 10 minutes. The results are finer and more accurate, thereby helping to improve the accuracy of operational warnings for thunderstorm wind gusts.
Keywords: thunderstorm wind gusts; shapelet transform; multimodal deep feature fusion
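The fusion step described in the TGNet abstract can be sketched in a few lines of PyTorch. This is not the authors' code: the module name, layer widths, and the residual-plus-LayerNorm wiring are assumptions; only the idea of injecting station-level temporal features into gridded spatial features via multi-head cross-attention follows the abstract.

```python
# Hedged sketch of cross-attention multimodal fusion (illustrative, not TGNet's code).
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, spatial_dim=64, temporal_dim=32, num_heads=4):
        super().__init__()
        # project station (temporal) features to the spatial channel width
        self.temporal_proj = nn.Linear(temporal_dim, spatial_dim)
        # queries come from spatial grid cells, keys/values from station features
        self.attn = nn.MultiheadAttention(spatial_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(spatial_dim)

    def forward(self, spatial_feat, temporal_feat):
        # spatial_feat: (B, C, H, W) from a U-Net/R2U-Net-style encoder
        # temporal_feat: (B, N_stations, temporal_dim) from the shapelet transform
        b, c, h, w = spatial_feat.shape
        q = spatial_feat.flatten(2).transpose(1, 2)   # (B, H*W, C) grid-cell queries
        kv = self.temporal_proj(temporal_feat)        # (B, N, C) station keys/values
        fused, _ = self.attn(q, kv, kv)               # multi-head cross-attention
        fused = self.norm(q + fused)                  # residual connection + norm
        return fused.transpose(1, 2).reshape(b, c, h, w)

# toy usage: 8 stations feeding a 16x16 feature grid
fusion = CrossAttentionFusion()
out = fusion(torch.randn(2, 64, 16, 16), torch.randn(2, 8, 32))
print(out.shape)  # torch.Size([2, 64, 16, 16])
```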
2. Advanced Feature Fusion Algorithm Based on Multiple Convolutional Neural Network for Scene Recognition (Cited by: 5)
Authors: Lei Chen, Kanghu Bo, Feifei Lee, Qiu Chen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, No. 2, pp. 505-523 (19 pages).
Scene recognition is a popular open problem in the computer vision field. Among the many methods proposed in recent years, Convolutional Neural Network (CNN) based approaches achieve the best performance in scene recognition. We propose in this paper an advanced feature fusion algorithm using Multiple Convolutional Neural Networks (Multi-CNN) for scene recognition. Unlike existing works that usually use an individual convolutional neural network, a fusion of multiple different convolutional neural networks is applied for scene recognition. Firstly, we split training images in two directions and apply them to three deep CNN models, and then extract features from the last fully-connected (FC) layer and probabilistic layer of each model. Finally, feature vectors are fused with different fusion strategies in groups and forwarded into a SoftMax classifier. Our proposed algorithm is evaluated on three scene datasets for scene recognition. The experimental results demonstrate the effectiveness of the proposed algorithm compared with other state-of-the-art approaches.
Keywords: scene recognition; deep feature fusion; multiple convolutional neural network
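A minimal sketch of the Multi-CNN fusion idea follows, assuming concatenation as the fusion strategy and two torchvision backbones standing in for the paper's three CNN models; the class name, feature dimensions, and class count are illustrative, not taken from the paper.

```python
# Illustrative sketch of multi-CNN feature fusion by concatenation (not the paper's code).
import torch
import torch.nn as nn
from torchvision import models

class MultiCNNFusion(nn.Module):
    def __init__(self, num_classes=10):  # class count is a placeholder
        super().__init__()
        # two backbones stand in for the paper's three CNN models
        r18 = models.resnet18(weights=None)
        r34 = models.resnet34(weights=None)
        feat_dim = r18.fc.in_features + r34.fc.in_features  # 512 + 512
        r18.fc = nn.Identity()   # expose penultimate feature vectors
        r34.fc = nn.Identity()
        self.backbones = nn.ModuleList([r18, r34])
        self.classifier = nn.Linear(feat_dim, num_classes)  # softmax applied in the loss

    def forward(self, x):
        feats = [b(x) for b in self.backbones]   # per-model feature vectors
        fused = torch.cat(feats, dim=1)          # concatenation fusion strategy
        return self.classifier(fused)            # class logits

model = MultiCNNFusion()
logits = model(torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 10])
```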
3. Robust Local Light Field Synthesis via Occlusion-aware Sampling and Deep Visual Feature Fusion
Authors: Wenpeng Xing, Jie Chen, Yike Guo. Machine Intelligence Research (EI, CSCD), 2023, No. 3, pp. 408-420 (13 pages).
Novel view synthesis has attracted tremendous research attention recently for its applications in virtual reality and immersive telepresence. Rendering a locally immersive light field (LF) based on arbitrary large baseline RGB references is a challenging problem that lacks efficient solutions with existing novel view synthesis techniques. In this work, we aim at truthfully rendering local immersive novel views/LF images based on large baseline LF captures and a single RGB image in the target view. To fully explore the precious information from source LF captures, we propose a novel occlusion-aware source sampler (OSS) module which efficiently transfers the pixels of source views to the target view's frustum in an occlusion-aware manner. An attention-based deep visual fusion module is proposed to fuse the revealed occluded background content with a preliminary LF into a final refined LF. The proposed source sampling and fusion mechanism not only helps to provide information for occluded regions from varying observation angles, but also proves to be able to effectively enhance the visual rendering quality. Experimental results show that our proposed method is able to render high-quality LF images/novel views with sparse RGB references and outperforms state-of-the-art LF rendering and novel view synthesis methods.
Keywords: novel view synthesis; light field (LF) imaging; multi-view stereo; occlusion sampling; deep visual feature (DVF) fusion
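A rough sketch of one way to fuse a preliminary LF feature map with revealed occluded-background content via a learned attention weight. The gating design, channel width, and module name are assumptions made for illustration; the paper's actual fusion module may differ substantially.

```python
# Hedged sketch of attention-weighted fusion of two feature maps (illustrative only).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        # predict a soft per-pixel blending weight from the concatenated features
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, prelim_lf_feat, occluded_bg_feat):
        # prelim_lf_feat, occluded_bg_feat: (B, C, H, W)
        w = self.gate(torch.cat([prelim_lf_feat, occluded_bg_feat], dim=1))
        fused = w * prelim_lf_feat + (1 - w) * occluded_bg_feat  # soft blend
        return self.refine(fused)

fuse = AttentionFusion()
out = fuse(torch.randn(1, 32, 64, 64), torch.randn(1, 32, 64, 64))
print(out.shape)  # torch.Size([1, 32, 64, 64])
```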
4. DFF-ResNet: An Insect Pest Recognition Model Based on Residual Networks (Cited by: 6)
Authors: Wenjie Liu, Guoqing Wu, Fuji Ren, Xin Kang. Big Data Mining and Analytics (EI), 2020, No. 4, pp. 300-310 (11 pages).
Insect pest control is considered a significant factor in the yield of commercial crops. Thus, to avoid economic losses, we need a valid method for insect pest recognition. In this paper, we propose a feature fusion residual block to perform the insect pest recognition task. Based on the original residual block, we fuse the feature from a previous layer between two 1×1 convolution layers in a residual signal branch to improve the capacity of the block. Furthermore, we explore the contribution of each residual group to the model performance. We found that adding the residual blocks of earlier residual groups promotes the model performance significantly, which improves the generalization capacity of the model. By stacking the feature fusion residual block, we constructed the Deep Feature Fusion Residual Network (DFF-ResNet). To prove the validity and adaptivity of our approach, we constructed it with two common residual networks (Pre-ResNet and Wide Residual Network (WRN)) and validated these models on the Canadian Institute For Advanced Research (CIFAR) and Street View House Number (SVHN) benchmark datasets. The experimental results indicate that our models have a lower test error than those of baseline models. Then, we applied our models to recognize insect pests and obtained validity on the IP102 benchmark dataset. The experimental results show that our models outperform the original ResNet and other state-of-the-art methods.
Keywords: insect pest recognition; deep feature fusion; residual network; image classification
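One plausible reading of the feature fusion residual block, sketched in PyTorch: an earlier-layer feature is added into the residual branch between its two 1×1 convolutions. The bottleneck widths, the projection layer, and the fusion-by-addition choice are assumptions, not the published DFF-ResNet code.

```python
# Hedged sketch of a feature-fusion residual block (an interpretation, not DFF-ResNet's code).
import torch
import torch.nn as nn

class FeatureFusionResidualBlock(nn.Module):
    def __init__(self, channels=64, bottleneck=16):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1)          # first 1x1 conv
        self.conv3 = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1)
        self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1)           # second 1x1 conv
        # project the earlier-layer feature so it can be added inside the branch
        self.fuse_proj = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, earlier_feat):
        # x, earlier_feat: (B, C, H, W); earlier_feat comes from a previous layer
        r = self.relu(self.reduce(x))
        r = r + self.fuse_proj(earlier_feat)   # fuse the earlier feature between the 1x1 convs
        r = self.relu(self.conv3(r))
        r = self.expand(r)
        return self.relu(x + r)                # standard residual connection

block = FeatureFusionResidualBlock()
y = block(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(y.shape)  # torch.Size([2, 64, 32, 32])
```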