Journal Articles
2,154 articles found
1. An Improved Deep Fusion CNN for Image Recognition (Cited by: 6)
Authors: Rongyu Chen, Lili Pan, Cong Li, Yan Zhou, Aibin Chen, Eric Beckman. Computers, Materials & Continua (SCIE, EI), 2020, No. 11, pp. 1691-1706.
With the development of Deep Convolutional Neural Networks (DCNNs), the extracted features for image recognition tasks have shifted from low-level features to the high-level semantic features of DCNNs. Previous studies have shown that the deeper the network is, the more abstract the features are. However, the recognition ability of deep features is limited by insufficient training samples. To address this problem, this paper derives an improved Deep Fusion Convolutional Neural Network (DF-Net) which can make full use of the differences and complementarities that arise during network learning and enhance feature expression under the condition of limited datasets. Specifically, DF-Net organizes two identical subnets to extract features from the input image in parallel, and then a well-designed fusion module is introduced into the deep layers of DF-Net to fuse the subnets' features at multiple scales. Thus, more complex mappings are created, and more abundant and accurate fusion features can be extracted to improve recognition accuracy. Furthermore, a corresponding training strategy is also proposed to speed up convergence and reduce the computational overhead of network training. Finally, DF-Nets based on the well-known ResNet, DenseNet and MobileNetV2 are evaluated on CIFAR-100, Stanford Dogs, and UEC FOOD-100. Theoretical analysis and experimental results demonstrate that DF-Net enhances the performance of DCNNs and increases the accuracy of image recognition.
Keywords: deep convolutional neural networks; deep features; image recognition; deep fusion; feature fusion
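The paper's DF-Net code is not reproduced here; as a rough illustrative sketch only (the layer sizes, the linear stand-ins for convolutions, and the concatenation-based fusion rule are all my own assumptions, not the authors' design), the core idea of two identical subnets processing the same input in parallel with their features fused at multiple depths can be expressed as:

```python
import numpy as np

rng = np.random.default_rng(0)

def subnet(x, weights):
    """Toy stand-in for one convolutional subnet: returns features
    at two depths ("scales") via successive linear maps + ReLU."""
    h1 = np.maximum(x @ weights[0], 0.0)   # shallower feature map
    h2 = np.maximum(h1 @ weights[1], 0.0)  # deeper feature map
    return h1, h2

# Two identical architectures with independently initialized weights.
w_a = [rng.normal(size=(32, 16)), rng.normal(size=(16, 8))]
w_b = [rng.normal(size=(32, 16)), rng.normal(size=(16, 8))]

x = rng.normal(size=(4, 32))          # batch of 4 input vectors
a1, a2 = subnet(x, w_a)
b1, b2 = subnet(x, w_b)

# Multi-scale fusion: concatenate the two subnets' features
# at each depth, then concatenate across depths.
fused = np.concatenate(
    [np.concatenate([a1, b1], axis=1),   # scale 1: 16 + 16 dims
     np.concatenate([a2, b2], axis=1)],  # scale 2: 8 + 8 dims
    axis=1)                              # fused feature per sample
```

Because the two subnets start from different random weights, the concatenated features are complementary rather than redundant, which is the intuition the abstract appeals to.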
2. A deep multimodal fusion and multitasking trajectory prediction model for typhoon trajectory prediction to reduce flight scheduling cancellation
Authors: TANG Jun, QIN Wanting, PAN Qingtao, LAO Songyang. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, No. 3, pp. 666-678.
Natural events have a significant impact on overall flight activity, and the aviation industry plays a vital role in helping society cope with these events. As one of the most impactful kinds of weather, when typhoon season arrives and persists, airlines operating in threatened areas and passengers with travel plans during this period pay close attention to the development of tropical storms. This paper proposes a deep multimodal fusion and multitasking trajectory prediction model that can improve the reliability of typhoon trajectory prediction and reduce the number of flight scheduling cancellations. The deep multimodal fusion module is formed by deeply fusing the features output by multiple submodal fusion modules, and the multitask generation module uses longitude and latitude as two related tasks for simultaneous prediction. With more dependable data accuracy, problems can be analysed more rapidly and efficiently, enabling better decision-making with a proactive rather than reactive posture. When multiple modalities coexist, features can be extracted from them simultaneously so that they supplement each other's information. An actual case study, typhoon Lekima, which swept China in 2019, demonstrates that the algorithm can effectively reduce the number of unnecessary flight cancellations compared to existing flight scheduling and can assist the new generation of flight scheduling systems under extreme weather.
Keywords: flight scheduling optimization; deep multimodal fusion; multitasking trajectory prediction; typhoon weather; flight cancellation; prediction reliability
3. The Fusion of Temporal Sequence with Scene Priori Information in Deep Learning Object Recognition
Authors: Yongkang Cao, Fengjun Liu, Xian Wang, Wenyun Wang, Zhaoxin Peng. Open Journal of Applied Sciences, 2024, No. 9, pp. 2610-2627.
For some important object recognition applications such as intelligent robots and unmanned driving, images are collected consecutively and are associated with one another; moreover, the scenes have stable prior features. Yet existing technologies do not take full advantage of this information. To push object recognition in these applications beyond existing algorithms, an object recognition method that fuses temporal sequence with scene priori information is proposed. The method first employs YOLOv3 as the basic algorithm to recognize objects in single-frame images, then uses the DeepSort algorithm to establish associations among potential objects recognized in images at different moments, and finally applies the confidence fusion method and temporal boundary processing method designed herein to fuse, at the decision level, temporal sequence information with scene priori information. Experiments on public datasets and self-built industrial scene datasets show that, owing to the expanded information sources, the quality of single-frame images has less impact on the recognition results, and object recognition is greatly improved. The method is presented as a widely applicable framework for fusing information across multiple classes: any object recognition algorithm that outputs object class, location information, and recognition confidence can be integrated into this fusion framework to improve performance.
Keywords: computer vision; object recognition; deep learning; consecutive scene; information fusion
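The abstract describes its decision-level confidence fusion only at a high level. A minimal sketch of one plausible fusion rule, averaging the confidences that a DeepSort-style tracker associates to the same object across consecutive frames (the detections, track IDs, and the averaging rule below are my assumptions, not the paper's exact method), might look like:

```python
from collections import defaultdict

# Per-frame detections: (track_id, class_label, confidence),
# e.g. as produced by YOLOv3 + DeepSort association (toy values).
frames = [
    [(1, "car", 0.62), (2, "person", 0.40)],
    [(1, "car", 0.91), (2, "person", 0.55)],
    [(1, "car", 0.88)],
]

def fuse_confidences(frames):
    """Average each track's confidence over the frames it appears in,
    keeping the label it was assigned (assumed stable per track)."""
    scores, labels = defaultdict(list), {}
    for dets in frames:
        for track_id, label, conf in dets:
            scores[track_id].append(conf)
            labels[track_id] = label
    return {tid: (labels[tid], sum(c) / len(c)) for tid, c in scores.items()}

fused = fuse_confidences(frames)
# Track 1 ("car") now carries a temporally smoothed confidence,
# so one poor-quality frame matters less to the final decision.
```

This illustrates why the quality of any single frame matters less once temporal information is fused.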
4. Seismic velocity inversion based on CNN-LSTM fusion deep neural network (Cited by: 7)
Authors: Cao Wei, Guo Xue-Bao, Tian Feng, Shi Ying, Wang Wei-Hong, Sun Hong-Ri, Ke Xuan. Applied Geophysics (SCIE, CSCD), 2021, No. 4, pp. 499-514, 593.
Based on a CNN-LSTM fusion deep neural network, this paper proposes a seismic velocity model building method that can simultaneously estimate the root mean square (RMS) velocity and interval velocity from the common-midpoint (CMP) gather. In the proposed method, a convolutional neural network (CNN) encoder and two long short-term memory networks (LSTMs) are used to extract spatial and temporal features from seismic signals, respectively, and a CNN decoder is used to recover the RMS velocity and interval velocity of underground media from the various feature vectors. To address the problems of unstable gradients and easily falling into a local minimum during deep neural network training, we propose to use Kaiming normal initialization with zero negative slopes of rectified units and to adjust the network learning process by optimizing the mean square error (MSE) loss function with the introduction of a freezing factor. Experiments on the testing dataset show that the CNN-LSTM fusion deep neural network can predict RMS velocity as well as interval velocity more accurately, and its inversion accuracy is superior to that of single neural network models. The predictions on complex structures and the Marmousi model are consistent with the true velocity variation trends, and the predictions on field data can effectively correct the phase axis and improve the lateral continuity of the phase axis and the quality of the stack section, indicating the effectiveness and decent generalization capability of the proposed method.
Keywords: velocity inversion; CNN-LSTM fusion deep neural network; weight initialization; training strategy
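As a worked example of the initialization the authors name: Kaiming (He) normal initialization for a rectified unit with negative slope a draws weights from a zero-mean normal with std = sqrt(2 / ((1 + a^2) * fan_in)); a = 0 gives the plain-ReLU case the abstract mentions. The layer sizes below are arbitrary illustration, not from the paper.

```python
import numpy as np

def kaiming_normal(fan_in, fan_out, negative_slope=0.0, rng=None):
    """Kaiming normal initialization for a ReLU-family layer:
    std = sqrt(2 / ((1 + negative_slope**2) * fan_in))."""
    rng = rng or np.random.default_rng(42)
    std = np.sqrt(2.0 / ((1.0 + negative_slope ** 2) * fan_in))
    return rng.normal(0.0, std, size=(fan_in, fan_out))

w = kaiming_normal(fan_in=512, fan_out=256)  # zero slope: plain ReLU
# The empirical std of w should be close to sqrt(2/512) ~= 0.0625,
# which keeps activation variance roughly constant across ReLU layers.
```

Keeping the forward variance constant is precisely what combats the unstable gradients the abstract refers to.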
5. A refined remote sensing classification method for crop planting types coupling DeepLab and Transformer (Cited by: 2)
Authors: LIN Yunhao, WANG Yanjun, LI Shaochun, CAI Hengfan. Acta Geodaetica et Cartographica Sinica (《测绘学报》) (EI, CSCD, PKU Core), 2024, No. 2, pp. 353-366.
Fine remote sensing monitoring of complex planting patterns across different types of farmland is key to surveying cultivated area and estimating crop yields for smart agriculture. In current pixel-level semantic segmentation of crop planting from high-resolution imagery, deep convolutional neural networks struggle to balance multi-scale global spatial features with local detail features, which blurs the boundary contours between farmland parcels of different types and degrades the internal completeness of same-type farmland regions. To address these shortcomings, this paper proposes FDTNet, a dual-branch parallel feature fusion network coupling a DeepLabv3+ encoder and a Transformer encoder, for fine remote sensing monitoring of crop planting types. First, DeepLabv3+ and a Transformer are embedded in parallel in FDTNet to capture the local and global features of farmland imagery, respectively. Second, a coupled attention fusion module (CAFM) is applied to fuse the two sets of features effectively. Then, in the decoder stage, a convolutional block attention module (CBAM) is applied to strengthen the weights of effective features in the convolutional layers. Finally, a progressive multi-layer feature fusion strategy comprehensively fuses the effective features of the encoder and decoder and outputs feature maps, achieving high-accuracy classification of late rice, middle-season rice, lotus root fields, vegetable fields, and greenhouses. To verify the effectiveness of FDTNet for high-resolution crop classification, experiments on the Yuhu and Zhejiang datasets, at different resolutions, reach mIoU of 74.7% and 81.4%, respectively. Compared with existing deep learning methods such as UNet, DeepLabv3, DeepLabv3+, ResT, and Res-Swin, the mIoU of FDTNet is higher by 2.2% and 3.6%, respectively. The results show that FDTNet outperforms the compared methods both in farmland scenes with uniform texture and large sample sizes and in scenes with diverse texture and small sample sizes, demonstrating a comprehensive ability to extract effective features of multiple crop categories.
Keywords: high-resolution remote sensing imagery; crop planting type; semantic segmentation; feature fusion; deep learning
6. Deep Bimodal Fusion Approach for Apparent Personality Analysis
Authors: Saman Riaz, Ali Arshad, Shahab S. Band, Amir Mosavi. Computers, Materials & Continua (SCIE, EI), 2023, No. 4, pp. 2301-2312.
Personality distinguishes individuals' patterns of feeling, thinking, and behaving. Predicting personality from short video series is an exciting research area in computer vision. The majority of existing research draws only preliminary conclusions from the visual and audio (sound) modalities. To overcome this deficiency, we propose the Deep Bimodal Fusion (DBF) approach to predict five personality traits: agreeableness, extraversion, openness, conscientiousness, and neuroticism. In the proposed framework, for the visual modality, modified convolutional neural networks (CNNs), specifically the Descriptor Aggregator Model (DAN), are used to attain significant visual features. For the audio modality, the proposed model extracts audio representations efficiently to construct a long short-term memory (LSTM) network. Moreover, employing modality-based neural networks allows the framework to determine the traits independently before combining them with weighted fusion to achieve a conclusive prediction of the given traits. The proposed approach attains an optimal mean accuracy score of 0.9183, averaged over the five personality traits, and is thus better than previously proposed frameworks.
Keywords: apparent personality analysis; deep bimodal fusion; convolutional neural network; long short-term memory; bimodal information fusion approach
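The weighted-fusion step described above, where per-modality networks predict the five traits independently and a weighted combination gives the final score, reduces to something like the following sketch. The trait scores and the 0.6/0.4 weights are placeholders of my own, not the paper's learned values.

```python
traits = ["agreeableness", "extraversion", "openness",
          "conscientiousness", "neuroticism"]

# Toy per-modality trait predictions in [0, 1]
# (visual branch: DAN-style CNN; audio branch: LSTM).
visual_pred = {"agreeableness": 0.71, "extraversion": 0.55,
               "openness": 0.64, "conscientiousness": 0.80,
               "neuroticism": 0.33}
audio_pred = {"agreeableness": 0.65, "extraversion": 0.61,
              "openness": 0.58, "conscientiousness": 0.74,
              "neuroticism": 0.41}

W_VISUAL, W_AUDIO = 0.6, 0.4  # assumed fusion weights (sum to 1)

# Late fusion: weighted average of the two modalities per trait.
fused_pred = {t: W_VISUAL * visual_pred[t] + W_AUDIO * audio_pred[t]
              for t in traits}
mean_score = sum(fused_pred.values()) / len(traits)
```

In the paper the weights would be tuned or learned; the point here is only the structure of the late-fusion rule.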
7. Application of Attributes Fusion Technology in Prediction of Deep Reservoirs in Paleogene of Bohai Sea
Authors: ZHANG Daxiang, YIN Taiju, SUN Shaochuan, SHI Qian. Acta Geologica Sinica (English Edition) (SCIE, CAS, CSCD), 2017, No. S1, pp. 148-149.
1 Introduction. The Paleogene strata (with a depth of more than 2500 m) in the Bohai Sea are complex (Xu Changgui, 2006): the reservoirs are deeply buried, reservoir prediction is difficult (LAI Weicheng, XU Changgui, 2012), and more
Keywords: attributes fusion technology; deep reservoir prediction; Paleogene; Bohai Sea; RGB
8. Deep Convolutional Feature Fusion Model for Multispectral Maritime Imagery Ship Recognition
Authors: Xiaohua Qiu, Min Li, Liqiong Zhang, Rui Zhao. Journal of Computer and Communications, 2020, No. 11, pp. 23-43.
Combining both visible and infrared object information, multispectral data is a promising source for automatic maritime ship recognition. In this paper, to exploit both deep convolutional neural networks and multispectral data, we model the multispectral ship recognition task as a convolutional feature fusion problem and propose a feature fusion architecture called Hybrid Fusion. We fine-tune the VGG-16 model pre-trained on ImageNet using three-channel single-spectral images and four-channel multispectral images, and use existing regularization techniques to avoid over-fitting. Hybrid Fusion and three other feature fusion architectures are investigated. Each fusion architecture consists of a visible-image and an infrared-image feature extraction branch, in which the pre-trained and fine-tuned VGG-16 models serve as feature extractors. In each architecture, image features of the two branches are first extracted from the same layer or different layers of the VGG-16 model. The features from the two branches are then flattened and concatenated to produce a multispectral feature vector, which is finally fed into a classifier to perform ship recognition. Furthermore, based on these fusion architectures, we also evaluate the recognition performance of a feature vector normalization method and three combinations of feature extractors. Experimental results on the visible and infrared ship (VAIS) dataset show that the best Hybrid Fusion achieves 89.6% mean per-class recognition accuracy on daytime paired images and 64.9% on nighttime infrared images, outperforming the state-of-the-art method by 1.4% and 3.9%, respectively.
Keywords: deep convolutional neural network; feature fusion; multispectral data; object recognition
9. Method of Multi-Mode Sensor Data Fusion with an Adaptive Deep Coupling Convolutional Auto-Encoder
Authors: Xiaoxiong Feng, Jianhua Liu. Journal of Sensor Technology, 2023, No. 4, pp. 69-85.
To address the difficulty of fusing multi-mode sensor data for complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method is proposed. First, the multi-mode features extracted synchronously by the CCAE are stacked and fed to multi-channel convolution layers for fusion. Then, the fused data is passed to fully connected layers for compression and fed to a Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters are optimized adaptively using the grey wolf optimization (GWO) algorithm. Experimental comparisons show that the proposed ADCCAE fusion model is superior to existing models for multi-mode data fusion.
Keywords: multi-mode data fusion; coupling convolutional auto-encoder; adaptive optimization; deep learning
10. Remaining useful life prediction for aero-engines based on a stacked autoencoder and DeepAR (Cited by: 8)
Authors: LI Hao, WANG Zhuojian, LI Zhe, CHEN Xuan, LI Yuan. Journal of Propulsion Technology (《推进技术》) (EI, CAS, CSCD, PKU Core), 2022, No. 11, pp. 67-75.
Existing remaining useful life (RUL) prediction for aero-engines is mostly based on single-point prediction and cannot give an accurate confidence interval for the result. To address this, a probabilistic prediction model combining a stacked autoencoder with DeepAR is proposed. First, the stacked autoencoder extracts features from engine monitoring data through unsupervised deep learning and constructs a health indicator (HI) reflecting performance degradation. A DeepAR prediction model is then built on a bidirectional long short-term memory (BiLSTM) network; the extracted HI series is fed into the DeepAR model, which globally learns the implicit relationship between the HI series and operating time and outputs the parameters of the probability distribution of the engine's remaining life. Experiments on the C-MAPSS turbofan degradation dataset verify the effectiveness of the proposed method. The results show that, compared with other methods, the proposed approach fuses monitoring data better, improves prediction model performance by 6.4%, and the actual remaining life generally falls within the 95% confidence interval.
Keywords: aero-engine; life prediction; prediction model; deep learning; data fusion
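The claim that the actual remaining life falls within the 95% confidence interval can be made concrete with a worked example: a DeepAR-style model outputs distribution parameters rather than a point estimate, and the interval follows directly from them. The Gaussian assumption and all numbers below are invented for illustration; they are not from the paper.

```python
# Assumed model output for one engine: a Gaussian over remaining
# useful life, parameterized by mean and standard deviation.
mu, sigma = 120.0, 8.5   # cycles (toy values, not from the paper)

z95 = 1.96               # two-sided 95% standard-normal quantile
lower, upper = mu - z95 * sigma, mu + z95 * sigma

def covered(true_rul):
    """Is the observed remaining life inside the 95% interval?"""
    return lower <= true_rul <= upper
```

A point predictor gives only mu; the probabilistic output additionally quantifies how wrong that point estimate may be, which is the practical advantage the abstract claims.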
11. Venous thrombosis prediction based on a fused DeepFM and XGBoost model (Cited by: 1)
Authors: LI Li, XIE Chao, WU Di. Computer Systems & Applications (《计算机系统应用》), 2022, No. 9, pp. 376-381.
Peripherally inserted central catheter (PICC) placement is widely used for medium- and long-term intravenous therapy, but it can lead to complications and adverse reactions such as PICC-related thrombosis. The continuing development of machine learning and deep neural networks offers clinical-data-driven approaches to the auxiliary diagnosis of PICC-related thrombosis. This paper builds a fusion model of DeepFM and XGBoost that fuses features of sparse data while reducing over-fitting, providing risk prediction for PICC-related thrombosis. Experimental results show that the fusion model can effectively extract feature importance for PICC-related thrombosis and predict the probability of disease, helping clinicians identify high-risk thrombosis factors during peripheral catheter placement and intervene in time to prevent thrombosis.
Keywords: machine learning; thrombosis prediction; DeepFM; XGBoost; model fusion; prediction model; deep learning
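The abstract does not spell out its fusion rule. One common way to fuse a DeepFM-style model with an XGBoost-style model is to blend their predicted probabilities; the sketch below uses fabricated probabilities and an assumed 50/50 blend weight, purely to show the shape of such an ensemble, and is not the paper's method.

```python
# Toy predicted probabilities of PICC-related thrombosis for 4 patients.
p_deepfm = [0.12, 0.83, 0.47, 0.05]   # from the sparse-feature model
p_xgb    = [0.20, 0.77, 0.55, 0.09]   # from the tree ensemble

ALPHA = 0.5  # assumed blend weight between the two models

# Fused risk: convex combination of the two models' probabilities.
p_fused = [ALPHA * a + (1 - ALPHA) * b for a, b in zip(p_deepfm, p_xgb)]

# Flag patients above an assumed decision threshold for clinical review.
high_risk = [i for i, p in enumerate(p_fused) if p >= 0.5]
```

Blending tends to help when the two base models err differently, which is plausible here since one learns from sparse cross features and the other from tree splits.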
12. Ultra-short-term photovoltaic power prediction based on a Wide&Deep-XGB2LSTM model (Cited by: 10)
Authors: LI Ran, DING Xing, SUN Fan, HAN Yi, LIU Huilan, YAN Jingru. Electric Power Automation Equipment (《电力自动化设备》) (EI, CSCD, PKU Core), 2021, No. 7, pp. 31-37.
To make full use of the power grid's massive historical data for photovoltaic power prediction, a Wide&Deep-XGB2LSTM ultra-short-term photovoltaic power prediction model is proposed, which integrates the extreme gradient boosting (XGBoost) algorithm and a long short-term memory (LSTM) network within the Wide & Deep framework. Original features such as time, irradiance, and temperature are extracted from historical data; on this basis, features are reconstructed by cross-combination and by mining statistical features, constructing combination features such as irradiance x irradiance, mean, and standard deviation, and feature selection is performed with the filter and embedded methods. Comparative experiments in the TensorFlow framework verify that the proposed model and the feature engineering work improve photovoltaic power prediction performance.
Keywords: photovoltaic power prediction; Wide & Deep model; extreme gradient boosting; long short-term memory network; feature engineering; model fusion
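The feature-reconstruction step above, cross features such as irradiance x irradiance plus statistical features like mean and standard deviation, is generic enough to sketch. The irradiance values and the 3-sample window are invented for illustration; the paper's actual columns and window lengths are not given here.

```python
import statistics

irradiance = [0.0, 120.5, 340.2, 510.8, 470.1, 220.3]  # toy hourly values

# Cross feature: element-wise irradiance x irradiance.
irr_sq = [v * v for v in irradiance]

# Statistical features over a sliding window of 3 samples.
WINDOW = 3
win_mean = [statistics.mean(irradiance[i:i + WINDOW])
            for i in range(len(irradiance) - WINDOW + 1)]
win_std = [statistics.stdev(irradiance[i:i + WINDOW])
           for i in range(len(irradiance) - WINDOW + 1)]
```

Such constructed columns would then pass through the filter and embedded selection steps the abstract mentions before reaching the model.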
13. U-FusionNet-ResNet50+SENet: a fish behavior recognition model based on multi-level fusion of acoustic and visual features (Cited by: 5)
Authors: XU Jingwen, YU Hong, ZHANG Peng, GU Lishuai, LI Haiqing, ZHENG Guowei, CHENG Siqi, YIN Leiming. Journal of Dalian Ocean University (《大连海洋大学学报》) (CAS, CSCD, PKU Core), 2023, No. 2, pp. 348-356.
To address the low accuracy and recall of single-modality fish behavior recognition under complex conditions such as dim light and acoustic or visual noise, a fish behavior recognition model based on multi-level fusion of acoustic and visual features, U-FusionNet-ResNet50+SENet, is proposed. The method uses a ResNet50 model to extract visual-modality features and MFCC+ResNet50 to extract acoustic-modality features, and on this basis designs a U-shaped fusion architecture so that fish visual and acoustic features of different dimensions interact fully, achieving feature fusion at each stage of feature extraction; finally, SENet is introduced to form a channel-attention feature fusion network. Comparative experiments on synthetically noised multimodal fish behavior data verify the effectiveness of the algorithm. The results show that U-FusionNet-ResNet50+SENet reaches 93.71% accuracy, an F1 score of 93.43%, and 92.56% recall in fish behavior recognition; compared with the best existing model, the intermediate-feature-level deep model, recall, F1 score, and accuracy improve by 2.35%, 3.45%, and 3.48%, respectively. The proposed method effectively addresses the low accuracy of single-modality fish behavior recognition, improves overall recognition, and can recognize swimming, feeding, and other fish behaviors under complex conditions, providing a new approach for fish behavior recognition under real production conditions.
Keywords: behavior recognition; deep learning; multimodal fusion; U-FusionNet; ResNet50; SENet
14. A TFP3D human action recognition algorithm based on T-Fusion
Authors: ZENG Mingru, XIONG Jiahao, ZHU Qin. Computer Integrated Manufacturing Systems (《计算机集成制造系统》) (EI, CSCD, PKU Core), 2023, No. 12, pp. 4032-4039.
Current human action recognition algorithms suffer from the poor timeliness of two-stream convolutional networks and the large parameter counts and high complexity of 3D convolutional networks. To address this, TFP3D, a spatio-temporal fusion pseudo-3D convolutional neural network model based on 3D convolution and a temporal fusion network, is proposed. First, 3D convolution decomposition is used to reduce the huge number of parameters brought by 3D convolution kernels. Second, a temporal fusion module, T-Fusion, is added to ensure the effective transfer of the spatio-temporal features of human action information. Finally, the Kinetics dataset is used to pre-train the deep model, improving network speed while maintaining accuracy. Extensive experiments on the common human action recognition dataset UCF101 compare the results with currently popular algorithms, showing that the proposed TFP3D outperforms the other methods, with a considerably higher average recognition rate.
Keywords: TFP3D network; temporal fusion network; pre-training; action recognition; deep learning
15. Advanced Feature Fusion Algorithm Based on Multiple Convolutional Neural Network for Scene Recognition (Cited by: 5)
Authors: Lei Chen, Kanghu Bo, Feifei Lee, Qiu Chen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2020, No. 2, pp. 505-523.
Scene recognition is a popular open problem in the computer vision field. Among the many methods proposed in recent years, Convolutional Neural Network (CNN) based approaches achieve the best performance in scene recognition. We propose in this paper an advanced feature fusion algorithm using Multiple Convolutional Neural Networks (Multi-CNN) for scene recognition. Unlike existing works that usually use an individual convolutional neural network, a fusion of multiple different convolutional neural networks is applied. First, we split training images in two directions and apply them to three deep CNN models, and then extract features from the last fully connected (FC) layer and the probabilistic layer of each model. Finally, the feature vectors are fused in groups with different fusion strategies and forwarded into a SoftMax classifier. The proposed algorithm is evaluated on three scene datasets. The experimental results demonstrate its effectiveness compared with other state-of-the-art approaches.
Keywords: scene recognition; deep feature fusion; multiple convolutional neural network
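The final step above, fused feature vectors forwarded into a SoftMax classifier, is standard enough to show as a self-contained example. The per-class scores below are made up; only the numerically stable softmax itself is the point.

```python
import math

def softmax(scores):
    """Numerically stable softmax: subtract the max score before
    exponentiating so large scores cannot overflow."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

fused_scores = [2.0, 1.0, 0.1]       # toy per-class scores after fusion
probs = softmax(fused_scores)        # class probabilities summing to 1
predicted_class = probs.index(max(probs))
```

Subtracting the maximum changes nothing mathematically (the shift cancels in the ratio) but keeps `exp` within floating-point range for large fused scores.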
16. A Deep LightGBM algorithm for sound classification
Authors: LI Xingjian, TANG Xinyi, ZHANG Rui. Technical Acoustics (《声学技术》) (CSCD, PKU Core), 2022, No. 6, pp. 871-877.
Sound classification has wide application value in medical diagnosis, scene analysis, speech recognition, and ecological environment analysis. Traditional sound classifiers use neural networks, but they have notable shortcomings in accuracy, model configuration, parameter tuning, and data preprocessing. On this basis, this paper proposes an improved method based on the "deep forest" idea: a LightGBM-based deep learning model (Deep LightGBM). It improves classification accuracy and generalization while keeping the model simple, and effectively reduces parameter dependence. On the UrbanSound8K dataset, using vector-based extraction of sound features, the classification accuracy reaches 95.84%. Fusing features extracted by a convolutional neural network (CNN) with features obtained by the vector method and training the new model raises the accuracy to 97.67%. Experiments prove that this feature extraction scheme combined with Deep LightGBM yields a model that is easy to tune, highly accurate, free of over-fitting, and of good generalization ability.
Keywords: sound classification; LightGBM algorithm; deep forest; feature fusion; feature extraction
17. MFC-DeepLabV3+: a multi-feature cascade fusion network model for crack defect detection (Cited by: 3)
Authors: LI Guoyan, LIANG Jiadong, LIU Yi, PAN Yuheng, LIU Zeshuai. Journal of Railway Science and Engineering (《铁道科学与工程学报》) (EI, CAS, CSCD, PKU Core), 2023, No. 4, pp. 1370-1381.
Road cracks pose a serious threat to road safety, and accurate crack detection is indispensable for ensuring it. Conventional manual inspection and traditional machine learning detection methods generalize poorly and deliver low crack segmentation accuracy against complex backgrounds. To address these problems, a new road crack defect detection model, MFC-DeepLabV3+ (Multi Feature Cascade-DeepLabV3+), is proposed. First, to cope with the complex topology and strong non-uniformity of crack images, the backbone feature extraction network is improved: channel-wise grouped convolution and a split-attention module strengthen the model's ability to extract crack image features, while a positional-information attention mechanism improves precise localization of crack structural features and increases the utilization of feature information across the network layers. Second, multi-branch shared dense connections are added to improve the ASPP (Atrous Spatial Pyramid Pooling) module so that, imitating human visual perception, it generates semantic feature information that densely covers the range of crack scales while keeping the receptive fields balanced. Finally, a multiple edge refinement fusion mechanism is added at the feature fusion stage, increasing the model's use of high- and low-level feature information, improving precise segmentation of crack edges, and preventing the loss of pixels along crack contours. To verify the effectiveness of MFC-DeepLabV3+, experiments on the public pavement crack datasets CRACK500 and DeepCrack reach mean intersection-over-union of 79.63% and 76.99%, respectively, surpassing other segmentation models; in visual comparisons, the predicted crack segmentations have clearer edges and more complete regions, indicating good engineering application value.
Keywords: defect detection; crack recognition; deep learning; semantic segmentation; multi-feature fusion
18. The deep integration of AI technologies represented by ChatGPT with higher education (Cited by: 4)
Author: JIANG Nishan. Continuing Education Research (《继续教育研究》), 2024, No. 4, pp. 75-80.
With the rapid development of artificial intelligence, AI technologies represented by ChatGPT have gradually become a focus of public attention. This paper explores the deep integration of ChatGPT technology with higher education, analyzes its impact and challenges, and proposes corresponding solutions. It first introduces the basic principles and characteristics of ChatGPT, including the construction, pre-training, and optimization of its language generation model. It then examines application scenarios of ChatGPT in higher education, such as intelligent education, online teaching, and personalized recommendation, and discusses the resulting impacts and challenges. Finally, it proposes solutions, such as strengthening the deep integration of ChatGPT technology with higher education and establishing sound educational regulatory mechanisms, to address the challenges ChatGPT technology faces in higher education.
Keywords: ChatGPT; AI technology; higher education; deep integration; integration path
19. Construction of a TCM syndrome element differentiation model for type 2 diabetes based on deep learning multimodal fusion (Cited by: 1)
Authors: ZHAO Zhihui, ZHOU Yi, LI Weihong, TANG Zhaohui, GUO Qiang, CHEN Rigao. Modernization of Traditional Chinese Medicine and Materia Medica - World Science and Technology (《世界科学技术-中医药现代化》) (CSCD, PKU Core), 2024, No. 4, pp. 908-918.
Objective: To meet the needs of the Internet+ intelligent medicine era, a TCM syndrome element differentiation model for type 2 diabetes is constructed using deep learning and multimodal fusion, incorporating tongue-imaging-instrument image data and structured inquiry data, providing experimental support and a scientific basis for intelligent TCM differentiation. Methods: 2,585 patients with type 2 diabetes were enrolled, and three experts independently labeled the syndrome elements. A symptom-based differentiation model (S-Model) and a tongue-image differentiation model (T-Model) were built on deep fully connected neural networks, U2-Net, ResNet34, and related networks, and a multimodal fusion differentiation model (TS-Model), taking both as joint input, was constructed with multimodal fusion techniques. The prediction performance of the models was compared via F1 score, precision, and recall. Results: Across the fourteen syndrome element classes, T-Model's F1 scores ranged from 0.000% to 86.726%, S-Model's from 0.000% to 97.826%, and TS-Model's from 55.556% to 99.065%. Compared with T-Model and S-Model, TS-Model's F1 scores were overall higher and more stable. Conclusion: The intelligent syndrome element differentiation model built with deep learning multimodal fusion techniques performs well. Multimodal fusion is suitable for optimizing TCM syndrome element differentiation models and provides methodological support for the next step of building highly intelligent differentiation models with fully objectivized four-diagnosis information.
Keywords: syndrome element differentiation; type 2 diabetes; deep learning; multimodal fusion
20. Application of query result processing techniques based on Deep Web retrieval
Authors: ZHOU Erhu, ZHANG Shuiping, HU Yang. Computer Engineering and Design (《计算机工程与设计》) (CSCD, PKU Core), 2010, No. 1, pp. 106-109.
In current Deep Web information retrieval, the query result pages returned by Web databases are diverse in content and varied in form, making effective information hard to extract. Improving on information extraction and data fusion techniques, a technique for processing query result pages is proposed. Through HTML page parsing, information filtering, block segmentation, pruning, and the derivation of extraction rules, it automatically extracts the effective information. By establishing merging rules, deduplication rules, and cleaning rules, it fuses the data effectively and finally stores it under a unified schema. Application in related projects verifies the effectiveness and practicality of the technique.
Keywords: Deep Web information; result processing; rules; information extraction; data fusion