Journal Articles
113,855 articles found
1. Construction of an early-warning mathematical model for acute hepatopancreatic necrosis disease (AHPND) in shrimp based on the Deep Forest algorithm
Authors: 王印庚, 于永翔, 蔡欣欣, 张正, 王春元, 廖梅杰, 朱洪洋, 李昊. 《渔业科学进展》, CSCD, PKU Core, 2024, No. 3, pp. 171-181 (11 pages)
To forecast outbreaks of acute hepatopancreatic necrosis disease (AHPND) in pond-cultured Pacific white shrimp (Penaeus vannamei), the authors have conducted continuous monitoring of P. vannamei farming areas since 2020, covering 18 candidate early-warning indicators including environmental physicochemical factors, microbial factors, and the health status of the shrimp themselves. After data standardization, correlations among pathogen, host, and environment were analyzed to screen the candidate warning factors. Data modeling and evaluation of predictive performance were carried out in Python (simulation environment: Python 2.7) using the Deep Forest, LightGBM, and XGBoost algorithms, with the warning-factor indicators as input samples (warning signs) and disease occurrence as the output (warning state). Input and target data matrices were built from the inputs and outputs respectively, the input samples were initialized from the raw data matrix, and function equations were fitted; the fitted code can predict the target warning state from known environmental, pathogen, and shrimp immune-indicator data. A four-dimensional Deep Forest early-warning model was finally established over total bacterial count in the shrimp hepatopancreas, the proportion of Vibrio in the shrimp, total bacterial count in the water, and salinity, reaching an accuracy of 89.00%. This study applies artificial-intelligence algorithms to forecasting AHPND occurrence in shrimp, establishes an early-warning mathematical model for AHPND, and provides technical support for healthy shrimp farming and disease prevention and control.
Keywords: shrimp; acute hepatopancreatic necrosis disease (AHPND); early-warning mathematical model; Deep Forest algorithm; Python language
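The abstract above mentions standardizing the candidate warning-factor indicators before screening them. As a minimal, library-free sketch of that preprocessing step (the indicator names and values below are hypothetical, not taken from the paper; constant columns are assumed not to occur):

```python
import math

def zscore_standardize(rows):
    """Column-wise z-score standardization: (x - mean) / std."""
    n, m = len(rows), len(rows[0])
    cols = [[row[j] for row in rows] for j in range(m)]
    means = [sum(c) / n for c in cols]
    stds = [math.sqrt(sum((x - mu) ** 2 for x in c) / n)
            for c, mu in zip(cols, means)]  # population std; assumes std > 0
    return [[(row[j] - means[j]) / stds[j] for j in range(m)] for row in rows]

# Hypothetical readings for 3 candidate warning factors (columns):
# water total bacteria (log CFU/mL), Vibrio share (%), salinity (psu)
raw = [
    [5.1, 12.0, 28.0],
    [6.3, 35.0, 30.0],
    [5.7, 20.0, 29.0],
]
std_rows = zscore_standardize(raw)
```

After standardization every column has zero mean and unit variance, so indicators on very different scales (percentages vs. salinity) become comparable for correlation screening.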
2. A corrosion detection method for hull structures based on DeepLabv3+
Authors: 向林浩, 方昊昱, 周健, 张瑜, 李位星. 《船海工程》, PKU Core, 2024, No. 2, pp. 30-34 (5 pages)
Using image-recognition methods on real-time images collected by drones and robots for hull-structure corrosion detection can markedly improve inspection efficiency and the level of digitalization and intelligence; it has great application value and potential and will change traditional hull-structure inspection practice. A DeepLabv3+-based hull-structure corrosion detection model is proposed: image samples were collected and annotated with segmentation masks for three corrosion categories, and the DeepLabv3+ semantic-segmentation model was trained to predict the pixel-level class and region of corrosion in images. The model reached a precision of 52.92% on the test set, demonstrating the feasibility of using DeepLabv3+ to detect hull corrosion defects.
Keywords: hull structure; corrosion detection; deep learning; DeepLabv3+
3. Research on velocity-modeling technology based on the M-DeepLab network
Authors: 徐秀刚, 张浩楠, 许文德, 郭鹏. 《中国海洋大学学报(自然科学版)》, CAS, CSCD, PKU Core, 2024, No. 6, pp. 145-155 (11 pages)
This paper proposes an M-DeepLab network framework for velocity modeling. The network takes seismic shot-gather records as input and uses the lightweight MobileNet as its backbone to speed up training. An Attention module is added after the ASPP module in the encoder, and velocity features from different network depths are fused in the decoder, which captures more velocity features while preserving shallow-layer velocity information and prevents network degradation and overfitting. Model tests show that M-DeepLab achieves intelligent, accurate velocity modeling, with good results on simple models, complex models, and complex models with noisy data. Compared with DeepLabV3+, the proposed method predicts velocity-model interfaces, especially regions of abrupt velocity change, with higher accuracy, verifying its precision, efficiency, practicality, and noise robustness.
Keywords: deep learning; velocity modeling; M-DeepLab network; supervised learning
4. UAV-Assisted Dynamic Avatar Task Migration for Vehicular Metaverse Services: A Multi-Agent Deep Reinforcement Learning Approach (Cited: 1)
Authors: Jiawen Kang, Junlong Chen, Minrui Xu, Zehui Xiong, Yutao Jiao, Luchao Han, Dusit Niyato, Yongju Tong, Shengli Xie. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2024, No. 2, pp. 430-445 (16 pages)
Avatars, as promising digital representations and service assistants of users in Metaverses, can enable drivers and passengers to immerse themselves in 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks include a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources. It is inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency, while the high mobility of vehicles makes it challenging for vehicles to independently make avatar migration decisions depending on current and future vehicle status. To address these challenges, in this paper, we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then design the multi-agent proximal policy optimization (MAPPO) approach as the MADRL algorithm for the avatar task migration problem. To overcome slow convergence resulting from the curse of dimensionality and non-stationarity issues caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach via sequential decision-making models for the efficient representation of relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and effectively reduces approximately 20% of the latency of avatar task execution in UAV-assisted vehicular Metaverses.
Keywords: avatar; blockchain; Metaverses; multi-agent deep reinforcement learning; transformer; UAVs
5. Research on a segmentation method for multiple types of floating debris on water surfaces based on improved DeeplabV3+
Authors: 包学才, 刘飞燕, 聂菊根, 许小华, 柯华盛. 《水利水电技术(中英文)》, PKU Core, 2024, No. 4, pp. 163-175 (13 pages)
[Objective] To address the poor robustness of traditional image-processing methods and the inability of common deep-learning detectors to accurately delineate the boundaries of large patches of floating debris, [Methods] a semantic-segmentation method based on an improved DeeplabV3+ is proposed for recognizing multiple types of floating debris on water surfaces. Collected real-world floating debris was categorized, and comparative experiments were run on a self-built dataset. The algorithm uses the Xception network as the backbone to obtain preliminary debris features, introduces an attention mechanism in the feature-enhancement part of the network to emphasize informative features, and adds a fully connected conditional random field model in post-processing to fuse local per-pixel information with global semantic information. [Results] The improved algorithm raises mPA (Mean Pixel Accuracy) by 5.73% and mIOU (Mean Intersection Over Union) by 4.37%. [Conclusion] Compared with other models, the improved DeeplabV3+ extracts floating-debris features more effectively and captures richer detail, enabling more precise recognition of the boundaries of multiple debris types and of debris that is hard to classify; tests across multiple reservoir scenes show it meets the requirements of debris detection in real water environments.
Keywords: deep learning; semantic segmentation; feature extraction; floating-debris recognition; attention mechanism; fully connected conditional random field; algorithm model; influencing factors
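The entry above reports gains in mPA and mIOU. For readers unfamiliar with these segmentation metrics, here is a minimal stdlib-only sketch of how they are computed on flattened label maps (the two-class toy labels below are hypothetical, not from the paper):

```python
def mean_pixel_accuracy(pred, gt, num_classes):
    """mPA: average over classes of the fraction of that class's pixels labelled correctly."""
    accs = []
    for c in range(num_classes):
        total = sum(1 for g in gt if g == c)
        if total == 0:
            continue  # class absent from ground truth
        correct = sum(1 for p, g in zip(pred, gt) if g == c and p == c)
        accs.append(correct / total)
    return sum(accs) / len(accs)

def mean_iou(pred, gt, num_classes):
    """mIOU: average over classes of intersection-over-union."""
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, g in zip(pred, gt) if p == c and g == c)
        union = sum(1 for p, g in zip(pred, gt) if p == c or g == c)
        if union == 0:
            continue  # class absent from both prediction and ground truth
        ious.append(inter / union)
    return sum(ious) / len(ious)

# Flattened label maps: 0 = water, 1 = debris (hypothetical classes)
gt   = [0, 0, 1, 1, 1, 0]
pred = [0, 1, 1, 1, 0, 0]
```

mPA only checks recall per class, while mIOU also penalizes false positives, which is why the two numbers in the entry move by different amounts.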
6. An xDeepFM recommendation model based on field factorization
Authors: 李子杰, 张姝, 欧阳昭相, 王俊, 吴迪. 《应用科学学报》, CAS, CSCD, PKU Core, 2024, No. 3, pp. 513-524 (12 pages)
The eXtreme deep factorization machine (xDeepFM) is a context-aware recommendation model that introduces a compressed interaction network for feature crossing with controllable order and combines it with a deep neural network to improve recommendation performance. To further improve xDeepFM in recommendation scenarios, an improved xDeepFM model based on field factorization is proposed. The model strengthens feature expressiveness with field information and builds multiple compressed interaction networks to learn high-order combined features. The rationality of the user-field and item-field settings is analyzed, and performance is evaluated using area under the ROC curve and log-likelihood loss on three MovieLens datasets of different scales, verifying the effectiveness of the improved model.
Keywords: recommendation algorithm; eXtreme deep factorization machine; field factorization; deep learning
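The compressed interaction network (CIN) at the core of xDeepFM crosses features field-wise along the embedding dimension, so each layer raises the interaction order by one. Below is a minimal single-layer forward pass written from the published CIN formulation with toy numbers; it is a sketch, not the paper's code:

```python
def cin_layer(x_prev, x0, weights):
    """One compressed interaction network (CIN) layer.

    x_prev: H_prev x D feature maps from the previous layer,
    x0:     m x D base field embeddings,
    weights: list of H_next matrices, each H_prev x m.
    Interactions are element-wise along the embedding dimension D.
    """
    D = len(x0[0])
    out = []
    for w in weights:  # one weight matrix per output feature map
        row = []
        for d in range(D):
            s = sum(w[i][j] * x_prev[i][d] * x0[j][d]
                    for i in range(len(x_prev))
                    for j in range(len(x0)))
            row.append(s)
        out.append(row)
    return out

# Two fields with 2-dimensional embeddings (toy numbers)
x0 = [[1.0, 2.0], [3.0, 4.0]]
w = [[[1.0, 0.0], [0.0, 1.0]]]   # a single output map with identity-like weights
x1 = cin_layer(x0, x0, w)        # second-order interactions
```

Stacking further calls (`cin_layer(x1, x0, ...)`) yields third-order crossings, which is what "feature crossing with controllable order" refers to.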
7. Optimizing Deep Learning for Computer-Aided Diagnosis of Lung Diseases: An Automated Method Combining Evolutionary Algorithm, Transfer Learning, and Model Compression
Authors: Hassen Louati, Ali Louati, Elham Kariri, Slim Bechikh. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, No. 3, pp. 2519-2547 (29 pages)
Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-Rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-Ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-Ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
Keywords: computer-aided diagnosis; deep learning; evolutionary algorithms; deep compression; transfer learning
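The entry describes using an evolutionary algorithm to search CNN design and compression choices. A toy, stdlib-only EA over bitstring encodings illustrates the loop structure (selection, mutation, elitism); the fitness function and the keep/prune encoding are placeholders, not the paper's:

```python
import random

def evolve(fitness, length=12, pop_size=20, generations=60, seed=1):
    """Minimal elitist EA over fixed-length bitstrings.

    Each bitstring stands in for an encoded CNN design choice
    (e.g., keep or prune a filter); fitness scores the decoded design.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]        # keep the better half
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(length)] ^= 1    # one-bit mutation
            children.append(child)
        pop = parents + children                 # parents survive (elitism)
    return max(pop, key=fitness)

# Toy fitness: prefer sparse "architectures" that keep exactly 4 filters
best = evolve(lambda bits: -abs(sum(bits) - 4))
```

Real neural architecture search replaces the toy fitness with validation accuracy (and a compression penalty), which is where almost all of the compute goes.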
8. Hyperspectral image super resolution using deep internal and self-supervised learning
Authors: Zhe Liu, Xian-Hua Han. 《CAAI Transactions on Intelligence Technology》, SCIE, EI, 2024, No. 1, pp. 128-141 (14 pages)
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral (HR-HS) image. With previously collected large amounts of external data, these methods are intuitively realised under the full supervision of the ground-truth data. Thus, database construction in the research paradigm of merging the low-resolution (LR) HS (LR-HS) and HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting corresponding training triplets (HR-MS/RGB, LR-HS, and HR-HS images) simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade in HSI super-resolution performance on real images captured under diverse environments. To handle these limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by online preparing the training triplet samples from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation to the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations were then formulated for measuring the network modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method without any prior training, with great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments have been conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
Keywords: computer vision; deep learning; deep neural networks; hyperspectral; image enhancement
9. Dendritic Deep Learning for Medical Segmentation
Authors: Zhipeng Liu, Zhiming Zhang, Zhenyu Lei, Masaaki Omura, Rong-Long Wang, Shangce Gao. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2024, No. 3, pp. 803-805 (3 pages)
Dear Editor, this letter presents a novel segmentation approach that leverages dendritic neurons to tackle the challenges of medical imaging segmentation. In this study, we enhance the segmentation accuracy based on a SegNet variant including an encoder-decoder structure, an upsampling index, and a deep supervision method. Furthermore, we introduce a dendritic neuron-based convolutional block to enable nonlinear feature mapping, thereby further improving the effectiveness of our approach.
Keywords: thereby; deep; enable
10. Evaluation of excavation damaged zones (EDZs) in Horonobe Underground Research Laboratory (URL)
Authors: Koji Hata, Sumio Niunoya, Kazuhei Aoyagi, Nobukatsu Miyara. 《Journal of Rock Mechanics and Geotechnical Engineering》, SCIE, CSCD, 2024, No. 2, pp. 365-378 (14 pages)
Excavation of underground caverns, such as mountain tunnels and energy-storage caverns, may damage the surrounding rock as a result of stress redistribution. In this influenced zone, new cracks and discontinuities are created or propagate in the rock mass. Therefore, it is effective to measure and evaluate the acoustic emission (AE) events generated by the rocks, which are small elastic vibrations, together with permeability change. The authors have developed a long-term measurement device that incorporates an optical AE (O-AE) sensor, an optical pore pressure sensor, and an optical temperature sensor in a single multi-optical measurement probe (MOP). The Japan Atomic Energy Agency has been conducting R&D activities to enhance the reliability of deep geological disposal technology for high-level radioactive waste (HLW). In a high-level radioactive disposal project, one of the challenges is the development of methods for long-term monitoring of rock mass behavior. Therefore, in January 2014, long-term measurements of the hydro-mechanical behavior of the rock mass were launched using the developed MOP in the vicinity of 350 m below the surface at the Horonobe Underground Research Center. The measurement results show that AEs occur frequently up to 1.5 m from the wall during excavation. In addition, hydraulic conductivity increased by 2-4 orders of magnitude. Elastoplastic analysis revealed that the hydraulic behavior of the rock mass affected the pore pressure fluctuations and caused micro-fractures. Based on this, a conceptual model is developed to represent the excavation damaged zone (EDZ), which contributes to the safe geological disposal of radioactive waste.
Keywords: excavation damaged zone (EDZ); optical sensor; long-term monitoring; acoustic emission (AE); shaft sinking
11. MAUN: Memory-Augmented Deep Unfolding Network for Hyperspectral Image Reconstruction
Authors: Qian Hu, Jiayi Ma, Yuan Gao, Junjun Jiang, Yixuan Yuan. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2024, No. 5, pp. 1139-1150 (12 pages)
Spectral compressive imaging has emerged as a powerful technique to collect 3D spectral information as 2D measurements. The algorithm for restoring the original 3D hyperspectral images (HSIs) from compressive measurements is pivotal in the imaging process. Early approaches painstakingly designed networks to directly map compressive measurements to HSIs, resulting in a lack of interpretability without exploiting the imaging priors. While some recent works have introduced the deep unfolding framework for explainable reconstruction, the performance of these methods is still limited by the weak information transmission between iterative stages. In this paper, we propose a Memory-Augmented deep Unfolding Network, termed MAUN, for explainable and accurate HSI reconstruction. Specifically, MAUN implements a novel CNN scheme to facilitate a better extrapolation step of the fast iterative shrinkage-thresholding algorithm, introducing an extra momentum incorporation step for each iteration to alleviate information loss. Moreover, to exploit the high correlation of intermediate images from neighboring iterations, we customize a cross-stage transformer (CSFormer) as the deep denoiser to simultaneously capture self-similarity from both in-stage and cross-stage features, which is the first attempt to model the long-distance dependencies between iteration stages. Extensive experiments demonstrate that the proposed MAUN is superior to other state-of-the-art methods both visually and metrically. Our code is publicly available at https://github.com/HuQ1an/MAUN.
Keywords: deep; folding; iteration
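MAUN builds on the extrapolation (momentum) step of the fast iterative shrinkage-thresholding algorithm (FISTA). The sketch below shows plain FISTA-style acceleration on a smooth toy objective to make that extrapolation step concrete; it is a generic illustration under stated assumptions, not the paper's network:

```python
import math

def fista(grad, x0, step, iters=500):
    """FISTA-style accelerated gradient descent (smooth case, no prox term).

    The extrapolation y = x_k + ((t_k - 1)/t_{k+1}) * (x_k - x_{k-1})
    is the momentum step that MAUN's CNN scheme refines.
    """
    x_prev = list(x0)
    x = list(x0)
    t = 1.0
    for _ in range(iters):
        t_next = (1.0 + math.sqrt(1.0 + 4.0 * t * t)) / 2.0
        beta = (t - 1.0) / t_next
        # Extrapolate past the current iterate using the previous one.
        y = [xi + beta * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        g = grad(y)
        x_prev, x = x, [yi - step * gi for yi, gi in zip(y, g)]
        t = t_next
    return x

# Toy smooth objective: f(x) = 0.5*x0^2 + 5*x1^2, Lipschitz constant L = 10
grad = lambda v: [v[0], 10.0 * v[1]]
sol = fista(grad, [4.0, -3.0], step=0.1)   # step = 1/L
```

In a deep unfolding network such as MAUN, each of these iterations becomes a learned stage, and the hand-set momentum coefficient beta is replaced by learned feature propagation.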
12. Constrained Multi-Objective Optimization With Deep Reinforcement Learning Assisted Operator Selection
Authors: Fei Ming, Wenyin Gong, Ling Wang, Yaochu Jin. 《IEEE/CAA Journal of Automatica Sinica》, SCIE, EI, CSCD, 2024, No. 4, pp. 919-931 (13 pages)
Solving constrained multi-objective optimization problems with evolutionary algorithms has attracted considerable attention. Various constrained multi-objective optimization evolutionary algorithms (CMOEAs) have been developed with the use of different algorithmic strategies, evolutionary operators, and constraint-handling techniques. The performance of CMOEAs may be heavily dependent on the operators used; however, it is usually difficult to select suitable operators for the problem at hand. Hence, improving operator selection is promising and necessary for CMOEAs. This work proposes an online operator selection framework assisted by Deep Reinforcement Learning. The dynamics of the population, including convergence, diversity, and feasibility, are regarded as the state; the candidate operators are considered as actions; and the improvement of the population state is treated as the reward. By using a Q-network to learn a policy that estimates the Q-values of all actions, the proposed approach can adaptively select the operator that maximizes the improvement of the population according to the current state and thereby improve algorithmic performance. The framework is embedded into four popular CMOEAs and assessed on 42 benchmark problems. The experimental results reveal that the proposed Deep Reinforcement Learning-assisted operator selection significantly improves the performance of these CMOEAs, and the resulting algorithm obtains better versatility compared to nine state-of-the-art CMOEAs.
Keywords: constrained multi-objective optimization; deep Q-learning; deep reinforcement learning (DRL); evolutionary algorithms; evolutionary operator selection
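The state/action/reward framing in this entry can be illustrated with a heavily simplified, single-state (bandit-style) Q-learning selector between two hypothetical operators. The real work uses a Q-network over population-state features; the operator names and reward model below are invented for illustration:

```python
import random

def select_operator(q, state, ops, eps, rng):
    """Epsilon-greedy choice over candidate evolutionary operators."""
    if rng.random() < eps:
        return rng.choice(ops)                  # explore
    return max(ops, key=lambda a: q[(state, a)])  # exploit best Q-value

def run(episodes=2000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    ops = ["crossover_A", "crossover_B"]   # hypothetical operator names
    state = "stagnating"                   # single coarse population state
    q = {(state, a): 0.0 for a in ops}
    for _ in range(episodes):
        a = select_operator(q, state, ops, eps, rng)
        # Simulated reward: improvement of the population state, where
        # operator B is (by construction) the better choice on average.
        reward = rng.gauss(0.3 if a == "crossover_B" else 0.1, 0.05)
        q[(state, a)] += alpha * (reward - q[(state, a)])
    return q

q = run()
```

With many states (convergence/diversity/feasibility bins) and bootstrapped targets, this tabular update becomes the Q-network training described in the abstract.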
13. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. 《Computers, Materials & Continua》, SCIE, EI, 2024, No. 2, pp. 2441-2468 (28 pages)
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in the integration of various techniques: a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models deployed on the curated dataset. The analysis shows that the proposed enhanced RF (ERF), enhanced DT (EDT), and enhanced LR (ELR) models for BC detection outperformed most existing models with impressive results.
Keywords: autoencoder; breast cancer; deep neural network; convolutional neural network; image processing; machine learning; deep learning
14. Classification of Sailboat Tell Tail Based on Deep Learning
Authors: CHANG Xiaofeng, YU Jintao, GAO Ying, DING Hongchen, LIU Yulong, YU Huaming. 《Journal of Ocean University of China》, SCIE, CAS, CSCD, 2024, No. 3, pp. 710-720 (11 pages)
The tell tail is usually placed on the triangular sail to display the running state of the air flow on the sail surface. Accurately judging the drift of the tell tail during sailing is of great significance for achieving the best sailing effect. Normally it is difficult for sailors, affected by strong sunlight and visual fatigue, to keep an eye on the tell tail for a long time and judge its changes accurately. We therefore adopt computer vision technology in the hope of helping sailors judge the changes of the tell tail with ease. This paper proposes, for the first time, a method to classify sailboat tell tails based on deep learning and an expert guidance system, supported by a sailboat tell tail classification data set built on expert guidance for interpreting tell tail states in different sea wind conditions, including the feature extraction performance. Considering that expression capabilities vary with the computational features used in different visual tasks, the paper focuses on five tell tail computing features, which are recoded by an automatic encoder and classified by an SVM classifier. All experimental samples were randomly divided into five groups; four groups were used as the training set to train the classifier and the remaining group as the test set. The highest accuracy, 80.26%, was achieved on the basis of deep computing features obtained through the ResNet network. The method can be used to assist sailors in making better judgements about tell tail changes during sailing.
Keywords: tell tail; sailboat; classification; deep learning
15. ASLP-DL—A Novel Approach Employing Lightweight Deep Learning Framework for Optimizing Accident Severity Level Prediction
Authors: Saba Awan, Zahid Mehmood. 《Computers, Materials & Continua》, SCIE, EI, 2024, No. 2, pp. 2535-2555 (21 pages)
Highway safety researchers focus on crash injury severity, utilizing deep learning—specifically, deep neural networks (DNN), deep convolutional neural networks (D-CNN), and deep recurrent neural networks (D-RNN)—as the preferred method for modeling accident severity. Deep learning's strength lies in handling intricate relationships within extensive datasets, making it popular for accident severity level (ASL) prediction and classification. Despite prior success, there is a need for an efficient system recognizing ASL in diverse road conditions. To address this, we present an innovative Accident Severity Level Prediction Deep Learning (ASLP-DL) framework, incorporating DNN, D-CNN, and D-RNN models fine-tuned through iterative hyperparameter selection with Stochastic Gradient Descent. The framework optimizes hidden layers and integrates data augmentation, Gaussian noise, and dropout regularization for improved generalization. Sensitivity and factor contribution analyses identify influential predictors. Evaluated on three diverse crash record databases—NCDB 2018-2019, UK 2015-2020, and US 2016-2021—the D-RNN model excels with an ACC score of 89.0281%, a ROC area of 0.751, an F-estimate of 0.941, and a Kappa score of 0.0629 on the NCDB dataset. The proposed framework consistently outperforms traditional methods, existing machine learning, and deep learning techniques.
Keywords: injury severity; prediction; deep learning; feature
16. DeepOCL: A deep neural network for Object Constraint Language generation from unrestricted nature language
Authors: Yilong Yang, Yibo Liu, Tianshu Bao, Weiru Wang, Nan Niu, Yongfeng Yin. 《CAAI Transactions on Intelligence Technology》, SCIE, EI, 2024, No. 1, pp. 250-263 (14 pages)
Object Constraint Language (OCL) is a kind of lightweight formal specification, widely used for software verification and validation in NASA and Object Management Group projects. Although OCL provides a simple expressive syntax, it is hard for developers to write correctly due to lacking knowledge of the mathematical foundations of first-order logic; OCL written at the first stage of development is only approximately half accurate. A deep neural network named DeepOCL is proposed, which takes unrestricted natural language as input and automatically outputs the best-scored OCL candidates without requiring a domain conceptual model, which is compulsorily required in existing rule-based generation approaches. To demonstrate the validity of the proposed approach, ablation experiments were conducted on a new sentence-aligned dataset named OCLPairs. The experiments show that DeepOCL achieves the state of the art for OCL statement generation, scoring 74.30 on BLEU and outperforming experienced developers by 35.19%. The proposed approach is the first deep learning approach to generate OCL expressions from natural language and can be further developed as a CASE tool for the software industry.
Keywords: deep learning; OCL; software engineering
17. An Improved Deep Learning Framework for Automated Optic Disc Localization and Glaucoma Detection
Authors: Hela Elmannai, Monia Hamdi, Souham Meshoul, Amel Ali Alhussan, Manel Ayadi, Amel Ksibi. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, No. 8, pp. 1429-1457 (29 pages)
Glaucoma disease causes irreversible damage to the optic nerve and has the potential to cause permanent loss of vision. Glaucoma ranks as the second most prevalent cause of permanent blindness. Traditional glaucoma diagnosis requires a highly experienced specialist, costly equipment, and a lengthy wait time. State-of-the-art automatic glaucoma detection methods include segmentation-based approaches that calculate the cup-to-disc ratio; other methods include multi-label segmentation networks and learning-based methods that rely on hand-crafted features. Localizing the optic disc (OD) is one of the key steps in analyzing retinal images for detecting retinal diseases, especially glaucoma. The approach presented in this study is based on deep classifiers for OD segmentation and glaucoma detection. First, optic disc detection is performed by object detection using a Mask Region-Based Convolutional Neural Network (Mask-RCNN); the OD detection task was validated using the Dice score, intersection over union, and accuracy metrics. The OD region is then fed into the second stage for glaucoma detection; considering only the OD area reduces the number of classification artifacts by limiting the assessment to the optic disc area. For this task, VGG-16 (Visual Geometry Group), ResNet-18 (Residual Network), and Inception-v3 were pre-trained and fine-tuned, and a Support Vector Machine classifier was also used. The feature-based method uses region content features obtained by Histogram of Oriented Gradients (HOG) and Gabor filters. The final decision is based on weighted fusion. A comparison of the results from all classification approaches is provided, with classification metrics including accuracy and ROC curve compared for each method. The novelty of this research is the integration of automatic OD detection and glaucoma diagnosis in one global method. Moreover, the fusion-based decision system uses glaucoma detection results obtained from several convolutional deep neural networks and the SVM classifier, which contributes to robust classification results. The method was evaluated using well-known retinal image datasets available for research and a combined dataset including retinal images with and without pathology. The performance of the models was tested on two public datasets and a combined dataset and compared to similar research. The findings show the potential of this methodology for early detection of glaucoma, which would reduce diagnosis time and increase detection efficiency. The glaucoma assessment achieves about 98% classification accuracy, close to and even higher than state-of-the-art methods. The designed detection model may be used in telemedicine, healthcare, and computer-aided diagnosis systems.
Keywords: optic disc; glaucoma; fundus image; deep learning
18. A Review of Deep Learning-Based Vulnerability Detection Tools for Ethernet Smart Contracts
Authors: Huaiguang Wu, Yibo Peng, Yaqiong He, Jinlin Fan. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, No. 7, pp. 77-108 (32 pages)
In recent years, the number of smart contracts deployed on blockchains has exploded, and vulnerabilities have caused incalculable losses. Due to the irreversibility and immutability of smart contracts, vulnerability detection has become particularly important. With the popular use of neural network models, there has been growing utilization of deep learning-based methods and tools for identifying vulnerabilities within smart contracts. This paper commences by providing a succinct overview of prevalent categories of vulnerabilities found in smart contracts. Subsequently, it categorizes and presents an overview of contemporary deep learning-based tools developed for smart contract detection, classified by their open-source status, the data format, and the type of feature extraction they employ. We then conduct a comprehensive comparative analysis of these tools, selecting representative tools for experimental validation and comparing them with traditional tools in terms of detection coverage and accuracy. Finally, based on the insights gained from the experimental results and the current state of research in the field, we aim to provide a reference standard for developers of contract vulnerability detection tools, and forward-looking research directions are proposed for deep learning-based smart contract vulnerability detection.
Keywords: smart contract; vulnerability detection; deep learning
19. Deep neural network-enabled battery open-circuit voltage estimation based on partial charging data
Authors: Ziyou Zhou, Yonggang Liu, Chengming Zhang, Weixiang Shen, Rui Xiong. 《Journal of Energy Chemistry》, SCIE, EI, CAS, CSCD, 2024, No. 3, pp. 120-132, I0005 (14 pages)
Battery management systems (BMSs) play a vital role in ensuring efficient and reliable operation of lithium-ion batteries. A main function of BMSs is to estimate battery states and diagnose battery health using battery open-circuit voltage (OCV). However, acquiring complete OCV data online can be challenging due to the time-consuming measurement process or the specific operating conditions required by OCV estimation models. To address these concerns, this study introduces a deep neural network-combined framework for accurate and robust OCV estimation utilizing partial daily charging data. We incorporate a generative deep learning model to extract aging-related features from data and generate high-fidelity OCV curves. Correlation analysis is employed to identify the optimal partial charging data, optimizing OCV estimation precision while preserving exceptional flexibility. Validation results, using data from nickel-cobalt-manganese (NCM) batteries, illustrate accurate estimation of the complete OCV-capacity curve, with an average root mean square error (RMSE) of less than 3 mAh. Achieving this level of precision requires only around 50 s of partial charging data. Further validation on diverse battery types operating under various conditions confirms the effectiveness of the proposed method. Additional cases of precise health diagnosis based on OCV highlight the significance of online OCV estimation. Our method provides a flexible approach to complete OCV estimation and holds promise for generalization to other tasks in BMSs.
Keywords: lithium-ion battery; open-circuit voltage; health diagnosis; deep learning
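The entry above notes that correlation analysis selects the most informative partial charging window. A stdlib-only sketch of that selection criterion using the Pearson correlation coefficient (all window names and numbers below are hypothetical, not the paper's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-cell features from two candidate charging windows
# versus a battery ageing indicator (capacity fade, arbitrary units).
fade     = [0.02, 0.05, 0.09, 0.14, 0.20]
window_a = [1.1, 1.9, 3.2, 4.1, 5.0]   # tracks ageing closely
window_b = [0.4, 0.1, 0.5, 0.2, 0.3]   # weakly related to ageing
windows  = {"window_a": window_a, "window_b": window_b}

# Pick the window whose feature correlates most strongly with ageing.
best = max(windows, key=lambda name: abs(pearson(windows[name], fade)))
```

The window with the strongest (absolute) correlation carries the most ageing information, so only that short charging segment needs to be collected online.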
20. Modeling Geometrically Nonlinear FG Plates: A Fast and Accurate Alternative to IGA Method Based on Deep Learning
Authors: Se Li, Tiantang Yu, Tinh Quoc Bui. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, No. 3, pp. 2793-2808 (16 pages)
Isogeometric analysis (IGA) is known to offer advanced features compared to traditional finite element approaches. Using IGA one may accurately obtain the geometrically nonlinear bending behavior of plates with functional grading (FG). However, the procedure is usually complex and often time-consuming. We thus put forward a deep learning method to model the geometrically nonlinear bending behavior of FG plates, bypassing the complex IGA simulation process. A long bidirectional short-term memory (BLSTM) recurrent neural network is trained using the load and gradient index as inputs and the displacement responses as outputs. The nonlinear relationship between the outputs and the inputs is constructed using machine learning so that the displacements can be directly estimated by the deep learning network. To provide enough training data, we use S-FSDT Von-Karman IGA and obtain the displacement responses for different loads and gradient indexes. Results show that the recognition error is low, demonstrating the feasibility of deep learning as a fast and accurate alternative to IGA for modeling the geometrically nonlinear bending behavior of FG plates.
Keywords: FG plates; geometric nonlinearity; deep learning; BLSTM; IGA; S-FSDT