Journal Articles
12,321 articles found
1. Self-potential inversion based on Attention U-Net deep learning network
Authors: GUO You-jun, CUI Yi-an, CHEN Hang, XIE Jing, ZHANG Chi, LIU Jian-xin. Journal of Central South University (SCIE, EI, CAS), 2024, Issue 9, pp. 3156-3167.
Landfill leaks pose a serious threat to environmental health, risking the contamination of both groundwater and soil resources. Accurate investigation of these sites is essential for implementing effective prevention and control measures. The self-potential (SP) method stands out for its sensitivity to contamination plumes, offering a solution for monitoring and detecting the movement and seepage of subsurface pollutants. However, traditional SP inversion techniques rely heavily on precise subsurface resistivity information. In this study, we propose an Attention U-Net deep learning network for rapid SP inversion. By incorporating an attention mechanism, this algorithm effectively learns the relationship between array-style SP data and the location and extent of subsurface contaminated sources. We designed a synthetic landfill model with a heterogeneous resistivity structure to assess the performance of the Attention U-Net network, and conducted further validation on a laboratory model to assess its practical applicability. The results demonstrate that the algorithm is not solely dependent on resistivity information and can effectively locate the source distribution, even in models with intricate subsurface structures. Our work provides a promising tool for SP data processing, enhancing the applicability of this method in near-subsurface environmental monitoring.
Keywords: self-potential; attention mechanism; U-Net; deep learning network; inversion; landfill
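To make the attention mechanism in this entry concrete, here is a minimal PyTorch sketch of the attention-gate idea commonly used in Attention U-Net architectures: skip-connection features are re-weighted by a sigmoid map computed from a gating signal. The channel sizes, gate placement, and module name are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights U-Net skip features with coefficients derived from a gating signal."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)  # project skip features
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)  # project gating signal
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)        # 1-channel attention map

    def forward(self, x, g):
        # x: encoder skip features; g: decoder gating signal (same spatial size here)
        alpha = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * alpha  # suppress regions irrelevant to the contaminant source

x = torch.randn(1, 64, 32, 32)   # hypothetical SP-derived feature map
g = torch.randn(1, 128, 32, 32)  # hypothetical decoder features
print(AttentionGate(64, 128, 32)(x, g).shape)  # torch.Size([1, 64, 32, 32])
```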
2. Anomaly-Based Intrusion Detection Model Using Deep Learning for IoT Networks
Authors: Muaadh A. Alsoufi, Maheyzah Md Siraj, Fuad A. Ghaleb, Muna Al-Razgan, Mahfoudh Saeed Al-Asaly, Taha Alfakih, Faisal Saeed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 823-845.
The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine learning approaches lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. First, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality of the data and obtain a significant data representation by calculating the reconstruction error. Second, a Convolutional Neural Network (CNN) is employed as a binary classifier. The proposed SAE-CNN approach is validated on the Bot-IoT dataset. The proposed models exceed the performance of existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, False Positive Rate (FPR) of 0.0003, and True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicate that SAE-CNN performs better.
Keywords: IoT; anomaly intrusion detection; deep learning; sparse autoencoder; convolutional neural network
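The SAE-CNN pipeline above first compresses traffic features with a sparse autoencoder. Below is a minimal PyTorch sketch of that first stage, assuming an L1 activity penalty for sparsity; the feature widths, penalty weight, and synthetic stand-in data are assumptions.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    """One-hidden-layer autoencoder; the L1 term below encourages sparse codes."""
    def __init__(self, n_in=40, n_hidden=16):
        super().__init__()
        self.enc = nn.Linear(n_in, n_hidden)
        self.dec = nn.Linear(n_hidden, n_in)

    def forward(self, x):
        h = torch.relu(self.enc(x))
        return self.dec(h), h

model = SparseAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 40)  # stand-in for normalized IoT flow features
for _ in range(100):
    recon, h = model(x)
    loss = nn.functional.mse_loss(recon, x) + 1e-3 * h.abs().mean()  # L1 sparsity penalty
    opt.zero_grad(); loss.backward(); opt.step()

# Per-sample reconstruction error; the compact codes h would then feed the CNN classifier.
err = ((model(x)[0] - x) ** 2).mean(dim=1)
```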
3. Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie. China Communications (SCIE, CSCD), 2024, Issue 6, pp. 53-68.
In this paper, we propose a two-way Deep Reinforcement Learning (DRL)-based resource allocation algorithm, which solves the problem of resource allocation in a cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low-latency communication (URLLC) slice. We design a Double Deep Q-Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio; deep reinforcement learning; network slicing; power-domain non-orthogonal multiple access; resource allocation
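For the discrete codebook-assignment half of the scheme, the key DDQN ingredient is the decoupled bootstrap target: the online network picks the next action, the target network scores it. A hedged sketch follows; the network shapes and dimensions are placeholders, not the paper's.

```python
import torch

def ddqn_target(online_q, target_q, reward, next_state, gamma=0.99):
    """Double-DQN target: online net selects the action, target net evaluates it."""
    with torch.no_grad():
        a_star = online_q(next_state).argmax(dim=1, keepdim=True)
        return reward + gamma * target_q(next_state).gather(1, a_star).squeeze(1)

online = torch.nn.Linear(8, 4)   # hypothetical: 8 state features, 4 codebook choices
target = torch.nn.Linear(8, 4)
s_next = torch.randn(32, 8)
r = torch.randn(32)
y = ddqn_target(online, target, r, s_next)  # regression target for the online Q-net
```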
4. Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite Terrestrial Networks with Deep Reinforcement Learning
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. China Communications (SCIE, CSCD), 2024, Issue 2, pp. 49-58.
As the demands for massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme can rule out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning; energy efficiency; hybrid satellite terrestrial networks; rate splitting multiple access; traffic offloading
5. Application of Bayesian Analysis Based on Neural Network and Deep Learning in Data Visualization
Authors: Jiying Yang, Qi Long, Xiaoyun Zhu, Yuan Yang. Journal of Electronic Research and Application, 2024, Issue 4, pp. 88-93.
This study explores the application of Bayesian analysis based on neural networks and deep learning in data visualization. The research background is that, with the increasing volume and complexity of data, traditional data analysis methods can no longer meet practical needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale datasets. The results show that neural networks combined with Bayesian analysis and deep learning methods can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
Keywords: neural network; deep learning; Bayesian analysis; data visualization; big data environment
6. An Intelligent SDN-IoT Enabled Intrusion Detection System for Healthcare Systems Using a Hybrid Deep Learning and Machine Learning Approach (Cited by 1)
Authors: R. Arthi, S. Krishnaveni, Sherali Zeadally. China Communications (SCIE, CSCD), 2024, Issue 10, pp. 267-287.
The advent of pandemics such as COVID-19 significantly impacts human behaviour and lives every day. Therefore, it is essential to make medical services connected to the Internet available in every remote location during such situations. However, the security issues in the Internet of Medical Things (IoMT) used in these services make the situation even more critical, because cyberattacks on medical devices might cause treatment delays or clinical failures. Hence, services in the healthcare ecosystem need rapid, uninterrupted, and secure facilities. The solution provided in this research addresses security concerns and service availability for patients with critical health conditions in remote areas. This research aims to develop an intelligent Software-Defined Network (SDN)-enabled secure framework for the IoT healthcare ecosystem. We propose a hybrid of machine learning and deep learning techniques (DNN + SVM) to identify network intrusions in sensor-based healthcare data. In addition, this system can efficiently monitor connected devices and suspicious behaviours. Finally, we evaluate the performance of our proposed framework using various performance metrics based on healthcare application scenarios. The experimental results show that the proposed approach effectively detects and mitigates attacks in SDN-enabled IoT networks and performs better than other state-of-the-art approaches.
Keywords: deep neural network; healthcare; intrusion detection system; IoT; machine learning; software-defined networks
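The hybrid DNN + SVM idea (deep features, classical decision boundary) can be sketched in a few lines. The layer sizes and synthetic data below are assumptions, and the feature extractor is left untrained for brevity; in practice the DNN would be trained first.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

dnn = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 16), nn.ReLU())
X = torch.randn(300, 20)             # stand-in for sensor/traffic features
y = np.random.randint(0, 2, 300)     # 0 = normal, 1 = intrusion (synthetic labels)

with torch.no_grad():
    Z = dnn(X).numpy()               # deep feature representation

svm = SVC(kernel="rbf").fit(Z, y)    # SVM classifies in the learned feature space
print(svm.score(Z, y))
```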
7. A Deep Learning Approach for Forecasting Thunderstorm Gusts in the Beijing–Tianjin–Hebei Region (Cited by 1)
Authors: Yunqing LIU, Lu YANG, Mingxuan CHEN, Linye SONG, Lei HAN, Jingfeng XU. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 7, pp. 1342-1363.
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, thunderstorm gust forecasting is mainly based on traditional subjective methods, which fail to achieve high-resolution, high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM), with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables: radar reflectivity factor, lightning location, and 1-h maximum instantaneous wind speed from automatic weather stations (AWSs), and obtain a reasonable ground truth of thunderstorm gusts. Then, we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021–23 are used as training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with other methods. The results show that TG-TransUnet achieves the best prediction results at 1–6 h. The IUM is currently using this model to support thunderstorm gust forecasting in North China.
Keywords: thunderstorm gusts; deep learning; weather forecasting; convolutional neural network; transformer
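The ground-truth construction described above (radar reflectivity, lightning, and AWS gusts) amounts to intersecting thresholded fields on a common grid. A NumPy sketch follows; the threshold values and the simple AND combination are hypothetical, since the abstract does not state them.

```python
import numpy as np

def gust_ground_truth(refl_dbz, lightning_count, max_wind_ms,
                      dbz_thr=35.0, wind_thr=17.2):
    """Binary gust mask: convective echo AND lightning AND a gust-level wind."""
    return ((refl_dbz >= dbz_thr) &
            (lightning_count > 0) &
            (max_wind_ms >= wind_thr)).astype(np.float32)

refl = np.random.uniform(0, 60, (256, 256))   # synthetic reflectivity grid (dBZ)
ltg = np.random.poisson(0.05, (256, 256))     # synthetic lightning counts per cell
wind = np.random.uniform(0, 30, (256, 256))   # synthetic 1-h max gusts (m/s)
mask = gust_ground_truth(refl, ltg, wind)     # target image for image-to-image training
```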
8. Exploring deep learning for landslide mapping: A comprehensive review (Cited by 1)
Authors: Zhi-qiang Yang, Wen-wen Qi, Chong Xu, Xiao-yi Shao. China Geology (CAS, CSCD), 2024, Issue 2, pp. 330-350.
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning. Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors. Recent advancements in high-resolution satellite imagery, coupled with the rapid development of artificial intelligence, particularly data-driven deep learning (DL) algorithms such as convolutional neural networks (CNNs), have provided rich feature indicators for landslide mapping, overcoming previous limitations. In this review paper, 77 representative DL-based landslide detection methods applied in various environments over the past seven years are examined. This study analyzes the structures of different DL networks, discusses five main application scenarios, and assesses both the advancements and limitations of DL in geological hazard analysis. The results indicate that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence, with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization. Finally, we explore the obstacles to DL in landslide hazard research based on the above content. Challenges such as black-box operation and sample dependence persist, warranting further theoretical research and future application of DL in landslide detection.
Keywords: landslide mapping; quantitative hazard assessment; deep learning; artificial intelligence; neural network; big data; geological hazard survey engineering
9. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2441-2468.
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset. The novelty of the proposed framework lies in the integration of various techniques: a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models is deployed on the curated dataset. The analysis outcome shows that the proposed enhanced RF (ERF), enhanced DT (EDT), and enhanced LR (ELR) models for BC detection outperform most existing models with impressive results.
Keywords: autoencoder; breast cancer; deep neural network; convolutional neural network; image processing; machine learning; deep learning
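The framework's core move, a learned encoder feeding enhanced classical classifiers, can be illustrated as follows. The encoder width, the omitted reconstruction-training loop, and the synthetic labels are assumptions, and a plain RandomForestClassifier stands in for the paper's enhanced RF.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

enc = nn.Sequential(nn.Linear(100, 32), nn.ReLU())  # encoder half of an autoencoder
# ...the full autoencoder would first be trained on a reconstruction loss...

X = torch.randn(500, 100)            # stand-in for flattened mammogram descriptors
y = np.random.randint(0, 2, 500)     # 0 = benign, 1 = malignant (synthetic labels)

with torch.no_grad():
    Z = enc(X).numpy()               # compact learned features

clf = RandomForestClassifier(n_estimators=200).fit(Z, y)
print(clf.score(Z, y))
```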
10. Study on Quantitative Precipitation Estimation by Polarimetric Radar Using Deep Learning
Authors: Jiang HUANGFU, Zhiqun HU, Jiafeng ZHENG, Lirong WANG, Yongjie ZHU. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 6, pp. 1147-1160.
Accurate radar quantitative precipitation estimation (QPE) plays an essential role in disaster prevention and mitigation. In this paper, two deep learning-based QPE networks, a single-parameter network and a multi-parameter network, are designed, and a self-defined loss function (SLF) is proposed during modeling. The dataset includes Shijiazhuang S-band dual-polarimetric radar (CINRAD/SAD) data and rain gauge data within the radar's 100-km detection range during the flood season of 2021 in North China. Considering that the specific propagation phase shift (KDP) has a roughly linear relationship with precipitation intensity, KDP = 0.5° km⁻¹ is set as a threshold to divide all the rain data (AR) into heavy rain (HR) and light rain (LR) datasets. Subsequently, 12 deep learning-based QPE models are trained according to the input radar parameters, the precipitation datasets, and whether an SLF is adopted. The results suggest that QPE performs better after distinguishing rainfall intensity than without distinguishing it, and that using the SLF performs better than using MSE as the loss function. A Z-R relationship and a ZH-KDP-R synthesis method are compared with the deep learning-based QPE. The mean relative errors (MRE) of the AR models using the SLF improve by 61.90%, 51.21%, and 56.34% compared with the Z-R relational method, and by 38.63%, 42.55%, and 47.49% compared with the synthesis method. Finally, the models are further evaluated on three precipitation processes, which shows that the deep learning-based models have significant advantages over the traditional empirical formula methods.
Keywords: polarimetric radar; quantitative precipitation estimation; deep learning; single-parameter network; multi-parameter network
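The KDP-based split is simple to express. Below is a NumPy sketch using the 0.5° km⁻¹ threshold stated in the abstract; the per-sample feature layout and synthetic values are assumptions.

```python
import numpy as np

KDP_THRESHOLD = 0.5  # deg km^-1, the threshold given in the abstract

def split_by_kdp(kdp, radar_feats, rain_rate):
    """Partition samples into heavy-rain (HR) and light-rain (LR) subsets."""
    heavy = kdp >= KDP_THRESHOLD
    return ((radar_feats[heavy], rain_rate[heavy]),
            (radar_feats[~heavy], rain_rate[~heavy]))

kdp = np.random.uniform(0, 2, 1000)
feats = np.random.randn(1000, 4)         # hypothetical polarimetric inputs per sample
rain = np.random.gamma(2.0, 2.0, 1000)   # synthetic gauge rain rates (mm/h)
(hr_x, hr_y), (lr_x, lr_y) = split_by_kdp(kdp, feats, rain)
```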
11. Hyperspectral image super resolution using deep internal and self-supervised learning
Authors: Zhe Liu, Xian-Hua Han. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 1, pp. 128-141.
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With large amounts of previously collected external data, these methods are typically realised under the full supervision of ground-truth data. Thus, database construction in the research paradigm of merging a low-resolution hyperspectral (LR-HS) image with an HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting the corresponding training triplets (HR-MS/RGB, LR-HS, and HR-HS images) simultaneously, which is often difficult in reality. Models learned from training datasets collected under controlled conditions may degrade significantly on real images captured in diverse environments. To handle these limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by online preparing the training triplet samples from the observed LR-HS/HR-MS (or RGB) images and a down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscaling factors, which would limit reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by treating the observations as unlabelled training samples. Specifically, degradation modules inside the network are elaborated to realise the spatial and spectral down-sampling procedures that transform the generated HR-HS estimate into high-resolution RGB and LR-HS approximations, and the reconstruction errors of the observations are formulated to measure the network's modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and has great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, CAVE and Harvard, demonstrating a large performance gain over state-of-the-art methods.
Keywords: computer vision; deep learning; deep neural networks; hyperspectral; image enhancement
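The deep internal learning step builds a training pair from the observation itself: spatially downsample the LR-HS cube to get a pseudo-input, spectrally project it to a pseudo-RGB, and use the original LR-HS cube as the target. A PyTorch sketch of that idea follows, assuming average-pool spatial degradation and a known spectral response matrix; both are assumptions, not the authors' exact degradation modules.

```python
import torch
import torch.nn.functional as F

def make_internal_pair(lr_hs, srf, scale=4):
    """Builds a lower-scale training triplet from the observed LR-HS cube (DIL idea).
    lr_hs: (B, C, H, W) hyperspectral cube; srf: (3, C) spectral response matrix."""
    lr2_hs = F.avg_pool2d(lr_hs, scale)                  # spatially degraded HS input
    lr_rgb = torch.einsum('bchw,rc->brhw', lr_hs, srf)   # spectrally degraded RGB input
    return lr2_hs, lr_rgb, lr_hs                         # (HS input, RGB input, HS target)

lr_hs = torch.rand(1, 31, 64, 64)            # 31-band observation (synthetic)
srf = torch.rand(3, 31)
srf = srf / srf.sum(dim=1, keepdim=True)     # normalize each response row
hs_in, rgb_in, hs_target = make_internal_pair(lr_hs, srf)
```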
12. Learning to Branch in Combinatorial Optimization With Graph Pointer Networks
Authors: Rui Wang, Zhiming Zhou, Kaiwen Li, Tao Zhang, Ling Wang, Xin Xu, Xiangke Liao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 1, pp. 157-169.
Traditional expert-designed branching rules in branch-and-bound (B&B) are static, often failing to adapt to diverse and evolving problem instances. Crafting these rules is labor-intensive and may not scale well with complex problems. Given the frequent need to solve varied combinatorial optimization problems, leveraging statistical learning to auto-tune B&B algorithms for specific problem classes becomes attractive. This paper proposes a graph pointer network model to learn branching rules. Graph features, global features, and historical features are designated to represent the solver state. The graph neural network processes the graph features, while the pointer mechanism assimilates the global and historical features to determine the variable on which to branch. The model is trained to imitate the expert strong-branching rule via a tailored top-k Kullback-Leibler divergence loss function. Experiments on a series of benchmark problems demonstrate that the proposed approach significantly outperforms widely used expert-designed branching rules. It also outperforms state-of-the-art machine-learning-based branch-and-bound methods in terms of solving speed and search tree size on all test instances. In addition, the model can generalize to unseen instances and scale to larger instances.
Keywords: branch-and-bound (B&B); combinatorial optimization; deep learning; graph neural network; imitation learning
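One plausible reading of the tailored top-k Kullback-Leibler loss is to match the model's distribution to the expert's only over the expert's k highest-scored branching candidates. The sketch below implements that reading; k, the shapes, and the interpretation itself are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def topk_kl_loss(student_logits, expert_scores, k=10):
    """KL divergence restricted to the expert's top-k candidate variables."""
    idx = expert_scores.topk(k, dim=1).indices
    p = F.softmax(expert_scores.gather(1, idx), dim=1)           # expert distribution
    log_q = F.log_softmax(student_logits.gather(1, idx), dim=1)  # model distribution
    return F.kl_div(log_q, p, reduction="batchmean")

logits = torch.randn(4, 50, requires_grad=True)  # model scores over 50 candidates
expert = torch.randn(4, 50)                      # strong-branching scores
topk_kl_loss(logits, expert).backward()          # gradients flow to the model scores
```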
13. Automatic depth matching method of well log based on deep reinforcement learning
Authors: XIONG Wenjun, XIAO Lizhi, YUAN Jiangru, YUE Wenzheng. Petroleum Exploration and Development (SCIE), 2024, Issue 3, pp. 634-646.
Traditional well-log depth matching tasks require manual adjustments, which are significantly labor-intensive for multiple wells and lead to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. The method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences in well logs, and establishes an interaction mechanism between the agents and the environment to control the depth matching process. Specifically, each agent selects an action to translate or scale a feature sequence based on a double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for logs across multiple wells and reduce manual intervention. In an oilfield application, a comparative analysis of dynamic time warping (DTW), deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well-log feature sequences, thus achieving higher depth matching accuracy.
Keywords: artificial intelligence; machine learning; depth matching; well log; multi-agent deep reinforcement learning; convolutional neural network; double deep Q-network
14. Exploring Deep Learning Methods for Computer Vision Applications across Multiple Sectors: Challenges and Future Trends
Authors: Narayanan Ganesh, Rajendran Shankar, Miroslav Mahdal, Janakiraman SenthilMurugan, Jasgurpreet Singh Chohan, Kanak Kalita. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 103-141.
Computer vision (CV) was developed so that computers and other systems could act or make recommendations based on visual inputs, such as digital photos, video, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems like image classification, object detection, and face recognition. In this review, a structured discussion of the history, methods, and applications of DL methods for CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review provides readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets, with brief descriptions, is also included for the benefit of readers.
Keywords: neural network; machine vision; classification; object detection; deep learning
15. A deep reinforcement learning approach to gasoline blending real-time optimization under uncertainty
Authors: Zhiwei Zhu, Minglei Yang, Wangli He, Renchu He, Yunmeng Zhao, Feng Qian. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 7, pp. 183-192.
The gasoline inline blending process has widely used real-time optimization techniques to achieve objectives such as minimizing the cost of production. However, the effectiveness of real-time optimization in gasoline blending relies on accurate blending models and is challenged by stochastic disturbances. Thus, we propose a real-time optimization algorithm based on the soft actor-critic (SAC) deep reinforcement learning strategy that optimizes gasoline blending without relying on a single blending model and is robust against disturbances. Our approach constructs the environment using nonlinear blending models and feedstocks with disturbances. The algorithm incorporates a Lagrange multiplier and path constraints in the reward design to manage sparse product constraints. Carefully abstracted states facilitate algorithm convergence, and the normalized action vector in each optimization period allows the agent to generalize to some extent across different target production scenarios. Through these well-designed components, the SAC-based algorithm outperforms real-time optimization methods based on either nonlinear or linear programming. It even demonstrates performance comparable to the time-horizon-based real-time optimization method, which requires knowledge of the uncertainty models, confirming its capability to handle uncertainty without accurate models. Our simulation illustrates a promising approach to freeing real-time optimization of the gasoline blending process from uncertainty models that are difficult to acquire in practice.
Keywords: deep reinforcement learning; gasoline blending; real-time optimization; petroleum; computer simulation; neural networks
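The Lagrange-multiplier reward shaping mentioned above can be sketched as a profit term minus a multiplier-weighted constraint violation, with the multiplier adapted by dual ascent. The class name, update rule, and numbers below are assumptions for illustration.

```python
class LagrangianReward:
    """Reward = profit - lambda * violation; lambda adapts toward feasibility."""
    def __init__(self):
        self.lam = 0.0

    def __call__(self, profit, violation):
        return profit - self.lam * max(violation, 0.0)

    def update(self, violation, lr=0.01):
        self.lam = max(0.0, self.lam + lr * violation)  # dual-ascent multiplier step

r = LagrangianReward()
reward = r(profit=12.5, violation=0.3)  # e.g. a product spec missed by 0.3 units
r.update(violation=0.3)                 # tighten the penalty for the next episode
```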
16. AutoRhythmAI: A Hybrid Machine and Deep Learning Approach for Automated Diagnosis of Arrhythmias
Authors: S. Jayanthi, S. Prasanna Devi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 2, pp. 2137-2158.
In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into automating detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we rigorously tested AutoRhythmAI on a multimodal dataset, surpassing the accuracy achieved on a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and splitting for training. The second pipeline is dedicated to feature extraction and classification using deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
Keywords: automated machine learning; neural networks; deep learning; arrhythmias
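The weighted ensemble step can be written as a normalized weighted average of per-model class probabilities. A minimal NumPy sketch; the weights and probability values are synthetic placeholders, not the paper's learned weights.

```python
import numpy as np

def weighted_ensemble(probas, weights):
    """Combine per-model class-probability arrays with normalized weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(probas), axes=1)  # (n_samples, n_classes)

p1 = np.array([[0.9, 0.1], [0.2, 0.8]])  # model 1 outputs (synthetic)
p2 = np.array([[0.7, 0.3], [0.4, 0.6]])  # model 2 outputs (synthetic)
print(weighted_ensemble([p1, p2], weights=[0.6, 0.4]).argmax(axis=1))  # [0 1]
```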
17. Sentiment Analysis of Low-Resource Language Literature Using Data Processing and Deep Learning
Authors: Aizaz Ali, Maqbool Khan, Khalil Khan, Rehan Ullah Khan, Abdulrahman Aloraini. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 713-733.
Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, and Roman Arabic, resource-poor languages like Urdu remain a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu, Turkish, Punjabi, Saraiki, and more. Urdu literature, with its distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. This limited availability of resources has fueled increased interest among researchers, prompting deeper exploration of Urdu sentiment analysis. This research is dedicated to Urdu-language sentiment analysis, employing deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions in Urdu despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. After data collection, a thorough cleaning and preprocessing process is applied to ensure data quality. The study leverages two well-known deep learning models, Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for training and evaluating sentiment analysis performance, and explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The findings reveal that the RNN surpasses the CNN in Urdu sentiment analysis, achieving a significantly higher accuracy of 91%. This result accentuates the strong performance of RNNs, solidifying their status as a compelling option for sentiment analysis tasks in Urdu.
Keywords: Urdu sentiment analysis; convolutional neural networks; recurrent neural network; deep learning; natural language processing; neural networks
18. Application of deep learning methods combined with physical background in wide field of view imaging atmospheric Cherenkov telescopes
Authors: Ao-Yan Cheng, Hao Cai, Shi Chen, Tian-Lu Chen, Xiang Dong, You-Liang Feng, Qi Gao, Quan-Bu Gou, Yi-Qing Guo, Hong-Bo Hu, Ming-Ming Kang, Hai-Jin Li, Chen Liu, Mao-Yuan Liu, Wei Liu, Fang-Sheng Min, Chu-Cheng Pan, Bing-Qiang Qiao, Xiang-Li Qian, Hui-Ying Sun, Yu-Chang Sun, Ao-Bo Wang, Xu Wang, Zhen Wang, Guang-Guang Xin, Yu-Hua Yao, Qiang Yuan, Yi Zhang. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 208-220.
The High Altitude Detection of Astronomical Radiation (HADAR) experiment, constructed in Tibet, China, combines the wide-angle advantages of traditional extensive air shower (EAS) array detectors with the high-sensitivity advantages of focused Cherenkov detectors. Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves. This study aims to use the latest AI technology to enhance the sensitivity of the HADAR experiment. Training datasets and models were purpose-built by incorporating the relevant physical theories for various applications. After careful design, these models can determine the type, energy, and direction of incident particles. We obtained a background identification accuracy of 98.6%, a relative energy reconstruction error of 10.0%, and an angular resolution of 0.22° on a test dataset at 10 TeV. These findings demonstrate significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research. Using deep learning techniques, the HADAR experiment's observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S. at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies. In addition, our experiment offers a new approach for dealing with strongly connected, scattered data.
Keywords: VHE gamma-ray astronomy; HADAR; deep learning; convolutional neural networks
19. Position-aware pushing and grasping synergy with deep reinforcement learning in clutter
Authors: Min Zhao, Guoyu Zuo, Shuangyue Yu, Daoxiong Gong, Zihao Wang, Ouattara Sie. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 3, pp. 738-755.
Positional information about objects is crucial for robots performing grasping and pushing manipulations in clutter. To perform these manipulations effectively, robots need to perceive the position information of objects, including the coordinates and spatial relationships between objects (e.g., proximity, adjacency). The authors propose an end-to-end position-aware deep Q-learning framework to achieve efficient collaborative pushing and grasping in clutter. Specifically, a pair of conjugate pushing and grasping attention modules is proposed to capture the position information of objects and generate high-quality affordance maps of operating positions with features of pushing and grasping operations. In addition, the authors propose an object isolation metric and a clutter metric based on instance segmentation to measure the spatial relationships between objects in cluttered environments. To further enhance the perception of object position information, the authors associate the change in the object isolation and clutter metrics before and after an action with the reward function. A series of experiments carried out in simulation and the real world indicate that the method improves sample efficiency, task completion rate, grasping success rate, and action efficiency compared with state-of-the-art end-to-end methods. Notably, the authors' system can be robustly applied in the real world and extended to novel objects. Supplementary material is available at https://youtu.be/NhG_k5v3NnM.
Keywords: deep learning; deep neural networks; intelligent robots
20. Model Agnostic Meta-Learning (MAML)-Based Ensemble Model for Accurate Detection of Wheat Diseases Using Vision Transformer and Graph Neural Networks
Authors: Yasir Maqsood, Syed Muhammad Usman, Musaed Alhussein, Khursheed Aurangzeb, Shehzad Khalid, Muhammad Zubair. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2795-2811.
Wheat is a critical crop, extensively consumed worldwide, and its production must be enhanced to meet escalating demand. Diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminish wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises from the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty of extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized Vision Transformer (ViT) architecture, where the feature vector is obtained by concatenating features extracted from the custom ViT and a Graph Neural Network. The paper also proposes a Model Agnostic Meta Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Keywords: wheat disease detection; deep learning; vision transformer; graph neural network; model agnostic meta learning
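The feature-concatenation step (custom ViT features joined with GNN features before classification) reduces to a simple fusion head. A PyTorch sketch follows; the dimensions and the five-class layout are illustrative assumptions, not the paper's.

```python
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Concatenates ViT and GNN feature vectors, then classifies."""
    def __init__(self, vit_dim=768, gnn_dim=128, n_classes=5):
        super().__init__()
        self.fc = nn.Linear(vit_dim + gnn_dim, n_classes)

    def forward(self, vit_feat, gnn_feat):
        return self.fc(torch.cat([vit_feat, gnn_feat], dim=1))

head = FusionHead()
vit_feat = torch.randn(8, 768)     # hypothetical ViT embeddings for 8 leaf images
gnn_feat = torch.randn(8, 128)     # hypothetical GNN embeddings for the same images
logits = head(vit_feat, gnn_feat)  # (8, 5) class scores (class set assumed)
```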