Journal Articles
12,658 articles found
1. DEEP NEURAL NETWORKS COMBINING MULTI-TASK LEARNING FOR SOLVING DELAY INTEGRO-DIFFERENTIAL EQUATIONS
Authors: WANG Chen-yao, SHI Feng. 《数学杂志》 (Journal of Mathematics), 2025, No. 1, pp. 13-38.
Deep neural networks (DNNs) are effective in solving both forward and inverse problems for nonlinear partial differential equations (PDEs). However, conventional DNNs are not effective in handling problems such as delay differential equations (DDEs) and delay integro-differential equations (DIDEs) with constant delays, primarily due to their low regularity at delay-induced breaking points. In this paper, a DNN method that combines multi-task learning (MTL) is proposed to solve both the forward and inverse problems of DIDEs. The core idea of this approach is to divide the original equation into multiple tasks based on the delay, using auxiliary outputs to represent the integral terms, followed by the use of MTL to seamlessly incorporate the properties at the breaking points into the loss function. Furthermore, given the increased training difficulty associated with multiple tasks and outputs, we employ a sequential training scheme to reduce training complexity and provide reference solutions for subsequent tasks. This approach significantly enhances the approximation accuracy of solving DIDEs with DNNs, as demonstrated by comparisons with traditional DNN methods. We validate the effectiveness of this method through several numerical experiments, test various parameter-sharing structures in MTL, and compare the testing results of these structures. Finally, this method is applied to the inverse problem of a nonlinear DIDE, and the results show that the unknown parameters of the DIDE can be discovered with sparse or noisy data.
Keywords: delay integro-differential equation; multi-task learning; parameter sharing structure; deep neural network; sequential training scheme
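A minimal PyTorch sketch of the multi-task setup this abstract describes: a shared trunk with one head per delay interval, an auxiliary output per head standing in for the integral term, and a penalty that ties adjacent tasks together at the breaking points t = k*tau. The network sizes, the two-output head convention, and the loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiTaskDIDENet(nn.Module):
    """Shared trunk, one head per sub-interval [k*tau, (k+1)*tau)."""
    def __init__(self, n_tasks, width=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, width), nn.Tanh())
        self.heads = nn.ModuleList(nn.Linear(width, 2) for _ in range(n_tasks))

    def forward(self, t, k):
        u, v = self.heads[k](self.trunk(t)).split(1, dim=1)
        return u, v  # solution u_k(t) and auxiliary integral term v_k(t)

def breaking_point_loss(model, tau, n_tasks):
    """Penalize solution mismatch between adjacent tasks at t = k*tau, which is
    one way the breaking-point properties can enter the loss function."""
    loss = torch.zeros(())
    for k in range(1, n_tasks):
        t_k = torch.full((1, 1), k * tau)
        u_left, _ = model(t_k, k - 1)
        u_right, _ = model(t_k, k)
        loss = loss + (u_left - u_right).pow(2).mean()
    return loss
```

In the sequential training scheme described above, the heads would be trained one at a time in order of k, with each converged task providing reference (history) values for the next.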
2. Self-potential inversion based on Attention U-Net deep learning network
Authors: GUO You-jun, CUI Yi-an, CHEN Hang, XIE Jing, ZHANG Chi, LIU Jian-xin. Journal of Central South University (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 3156-3167.
Landfill leaks pose a serious threat to environmental health, risking the contamination of both groundwater and soil resources. Accurate investigation of these sites is essential for implementing effective prevention and control measures. The self-potential (SP) method stands out for its sensitivity to contamination plumes, offering a solution for monitoring and detecting the movement and seepage of subsurface pollutants. However, traditional SP inversion techniques heavily rely on precise subsurface resistivity information. In this study, we propose the Attention U-Net deep learning network for rapid SP inversion. By incorporating an attention mechanism, this algorithm effectively learns the relationship between array-style SP data and the location and extent of subsurface contaminated sources. We designed a synthetic landfill model with a heterogeneous resistivity structure to assess the performance of the Attention U-Net network, and we conducted further validation using a laboratory model to assess its practical applicability. The results demonstrate that the algorithm is not solely dependent on resistivity information, enabling it to locate the source distribution effectively, even in models with intricate subsurface structures. Our work provides a promising tool for SP data processing, enhancing the applicability of this method in the field of near-subsurface environmental monitoring.
Keywords: self-potential; attention mechanism; U-Net; deep learning network; inversion; landfill
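The attention mechanism mentioned above, in the standard Attention U-Net formulation (Oktay et al.), is an additive attention gate applied to the encoder skip connections. Below is a minimal PyTorch sketch of such a gate; the channel sizes are placeholders, and the paper's exact configuration for array-style SP data is not reproduced here.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weights skip-connection features x with a mask computed from the
    gating signal g (coarser decoder features), suppressing regions that are
    irrelevant to the contamination source."""
    def __init__(self, ch_g, ch_x, ch_inter):
        super().__init__()
        self.w_g = nn.Conv2d(ch_g, ch_inter, kernel_size=1)
        self.w_x = nn.Conv2d(ch_x, ch_inter, kernel_size=1)
        self.psi = nn.Sequential(nn.Conv2d(ch_inter, 1, kernel_size=1),
                                 nn.Sigmoid())

    def forward(self, g, x):
        # assumes g has already been resized to x's spatial resolution
        a = torch.relu(self.w_g(g) + self.w_x(x))   # additive attention
        return x * self.psi(a)                      # masked skip features
```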
3. Anomaly-Based Intrusion Detection Model Using Deep Learning for IoT Networks
Authors: Muaadh A. Alsoufi, Maheyzah Md Siraj, Fuad A. Ghaleb, Muna Al-Razgan, Mahfoudh Saeed Al-Asaly, Taha Alfakih, Faisal Saeed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 823-845.
The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine-learning approaches seem to lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. Firstly, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a significant data representation by calculating the reconstruction error. Secondly, the Convolutional Neural Network (CNN) technique is employed to create a binary classification approach. The proposed SAE-CNN approach is validated using the Bot-IoT dataset. The proposed model exceeds the performance of the existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, False Positive Rate (FPR) of 0.0003, and True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicated that SAE-CNN performs better.
Keywords: IoT; anomaly intrusion detection; deep learning; sparse autoencoder; convolutional neural network
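A hedged sketch of the SAE-CNN pipeline outlined in the abstract: a sparse autoencoder compresses each traffic record, and a small 1-D CNN classifies the learned code as benign or attack. The layer sizes, code dimension, and L1 sparsity weight are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, n_features, code_dim=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, code_dim), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(code_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_features))

    def forward(self, x):
        code = self.enc(x)
        return self.dec(code), code

def sae_loss(x, recon, code, l1_weight=1e-4):
    # reconstruction error plus an L1 sparsity penalty on the code
    return nn.functional.mse_loss(recon, x) + l1_weight * code.abs().mean()

class CodeCNN(nn.Module):
    """Binary classifier that reads the encoded vector as a 1-D signal."""
    def __init__(self, code_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2), nn.Flatten(),
            nn.Linear(16 * (code_dim // 2), 2))

    def forward(self, code):
        return self.net(code.unsqueeze(1))  # (N, code_dim) -> (N, 1, code_dim)
```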
4. Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie. China Communications (SCIE, CSCD), 2024, No. 6, pp. 53-68.
In this paper, we propose a two-way Deep Reinforcement Learning (DRL)-based resource allocation algorithm, which solves the problem of resource allocation in the cognitive downlink network based on the underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: the enhanced mobile broadband (eMBB) slice and the ultra-reliable low-latency communication (URLLC) slice. We design a Double Deep Q-Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and the modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio; deep reinforcement learning; network slicing; power-domain non-orthogonal multiple access; resource allocation
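The two-way structure described above can be pictured as two networks reading the same state: a DDQN whose argmax gives the discrete codebook assignment, and a DDPG actor whose output gives the continuous power levels. The sketch below shows only this action-selection step; state encoding, dimensions, and the training loops are omitted, and all sizes are assumptions.

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out, hidden=128):
    return nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                         nn.Linear(hidden, n_out))

state_dim, n_codebooks, n_users = 16, 8, 4          # illustrative sizes
q_net = mlp(state_dim, n_codebooks)                  # DDQN head: Q per codebook
actor = nn.Sequential(mlp(state_dim, n_users), nn.Sigmoid())  # DDPG actor

state = torch.randn(1, state_dim)                    # placeholder SU/slice state
codebook = q_net(state).argmax(dim=1)                # discrete: codebook index
power = actor(state)                                 # continuous: power in (0, 1)
```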
5. Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite-Terrestrial Networks with Deep Reinforcement Learning
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. China Communications (SCIE, CSCD), 2024, No. 2, pp. 49-58.
As the demands of massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite-terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme can rule out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning; energy efficiency; hybrid satellite-terrestrial networks; rate splitting multiple access; traffic offloading
6. Robust Network Security: A Deep Learning Approach to Intrusion Detection in IoT
Authors: Ammar Odeh, Anas Abu Taleb. Computers, Materials & Continua (SCIE, EI), 2024, No. 12, pp. 4149-4169.
The proliferation of Internet of Things (IoT) technology has exponentially increased the number of devices interconnected over networks, thereby escalating the potential vectors for cybersecurity threats. In response, this study rigorously applies and evaluates deep learning models, namely Convolutional Neural Networks (CNN), Autoencoders, and Long Short-Term Memory (LSTM) networks, to engineer an advanced Intrusion Detection System (IDS) specifically designed for IoT environments. Utilizing the comprehensive UNSW-NB15 dataset, which encompasses 49 distinct features representing varied network traffic characteristics, our methodology focused on meticulous data preprocessing, including cleaning, normalization, and strategic feature selection, to enhance model performance. A robust comparative analysis highlights the CNN model's outstanding performance, achieving an accuracy of 99.89%, precision of 99.90%, recall of 99.88%, and an F1 score of 99.89% in binary classification tasks, significantly outperforming the other evaluated models. These results not only confirm the superior detection capabilities of CNNs in distinguishing between benign and malicious network activities but also illustrate the model's effectiveness in multiclass classification tasks, addressing various attack vectors prevalent in IoT setups. The empirical findings from this research demonstrate deep learning's transformative potential in fortifying network security infrastructures against sophisticated cyber threats, providing a scalable, high-performance solution that enhances security measures across increasingly complex IoT ecosystems. This study's outcomes are critical for security practitioners and researchers focusing on the next generation of cyber defense mechanisms, offering a data-driven foundation for future advancements in IoT security strategies.
Keywords: intrusion detection system (IDS); Internet of Things (IoT); convolutional neural network (CNN); long short-term memory (LSTM); autoencoder; network security; deep learning; data preprocessing; feature selection; cyber threats
7. Application of Bayesian Analysis Based on Neural Network and Deep Learning in Data Visualization
Authors: Jiying Yang, Qi Long, Xiaoyun Zhu, Yuan Yang. Journal of Electronic Research and Application, 2024, No. 4, pp. 88-93.
This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization. The background of the research is that, with the increasing amount and complexity of data, traditional data analysis methods have been unable to meet the needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that neural networks combined with Bayesian analysis and deep learning methods can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
Keywords: neural network; deep learning; Bayesian analysis; data visualization; big data environment
8. Hybrid deep-learning and physics-based neural network for programmable illumination computational microscopy
Authors: Ruiqing Sun, Delong Yang, Shaohui Zhang, Qun Hao. Advanced Photonics Nexus, 2024, No. 5, pp. 48-57.
Two mainstream approaches for solving inverse sample reconstruction problems in programmable illumination computational microscopy rely on either deep models or physical models. Solutions based on physical models possess strong generalization capabilities while struggling with global optimization of inverse problems due to a lack of sufficient physical constraints. In contrast, deep-learning methods have strong problem-solving abilities, but their generalization ability is often questioned because of the unclear physical principles. In addition, conventional deep models are difficult to apply to some specific scenes because of the difficulty in acquiring high-quality training data and their limited capacity to generalize across different scenarios. To combine the advantages of deep models and physical models, we propose a hybrid framework consisting of three sub-networks: two deep-learning networks and one physics-based network. We first obtain a result with rich semantic information through a lightweight deep-learning network and then use it as the initial value of the physics-based network to make its output comply with physical process constraints. These two results are then used as the input of a fusion deep-learning network that utilizes the paired features between the reconstruction results of the two different models to further enhance imaging quality. The proposed hybrid framework integrates the advantages of both deep models and physical models, can quickly solve the computational reconstruction inverse problem in programmable illumination computational microscopy, and achieves better results. We verified the feasibility and effectiveness of the proposed framework with theoretical analysis and actual experiments on resolution targets and biological samples.
Keywords: deep learning; physics-based neural network; computational imaging; Fourier ptychographic microscopy
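At a block-diagram level, the three-stage pipeline in this abstract chains a learned first guess, a physics-constrained refinement, and a learned fusion. The sketch below only captures that data flow; the actual physics-based network would embed a differentiable imaging (Fourier ptychography) forward model, which is not reproduced here, and all module interfaces are assumptions.

```python
import torch
import torch.nn as nn

class HybridReconstructor(nn.Module):
    def __init__(self, deep_net, physics_net, fusion_net):
        super().__init__()
        self.deep_net = deep_net        # stage 1: fast, semantically rich guess
        self.physics_net = physics_net  # stage 2: refine under physics constraints
        self.fusion_net = fusion_net    # stage 3: merge the paired estimates

    def forward(self, measurements):
        guess = self.deep_net(measurements)
        refined = self.physics_net(guess)  # initialized from the deep guess
        return self.fusion_net(torch.cat([guess, refined], dim=1))
```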
9. Spectral transfer-learning-based metasurface design assisted by complex-valued deep neural network
Authors: Yi Xu, Fu Li, Jianqiang Gu, Zhiwei Bi, Bing Cao, Quanlong Yang, Jiaguang Han, Qinghua Hu, Weili Zhang. Advanced Photonics Nexus, 2024, No. 2, pp. 8-17.
Recently, deep learning has been used to establish the nonlinear and nonintuitive mapping between the physical structures and electromagnetic responses of meta-atoms for higher computational efficiency. However, to obtain sufficiently accurate predictions, the conventional deep-learning-based method consumes excessive time to collect the data set, thus hindering its wide application in this interdisciplinary field. We introduce a spectral transfer-learning-based metasurface design method that achieves excellent performance on a small data set with only 1000 samples in the target waveband by utilizing open-source data from another spectral range. We demonstrate three transfer strategies and experimentally quantify their performance, among which the "frozen-none" strategy robustly improves the prediction accuracy by ~26% compared to direct learning. We propose using a complex-valued deep neural network during the training process to further improve the spectral prediction precision by ~30% compared to its real-valued counterparts. We design several typical terahertz metadevices by employing a hybrid inverse model consolidating this trained target network and a global optimization algorithm. The simulated results successfully validate the capability of our approach. Our work provides a universal methodology for efficient and accurate metasurface design in arbitrary wavebands, which will pave the way toward the automated and mass production of metasurfaces.
Keywords: transfer learning; complex-valued deep neural network; metasurface inverse design; conditioned adaptive particle swarm optimization; terahertz
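For the complex-valued network highlighted above, a dense layer can be built directly on PyTorch's complex tensors, with a modReLU-style activation that acts on the magnitude and preserves the phase. This is a generic sketch of a complex-valued layer, not the paper's architecture; sizes and initialization are assumptions.

```python
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    def __init__(self, n_in, n_out):
        super().__init__()
        self.w = nn.Parameter(0.1 * torch.randn(n_out, n_in, dtype=torch.cfloat))
        self.b = nn.Parameter(torch.zeros(n_out, dtype=torch.cfloat))

    def forward(self, z):              # z: complex tensor of shape (N, n_in)
        return z @ self.w.T + self.b

def mod_relu(z, bias=0.0):
    """ReLU on the magnitude, phase preserved."""
    return torch.relu(z.abs() + bias) * torch.exp(1j * z.angle())

# The "frozen-none" transfer strategy named above amounts to initializing the
# target-band model from the source-band weights and fine-tuning every layer:
#   target_model.load_state_dict(source_model.state_dict())
```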
10. An Intelligent SDN-IoT Enabled Intrusion Detection System for Healthcare Systems Using a Hybrid Deep Learning and Machine Learning Approach
Authors: R. Arthi, S. Krishnaveni, Sherali Zeadally. China Communications (SCIE, CSCD), 2024, No. 10, pp. 267-287. Cited by 1.
The advent of pandemics such as COVID-19 significantly impacts human behaviour and lives every day. Therefore, it is essential to make medical services connected to the internet available in every remote location during such situations. Moreover, the security issues in the Internet of Medical Things (IoMT) devices used in these services make the situation even more critical, because cyberattacks on medical devices might cause treatment delays or clinical failures. Hence, services in the healthcare ecosystem need rapid, uninterrupted, and secure facilities. The solution provided in this research addresses security concerns and service availability for patients in critical health condition in remote areas. This research aims to develop an intelligent Software Defined Network (SDN)-enabled secure framework for the IoT healthcare ecosystem. We propose a hybrid of machine learning and deep learning techniques (DNN + SVM) to identify network intrusions in sensor-based healthcare data. In addition, this system can efficiently monitor connected devices and suspicious behaviours. Finally, we evaluate the performance of our proposed framework using various performance metrics based on healthcare application scenarios. The experimental results show that the proposed approach effectively detects and mitigates attacks in SDN-enabled IoT networks and performs better than other state-of-the-art approaches.
Keywords: deep neural network; healthcare; intrusion detection system; IoT; machine learning; software-defined networks
11. A Deep Learning Approach for Forecasting Thunderstorm Gusts in the Beijing–Tianjin–Hebei Region
Authors: Yunqing LIU, Lu YANG, Mingxuan CHEN, Linye SONG, Lei HAN, Jingfeng XU. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, No. 7, pp. 1342-1363. Cited by 1.
Thunderstorm gusts are a common form of severe convective weather in the warm season in North China, and it is of great importance to forecast them correctly. At present, the forecasting of thunderstorm gusts is mainly based on traditional subjective methods, which fail to achieve high-resolution and high-frequency gridded forecasts based on multiple observation sources. In this paper, we propose a deep learning method called Thunderstorm Gusts TransU-net (TG-TransUnet) to forecast thunderstorm gusts in North China based on multi-source gridded product data from the Institute of Urban Meteorology (IUM), with a lead time of 1 to 6 h. To determine the specific range of thunderstorm gusts, we combine three meteorological variables: the radar reflectivity factor, lightning location, and the 1-h maximum instantaneous wind speed from automatic weather stations (AWSs), and obtain a reasonable ground truth of thunderstorm gusts. Then, we transform the forecasting problem into an image-to-image problem in deep learning under the TG-TransUnet architecture, which is based on convolutional neural networks and a transformer. The analysis and forecast data of the enriched multi-source gridded comprehensive forecasting system for the period 2021-23 are then used as the training, validation, and testing datasets. Finally, the performance of TG-TransUnet is compared with that of other methods. The results show that TG-TransUnet produces the best prediction results at 1-6 h. The IUM is currently using this model to support the forecasting of thunderstorm gusts in North China.
Keywords: thunderstorm gusts; deep learning; weather forecasting; convolutional neural network; transformer
12. DeepBio: A Deep CNN and Bi-LSTM Learning for Person Identification Using Ear Biometrics
Authors: Anshul Mahajan, Sunil K. Singla. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 11, pp. 1623-1649. Cited by 1.
The identification of individuals through ear images is a prominent area of study in the biometric sector. Facial recognition systems have faced challenges during the COVID-19 pandemic due to mask-wearing, prompting the exploration of supplementary biometric measures such as ear biometrics. This research proposes a Deep Learning (DL) framework, termed DeepBio, that uses ear biometrics for human identification. It employs two DL models and five datasets, including IIT Delhi (IITD-I and IITD-II), annotated web images (AWI), mathematical analysis of images (AMI), and EARVN1. Data augmentation techniques such as flipping, translation, and Gaussian noise are applied to enhance model performance and mitigate overfitting. Feature extraction and human identification are conducted using a hybrid approach combining Convolutional Neural Networks (CNN) and Bidirectional Long Short-Term Memory (Bi-LSTM). The DeepBio framework achieves high recognition rates of 97.97%, 99.37%, 98.57%, 94.5%, and 96.87% on the respective datasets. Comparative analysis with existing techniques demonstrates improvements of 0.41%, 0.47%, 12%, and 9.75% on the IITD-II, AMI, AWE, and EARVN1 datasets, respectively.
Keywords: data augmentation; convolutional neural network; bidirectional long short-term memory; deep learning; ear biometrics
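A minimal sketch of the CNN + Bi-LSTM combination described above: CNN feature maps are read as a left-to-right sequence of column descriptors that a bidirectional LSTM summarizes into an identity prediction. The input shape, channel counts, and pooling scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CnnBiLstm(nn.Module):
    def __init__(self, n_classes, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                    # x: (N, 3, H, W) ear image
        f = self.cnn(x)                      # (N, 64, H/4, W/4)
        f = f.mean(dim=2).permute(0, 2, 1)   # pool height -> sequence (N, W/4, 64)
        out, _ = self.lstm(f)
        return self.fc(out[:, -1])           # last step -> identity logits
```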
13. Exploring deep learning for landslide mapping: A comprehensive review
Authors: Zhi-qiang Yang, Wen-wen Qi, Chong Xu, Xiao-yi Shao. China Geology (CAS, CSCD), 2024, No. 2, pp. 330-350. Cited by 1.
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning. Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors. Recent advancements in high-resolution satellite imagery, coupled with the rapid development of artificial intelligence, particularly data-driven deep learning (DL) algorithms such as convolutional neural networks (CNN), have provided rich feature indicators for landslide mapping, overcoming previous limitations. In this review paper, 77 representative DL-based landslide detection methods applied in various environments over the past seven years were examined. This study analyzed the structures of different DL networks, discussed five main application scenarios, and assessed both the advancements and limitations of DL in geological hazard analysis. The results indicated that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence, with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization. Finally, we explored the hindrances to DL in landslide hazard research based on the above content. Challenges such as black-box operations and sample dependence persist, warranting further theoretical research and future application of DL in landslide detection.
Keywords: landslide mapping; quantitative hazard assessment; deep learning; artificial intelligence; neural network; big data; geological hazard survey engineering
14. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2441-2468.
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive and preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in the integration of various techniques: a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models deployed on the curated dataset. The analysis outcome shows that the proposed enhanced RF (ERF), enhanced DT (EDT), and enhanced LR (ELR) models for BC detection outperformed most of the existing models, with impressive results.
Keywords: autoencoder; breast cancer; deep neural network; convolutional neural network; image processing; machine learning; deep learning
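A hedged sketch of the encoder-based feature extraction feeding classical classifiers, the pattern this abstract describes: a deep encoder produces compact features and a random forest classifies them. The "enhanced" RF/DT/LR variants are not specified in the abstract, so a stock RandomForestClassifier stands in, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class Encoder(nn.Module):
    """Encoder half of an autoencoder, assumed pre-trained on the image features."""
    def __init__(self, n_features, code_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(),
                                 nn.Linear(64, code_dim))

    def forward(self, x):
        return self.net(x)

def encode_then_classify(encoder, x_train, y_train, x_test):
    # extract deep features, then fit and apply the classical classifier
    with torch.no_grad():
        z_train = encoder(torch.as_tensor(x_train, dtype=torch.float32)).numpy()
        z_test = encoder(torch.as_tensor(x_test, dtype=torch.float32)).numpy()
    clf = RandomForestClassifier(n_estimators=200).fit(z_train, y_train)
    return clf.predict(z_test)
```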
15. Study on Quantitative Precipitation Estimation by Polarimetric Radar Using Deep Learning
Authors: Jiang HUANGFU, Zhiqun HU, Jiafeng ZHENG, Lirong WANG, Yongjie ZHU. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, No. 6, pp. 1147-1160.
Accurate radar quantitative precipitation estimation (QPE) plays an essential role in disaster prevention and mitigation. In this paper, two deep learning-based QPE networks, a single-parameter network and a multi-parameter network, are designed. Meanwhile, a self-defined loss function (SLF) is proposed for model training. The dataset includes Shijiazhuang S-band dual polarimetric radar (CINRAD/SAD) data and rain gauge data within the radar's 100-km detection range during the flood season of 2021 in North China. Considering that the specific propagation phase shift (KDP) has a roughly linear relationship with precipitation intensity, KDP = 0.5° km^(-1) is used as a threshold to divide all the rain data (AR) into heavy rain (HR) and light rain (LR) datasets. Subsequently, 12 deep learning-based QPE models are trained according to the input radar parameters, the precipitation datasets, and whether an SLF was adopted. The results suggest that the QPE effects after distinguishing rainfall intensity are better than those without distinguishing, and the effects of using the SLF are better than those using MSE as the loss function. A Z-R relationship and a ZH-KDP-R synthesis method are compared with the deep learning-based QPE. The mean relative errors (MRE) of the AR models using the SLF are improved by 61.90%, 51.21%, and 56.34% compared with the Z-R relational method, and by 38.63%, 42.55%, and 47.49% compared with the synthesis method. Finally, the models are further evaluated on three precipitation processes, which demonstrates that the deep learning-based models have significant advantages over the traditional empirical-formula methods.
Keywords: polarimetric radar; quantitative precipitation estimation; deep learning; single-parameter network; multi-parameter network
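The paper's self-defined loss function is not spelled out in the abstract, so the sketch below only illustrates the general idea of an intensity-aware loss for QPE, where heavy-rain errors are weighted more strongly than in plain MSE, together with the KDP-threshold split into HR and LR subsets. The weighting form is an assumption.

```python
import torch

def intensity_weighted_loss(pred, target, alpha=0.1):
    """MSE whose per-sample weight grows with the observed rain rate, so
    heavy-rain errors dominate the gradient (a stand-in for the SLF)."""
    weight = 1.0 + alpha * target
    return (weight * (pred - target) ** 2).mean()

def split_by_kdp(kdp, threshold=0.5):
    """Split samples at KDP = 0.5 deg/km into heavy-rain and light-rain masks."""
    heavy = kdp >= threshold
    return heavy, ~heavy
```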
16. Hyperspectral image super resolution using deep internal and self-supervised learning
Authors: Zhe Liu, Xian-Hua Han. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 1, pp. 128-141.
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral (HR-HS) image. With previously collected large amounts of external data, these methods are intuitively realised under the full supervision of the ground-truth data. Thus, the database construction in the research paradigm of merging the low-resolution hyperspectral (LR-HS) and HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting the corresponding training triplets (HR-MS or RGB, LR-HS, and HR-HS images) simultaneously, and often faces difficulties in reality. Models learned from training datasets collected simultaneously under controlled conditions may significantly degrade in super-resolution performance on real images captured under diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by online preparing the training triplet samples from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network were elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and then the reconstruction errors of the observations were formulated for measuring the network modelling performance. By consolidating the DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method without any prior training, with great potential for flexible adaptation to the different settings of each observation. To verify the effectiveness of the proposed approach, extensive experiments have been conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate a large performance gain of the proposed method over the state-of-the-art methods.
Keywords: computer vision; deep learning; deep neural networks; hyperspectral image enhancement
17. Learning to Branch in Combinatorial Optimization With Graph Pointer Networks
Authors: Rui Wang, Zhiming Zhou, Kaiwen Li, Tao Zhang, Ling Wang, Xin Xu, Xiangke Liao. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 1, pp. 157-169.
Traditional expert-designed branching rules in branch-and-bound (B&B) are static, often failing to adapt to diverse and evolving problem instances. Crafting these rules is labor-intensive and may not scale well with complex problems. Given the frequent need to solve varied combinatorial optimization problems, leveraging statistical learning to auto-tune B&B algorithms for specific problem classes becomes attractive. This paper proposes a graph pointer network model to learn branching rules. Graph features, global features, and historical features are designated to represent the solver state. The graph neural network processes the graph features, while the pointer mechanism assimilates the global and historical features to finally determine the variable on which to branch. The model is trained to imitate the expert strong-branching rule with a tailored top-k Kullback-Leibler divergence loss function. Experiments on a series of benchmark problems demonstrate that the proposed approach significantly outperforms the widely used expert-designed branching rules. It also outperforms state-of-the-art machine-learning-based branch-and-bound methods in terms of solving speed and search tree size on all the test instances. In addition, the model can generalize to unseen instances and scale to larger instances.
Keywords: branch-and-bound (B&B); combinatorial optimization; deep learning; graph neural network; imitation learning
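One reading of the tailored top-k Kullback-Leibler loss named above: restrict both the expert strong-branching scores and the model's logits to the expert's top-k candidate variables, then match the two distributions with a KL divergence. This is an interpretation for illustration, not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def topk_kl_loss(model_logits, expert_scores, k=10):
    # keep only the expert's k highest-scoring branching candidates
    topk = expert_scores.topk(k, dim=-1).indices
    p_expert = F.softmax(expert_scores.gather(-1, topk), dim=-1)
    logq_model = F.log_softmax(model_logits.gather(-1, topk), dim=-1)
    # KL(p_expert || q_model), averaged over the batch
    return F.kl_div(logq_model, p_expert, reduction="batchmean")
```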
18. Improving Multiple Sclerosis Disease Prediction Using Hybrid Deep Learning Model
Authors: Stephen Ojo, Moez Krichen, Meznah A. Alamro, Alaeddine Mihoub, Gabriel Avelino Sampedro, Jaroslava Kniezova. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 643-661.
Myelin damage and a wide range of symptoms are caused by the immune system targeting the central nervous system in Multiple Sclerosis (MS), a chronic autoimmune neurological condition. It disrupts signals between the brain and body, causing symptoms including tiredness, muscle weakness, and difficulty with memory and balance. Traditional methods for detecting MS are less precise and time-consuming, which is a major gap in addressing this problem. This gap has motivated the investigation of new methods to improve the consistency and accuracy of MS detection. This paper proposes a novel approach named FAD, consisting of a Deep Neural Network (DNN) fused with an Artificial Neural Network (ANN), to detect MS with greater efficiency and accuracy, utilizing regularization to combat over-fitting. We use gene expression data for MS research from the GEO GSE17048 dataset. The dataset is preprocessed by performing encoding, standardization using a min-max scaler, and feature selection using Recursive Feature Elimination with Cross-Validation (RFECV) to optimize and refine the dataset. Meanwhile, for experimenting with the dataset, another deep-learning hybrid model is integrated with different ML models, including Random Forest (RF), Gradient Boosting (GB), XGBoost (XGB), K-Nearest Neighbors (KNN), and Decision Tree (DT). The results reveal that FAD performed exceptionally well on the dataset, which was evident in an accuracy of 96.55% and an F1-score of 96.71%. The proposed FAD approach achieves remarkable results, with better accuracy than previous studies.
Keywords: multiple sclerosis (MS); machine learning; deep learning; artificial neural network; healthcare
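The preprocessing steps named above (min-max standardization and RFECV feature selection) map directly onto scikit-learn; a sketch follows. The choice of a logistic-regression estimator and the CV settings are assumptions; RFECV only requires an estimator that exposes coef_ or feature_importances_.

```python
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

def preprocess(X, y):
    X_scaled = MinMaxScaler().fit_transform(X)        # min-max standardization
    selector = RFECV(estimator=LogisticRegression(max_iter=1000),
                     step=1, cv=5, scoring="accuracy")
    X_selected = selector.fit_transform(X_scaled, y)  # recursive elimination w/ CV
    return X_selected, selector.support_              # kept-feature mask
```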
19. Automatic depth matching method of well log based on deep reinforcement learning
Authors: XIONG Wenjun, XIAO Lizhi, YUAN Jiangru, YUE Wenzheng. Petroleum Exploration and Development (SCIE), 2024, No. 3, pp. 634-646.
In traditional well-log depth matching tasks, manual adjustments are required, which is significantly labor-intensive for multiple wells and leads to low work efficiency. This paper introduces a multi-agent deep reinforcement learning (MARL) method to automate the depth matching of multi-well logs. This method defines multiple top-down dual sliding windows based on a convolutional neural network (CNN) to extract and capture similar feature sequences on well logs, and it establishes an interaction mechanism between the agents and the environment to control the depth matching process. Specifically, the agent selects an action to translate or scale the feature sequence based on a double deep Q-network (DDQN). Through the feedback of the reward signal, it evaluates the effectiveness of each action, aiming to obtain the optimal strategy and improve the accuracy of the matching task. Our experiments show that MARL can automatically perform depth matching for well logs in multiple wells and reduce manual intervention. In the oil-field application, a comparative analysis of the dynamic time warping (DTW), deep Q-learning network (DQN), and DDQN methods revealed that the DDQN algorithm, with its dual-network evaluation mechanism, significantly improves performance by identifying and aligning more details in the well-log feature sequences, thus achieving higher depth matching accuracy.
Keywords: artificial intelligence; machine learning; depth matching; well log; multi-agent deep reinforcement learning; convolutional neural network; double deep Q-network
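The DDQN at the heart of the method above uses the standard double-DQN target: the online network selects the next action (here, a translate or scale of the feature sequence) and the target network evaluates it, which reduces Q-value overestimation. The sketch shows only the target computation; the state and reward encodings are application-specific and omitted.

```python
import torch

def ddqn_target(reward, next_state, done, online_net, target_net, gamma=0.99):
    with torch.no_grad():
        best = online_net(next_state).argmax(dim=1, keepdim=True)   # select
        next_q = target_net(next_state).gather(1, best).squeeze(1)  # evaluate
    return reward + gamma * (1.0 - done) * next_q
```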
20. Human Interaction Recognition in Surveillance Videos Using Hybrid Deep Learning and Machine Learning Models
Authors: Vesal Khean, Chomyong Kim, Sunjoo Ryu, Awais Khan, Min Kyung Hong, Eun Young Kim, Joungmin Kim, Yunyoung Nam. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 773-787.
Human Interaction Recognition (HIR) is a challenging problem in computer vision research due to the involvement of multiple individuals and their mutual interactions within video frames generated from their movements. HIR requires more sophisticated analysis than Human Action Recognition (HAR), since HAR focuses solely on individual activities like walking or running, while HIR involves the interactions between people. This research aims to develop a robust system for recognizing five common human interactions, namely hugging, kicking, pushing, pointing, and no interaction, from video sequences using multiple cameras. In this study, a hybrid Deep Learning (DL) and Machine Learning (ML) model was employed to improve classification accuracy and generalizability. The dataset was collected in an indoor environment with four-channel cameras capturing the five types of interactions among 13 participants. The data was processed using a DL model with a fine-tuned ResNet (Residual Networks) architecture based on 2D Convolutional Neural Network (CNN) layers for feature extraction. Subsequently, machine learning models were trained for interaction classification using six commonly used ML algorithms, including SVM, KNN, RF, DT, NB, and XGBoost. The results demonstrate a high accuracy of 95.45% in classifying human interactions. The hybrid approach enabled effective learning, resulting in highly accurate performance across different interaction types. Future work will explore more complex scenarios involving multiple individuals based on the application of this architecture.
Keywords: convolutional neural network; deep learning; human interaction recognition; ResNet; skeleton joint key points; human pose estimation; hybrid deep learning and machine learning