Journal Articles
14,743 articles found
1. Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie. China Communications (SCIE, CSCD), 2024, No. 6, pp. 53-68.
In this paper, we propose a two-way Deep Reinforcement Learning (DRL)-based resource allocation algorithm, which solves the problem of resource allocation in the cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and the modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio, deep reinforcement learning, network slicing, power-domain non-orthogonal multiple access, resource allocation
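Illustrative sketch (not the authors' code): the abstract above pairs a discrete decision (codebook assignment via a DDQN-style value network) with a continuous decision (power allocation via a DDPG-style actor) for the same network state. The toy state dimension, layer sizes and power bound below are assumptions made for illustration only.

```python
# Sketch of the two-network idea: a DDQN-style head picks a discrete codebook
# index while a DDPG-style actor outputs a continuous power level.
# Dimensions and layer sizes are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

STATE_DIM, N_CODEBOOKS, P_MAX = 16, 8, 1.0

class DDQNHead(nn.Module):          # discrete codebook assignment
    def __init__(self):
        super().__init__()
        self.q = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                               nn.Linear(64, N_CODEBOOKS))
    def forward(self, s):
        return self.q(s)            # one Q-value per codebook

class DDPGActor(nn.Module):         # continuous power allocation
    def __init__(self):
        super().__init__()
        self.pi = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, s):
        return P_MAX * self.pi(s)   # power in [0, P_MAX]

state = torch.randn(1, STATE_DIM)           # hypothetical channel/QoS state
codebook = DDQNHead()(state).argmax(dim=1)  # greedy codebook choice
power = DDPGActor()(state)                  # transmit power for that SU
print(codebook.item(), power.item())
```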
2. Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite Terrestrial Networks with Deep Reinforcement Learning
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. China Communications (SCIE, CSCD), 2024, No. 2, pp. 49-58.
As the demands for massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising new access scheme since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme can rule out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning, energy efficiency, hybrid satellite terrestrial networks, rate splitting multiple access, traffic offloading
3. A Multi-Task Deep Learning Framework for Simultaneous Detection of Thoracic Pathology through Image Classification
Authors: Nada Al Zahrani, Ramdane Hedjar, Mohamed Mekhtiche, Mohamed Bencherif, Taha Al Fakih, Fattoh Al-Qershi, Muna Alrazghan. Journal of Computer and Communications, 2024, No. 4, pp. 153-170.
Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical diseases. They can impact either one or both lungs, which leads to a severe impairment of a person's ability to breathe normally. Some notable examples of such diseases encompass pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, the primary methods employed for detection involve the use of X-ray imaging or computed tomography (CT) scans. Nevertheless, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, the accuracy of detection can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, utilizing single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. Consequently, this research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
Keywords: pneumonia, thoracic pathology, COVID-19, deep learning, multi-task learning
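Illustrative sketch (not the study's architecture): hard parameter sharing, one of the two multi-task options the abstract mentions, uses a single shared trunk with a separate head per task (pneumonia and COVID-19 here). The input size and layer widths are assumptions.

```python
# Hard parameter sharing: one shared CNN trunk, two task-specific heads.
# The 64x64 grayscale input and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskCXR(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(                    # shared representation
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.pneumonia_head = nn.Linear(32, 2)         # task 1: pneumonia vs normal
        self.covid_head = nn.Linear(32, 2)             # task 2: COVID-19 vs normal
    def forward(self, x):
        z = self.trunk(x)
        return self.pneumonia_head(z), self.covid_head(z)

x = torch.randn(4, 1, 64, 64)                          # batch of chest images
out_pneumonia, out_covid = MultiTaskCXR()(x)
# A joint objective would sum the per-task cross-entropy losses on this batch.
print(out_pneumonia.shape, out_covid.shape)
```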
4. Fully Distributed Learning for Deep Random Vector Functional-Link Networks
Authors: Huada Zhu, Wu Ai. Journal of Applied Mathematics and Physics, 2024, No. 4, pp. 1247-1262.
In the contemporary era, the proliferation of information technology has led to an unprecedented surge in data generation, with this data being dispersed across a multitude of mobile devices. Given this situation, and because training deep learning models demands substantial computing power, distributed algorithms that enable multi-party joint modeling have attracted wide attention. Distributed training relieves the heavy computational and communication pressure that a centralized model places on a single machine. However, most current distributed algorithms work in a master-slave mode, often relying on a central server for coordination, which to some extent causes communication pressure, data leakage, privacy violations and other issues. To solve these problems, a decentralized, fully distributed algorithm based on deep random weight neural networks is proposed. The algorithm decomposes the original objective function into several sub-problems under consistency constraints, combines decentralized average consensus (DAC) with the alternating direction method of multipliers (ADMM), and achieves joint modeling and training through local computation and communication at each node. Finally, we compare the proposed decentralized algorithm with several centralized deep neural networks with random weights, and experimental results demonstrate the effectiveness of the proposed algorithm.
Keywords: distributed optimization, deep neural network, random vector functional-link (RVFL) network, alternating direction method of multipliers (ADMM)
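Illustrative sketch (not the paper's algorithm): the decentralized average consensus (DAC) step that such fully distributed training builds on, shown on an assumed ring topology with numpy. The actual method combines consensus with ADMM updates of the RVFL output weights, which is not reproduced here.

```python
# Decentralized average consensus (DAC) on a ring of nodes: each node mixes
# its local estimate with its neighbours' until all agree on the global mean.
# Topology, mixing weights and vector size are illustrative assumptions.
import numpy as np

n_nodes, dim = 6, 4
rng = np.random.default_rng(0)
x = rng.normal(size=(n_nodes, dim))       # local parameter estimates

# Doubly-stochastic mixing matrix for a ring (self + two neighbours)
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_nodes] = 1 / 3
    W[i, (i + 1) % n_nodes] = 1 / 3

for _ in range(200):                      # consensus iterations
    x = W @ x                             # each node only talks to neighbours

print(np.allclose(x, x.mean(axis=0)))     # True: all nodes reach the average
```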
5. Scale adaptive fitness evaluation-based particle swarm optimisation for hyperparameter and architecture optimisation in neural networks and deep learning
Authors: Ye-Qun Wang, Jian-Yu Li, Chun-Hua Chen, Jun Zhang, Zhi-Hui Zhan. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, No. 3, pp. 849-862.
Research into automatically searching for an optimal neural network (NN) by optimisation algorithms is a significant research topic in deep learning and artificial intelligence. However, this is still challenging due to two issues: both the hyperparameter and architecture should be optimised, and the optimisation process is computationally expensive. To tackle these two issues, this paper focusses on solving the hyperparameter and architecture optimisation problem for the NN and proposes a novel light-weight scale-adaptive fitness evaluation-based particle swarm optimisation (SAFE-PSO) approach. Firstly, the SAFE-PSO algorithm considers the hyperparameters and architectures together in the optimisation problem and therefore can find their optimal combination for the globally best NN. Secondly, the computational cost can be reduced by using multi-scale accuracy evaluation methods to evaluate candidates. Thirdly, a stagnation-based switch strategy is proposed to adaptively switch different evaluation methods to better balance the search performance and computational cost. The SAFE-PSO algorithm is tested on two widely used datasets: the 10-category (i.e., CIFAR10) and the 100-category (i.e., CIFAR100). The experimental results show that SAFE-PSO is very effective and efficient, which can not only find a promising NN automatically but also find a better NN than compared algorithms at the same computational cost.
Keywords: deep learning, evolutionary computation, hyperparameter and architecture optimisation, neural networks, particle swarm optimisation, scale-adaptive fitness evaluation
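Illustrative sketch (not the SAFE-PSO implementation): a plain PSO loop over a two-dimensional hyperparameter space. The decision variables (log learning rate, hidden width) and the synthetic fitness standing in for the paper's multi-scale validation accuracy are assumptions.

```python
# Particle swarm optimisation over a toy 2-D hyperparameter space:
# x[0] = log10(learning rate), x[1] = hidden-layer width.
# The fitness is a synthetic stand-in for validation accuracy.
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                           # higher is better (placeholder)
    lr_term = -(x[0] + 2.5) ** 2          # prefers a learning rate near 10^-2.5
    width_term = -((x[1] - 128) / 64) ** 2
    return lr_term + width_term

n_particles, n_iter = 12, 50
lo, hi = np.array([-5.0, 16.0]), np.array([-1.0, 512.0])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best log10(lr), hidden width:", gbest)
```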
6. Deep Fake Detection Using Computer Vision-Based Deep Neural Network with Pairwise Learning
Authors: R. Saravana Ram, M. Vinoth Kumar, Tareq M. Al-shami, Mehedi Masud, Hanan Aljuaid, Mohamed Abouhawwash. Intelligent Automation & Soft Computing (SCIE), 2023, No. 2, pp. 2449-2462.
Deep learning-based approaches are applied successfully in many fields such as deepfake identification, big data analysis, voice recognition, and image recognition. A deepfake is the application of deep learning to fake creation, producing fake images or videos with the help of artificial intelligence for political abuse, spreading false information, and pornography. Artificial intelligence techniques are in wide demand, which increases the problems related to privacy, security, and ethics. This paper analyzes computer-vision features of digital content to determine its integrity. The method checks the computer-vision features of image frames using a fuzzy clustering feature extraction method. With the proposed deep belief network with loss handling, video/image manipulation is detected by means of a pairwise learning approach. The proposed approach improves the detection accuracy to 98% on various datasets.
Keywords: deep fake, deep belief network, fuzzy clustering, feature extraction, pairwise learning
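Illustrative sketch (not the paper's model): one common form of pairwise learning is a contrastive loss over embedding pairs. The generic encoder below stands in for the paper's deep belief network over fuzzy-clustering features, and all dimensions are assumptions.

```python
# Pairwise learning sketch: pull embeddings of same-label pairs (real/real,
# fake/fake) together and push different-label pairs apart with a contrastive
# loss. Encoder structure and feature sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 32))

def pairwise_contrastive_loss(z1, z2, same_label, margin=1.0):
    d = F.pairwise_distance(z1, z2)                      # Euclidean distance
    pull = same_label * d.pow(2)                         # same class: pull together
    push = (1 - same_label) * F.relu(margin - d).pow(2)  # different: push apart
    return (pull + push).mean()

x1, x2 = torch.randn(8, 256), torch.randn(8, 256)        # frame-feature pairs
same = torch.randint(0, 2, (8,)).float()                 # 1 = same-label pair
loss = pairwise_contrastive_loss(encoder(x1), encoder(x2), same)
loss.backward()                                          # gradients for training
print(loss.item())
```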
7. Deep Learning with Optimal Hierarchical Spiking Neural Network for Medical Image Classification
Authors: P. Immaculate Rexi Jenifer, S. Kannan. Computer Systems Science & Engineering (SCIE, EI), 2023, No. 2, pp. 1081-1097.
Medical image classification has become a vital part of the design of computer aided diagnosis (CAD) models. Conventional CAD models depend largely on shapes, colors, and/or textures that are problem-oriented and complementary in medical images. Recently developed deep learning (DL) approaches provide an efficient way of constructing dedicated models for classification problems. However, because of the high resolution of medical images and small datasets, DL models face increased computational cost. In this respect, this paper presents a deep convolutional neural network with a hierarchical spiking neural network (DCNN-HSNN) for medical image classification. The proposed DCNN-HSNN technique aims to detect and classify the existence of diseases from medical images. In addition, a region growing segmentation technique is used to determine the infected regions in the medical image. Moreover, the NADAM optimizer with a DCNN-based Capsule Network (CapsNet) approach is used for feature extraction to derive a collection of feature vectors. Furthermore, a shark smell optimization algorithm (SSA)-based HSNN approach is utilized for the classification process. To validate the performance of the DCNN-HSNN technique, a wide range of simulations are run on the HIS2828 and ISIC2017 datasets. The experimental results highlight the effectiveness of the DCNN-HSNN technique over recent techniques in terms of different measures.
Keywords: medical image classification, spiking neural networks, computer aided diagnosis, medical imaging, parameter optimization, deep learning
8. Machine learning and deep neural network-based learning in osteoarthritis knee
Authors: Harish V K Ratna, Madhan Jeyaraman, Naveen Jeyaraman, Arulkumar Nallakumarasamy, Shilpa Sharma, Manish Khanna, Ashim Gupta. World Journal of Methodology, 2023, No. 5, pp. 419-425.
Osteoarthritis (OA) of the knee joint is considered the most common musculoskeletal condition leading to marked disability for patients residing in various regions around the globe. The application of machine learning (ML) to OA research has brought about various clinical advances, namely diagnosing OA at preliminary stages, predicting the chances of developing OA in the population, discovering various phenotypes of OA, grading the severity of OA structure, and identifying people with slow and fast progression of the disease pathology. Various publications are available regarding machine learning methods for the early detection of osteoarthritis. The key features are detected from morphology, molecular architecture, and electrical and mechanical functions. In addition, this technique has been used to assess non-interfering, non-ionizing, in-vivo techniques using magnetic resonance imaging. ML is being utilized in OA chiefly through the formulation of large cohorts, namely the OA Initiative, a cohort observational study; the Multicentre Osteoarthritis Study, an observational, prospective longitudinal study; and the Cohort Hip & Cohort Knee, an observational prospective cohort study of both hip and knee OA. Though ML has made various contributions and its applications are expanding, it remains an emerging field with high potential as well as limitations. Many more studies need to be carried out to learn more about the link between machine learning and knee osteoarthritis, which would help improve clinical decision-making and expedite the necessary interventions.
Keywords: osteoarthritis, knee, artificial intelligence, machine learning, deep neural network
9. Machine Learning Techniques Using Deep Instinctive Encoder-Based Feature Extraction for Optimized Breast Cancer Detection
Authors: Vaishnawi Priyadarshni, Sanjay Kumar Sharma, Mohammad Khalid Imam Rahmani, Baijnath Kaushik, Rania Almajalid. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2441-2468.
Breast cancer (BC) is one of the leading causes of death among women worldwide, as it has emerged as the most commonly diagnosed malignancy in women. Early detection and effective treatment of BC can help save women's lives. Developing an efficient technology-based detection system can lead to non-destructive, preliminary cancer detection techniques. This paper proposes a comprehensive framework that can effectively distinguish cancerous cells from benign cells using the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) data set. The novelty of the proposed framework lies in the integration of various techniques, where a fusion of deep learning (DL), traditional machine learning (ML) techniques, and enhanced classification models is deployed on the curated dataset. The analysis outcome shows that the proposed enhanced RF (ERF), enhanced DT (EDT) and enhanced LR (ELR) models for BC detection outperform most of the existing models with impressive results.
Keywords: autoencoder, breast cancer, deep neural network, convolutional neural network, image processing, machine learning, deep learning
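Illustrative sketch (not the proposed framework): the general pattern of a deep encoder producing compressed features for a classical classifier, here an autoencoder feeding a random forest on synthetic stand-in data. Dimensions, labels and training length are assumptions.

```python
# Deep-encoder feature extraction feeding a classical classifier (random
# forest). Data and dimensions are synthetic stand-ins for mammography
# features; the autoencoder is trained briefly only to show the pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128)).astype(np.float32)      # stand-in image features
y = (X[:, :4].sum(axis=1) > 0).astype(int)              # synthetic benign/malignant

encoder = nn.Sequential(nn.Linear(128, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 128))
autoenc = nn.Sequential(encoder, decoder)
opt = torch.optim.Adam(autoenc.parameters(), lr=1e-3)

xt = torch.from_numpy(X)
for _ in range(50):                                      # reconstruction training
    opt.zero_grad()
    loss = nn.functional.mse_loss(autoenc(xt), xt)
    loss.backward()
    opt.step()

Z = encoder(xt).detach().numpy()                         # compressed features
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Z_tr, y_tr)
print("held-out accuracy:", clf.score(Z_te, y_te))
```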
10. Hyperspectral image super resolution using deep internal and self-supervised learning
Authors: Zhe Liu, Xian-Hua Han. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 1, pp. 128-141.
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral (HR-HS) image. With large amounts of previously collected external data, these methods are intuitively realised under full supervision of the ground-truth data. Thus, database construction in the research paradigm of merging the low-resolution (LR) HS (LR-HS) and HR multispectral (MS) or RGB image, commonly named HSI SR, requires simultaneously collecting corresponding training triplets (HR-MS/RGB, LR-HS and HR-HS images) and often faces difficulties in reality. Models learned from training datasets collected simultaneously under controlled conditions may show significantly degraded HSI super-resolution performance on real images captured under diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by preparing the training triplet samples online from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network are elaborated to realise the spatial and spectral down-sampling procedures that transform the generated HR-HS estimate into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations are then formulated to measure the network modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method without any prior training, with great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments have been conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
Keywords: computer vision, deep learning, deep neural networks, hyperspectral image enhancement
11. Sentiment Analysis of Low-Resource Language Literature Using Data Processing and Deep Learning
Authors: Aizaz Ali, Maqbool Khan, Khalil Khan, Rehan Ullah Khan, Abdulrahman Aloraini. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 713-733.
Sentiment analysis, a crucial task in discerning emotional tones within text, plays a pivotal role in understanding public opinion and user sentiment across diverse languages. While numerous scholars conduct sentiment analysis in widely spoken languages such as English, Chinese, Arabic, Roman Arabic, and more, resource-poor languages like Urdu pose a challenge. Urdu is a uniquely crafted language, characterized by a script that amalgamates elements from diverse languages, including Arabic, Parsi, Pashtu, Turkish, Punjabi, Saraiki, and more. Urdu literature, characterized by distinct character sets and linguistic features, presents an additional hurdle due to the lack of accessible datasets, rendering sentiment analysis a formidable undertaking. The limited availability of resources has fueled increased interest among researchers, prompting a deeper exploration into Urdu sentiment analysis. This research is dedicated to Urdu language sentiment analysis, employing sophisticated deep learning models on an extensive dataset categorized into five labels: Positive, Negative, Neutral, Mixed, and Ambiguous. The primary objective is to discern sentiments and emotions within the Urdu language, despite the absence of well-curated datasets. To tackle this challenge, the initial step involves the creation of a comprehensive Urdu dataset by aggregating data from various sources such as newspapers, articles, and social media comments. Following this data collection, a thorough process of cleaning and preprocessing is implemented to ensure the quality of the data. The study leverages two well-known deep learning models, namely Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN), for both training and evaluating sentiment analysis performance. Additionally, the study explores hyperparameter tuning to optimize the models' efficacy. Evaluation metrics such as precision, recall, and the F1-score are employed to assess the effectiveness of the models. The research findings reveal that RNN surpasses CNN in Urdu sentiment analysis, achieving a significantly higher accuracy rate of 91%. This result accentuates the exceptional performance of RNN, solidifying its status as a compelling option for conducting sentiment analysis tasks in the Urdu language.
Keywords: Urdu sentiment analysis, convolutional neural networks, recurrent neural network, deep learning, natural language processing, neural networks
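Illustrative sketch (not the study's network): a minimal recurrent classifier over token IDs with the five sentiment labels named above. Vocabulary size, embedding width and sequence length are assumptions, and the tokenisation/preprocessing step is omitted.

```python
# Minimal RNN sentiment classifier over token IDs with five output labels
# (Positive, Negative, Neutral, Mixed, Ambiguous). All sizes are assumptions.
import torch
import torch.nn as nn

VOCAB, EMB, HID, N_CLASSES = 20000, 128, 64, 5

class RNNSentiment(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, N_CLASSES)
    def forward(self, ids):
        _, h = self.rnn(self.emb(ids))       # h: final hidden state
        return self.out(h.squeeze(0))        # logits over the 5 labels

ids = torch.randint(0, VOCAB, (8, 40))       # batch of tokenised comments
logits = RNNSentiment()(ids)
print(logits.shape)                          # torch.Size([8, 5])
```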
12. Exploring Deep Learning Methods for Computer Vision Applications across Multiple Sectors: Challenges and Future Trends
Authors: Narayanan Ganesh, Rajendran Shankar, Miroslav Mahdal, Janakiraman SenthilMurugan, Jasgurpreet Singh Chohan, Kanak Kalita. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 103-141.
Computer vision (CV) was developed for computers and other systems to act or make recommendations based on visual inputs, such as digital photos, movies, and other media. Deep learning (DL) methods are more successful than other traditional machine learning (ML) methods in CV. DL techniques can produce state-of-the-art results for difficult CV problems like picture categorization, object detection, and face recognition. In this review, a structured discussion on the history, methods, and applications of DL methods to CV problems is presented. The sector-wise presentation of applications in this paper may be particularly useful for researchers in niche fields who have limited or introductory knowledge of DL methods and CV. This review will provide readers with context and examples of how these techniques can be applied to specific areas. A curated list of popular datasets and a brief description of them are also included for the benefit of readers.
Keywords: neural network, machine vision, classification, object detection, deep learning
13. Construction of apricot variety search engine based on deep learning
Authors: Chen Chen, Lin Wang, Huimin Liu, Jing Liu, Wanyu Xu, Mengzhen Huang, Ningning Gou, Chu Wang, Haikun Bai, Gengjie Jia, Tana Wuyun. Horticultural Plant Journal (SCIE, CAS, CSCD), 2024, No. 2, pp. 387-397.
Apricot has a long history of cultivation and has many varieties and types. Traditional variety identification methods are time-consuming and labor-intensive, posing grand challenges to apricot resource management. Tool development in this regard will help researchers quickly identify variety information. This study photographed apricot fruits outdoors and indoors and constructed a dataset that can precisely classify the fruits using a U-net model (F-score: 99%), which helps to obtain the fruit's size, shape, and color features. Meanwhile, a variety search engine was constructed, which can search for and identify varieties in the database according to the above features. In addition, a mobile and web application (ApricotView) was developed, and the construction mode can also be applied to other fruit tree varieties. Additionally, we collected four difficult-to-identify seed datasets and used the VGG16 model for training, achieving an accuracy of 97%, which provides an important basis for ApricotView. To address the data collection difficulties bottlenecking apricot phenomics research, we developed the first apricot database platform of its kind (ApricotDIAP, http://apricotdiap.com/) to accumulate, manage, and publicize scientific data on apricot.
Keywords: apricot, variety, convolutional neural network, deep learning, database platform, mobile application, image retrieval
14. Downscaling Seasonal Precipitation Forecasts over East Africa with Deep Convolutional Neural Networks
Authors: Temesgen Gebremariam ASFAW, Jing-Jia LUO. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, No. 3, pp. 449-464.
This study assesses the suitability of convolutional neural networks (CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that compare different CNN configurations and deploy the best-performing architecture to downscale one-month lead seasonal forecasts of June–July–August–September (JJAS) precipitation from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0 (NUIST-CFS1.0) for 1982–2020. We also perform hyper-parameter optimization and introduce predictors over a larger area to include information about the main large-scale circulations that drive precipitation over the East Africa region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results show that the CNN-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme precipitation spatial patterns. In addition, CNN-based downscaling yields a much more accurate forecast of extreme and spell indicators and reduces the significant relative biases exhibited by the raw model predictions. Moreover, our results show that CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of East Africa. The results demonstrate the potential usefulness of CNNs in downscaling seasonal precipitation predictions over East Africa, particularly in providing improved forecast products which are essential for end users.
Keywords: East Africa, seasonal precipitation forecasting, downscaling, deep learning, convolutional neural networks (CNNs)
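Illustrative sketch (not the study's configuration): a CNN that maps coarse predictor fields to a finer precipitation grid, the basic downscaling pattern described above. Grid sizes, channel counts and the x4 upsampling factor are assumptions.

```python
# CNN downscaling sketch: coarse predictor fields in, finer precipitation
# grid out. Grid sizes, channels and the x4 upsampling are assumptions.
import torch
import torch.nn as nn

class DownscalingCNN(nn.Module):
    def __init__(self, n_predictors=5, scale=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_predictors, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=scale, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 3, padding=1), nn.ReLU())   # non-negative rainfall
    def forward(self, coarse_fields):
        return self.net(coarse_fields)

coarse = torch.randn(2, 5, 16, 16)        # e.g. model precipitation, winds, humidity
fine = DownscalingCNN()(coarse)
print(fine.shape)                         # torch.Size([2, 1, 64, 64])
```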
15. Application of deep learning methods combined with physical background in wide field of view imaging atmospheric Cherenkov telescopes
Authors: Ao-Yan Cheng, Hao Cai, Shi Chen, Tian-Lu Chen, Xiang Dong, You-Liang Feng, Qi Gao, Quan-Bu Gou, Yi-Qing Guo, Hong-Bo Hu, Ming-Ming Kang, Hai-Jin Li, Chen Liu, Mao-Yuan Liu, Wei Liu, Fang-Sheng Min, Chu-Cheng Pan, Bing-Qiang Qiao, Xiang-Li Qian, Hui-Ying Sun, Yu-Chang Sun, Ao-Bo Wang, Xu Wang, Zhen Wang, Guang-Guang Xin, Yu-Hua Yao, Qiang Yuan, Yi Zhang. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 4, pp. 208-220.
The High Altitude Detection of Astronomical Radiation (HADAR) experiment, which was constructed in Tibet, China, combines the wide-angle advantages of traditional EAS array detectors with the high-sensitivity advantages of focused Cherenkov detectors. Its objective is to observe transient sources such as gamma-ray bursts and the counterparts of gravitational waves. This study aims to utilize the latest AI technology to enhance the sensitivity of HADAR experiments. Training datasets and models with distinctive creativity were constructed by incorporating the relevant physical theories for various applications. These models can determine the type, energy, and direction of the incident particles after careful design. We obtained a background identification accuracy of 98.6%, a relative energy reconstruction error of 10.0%, and an angular resolution of 0.22° in a test dataset at 10 TeV. These findings demonstrate the significant potential for enhancing the precision and dependability of detector data analysis in astrophysical research. By using deep learning techniques, the HADAR experiment's observational sensitivity to the Crab Nebula has surpassed that of MAGIC and H.E.S.S. at energies below 0.5 TeV and remains competitive with conventional narrow-field Cherenkov telescopes at higher energies. In addition, our experiment offers a new approach for dealing with strongly connected, scattered data.
Keywords: VHE gamma-ray astronomy, HADAR, deep learning, convolutional neural networks
16. Exploring deep learning for landslide mapping: A comprehensive review
Authors: Zhi-qiang Yang, Wen-wen Qi, Chong Xu, Xiao-yi Shao. China Geology (CAS, CSCD), 2024, No. 2, pp. 330-350.
A detailed and accurate inventory map of landslides is crucial for quantitative hazard assessment and land planning. Traditional methods relying on change detection and object-oriented approaches have been criticized for their dependence on expert knowledge and subjective factors. Recent advancements in high-resolution satellite imagery, coupled with the rapid development of artificial intelligence, particularly data-driven deep learning (DL) algorithms such as convolutional neural networks (CNN), have provided rich feature indicators for landslide mapping, overcoming previous limitations. In this review paper, 77 representative DL-based landslide detection methods applied in various environments over the past seven years were examined. This study analyzed the structures of different DL networks, discussed five main application scenarios, and assessed both the advancements and limitations of DL in geological hazard analysis. The results indicated that the increasing number of articles per year reflects growing interest in landslide mapping by artificial intelligence, with U-Net-based structures gaining prominence due to their flexibility in feature extraction and generalization. Finally, we explored the hindrances of DL in landslide hazard research based on the above research content. Challenges such as black-box operations and sample dependence persist, warranting further theoretical research and future application of DL in landslide detection.
Keywords: landslide, mapping, quantitative hazard assessment, deep learning, artificial intelligence, neural network, big data, geological hazard survey engineering
17. Deep Learning for Financial Time Series Prediction: A State-of-the-Art Review of Standalone and Hybrid Models
Authors: Weisi Chen, Walayat Hussain, Francesco Cauteruccio, Xu Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 187-224.
Financial time series prediction, whether for classification or regression, has been a heated research topic over the last decade. While traditional machine learning algorithms have experienced mediocre results, deep learning has largely contributed to the elevation of the prediction performance. Currently, the most up-to-date review of advanced machine learning techniques for financial time series prediction is still lacking, making it challenging for finance domain experts and relevant practitioners to determine which model potentially performs better, what techniques and components are involved, and how the model can be designed and implemented. This review article provides an overview of techniques, components and frameworks for financial time series prediction, with an emphasis on state-of-the-art deep learning models in the literature from 2015 to 2023, including standalone models like convolutional neural networks (CNN) that are capable of extracting spatial dependencies within data, and long short-term memory (LSTM) that is designed for handling temporal dependencies; and hybrid models integrating CNN, LSTM, attention mechanism (AM) and other techniques. For illustration and comparison purposes, models proposed in recent studies are mapped to relevant elements of a generalized framework comprised of input, output, feature extraction, prediction, and related processes. Among the state-of-the-art models, hybrid models like CNN-LSTM and CNN-LSTM-AM in general have been reported superior in performance to stand-alone models like the CNN-only model. Some remaining challenges have been discussed, including non-friendliness for finance domain experts, delayed prediction, domain knowledge negligence, lack of standards, and inability of real-time and high-frequency predictions. The principal contributions of this paper are to provide a one-stop guide for both academia and industry to review, compare and summarize technologies and recent advances in this area, to facilitate smooth and informed implementation, and to highlight future research directions.
Keywords: financial time series prediction, convolutional neural network, long short-term memory, deep learning, attention mechanism, finance
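Illustrative sketch (not from the review): the CNN-LSTM-AM hybrid pattern it highlights, with a 1-D convolution extracting local patterns from a price window, an LSTM modelling temporal dependence, and a simple attention layer weighting time steps before the final prediction. All sizes are assumptions.

```python
# CNN-LSTM-attention hybrid sketch for one-step-ahead prediction.
# Window length, feature count and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class CNNLSTMAttention(nn.Module):
    def __init__(self, n_features=5, hid=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(16, hid, batch_first=True)
        self.attn = nn.Linear(hid, 1)          # scores each time step
        self.head = nn.Linear(hid, 1)          # next-step return/price
    def forward(self, x):                      # x: (batch, time, features)
        z = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(z)                    # (batch, time, hid)
        w = torch.softmax(self.attn(h), dim=1) # attention weights over time
        context = (w * h).sum(dim=1)           # weighted summary of the window
        return self.head(context)

window = torch.randn(8, 30, 5)                 # e.g. 30-day OHLCV windows
print(CNNLSTMAttention()(window).shape)        # torch.Size([8, 1])
```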
18. AutoRhythmAI: A Hybrid Machine and Deep Learning Approach for Automated Diagnosis of Arrhythmias
Authors: S. Jayanthi, S. Prasanna Devi. Computers, Materials & Continua (SCIE, EI), 2024, No. 2, pp. 2137-2158.
In healthcare, the persistent challenge of arrhythmias, a leading cause of global mortality, has sparked extensive research into the automation of detection using machine learning (ML) algorithms. However, traditional ML and AutoML approaches have revealed their limitations, notably regarding feature generalization and automation efficiency. This glaring research gap has motivated the development of AutoRhythmAI, an innovative solution that integrates both machine and deep learning to revolutionize the diagnosis of arrhythmias. Our approach encompasses two distinct pipelines tailored for binary-class and multi-class arrhythmia detection, effectively bridging the gap between data preprocessing and model selection. To validate our system, we have rigorously tested AutoRhythmAI using a multimodal dataset, surpassing the accuracy achieved using a single dataset and underscoring the robustness of our methodology. In the first pipeline, we employ signal filtering and ML algorithms for preprocessing, followed by data balancing and split for training. The second pipeline is dedicated to feature extraction and classification, utilizing deep learning models. Notably, we introduce the 'RRI-convoluted transformer model' as a novel addition for binary-class arrhythmias. An ensemble-based approach then amalgamates all models, considering their respective weights, resulting in an optimal model pipeline. In our study, the VGGRes Model achieved impressive results in multi-class arrhythmia detection, with an accuracy of 97.39% and firm performance in precision (82.13%), recall (31.91%), and F1-score (82.61%). In the binary-class task, the proposed model achieved an outstanding accuracy of 96.60%. These results highlight the effectiveness of our approach in improving arrhythmia detection, with notably high accuracy and well-balanced performance metrics.
Keywords: automated machine learning, neural networks, deep learning, arrhythmias
19. Model Agnostic Meta-Learning (MAML)-Based Ensemble Model for Accurate Detection of Wheat Diseases Using Vision Transformer and Graph Neural Networks
Authors: Yasir Maqsood, Syed Muhammad Usman, Musaed Alhussein, Khursheed Aurangzeb, Shehzad Khalid, Muhammad Zubair. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2795-2811.
Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. The presence of diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminishes wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises due to the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty in extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model Agnostic Meta Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model outperforms them in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Keywords: wheat disease detection, deep learning, vision transformer, graph neural network, model agnostic meta learning
20. Using deep neural networks coupled with principal component analysis for ore production forecasting at open-pit mines
Authors: Chengkai Fan, Na Zhang, Bei Jiang, Wei Victor Liu. Journal of Rock Mechanics and Geotechnical Engineering (SCIE, CSCD), 2024, No. 3, pp. 727-740.
Ore production is usually affected by multiple influencing inputs at open-pit mines. Nevertheless, the complex nonlinear relationships between these inputs and ore production remain unclear. This becomes even more challenging when training data (e.g., truck haulage information and weather conditions) are massive. Among machine learning (ML) algorithms, the deep neural network (DNN) is a superior method for processing nonlinear and massive data by adjusting the number of neurons and hidden layers. This study adopted a DNN to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data. Before the prediction models were built, principal component analysis (PCA) was employed to reduce the data dimensionality and eliminate the multicollinearity among highly correlated input variables. To verify the superiority of the DNN, three ANNs containing only one hidden layer and six traditional ML models were established as benchmark models. The DNN model with multiple hidden layers performed better than the ANN models with a single hidden layer, and it outperformed the extensively applied benchmark models in predicting ore production. This can provide engineers and researchers with an accurate method to forecast ore production, which helps make sound budgetary decisions and supports mine planning at open-pit mines.
Keywords: oil sands production, open-pit mining, deep learning, principal component analysis (PCA), artificial neural network, mining engineering
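Illustrative sketch (not the study's pipeline): PCA to remove multicollinearity among correlated inputs, followed by a multi-hidden-layer regressor, run on synthetic stand-ins for the truck haulage and weather data. The variance threshold and layer sizes are assumptions.

```python
# PCA + deep regressor sketch: reduce correlated inputs with PCA, then fit a
# multi-hidden-layer network on the retained components. Data are synthetic
# stand-ins; component threshold and layer sizes are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))                               # raw input variables
X[:, 10:] = X[:, :10] + 0.1 * rng.normal(size=(2000, 10))     # induce multicollinearity
y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=2000)        # stand-in "ore production"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pca = PCA(n_components=0.95).fit(X_tr)                        # keep 95% of the variance
dnn = MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=1000,
                   random_state=0).fit(pca.transform(X_tr), y_tr)
print("components kept:", pca.n_components_)
print("R^2 on held-out data:", dnn.score(pca.transform(X_te), y_te))
```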