Journal Articles
18,002 articles found
Application of Improved Deep Auto-Encoder Network in Rolling Bearing Fault Diagnosis (Cited by: 1)
1
Authors: Jian Di, Leilei Wang. Journal of Computer and Communications, 2018, Issue 7, pp. 41-53.
Since traditional bearing fault diagnosis methods are not highly effective at extracting fault features, a bearing fault diagnosis method based on a Deep Auto-encoder Network (DAEN) optimized by Cloud Adaptive Particle Swarm Optimization (CAPSO) was proposed. On the basis of analyzing CAPSO and DAEN, the CAPSO-DAEN fault diagnosis model is built. The model uses the randomness and stability of the CAPSO algorithm to optimize the connection weights of the DAEN, reducing the constraints on the weights and extracting fault features adaptively. Finally, efficient and accurate fault diagnosis can be implemented with a Softmax classifier. Test results show that, under appropriate parameters, the proposed method achieves higher diagnostic accuracy and more stable diagnosis results than methods based on the DAEN, Support Vector Machine (SVM), and the Back Propagation (BP) algorithm.
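The swarm-based weight search described above can be sketched as a plain particle swarm loop minimizing the reconstruction error of a toy one-layer tied-weight autoencoder (an illustrative sketch with standard PSO coefficients, not the paper's cloud-adaptive variant or its network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 samples, 8 features (stand-ins for vibration-signal features).
X = rng.normal(size=(20, 8))

def reconstruction_error(w, X, n_hidden=4):
    """Mean squared reconstruction error of a one-layer tied-weight autoencoder."""
    W = w.reshape(X.shape[1], n_hidden)   # encoder weights
    H = np.tanh(X @ W)                    # hidden code
    X_hat = H @ W.T                       # tied-weight decoder
    return np.mean((X - X_hat) ** 2)

# Plain PSO over the flattened weight vector.
n_particles, dim, iters = 15, 8 * 4, 40
pos = rng.normal(scale=0.1, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([reconstruction_error(p, X) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, 1))
    # Inertia + cognitive + social terms (hypothetical, commonly used coefficients).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    err = np.array([reconstruction_error(p, X) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print(float(pbest_err.min()))
```

A real CAPSO-DAEN would stack several such layers and pass the learned code to a Softmax classifier; the gradient-free search is what loosens the constraints on the weights.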
Keywords: Fault Diagnosis; Rolling Bearing; Deep Auto-encoder Network; CAPSO Algorithm; Feature Extraction
End-to-End Auto-Encoder System for Deep Residual Shrinkage Network for AWGN Channels
2
Authors: Wenhao Zhao, Shengbo Hu. Journal of Computer and Communications, 2023, Issue 5, pp. 161-176.
With the rapid development of deep learning methods, the data-driven approach has shown powerful advantages over the model-driven one. In this paper, we propose an end-to-end autoencoder communication system based on Deep Residual Shrinkage Networks (DRSNs), where deep neural networks (DNNs) are used to implement the coding, decoding, modulation and demodulation functions of the communication system. Our proposed autoencoder communication system can better reduce the signal noise by adding “attention mechanism” and “soft thresholding” modules and has better performance at various signal-to-noise ratios (SNRs). Also, we have shown through comparative experiments that the system can operate at moderate block lengths and support different throughputs, and it has been shown to work efficiently in the AWGN channel. Simulation results show that our model has a higher Bit-Error-Rate (BER) gain and greatly improved decoding performance compared to conventional modulation and classical autoencoder systems at various signal-to-noise ratios.
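The "soft thresholding" module at the heart of a shrinkage network is a one-line denoising operation; a minimal sketch (the per-channel attention score that a DRSN learns with small fully connected layers is replaced here by a hand-coded stand-in):

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft thresholding: shrink values toward zero by tau, zeroing small (noisy) entries."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def channel_attention_threshold(features):
    """Shrinkage-style block: threshold tau = mean(|x|) * sigmoid(score).
    The attention 'score' here is a stand-in (mean of |x|); a DRSN learns it."""
    abs_mean = np.abs(features).mean(axis=-1, keepdims=True)
    score = 1.0 / (1.0 + np.exp(-abs_mean))   # sigmoid keeps tau in (0, abs_mean)
    return soft_threshold(features, abs_mean * score)

x = np.array([-2.0, -0.1, 0.05, 1.5])
print(soft_threshold(x, 0.2))
```

The sigmoid-bounded threshold is what lets the network suppress noise adaptively per channel instead of using one fixed cutoff.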
Keywords: Deep Residual Shrinkage Network; Autoencoder; End-to-End Learning; Communication Systems
Hybrid model for BOF oxygen blowing time prediction based on oxygen balance mechanism and deep neural network (Cited by: 1)
3
Authors: Xin Shao, Qing Liu, Zicheng Xin, Jiangshan Zhang, Tao Zhou, Shaoshuai Li. International Journal of Minerals, Metallurgy and Materials (SCIE, EI, CSCD), 2024, Issue 1, pp. 106-117.
The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, which directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back propagation neural network, and the DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m³ is 96.67%; the determination coefficient (R²) and root mean square error (RMSE) are 0.6984 and 150.03 m³, respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R² and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
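Once the two volume estimates exist, the three-step procedure reduces to simple arithmetic; a minimal sketch with made-up numbers (the fixed blend weight and all values are hypothetical illustrations, not the paper's integration scheme or plant data):

```python
def oxygen_blowing_time(v_obm, v_dnn, supply_intensity_m3_per_min, w_dnn=0.6):
    """Step 2: blend the mechanism (OBM) and data-driven (DNN) volume estimates.
    Step 3: divide by the heat's oxygen supply intensity to get blowing time (min).
    The blend weight w_dnn is a hypothetical choice for illustration."""
    v_hat = w_dnn * v_dnn + (1.0 - w_dnn) * v_obm
    return v_hat, v_hat / supply_intensity_m3_per_min

# Step 1 (predicting v_obm and v_dnn) is assumed done; plug in example estimates.
volume, minutes = oxygen_blowing_time(v_obm=8200.0, v_dnn=8400.0,
                                      supply_intensity_m3_per_min=620.0)
print(round(volume, 1), round(minutes, 2))
```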
Keywords: basic oxygen furnace; oxygen consumption; oxygen blowing time; oxygen balance mechanism; deep neural network; hybrid model
Customized Convolutional Neural Network for Accurate Detection of Deep Fake Images in Video Collections (Cited by: 1)
4
Authors: Dmitry Gura, Bo Dong, Duaa Mehiar, Nidal Al Said. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 1995-2014.
The motivation for this study is that the quality of deep fakes is constantly improving, which leads to the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The Customized Convolutional Neural Network method is a data-augmentation-based CNN model used to generate 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were made up and the remaining 53 were real. Ten seconds were allotted for each video. There were 318 videos used in all, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deep fake face photos.
Keywords: deep fake detection; video analysis; convolutional neural network; machine learning; video dataset collection; facial landmark prediction; accuracy models
Probabilistic seismic inversion based on physics-guided deep mixture density network
5
Authors: Qian-Hao Sun, Zhao-Yun Zong, Xin Li. Petroleum Science (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 1611-1631.
Deterministic inversion based on deep learning has been widely utilized in model parameter estimation. Constrained by logging data, seismic data, the wavelet, and the modeling operator, deterministic inversion based on deep learning can establish nonlinear relationships between seismic data and model parameters. However, seismic data lacks low frequencies and contains noise, which increases the non-uniqueness of the solutions. Conventional inversion methods based on deep learning can only establish a deterministic relationship between seismic data and parameters and cannot quantify the uncertainty of the inversion. In order to quickly quantify the uncertainty, a physics-guided deep mixture density network (PG-DMDN) is established by combining a mixture density network (MDN) with a deep neural network (DNN). Compared with a Bayesian neural network (BNN) and network dropout, the PG-DMDN has lower computing cost and shorter training time. A low-frequency model is introduced in the training process of the network to help the network learn the nonlinear relationship between narrowband seismic data and low-frequency impedance. In addition, block constraints are added to the PG-DMDN framework to improve the horizontal continuity of the inversion results. To illustrate the benefits of the proposed method, the PG-DMDN is compared with an existing semi-supervised inversion method. Four synthetic data examples of the Marmousi II model are utilized to quantify the influence of the forward modeling part, the low-frequency model, noise, and the number of pseudo-wells on the inversion results, and to prove the feasibility and stability of the proposed method. In addition, the robustness and generality of the proposed method are verified on field seismic data.
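The MDN component that turns a point prediction into an uncertainty estimate can be sketched as a head mapping raw network outputs to Gaussian-mixture parameters (a generic MDN sketch with made-up numbers, not the PG-DMDN architecture):

```python
import numpy as np

def mdn_head(raw, n_components=3):
    """Map raw network outputs to mixture parameters: weights (softmax),
    means, and positive standard deviations (exp of log-sigma)."""
    logits, mu, log_sigma = np.split(raw, 3)
    pi = np.exp(logits - logits.max())   # numerically stable softmax
    pi /= pi.sum()
    return pi, mu, np.exp(log_sigma)

def mdn_pdf(y, pi, mu, sigma):
    """Predictive density at y: a weighted sum of Gaussians, i.e. the
    probabilistic output that quantifies inversion uncertainty."""
    comp = np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return float(np.sum(pi * comp))

# Pretend the network emitted these 9 values for one output (3 components).
raw = np.array([0.0, 0.5, -0.5, 1.0, 2.0, 3.0, 0.0, 0.1, 0.2])
pi, mu, sigma = mdn_head(raw)
print(round(float(pi.sum()), 6), mdn_pdf(2.0, pi, mu, sigma) > 0.0)
```

Training minimizes the negative log of this mixture likelihood instead of a plain squared error, which is what lets a single forward pass report a full distribution.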
Keywords: deep learning; probabilistic inversion; physics-guided deep mixture density network
A Multi-AGV Routing Planning Method Based on Deep Reinforcement Learning and Recurrent Neural Network
6
Authors: Yishuai Lin, Gang Hu, Liang Wang, Qingshan Li, Jiawei Zhu. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 7, pp. 1720-1722.
Dear Editor, This letter presents a multi-automated guided vehicle (AGV) routing planning method based on deep reinforcement learning (DRL) and a recurrent neural network (RNN), specifically utilizing proximal policy optimization (PPO) and long short-term memory (LSTM).
Keywords: network; AGV; deep
Strengthening network slicing for Industrial Internet with deep reinforcement learning
7
Authors: Yawen Tan, Jiadai Wang, Jiajia Liu. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 4, pp. 863-872.
The Industrial Internet combines industrial systems with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain. Its highly heterogeneous network structure and diversified application requirements call for the application of network slicing technology. Guaranteeing robust network slicing is essential for the Industrial Internet, but it faces the challenge of complex slice topologies caused by the intricate interaction relationships among the Network Functions (NFs) composing a slice. Existing works have not addressed the strengthening problem of industrial network slicing with regard to its complex network properties. Toward this end, we study this issue by intelligently selecting a subset of the most valuable NFs at minimum cost to satisfy the strengthening requirements. State-of-the-art AlphaGo-series algorithms and advanced graph neural network technology are combined to build the solution. Simulation results demonstrate the superior performance of our scheme compared to the benchmark schemes.
Keywords: Industrial Internet; network slicing; deep reinforcement learning; graph neural network
Self-potential inversion based on Attention U-Net deep learning network
8
Authors: GUO You-jun, CUI Yi-an, CHEN Hang, XIE Jing, ZHANG Chi, LIU Jian-xin. Journal of Central South University (SCIE, EI, CAS, CSCD), 2024, Issue 9, pp. 3156-3167.
Landfill leaks pose a serious threat to environmental health, risking the contamination of both groundwater and soil resources. Accurate investigation of these sites is essential for implementing effective prevention and control measures. The self-potential (SP) method stands out for its sensitivity to contamination plumes, offering a solution for monitoring and detecting the movement and seepage of subsurface pollutants. However, traditional SP inversion techniques heavily rely on precise subsurface resistivity information. In this study, we propose the Attention U-Net deep learning network for rapid SP inversion. By incorporating an attention mechanism, this algorithm effectively learns the relationship between array-style SP data and the location and extent of subsurface contamination sources. We designed a synthetic landfill model with a heterogeneous resistivity structure to assess the performance of the Attention U-Net deep learning network. Additionally, we conducted further validation using a laboratory model to assess its practical applicability. The results demonstrate that the algorithm is not solely dependent on resistivity information, enabling effective location of the source distribution, even in models with intricate subsurface structures. Our work provides a promising tool for SP data processing, enhancing the applicability of this method in the field of near-subsurface environmental monitoring.
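The attention mechanism in Attention U-Net variants is typically an additive attention gate applied to the skip connections; a minimal NumPy sketch of that gate (random weight matrices stand in for learned 1x1 convolutions, and the flattened spatial grid is a simplification):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(skip, gate, W_x, W_g, psi):
    """Additive attention gate: score each spatial position from the encoder
    skip features and the coarser gating signal, then reweight the skip path."""
    score = np.tanh(skip @ W_x + gate @ W_g)   # (positions, intermediate)
    alpha = sigmoid(score @ psi)               # (positions, 1), coefficients in (0, 1)
    return skip * alpha, alpha

rng = np.random.default_rng(1)
skip = rng.normal(size=(16, 8))   # 16 flattened positions, 8 channels from the encoder
gate = rng.normal(size=(16, 8))   # decoder gating signal, upsampled to the same grid
W_x = rng.normal(size=(8, 4))
W_g = rng.normal(size=(8, 4))
psi = rng.normal(size=(4, 1))

out, alpha = attention_gate(skip, gate, W_x, W_g, psi)
print(out.shape, float(alpha.min()) > 0.0)
```

The gate learns to pass through only the positions relevant to the target (here, the contamination source region), which is why the inversion depends less on resistivity priors.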
Keywords: self-potential; attention mechanism; U-Net deep learning network; inversion; landfill
An End-To-End Hyperbolic Deep Graph Convolutional Neural Network Framework
9
Authors: Yuchen Zhou, Hongtao Huo, Zhiwen Hou, Lingbin Bu, Yifan Wang, Jingyi Mao, Xiaojun Lv, Fanliang Bu. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 4, pp. 537-563.
Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures, and to utilize the multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. Additionally, we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework. We also present a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through the above methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, when compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
Keywords: graph neural networks; hyperbolic graph convolutional neural networks; deep graph convolutional neural networks; message passing framework
Diffraction deep neural network-based classification for vector vortex beams
10
Authors: 彭怡翔, 陈兵, 王乐, 赵生妹. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 3, pp. 387-392.
The vector vortex beam (VVB) has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications. However, a VVB is unavoidably affected by atmospheric turbulence (AT) when it propagates through a free-space optical communication environment, which results in detection errors at the receiver. In this paper, we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT, where a diffractive deep neural network (DDNN) is designed and trained to classify the intensity distribution of the input distorted VVBs, and the horizontal polarization direction of the input distorted beam is adopted as the feature for classification through the DDNN. Numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks. The energy distribution percentage remains above 95% from weak to medium AT, and the classification accuracy can remain above 95% for various strengths of turbulence. The scheme has faster convergence and better accuracy than one based on a convolutional neural network.
Keywords: vector vortex beam; diffractive deep neural network; classification; atmospheric turbulence
Network Security Enhanced with Deep Neural Network-Based Intrusion Detection System
11
Authors: Fatma S. Alrayes, Mohammed Zakariah, Syed Umar Amin, Zafar Iqbal Khan, Jehad Saad Alqurni. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1457-1490.
This study describes improving network security by implementing and assessing an intrusion detection system (IDS) based on deep neural networks (DNNs). The paper investigates contemporary technical approaches for enhancing intrusion detection performance, given the vital relevance of safeguarding computer networks against harmful activity. The DNN-based IDS is trained and validated using the NSL-KDD dataset, a popular benchmark for IDS research. The model performs well in both the training and validation stages, with 91.30% training accuracy and 94.38% validation accuracy. Thus, the model shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation. Furthermore, for both macro and micro averages across class 0 (normal) and class 1 (anomalous) data, the study evaluates the model using a variety of assessment measures, such as accuracy scores, precision, recall, and F1 scores. The macro-average recall is 0.9422, the macro-average precision is 0.9482, and the accuracy score is 0.942. Furthermore, macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model's ability to precisely identify anomalies. The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved by DNN-based intrusion detection systems, which can significantly improve network security. The study underscores the critical function of DNN-based IDS in contemporary cybersecurity procedures, setting the foundation for further developments in this field. Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.
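The per-class and macro-averaged metrics quoted above follow directly from a binary confusion matrix; a small sketch with made-up counts (illustrative numbers, not the NSL-KDD results):

```python
def binary_metrics(tp, fp, fn, tn):
    """Per-class precision/recall/F1 and macro averages for a binary IDS,
    where class 1 is 'anomalous' and class 0 is 'normal'."""
    def prf(tp_, fp_, fn_):
        p = tp_ / (tp_ + fp_)
        r = tp_ / (tp_ + fn_)
        return p, r, 2 * p * r / (p + r)
    p1, r1, f1_anom = prf(tp, fp, fn)   # class 1: positives are anomalies
    p0, r0, f1_norm = prf(tn, fn, fp)   # class 0: roles of errors swap
    return {
        "macro_precision": (p0 + p1) / 2,
        "macro_recall": (r0 + r1) / 2,
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "f1_per_class": (f1_norm, f1_anom),
    }

m = binary_metrics(tp=450, fp=30, fn=20, tn=500)
print(round(m["accuracy"], 4))
```

Macro averaging weights both classes equally regardless of their frequency, which is why it is reported separately from plain accuracy on imbalanced intrusion data.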
Keywords: machine learning; deep learning; intrusion detection system; security; privacy; deep neural network; NSL-KDD dataset
Geometric prior guided hybrid deep neural network for facial beauty analysis
12
Authors: Tianhao Peng, Mu Li, Fangmei Chen, Yong Xu, David Zhang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 2, pp. 467-480.
Facial beauty analysis is an important topic in human society. It may be used as guidance for face beautification applications such as cosmetic surgery. Deep neural networks (DNNs) have recently been adopted for facial beauty analysis and have achieved remarkable performance. However, most existing DNN-based models regard facial beauty analysis as a normal classification task. They ignore important prior knowledge from traditional machine learning models, which illustrates the significant contribution of geometric features to facial beauty analysis. To be specific, landmarks of the whole face and facial organs are introduced to extract geometric features for making the decision. Inspired by this, we introduce a novel dual-branch network for facial beauty analysis: one branch takes the Swin Transformer as the backbone to model the full face and global patterns, and another branch focuses on the masked facial organs with a residual network to model the local patterns of certain facial parts. Additionally, the designed multi-scale feature fusion module can further help our network learn complementary semantic information between the two branches. For model optimisation, we propose a hybrid loss function in which geometric regularisation, in particular, is introduced by regressing the facial landmarks; it forces the extracted features to convey facial geometric information. Experiments performed on the SCUT-FBP5500 dataset and the SCUT-FBP dataset demonstrate that our model outperforms state-of-the-art convolutional neural network models, which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network. To the best of our knowledge, this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
Keywords: deep neural networks; face analysis; face biometrics; image analysis
Dynamic Modeling of Robotic Manipulator via an Augmented Deep Lagrangian Network
13
Authors: Shuangshuang Wu, Zhiming Li, Wenbai Chen, Fuchun Sun. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 1604-1614.
Learning the accurate dynamics of robotic systems directly from trajectory data is currently a prominent research focus. Recent physics-enforced networks, exemplified by Hamiltonian neural networks and Lagrangian neural networks, demonstrate proficiency in modeling ideal physical systems, but face limitations when applied to systems with uncertain non-conservative dynamics due to the inherent constraints of their conservation-law foundation. In this paper, we present a novel augmented deep Lagrangian network, which seamlessly integrates a deep Lagrangian network with a standard deep network. This fusion aims to effectively model uncertainties that surpass the limitations of conventional Lagrangian mechanics. The proposed network is applied to learn the inverse dynamics models of two multi-degree manipulators, a 6-DoF UR-5 robot and a 7-DoF SARCOS manipulator, under uncertainties. The experimental results clearly demonstrate that our approach exhibits superior modeling precision and enhanced physical credibility.
Keywords: deep Lagrangian network; nonconservative dynamics; multi-degree manipulator; inverse dynamic modeling
Anomaly-Based Intrusion Detection Model Using Deep Learning for IoT Networks
14
Authors: Muaadh A. Alsoufi, Maheyzah Md Siraj, Fuad A. Ghaleb, Muna Al-Razgan, Mahfoudh Saeed Al-Asaly, Taha Alfakih, Faisal Saeed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 10, pp. 823-845.
The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine-learning approaches seem to lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. First, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a significant data representation by calculating the reconstruction error. Second, a Convolutional Neural Network (CNN) is employed to create a binary classification approach. The proposed SAE-CNN approach is validated using the Bot-IoT dataset. The proposed model exceeds the performance of the existing deep learning approaches in the literature with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, a False Positive Rate (FPR) of 0.0003, and a True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicate that SAE-CNN performs better.
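The SAE step, compressing each record to a low-dimensional code and scoring it by reconstruction error before the CNN stage, can be sketched as a single forward pass (untrained random weights and toy data for illustration; a real sparse autoencoder is trained with a sparsity penalty on the code):

```python
import numpy as np

rng = np.random.default_rng(42)

def encode_decode(x, W_enc, W_dec):
    """One autoencoder forward pass: compress to a nonnegative code, reconstruct."""
    h = np.maximum(x @ W_enc, 0.0)   # ReLU code; sparsity comes from training, not shown
    return h, h @ W_dec

# Toy 'traffic records': 100 samples, 12 features; the code feeds the CNN classifier.
X = rng.normal(size=(100, 12))
W_enc = rng.normal(scale=0.3, size=(12, 4))
W_dec = rng.normal(scale=0.3, size=(4, 12))

codes, X_hat = encode_decode(X, W_enc, W_dec)
recon_err = np.mean((X - X_hat) ** 2, axis=1)   # per-sample reconstruction error
print(codes.shape, recon_err.shape)
```

Records that reconstruct poorly (high per-sample error) are the ones least like the training distribution, which is why the reconstruction error itself is a useful anomaly signal.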
Keywords: IoT; anomaly intrusion detection; deep learning; sparse autoencoder; convolutional neural network
A generalized deep neural network approach for improving resolution of fluorescence microscopy images
15
Authors: Zichen Jin, Qing He, Yang Liu, Kaige Wang. Journal of Innovative Optical Health Sciences (SCIE, EI, CSCD), 2024, Issue 6, pp. 53-65.
Deep learning is capable of greatly promoting the progress of super-resolution imaging technology in terms of imaging and reconstruction speed, imaging resolution, and imaging flux. This paper proposes a deep neural network based on a generative adversarial network (GAN). The generator employs a U-Net-based network, which integrates DenseNet for the downsampling component. The proposed method has excellent properties: the network model is trained with several different datasets of biological structures; the trained model can improve the imaging resolution of different microscopy imaging modalities, such as confocal imaging and wide-field imaging; and the model demonstrates a generalized ability to improve the resolution of different biological structures even outside the datasets. In addition, experimental results showed that the method improved the resolution of caveolin-coated pits (CCPs) structures from 264 nm to 138 nm, a 1.91-fold increase, and nearly doubled the resolution of DNA molecules imaged while being transported through microfluidic channels.
Keywords: deep learning; super-resolution imaging; generalized model framework; generative adversarial networks; image reconstruction
Downscaling Seasonal Precipitation Forecasts over East Africa with Deep Convolutional Neural Networks
16
Authors: Temesgen Gebremariam ASFAW, Jing-Jia LUO. Advances in Atmospheric Sciences (SCIE, CAS, CSCD), 2024, Issue 3, pp. 449-464.
This study assesses the suitability of convolutional neural networks (CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that compare different CNN configurations and deploy the best-performing architecture to downscale one-month-lead seasonal forecasts of June-July-August-September (JJAS) precipitation from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0 (NUIST-CFS1.0) for 1982-2020. We also perform hyper-parameter optimization and introduce predictors over a larger area to include information about the main large-scale circulations that drive precipitation over the East Africa region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results show that the CNN-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme precipitation spatial patterns. Besides, CNN-based downscaling yields a much more accurate forecast of extreme and spell indicators and reduces the significant relative biases exhibited by the raw model predictions. Moreover, our results show that CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of East Africa. The results demonstrate the potential usefulness of CNNs in downscaling seasonal precipitation predictions over East Africa, particularly in providing improved forecast products, which are essential for end users.
Keywords: East Africa; seasonal precipitation forecasting; downscaling; deep learning; convolutional neural networks (CNNs)
Resource Allocation for Cognitive Network Slicing in PD-SCMA System Based on Two-Way Deep Reinforcement Learning
17
Authors: Zhang Zhenyu, Zhang Yong, Yuan Siyu, Cheng Zhenjie. China Communications (SCIE, CSCD), 2024, Issue 6, pp. 53-68.
In this paper, we propose a two-way Deep Reinforcement Learning (DRL)-based resource allocation algorithm, which solves the problem of resource allocation in a cognitive downlink network based on the underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the Quality of Service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Keywords: cognitive radio; deep reinforcement learning; network slicing; power-domain non-orthogonal multiple access; resource allocation
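The DDQN half of the scheme above rests on the Double-DQN target rule, in which the online network selects the next action and the target network evaluates it. A minimal NumPy sketch of that rule (generic, not the paper's codebook/power setup; batch values are hypothetical) is:

```python
import numpy as np

def double_dqn_targets(rewards, next_q_online, next_q_target, gamma=0.99, done=None):
    """Double-DQN targets: the online net picks the action, the target net scores it."""
    best_actions = np.argmax(next_q_online, axis=1)                   # action selection
    evaluated = next_q_target[np.arange(len(rewards)), best_actions]  # action evaluation
    if done is None:
        done = np.zeros_like(rewards)
    return rewards + gamma * (1 - done) * evaluated

rng = np.random.default_rng(1)
batch, n_actions = 4, 3
rewards = rng.uniform(0, 1, batch)
next_q_online = rng.normal(size=(batch, n_actions))  # online-network Q-values
next_q_target = rng.normal(size=(batch, n_actions))  # target-network Q-values
targets = double_dqn_targets(rewards, next_q_online, next_q_target)
print(targets.shape)  # (4,)
```

Decoupling selection from evaluation in this way counters the overestimation bias of vanilla Q-learning targets.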
Detection of Oscillations in Process Control Loops From Visual Image Space Using Deep Convolutional Networks
18
Authors: Tao Wang, Qiming Chen, Xun Lang, Lei Xie, Peng Li, Hongye Su. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 4, pp. 982–995 (14 pages)
Oscillation detection has been a hot research topic in industry due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation. However, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, we explore typical CNN models for visual oscillation detection. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well suited to oscillation detection. The feasibility and validity of this framework are verified on extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, the framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
Keywords: convolutional neural networks (CNNs); deep learning; image processing; oscillation detection; process industries
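The "visual" framing above presumes the loop signal is first rendered as an image a CNN can ingest. A minimal sketch of one way to rasterize a 1-D signal into a binary image (an assumed preprocessing step for illustration, not the paper's exact rendering; sizes and the test signal are hypothetical) is:

```python
import numpy as np

def signal_to_image(signal, height=64, width=64):
    """Rasterize a 1-D signal into a binary image, the 'visual' input a CNN would see."""
    n = len(signal)
    cols = np.minimum((np.arange(n) * width) // n, width - 1)
    lo, hi = signal.min(), signal.max()
    rows = ((signal - lo) / (hi - lo + 1e-12) * (height - 1)).astype(int)
    img = np.zeros((height, width), dtype=np.uint8)
    img[height - 1 - rows, cols] = 1  # flip so larger values appear higher in the image
    return img

# A noisy sinusoid standing in for an oscillating control-loop measurement.
t = np.linspace(0, 10, 1000)
oscillating = np.sin(2 * np.pi * 0.8 * t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
img = signal_to_image(oscillating)
print(img.shape, int(img.sum()) > 0)  # (64, 64) True
```

A periodic trace produces a visually repetitive stripe pattern in `img`, which is exactly the kind of feature image classifiers such as MobileNet or GhostNet pick up.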
Radar Signal Intra-Pulse Modulation Recognition Based on Deep Residual Network
19
Authors: Fuyuan Xu, Guangqing Shao, Jiazhan Lu, Zhiyin Wang, Zhipeng Wu, Shuhang Xia. Journal of Beijing Institute of Technology (EI, CAS), 2024, Issue 2, pp. 155–162 (8 pages)
In view of the low recognition rate of complex radar intra-pulse modulation signal types achieved by traditional methods under low signal-to-noise ratio (SNR), this paper proposes an automatic recognition method for complex radar intra-pulse modulation signal types based on a deep residual network. The basic principle of the method is to obtain the relationship between the time and frequency content of a complex radar intra-pulse modulation signal through the short-time Fourier transform (STFT), and then design an appropriate deep residual network to extract features from the time-frequency map and recognize a variety of complex intra-pulse modulation signal types. In addition, in order to improve the generalization ability of the proposed method, label smoothing and L2 regularization are introduced. Simulation results show that the proposed method achieves a recognition accuracy of more than 95% for complex radar intra-pulse modulation signal types under low SNR (2 dB).
Keywords: intra-pulse modulation; low signal-to-noise ratio; deep residual network; automatic recognition
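The STFT step described above can be sketched directly. The following minimal NumPy version (window length, hop, and chirp parameters are illustrative assumptions, not the paper's settings) builds a time-frequency map of a linear-FM test pulse and recovers its rising frequency ridge, the pattern a residual network would classify:

```python
import numpy as np

def stft_magnitude(x, win=128, hop=64):
    """Magnitude STFT via a sliding Hann window: rows = time frames, cols = frequency bins."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * window
        frames.append(np.abs(np.fft.rfft(seg)))
    return np.array(frames)

# Linear-FM (chirp) test pulse: frequency sweeps 50 -> 200 Hz over 1 s at fs = 1 kHz.
fs, T = 1000, 1.0
t = np.arange(int(fs * T)) / fs
x = np.cos(2 * np.pi * (50 * t + 0.5 * 150 * t ** 2))
tf_map = stft_magnitude(x)
ridge = tf_map.argmax(axis=1)  # dominant frequency bin in each frame
print(ridge[-1] > ridge[0])    # True: the ridge climbs, revealing the LFM sweep
```

Different intra-pulse modulations (LFM, phase codes, frequency hopping) leave distinct shapes in `tf_map`, which is why a 2-D image classifier is a natural fit.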
Energy-Efficient Traffic Offloading for RSMA-Based Hybrid Satellite Terrestrial Networks with Deep Reinforcement Learning
20
Authors: Qingmiao Zhang, Lidong Zhu, Yanyan Chen, Shan Jiang. China Communications (SCIE, CSCD), 2024, Issue 2, pp. 49–58 (10 pages)
As the demands of massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme rules out unsuccessful transmissions. Simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Keywords: deep reinforcement learning; energy efficiency; hybrid satellite terrestrial networks; rate splitting multiple access; traffic offloading
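To make the rate-splitting idea above concrete, here is a hedged two-user sketch of RSMA achievable rates with equal private powers and successive interference cancellation (a simplified textbook model with illustrative gains and powers, not the paper's optimization problem):

```python
import numpy as np

def rsma_two_user_rates(p_common, p_private, gains, noise=1.0):
    """Two-user RSMA rates: every user decodes the shared common stream first
    (all private streams act as interference), removes it via SIC, then decodes
    its own private stream with the other user's private stream as interference."""
    gains = np.asarray(gains, dtype=float)
    # The common rate is limited by the weaker user's ability to decode it.
    sinr_common = p_common * gains / (2 * p_private * gains + noise)
    r_common = np.log2(1 + sinr_common).min()
    # After SIC, only the other user's private stream and noise remain.
    sinr_private = p_private * gains / (p_private * gains + noise)
    r_private = np.log2(1 + sinr_private)
    return r_common, r_private

# Illustrative channel gains and powers for two users.
r_c, r_p = rsma_two_user_rates(p_common=4.0, p_private=1.0, gains=[0.9, 0.4])
print(r_c > 0 and np.all(r_p > 0))  # True
```

The system sum rate is `r_c + r_p.sum()`; a DRL agent of the kind described above would tune the power split (and, here, the satellite/terrestrial offloading decision) to maximize energy efficiency subject to QoS.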