Journal Articles
126,166 articles found
1. NeurstrucEnergy: A bi-directional GNN model for energy prediction of neural networks in IoT
Authors: Chaopeng Guo, Zhaojin Zhong, Zexin Zhang, Jie Song. Digital Communications and Networks, SCIE CSCD, 2024, No. 2, pp. 439-449 (11 pages)
With successful deep learning applications in IoT and edge computing, demand is rising for energy-efficient deep neural networks that can run on power-limited embedded devices. An accurate energy prediction approach is critical for measurement and for guiding optimization. However, current energy prediction approaches lack accuracy and generalization ability because the neural network structure is under-studied and they rely excessively on customized training datasets. This paper presents a novel energy prediction model, NeurstrucEnergy. NeurstrucEnergy treats neural networks as directed graphs and trains a bi-directional graph neural network on a randomly generated dataset to extract structural features for energy prediction. NeurstrucEnergy has advantages over linear approaches because the bi-directional graph neural network collects structural features from each layer's parents and children. Experimental results show that NeurstrucEnergy establishes state-of-the-art results with a mean absolute percentage error of 2.60%. We also evaluate NeurstrucEnergy on a further dataset, achieving a mean absolute percentage error of 4.83% over 10 typical convolutional neural networks from recent years and 7 efficient convolutional neural networks created by neural architecture search. Our code is available at https://github.com/NEUSoftGreenAI/NeurstrucEnergy.git.
Keywords: Internet of things, neural network energy prediction, graph neural networks, graph structure embedding, multi-head attention
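The abstract's core idea, gathering structural features from each layer's parents and children in a directed layer graph, can be sketched minimally in plain Python. The toy layer graph, scalar features, and mean aggregator below are illustrative assumptions, not the paper's actual model:

```python
# Minimal sketch of bi-directional feature aggregation on a layer graph.
# Layer names, feature values, and the mean aggregator are assumptions.

def bidirectional_aggregate(features, edges):
    """For each node, pair its own feature with the mean feature of its
    parents (incoming edges) and of its children (outgoing edges)."""
    parents = {n: [] for n in features}
    children = {n: [] for n in features}
    for src, dst in edges:
        children[src].append(dst)
        parents[dst].append(src)

    def mean(nodes):
        return sum(features[n] for n in nodes) / len(nodes) if nodes else 0.0

    return {n: (features[n], mean(parents[n]), mean(children[n]))
            for n in features}

# A toy three-layer network: conv -> relu -> fc
feats = {"conv": 1.0, "relu": 2.0, "fc": 3.0}
edges = [("conv", "relu"), ("relu", "fc")]
agg = bidirectional_aggregate(feats, edges)
print(agg["relu"])  # (2.0, 1.0, 3.0): own feature, parent mean, child mean
```

In the paper the aggregated per-layer features feed an energy regressor; here the tuple merely shows that information flows in from both directions of each edge.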
2. HGNN-ETC: Higher-Order Graph Neural Network Based on Chronological Relationships for Encrypted Traffic Classification
Authors: Rongwei Yu, Xiya Guo, Peihao Zhang, Kaijuan Zhang. Computers, Materials & Continua, SCIE EI, 2024, No. 11, pp. 2643-2664 (22 pages)
Encrypted traffic plays a crucial role in safeguarding network security and user privacy. However, encrypting malicious traffic can lead to numerous security issues, making the effective classification of encrypted traffic essential. Existing methods for detecting encrypted traffic face two significant challenges. First, relying solely on the original byte information for classification fails to leverage the rich temporal relationships within network traffic. Second, machine learning and convolutional neural network methods lack sufficient network expression capabilities, hindering the full exploration of traffic's potential characteristics. To address these limitations, this study introduces a traffic classification method that utilizes temporal relationships and a higher-order graph neural network, termed HGNN-ETC. This approach fully exploits the original byte information and chronological relationships of traffic packets, transforming traffic data into a graph structure to provide the model with more comprehensive context information. HGNN-ETC employs an innovative k-dimensional graph neural network to effectively capture the multi-scale structural features of traffic graphs, enabling more accurate classification. We selected the ISCXVPN and USTC-TK2016 datasets for our experiments. The results show that, compared with other state-of-the-art methods, our method obtains a better classification effect on different datasets, with an accuracy rate of about 97.00%. In addition, by analyzing the impact of varying input specifications on classification performance, we determine the optimal network data truncation strategy and confirm the model's excellent generalization ability on different datasets.
Keywords: Encrypted network traffic, graph neural network, traffic classification, deep learning
3. A Denoiser for Correlated Noise Channel Decoding: Gated-Neural Network
Authors: Xiao Li, Ling Zhao, Zhen Dai, Yonggang Lei. China Communications, SCIE CSCD, 2024, No. 2, pp. 122-128 (7 pages)
This letter proposes a sliced-gated-convolutional neural network with belief propagation (SGCNN-BP) architecture for decoding long codes under correlated noise. The basic idea of SGCNN-BP is to use neural networks (NN) to transform the correlated noise into white noise, setting up the optimal condition for a standard BP decoder that takes the output from the NN. A gate-controlled neuron is used to regulate information flow, and an optional operation, slicing, is adopted to reduce parameters and lower training complexity. Simulation results show that SGCNN-BP performs much better than a single BP decoder (with the largest gap being a 5 dB improvement) and achieves a nearly 1 dB improvement over fully convolutional networks (FCN).
Keywords: belief propagation, channel decoding, correlated noise, neural network
4. Performance of physical-informed neural network (PINN) for the key parameter inference in Langmuir turbulence parameterization scheme
Authors: Fangrui Xiu, Zengan Deng. Acta Oceanologica Sinica, SCIE CAS CSCD, 2024, No. 5, pp. 121-132 (12 pages)
The Stokes production coefficient (E6) constitutes a critical parameter within Mellor-Yamada type (MY-type) Langmuir turbulence (LT) parameterization schemes, significantly affecting the simulation of turbulent kinetic energy, turbulent length scale, and the vertical diffusivity coefficient for turbulent kinetic energy in the upper ocean. However, the accurate determination of its value remains a pressing scientific challenge. This study adopted an innovative approach, leveraging deep learning to infer E6. By integrating the information of the turbulent length scale equation into a physics-informed neural network (PINN), we achieved an accurate and physically meaningful inference of E6. Multiple cases were examined to assess the feasibility of PINN for this task, revealing that under optimal settings the average mean squared error of the E6 inference was only 0.01, attesting to the effectiveness of PINN. The optimal hyperparameter combination was the Tanh activation function together with a spatiotemporal sampling interval of 1 s and 0.1 m, which reduced the average bias of the E6 inference by a factor of O(10^1) to O(10^2) compared with other combinations. This study underscores the potential application of PINN in intricate marine environments, offering a novel and efficient method for optimizing MY-type LT parameterization schemes.
Keywords: Langmuir turbulence, physics-informed neural network, parameter inference
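The inference idea described above, treating an unknown physical coefficient as a trainable parameter and driving the residual of the governing equation to zero on observed data, can be illustrated on a toy problem. The ODE dy/dt = E*y, the true value E = 0.5, and the plain gradient-descent loop are assumptions for illustration; the paper infers E6 within a turbulence closure using a full PINN:

```python
import math

# Hedged sketch: fit an unknown coefficient E so that the physics residual
# dy/dt - E*y vanishes on data. Finite differences stand in for automatic
# differentiation; everything here is a toy stand-in for the paper's PINN.
def infer_coefficient(ts, ys, lr=0.1, steps=500):
    E = 0.0  # initial guess for the unknown coefficient
    for _ in range(steps):
        grad = 0.0
        for i in range(len(ts) - 1):
            dydt = (ys[i + 1] - ys[i]) / (ts[i + 1] - ts[i])  # finite difference
            residual = dydt - E * ys[i]      # physics residual at point i
            grad += -2.0 * residual * ys[i]  # d(residual^2)/dE
        E -= lr * grad / (len(ts) - 1)       # descend mean squared residual
    return E

ts = [i * 0.01 for i in range(100)]
ys = [math.exp(0.5 * t) for t in ts]  # data generated with true E = 0.5
print(round(infer_coefficient(ts, ys), 2))  # → 0.5
```

The same pattern scales up: in a real PINN the coefficient is one more trainable variable alongside the network weights, and the residual is evaluated by automatic differentiation instead of finite differences.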
5. MetaPINNs: Predicting soliton and rogue wave of nonlinear PDEs via the improved physics-informed neural networks based on meta-learned optimization
Authors: 郭亚楠, 曹小群, 宋君强, 冷洪泽. Chinese Physics B, SCIE EI CAS CSCD, 2024, No. 2, pp. 96-107 (12 pages)
Efficiently solving partial differential equations (PDEs) is a long-standing challenge in mathematics and physics research. In recent years, the rapid development of artificial intelligence has brought deep learning-based methods to the forefront of research on numerical methods for PDEs. Among them, physics-informed neural networks (PINNs) are a new class of deep learning methods that show great potential for solving PDEs and predicting complex physical phenomena. In the field of nonlinear science, solitary waves and rogue waves have been important research topics. In this paper, we propose an improved PINN that enhances the physical constraints of the neural network model by adding gradient information constraints. In addition, we employ meta-learning optimization to speed up the training process. We apply the improved PINNs to the numerical simulation and prediction of solitary and rogue waves, and evaluate the accuracy of the predictions by error analysis. The experimental results show that the improved PINNs make more accurate predictions in less time than the original PINNs.
Keywords: physics-informed neural networks, gradient-enhanced loss function, meta-learned optimization, nonlinear science
6. Dynamic interwell connectivity analysis of multi-layer waterflooding reservoirs based on an improved graph neural network
Authors: Zhao-Qin Huang, Zhao-Xu Wang, Hui-Fang Hu, Shi-Ming Zhang, Yong-Xing Liang, Qi Guo, Jun Yao. Petroleum Science, SCIE EI CAS CSCD, 2024, No. 2, pp. 1062-1080 (19 pages)
The analysis of interwell connectivity plays an important role in the formulation of oilfield development plans and the description of residual oil distribution. In fact, sandstone reservoirs in China's onshore oilfields generally consist of many thin layers, so multi-layer joint production is usually adopted. It remains a challenge to ensure the accuracy of splitting and dynamic connectivity in each layer of the injection-production wells with limited field data. The three-dimensional well pattern of a multi-layer reservoir and the relationship between injection-production wells can be represented as a directional heterogeneous graph. In this paper, an improved graph neural network is proposed to construct an interacting process that mimics the real interwell flow regularity. In detail, the method splits injection and production rates by combining permeability, porosity, and effective thickness, and inverts the dynamic connectivity in each layer of the injection-production wells with an attention mechanism. Based on material balance and physical information, the overall connectivity from the injection wells, through the water injection layers, to the production layers and the output of the final production wells is established. Meanwhile, changes in the well pattern caused by perforation, plugging, and switching of wells at different times are captured by updating the graph structure in spatial and temporal ways. The effectiveness of the method is verified by a combination of reservoir numerical simulation examples and a field example. The method corresponds to the actual situation of the reservoir, has wide adaptability, low cost, and good practical value, and provides a reference for adjusting the injection-production relationship of the reservoir and the development of the remaining oil.
Keywords: Graph neural network, dynamic interwell connectivity, production-injection splitting, attention mechanism, multi-layer reservoir
7. An intelligent control method based on artificial neural network for numerical flight simulation of the basic finner projectile with pitching maneuver
Authors: Yiming Liang, Guangning Li, Min Xu, Junmin Zhao, Feng Hao, Hongbo Shi. Defence Technology, SCIE EI CAS CSCD, 2024, No. 2, pp. 663-674 (12 pages)
In this paper, an intelligent control method applied to numerical virtual flight is proposed. The proposed algorithm is verified and evaluated on the case of the basic finner projectile model and shows good application prospects. Firstly, a numerical virtual flight simulation model based on overlapping dynamic mesh technology is constructed. In order to verify the accuracy of the dynamic grid technology and the calculation of unsteady flow, a numerical simulation of the basic finner projectile without control is carried out. The simulation results are in good agreement with the experimental data, which shows that the algorithm used in this paper can also be used in the design and evaluation of an intelligent controller in numerical virtual flight simulation. Secondly, combined with the real-time control requirements on the aerodynamic, attitude, and displacement parameters of the projectile during flight, numerical simulations of the basic finner projectile's pitch channel are carried out under a traditional PID (Proportional-Integral-Derivative) control strategy and an intelligent PID control strategy, respectively. The intelligent PID controller, based on a BP (Back Propagation) neural network, can realize online learning and self-optimization of control parameters according to the acquired real-time flight parameters. Compared with the traditional PID controller, performance indicators such as overshoot of the controlled variable, rise time, transition time, and steady-state error are greatly improved; moreover, the higher the learning efficiency or the inertia coefficient, the faster the system responds, the larger the overshoot, and the smaller the steady-state error. The intelligent control method applied to numerical virtual flight is capable of solving complicated unsteady motion and flow with the intelligent PID control strategy and has strong potential for engineering application.
Keywords: Numerical virtual flight, intelligent control, BP neural network, PID, moving chimera grid
8. TCAS-PINN: Physics-informed neural networks with a novel temporal causality-based adaptive sampling method
Authors: 郭嘉, 王海峰, 古仕林, 侯臣平. Chinese Physics B, SCIE EI CAS CSCD, 2024, No. 5, pp. 344-364 (21 pages)
Physics-informed neural networks (PINNs) have become an attractive machine learning framework for obtaining solutions to partial differential equations (PDEs). PINNs embed initial, boundary, and PDE constraints into the loss function. The performance of PINNs is generally affected by both training and sampling. Specifically, training methods focus on how to overcome the training difficulties caused by the special PDE residual loss of PINNs, and sampling methods are concerned with the location and distribution of the sampling points at which the PDE residual loss is evaluated. However, a common problem among these original PINNs is that they omit special temporal information during the training or sampling stages when dealing with an important PDE category, namely, time-dependent PDEs, where temporal information plays a key role. One method, Causal PINN, considers temporal causality at the training level but not at the sampling level; incorporating temporal knowledge into sampling remains to be studied. To fill this gap, we propose a novel temporal causality-based adaptive sampling method that dynamically determines the sampling ratio according to both the PDE residual and temporal causality. By designing a sampling ratio determined by both residual loss and temporal causality to control the number and location of sampled points in each temporal sub-domain, we provide a practical solution for incorporating temporal information into sampling. Numerical experiments on several nonlinear time-dependent PDEs, including the Cahn-Hilliard, Korteweg-de Vries, Allen-Cahn, and wave equations, show that the proposed sampling method improves performance. We demonstrate that this relatively simple sampling method can improve prediction performance by up to two orders of magnitude compared with other methods, especially when sampling points are limited.
Keywords: partial differential equation, physics-informed neural networks, residual-based adaptive sampling, temporal causality
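The sampling ratio described in this abstract, proportional to each temporal sub-domain's residual but discounted by the accumulated residual of earlier sub-domains, can be sketched as follows. The exponential causal weight and the eps value follow the Causal PINN convention; the exact formula used by TCAS-PINN may differ:

```python
import math

# Illustrative causality-weighted sampling ratios over temporal sub-domains:
# a sub-domain receives samples in proportion to its residual, discounted by
# how much residual remains unresolved at earlier times.
def sampling_ratios(residuals, eps=1.0):
    weights, cum = [], 0.0
    for r in residuals:
        weights.append(r * math.exp(-eps * cum))  # discount by earlier residual
        cum += r
    total = sum(weights)
    return [w / total for w in weights]

# Two equally large residuals: the earlier one gets more sample points.
ratios = sampling_ratios([0.1, 1.0, 1.0])
print(ratios[1] > ratios[2])  # True: later sub-domain is discounted
```

The discount enforces the causal intuition that points at later times are only worth sampling heavily once earlier times are already well resolved.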
9. Solar Radiation Estimation Based on a New Combined Approach of Artificial Neural Networks (ANN) and Genetic Algorithms (GA) in South Algeria
Authors: Djeldjli Halima, Benatiallah Djelloul, Ghasri Mehdi, Tanougast Camel, Benatiallah Ali, Benabdelkrim Bouchra. Computers, Materials & Continua, SCIE EI, 2024, No. 6, pp. 4725-4740 (16 pages)
When designing solar systems and assessing the effectiveness of their many uses, estimating solar irradiance is a crucial first step. This study examined three approaches (ANN, GA-ANN, and ANFIS) for estimating daily global solar radiation (GSR) in the south of Algeria: Adrar, Ouargla, and Bechar. The proposed hybrid GA-ANN model, based on genetic algorithm optimization, was developed to improve the ANN model. The GA-ANN and ANFIS models performed better than the standalone ANN-based model, with GA-ANN better suited for forecasting at all sites; in the testing phase it achieved a coefficient of determination R = 0.9005, a mean absolute percentage error MAPE = 8.40%, and a relative root mean square error rRMSE = 12.56%. Nevertheless, the ANFIS model outperformed the GA-ANN model in forecasting daily GSR, with the best test values being R = 0.9374, MAPE = 7.78%, and rRMSE = 10.54%. Generally, we may conclude that the performance of the initial stand-alone ANN model for forecasting solar radiation has been improved, and the results obtained after injecting the genetic algorithm into the ANN to optimize its weights were satisfactory. The model can be used to forecast daily GSR in dry and other climates and may also be helpful in selecting solar energy system installations and sizes.
Keywords: Solar energy systems, genetic algorithm, neural networks, hybrid adaptive neuro-fuzzy inference system, solar radiation
10. Convergence of Hyperbolic Neural Networks Under Riemannian Stochastic Gradient Descent
Authors: Wes Whiting, Bao Wang, Jack Xin. Communications on Applied Mathematics and Computation, EI, 2024, No. 2, pp. 1175-1188 (14 pages)
We prove, under mild conditions, the convergence of a Riemannian gradient descent method for a hyperbolic neural network regression model, in both batch gradient descent and stochastic gradient descent. We also discuss a Riemannian version of the Adam algorithm, and we show numerical simulations of these algorithms on various benchmarks.
Keywords: Hyperbolic neural network, Riemannian gradient descent, Riemannian Adam (RAdam), training convergence
11. HQNN-SFOP: Hybrid Quantum Neural Networks with Signal Feature Overlay Projection for Drone Detection Using Radar Return Signals - A Simulation
Authors: Wenxia Wang, Jinchen Xu, Xiaodong Ding, Zhihui Song, Yizhen Huang, Xin Zhou, Zheng Shan. Computers, Materials & Continua, SCIE EI, 2024, No. 10, pp. 1363-1390 (28 pages)
With the wide application of drone technology, there is an increasing demand for the detection of radar return signals from drones. Existing detection methods mainly rely on time-frequency domain feature extraction and classical machine learning algorithms for image recognition. These methods suffer from the large dimensionality of image features, which leads to large input data size and noise affecting learning. Therefore, this paper proposes to extract time-domain statistical features from drone radar return signals, reducing the feature dimension from 512×4 to 16. However, the downscaled feature data decreases the accuracy of traditional machine learning algorithms, so we propose a new hybrid quantum neural network with signal feature overlay projection (HQNN-SFOP), which reduces the dimensionality of the signal by extracting statistical features in the time domain, introduces signal feature overlay projection to enhance the expressive ability of quantum computation on the signal features, and introduces quantum circuits to improve the neural network's ability to capture the inline relationships of features, thus improving the accuracy and migration generalization ability of drone detection. To validate the effectiveness of the proposed method, we experimented with the MM model, which combines the real parameters of five commercial drones with random drone parameters to generate data simulating a realistic environment. The results show that the method based on time-domain statistical features is able to extract features at smaller scales and obtain higher accuracy on a dataset with an SNR of 10 dB. On the time-domain feature dataset, HQNN-SFOP obtains the highest accuracy compared with other conventional methods. In addition, HQNN-SFOP has good migration generalization ability on the five commercial drones and random drone data under different SNR conditions. Our method verifies the feasibility and effectiveness of signal detection methods based on quantum computation and experimentally demonstrates that the advantages of quantum computation for information processing remain valid in the field of signal processing, providing a highly efficient method for drone detection using radar return signals.
Keywords: Quantum computing, hybrid quantum neural network, drone detection using radar signals, time-domain features
12. APSO-CNN-SE: An Adaptive Convolutional Neural Network Approach for IoT Intrusion Detection
Authors: Yunfei Ban, Damin Zhang, Qing He, Qianwen Shen. Computers, Materials & Continua, SCIE EI, 2024, No. 10, pp. 567-601 (35 pages)
The surge in connected devices and massive data aggregation has expanded the scale of Internet of Things (IoT) networks. The proliferation of unknown attacks and related risks, such as zero-day attacks and Distributed Denial of Service (DDoS) attacks triggered by botnets, has resulted in information leakage and property damage. Therefore, developing an efficient and realistic intrusion detection system (IDS) is critical for ensuring IoT network security. In recent years, traditional machine learning techniques have struggled to learn the complex associations between multidimensional features in network traffic, and the excellent performance of deep learning techniques, as an advanced branch of machine learning, has led to their widespread application in intrusion detection. In this paper, we propose an Adaptive Particle Swarm Optimization Convolutional Neural Network Squeeze-and-Excitation (APSO-CNN-SE) model for IoT network intrusion detection. A 2D CNN backbone is initially constructed to extract spatial features from network traffic. Subsequently, a squeeze-and-excitation channel attention mechanism is introduced and embedded into the CNN to focus on critical feature channels. Lastly, the weights and biases in the CNN-SE are extracted to initialize the population individuals of the APSO. As the number of iterations increases, the population's position vector is continuously updated, and the cross-entropy loss function value is minimized to produce the ideal network architecture. We evaluated the models experimentally using binary and multiclass classification on the UNSW-NB15 and NSL-KDD datasets, comparing and analyzing the evaluation metrics derived from each model. Compared to the base CNN model, the results demonstrate that APSO-CNN-SE enhances binary classification detection accuracy by 1.84% and 3.53% and multiclass classification detection accuracy by 1.56% and 2.73% on the two datasets, respectively. Additionally, the model outperforms existing models such as DT, KNN, LR, SVM, and LSTM in terms of accuracy and fitting performance. This means that the model can identify potential attacks or anomalies more precisely, improving the overall security and stability of the IoT environment.
Keywords: Intrusion detection system, internet of things, convolutional neural network, channel attention mechanism, adaptive particle swarm optimization
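The optimization step in this abstract, a particle swarm searching over network weights to minimize a loss, can be sketched in its generic form. The sphere loss, swarm size, and coefficients below are illustrative assumptions, not the paper's adaptive APSO settings:

```python
import random

# Toy particle swarm optimization: particles are candidate weight vectors,
# and the swarm minimizes a loss function. Parameters are assumptions.
def pso(loss, dim=2, particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]        # per-particle best positions
    gbest = min(pbest, key=loss)[:]   # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]                                # inertia
                            + 1.5 * rng.random() * (pbest[i][d] - x[d])   # cognitive
                            + 1.5 * rng.random() * (gbest[d] - x[d]))     # social
                x[d] += vs[i][d]
            if loss(x) < loss(pbest[i]):
                pbest[i] = x[:]
                if loss(x) < loss(gbest):
                    gbest = x[:]
    return gbest

# Minimize the sphere function; the swarm should settle near the origin.
best = pso(lambda w: sum(c * c for c in w))
print(sum(c * c for c in best) < 1e-3)
```

In the paper's setting, the position vector would hold CNN-SE weights and biases and the loss would be the cross-entropy on training data rather than this analytic test function.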
13. Fast solution to the free return orbit's reachable domain of the manned lunar mission by deep neural network
Authors: YANG Luyi, LI Haiyang, ZHANG Jin, ZHU Yuehe. Journal of Systems Engineering and Electronics, SCIE CSCD, 2024, No. 2, pp. 495-508 (14 pages)
It is important to calculate the reachable domain (RD) of a manned lunar mission to evaluate whether a lunar landing site can be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via classification and regression neural networks. An efficient database-generation method is developed for obtaining eight types of free return orbits, and the RD is then defined by the orbit's inclination and right ascension of ascending node (RAAN) at the perilune. A classification neural network and a regression network are trained respectively: the former classifies the type of the RD, and the latter calculates the inclination and RAAN of the RD. The simulation results show that both neural networks are well trained: the classification model has an accuracy of more than 99%, and the mean square error of the regression model is less than 0.01° on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models, and a recognition tool is built to evaluate whether a lunar site can be reached. The proposed deep learning method shows superior computational efficiency compared with the traditional double two-body model.
Keywords: manned lunar mission, free return orbit, reachable domain (RD), deep neural network, computation efficiency
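The serial strategy described above, a classifier picking the reachable-domain type and a per-type regressor then producing (inclination, RAAN), reduces to a simple two-stage pipeline. The rule-based "models" below are pure stand-ins for the trained networks; thresholds and coefficients are assumptions:

```python
# Sketch of the serial surrogate strategy: classify first, then regress with
# the model that matches the predicted type. All numbers are illustrative.
def classify(x):
    return "type_A" if x < 0.5 else "type_B"   # stand-in classification network

REGRESSORS = {                                  # stand-in per-type regressors
    "type_A": lambda x: (30.0 + 10.0 * x, 100.0 * x),
    "type_B": lambda x: (60.0 + 10.0 * x, 200.0 * x),
}

def reachable_domain(x):
    kind = classify(x)                 # stage 1: which RD type applies
    inc, raan = REGRESSORS[kind](x)    # stage 2: inclination and RAAN for it
    return kind, inc, raan

print(reachable_domain(0.25))  # ('type_A', 32.5, 25.0)
```

Chaining the two surrogates this way keeps each model simple: the regressor never has to learn the discontinuities between orbit types, because the classifier routes around them.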
14. Calculating real-time surface deformation for large active surface radio antennas using a graph neural network
Authors: Zihan Zhang, Qian Ye, Li Fu, Qinghui Liu, Guoxiang Meng. Astronomical Techniques and Instruments, CSCD, 2024, No. 5, pp. 267-274 (8 pages)
This paper presents an innovative surrogate modeling method using a graph neural network to compensate for gravitational and thermal deformation in large radio telescopes. Traditionally, rapid compensation is feasible for gravitational deformation but not for temperature-induced deformation. The introduced method facilitates real-time calculation of deformation caused by both gravity and temperature. Constructing the surrogate model involves two key steps. First, the gravitational and thermal loads are encoded, which facilitates more efficient learning for the neural network. This is followed by employing a graph neural network as an end-to-end model, which effectively maps external loads to deformation while preserving the spatial correlations between nodes. Simulation results affirm that the proposed method can successfully estimate the surface deformation of the main reflector in real time and can deliver results that are practically indistinguishable from those obtained using finite element analysis. We also compare the proposed surrogate model with the out-of-focus holography method and obtain similar results.
Keywords: Large radio telescope, surface deformation, surrogate model, graph neural network
15. Effects of data smoothing and recurrent neural network (RNN) algorithms for real-time forecasting of tunnel boring machine (TBM) performance
Authors: Feng Shan, Xuzhen He, Danial Jahed Armaghani, Daichao Sheng. Journal of Rock Mechanics and Geotechnical Engineering, SCIE CSCD, 2024, No. 5, pp. 1538-1551 (14 pages)
Tunnel boring machines (TBMs) have been widely utilised in tunnel construction due to their high efficiency and reliability. Accurately predicting TBM performance can improve project time management, cost control, and risk management. This study uses deep learning to develop real-time models for predicting the penetration rate (PR). The models are built using data from the Changsha metro project, and their performance is evaluated using unseen data from the Zhengzhou metro project. In one-step forecasting, the predicted penetration rate follows the trend of the measured penetration rate in both training and testing. The autoregressive integrated moving average (ARIMA) model is compared with the recurrent neural network (RNN) model. The results show that univariate models, which consider only the historical penetration rate itself, perform better than multivariate models that take into account multiple geological and operational parameters (GEO and OP). Next, an RNN variant combining the penetration rate time series with the last-step geological and operational parameters is developed, and it performs better than the other models. A sensitivity analysis shows that the penetration rate is the most important parameter, while other parameters have a smaller impact on time series forecasting. It is also found that smoothed data are easier to predict with high accuracy; nevertheless, over-simplified data can lose the real characteristics of the time series. In conclusion, the RNN variant can accurately predict the next-step penetration rate, and data smoothing is crucial in time series forecasting. This study provides practical guidance for TBM performance forecasting in practical engineering.
Keywords: Tunnel boring machine (TBM), penetration rate (PR), time series forecasting, recurrent neural network (RNN)
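The two ingredients highlighted in the abstract, smoothing the penetration-rate series and then forecasting it one step ahead, can be illustrated with a minimal sketch. A naive "last value plus last change" extrapolation stands in for the RNN; the window size and the sample series are assumptions:

```python
# Minimal sketch: moving-average smoothing followed by a one-step-ahead
# univariate forecast. The trend extrapolation is a stand-in for the RNN.
def smooth(series, window=3):
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half + 1]
        out.append(sum(chunk) / len(chunk))  # centered moving average
    return out

def one_step_forecast(series):
    # next value = last value + last observed change (linear extrapolation)
    return series[-1] + (series[-1] - series[-2])

pr = [10.0, 12.0, 11.0, 13.0, 12.0, 14.0]  # noisy penetration-rate samples
smoothed = smooth(pr)
print(one_step_forecast(smoothed))  # 13.0
```

The sketch also shows the abstract's caveat: smoothing makes the next step easier to predict, but a wide enough window would flatten away the real fluctuations the model is supposed to capture.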
16. The Actuarial Data Intelligent Based Artificial Neural Network (ANN) Automobile Insurance Inflation Adjusted Frequency Severity Loss Reserving Model
Authors: Brighton Mahohoho. Open Journal of Statistics, 2024, No. 5, pp. 634-665 (32 pages)
This study proposes a novel approach for estimating automobile insurance loss reserves utilizing Artificial Neural Network (ANN) techniques integrated with actuarial data intelligence. The model aims to address the challenges of accurately predicting insurance claim frequencies, severities, and overall loss reserves while accounting for inflation adjustments. Through comprehensive data analysis and model development, this research explores the effectiveness of ANN methodologies in capturing complex nonlinear relationships within insurance data. The study leverages a dataset comprising automobile insurance policyholder information, claim history, and economic indicators to train and validate the ANN-based reserving model. Key aspects of the methodology include data preprocessing techniques such as one-hot encoding and scaling, followed by the construction of frequency, severity, and overall loss reserving models using ANN architectures. Moreover, the model incorporates inflation adjustment factors to ensure the accurate estimation of future loss reserves in real terms. Results from the study demonstrate the superior predictive performance of the ANN-based reserving model compared to traditional actuarial methods, with substantial improvements in accuracy and robustness. Furthermore, the model's ability to adapt to changing market conditions and regulatory requirements, such as IFRS 17, highlights its practical relevance in the insurance industry. The findings of this research contribute to the advancement of actuarial science and provide valuable insights for insurance companies seeking more accurate and efficient loss reserving techniques. The proposed ANN-based approach offers a promising avenue for enhancing risk management practices and optimizing financial decision-making processes in the automobile insurance sector.
Keywords: Artificial neural network; Actuarial loss reserving; Machine learning; Intelligent model
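The frequency-severity structure with an inflation factor described above can be sketched as follows. The function name and the simple multiplicative inflation model are illustrative assumptions, not the paper's implementation (which trains ANNs for the frequency and severity components):

```python
import numpy as np

def inflation_adjusted_reserve(freq_pred, sev_pred, annual_inflation, years_ahead):
    """Combine per-policy claim-frequency and claim-severity predictions
    (e.g., from trained ANNs) into an overall reserve, then inflate it to
    future nominal terms. Hypothetical helper for illustration only."""
    expected_losses = np.asarray(freq_pred) * np.asarray(sev_pred)  # per-policy E[loss]
    base_reserve = expected_losses.sum()
    return base_reserve * (1.0 + annual_inflation) ** years_ahead
```

For example, two policies with expected frequencies 0.1 and 0.2 and expected severities 1000 and 2000 give a base reserve of 500, inflated over two years at 5% per annum.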
Big Model Strategy for Bridge Structural Health Monitoring Based on Data-Driven, Adaptive Method and Convolutional Neural Network (CNN) Group
17
Authors: Yadong Xu, Weixing Hong, Mohammad Noori, Wael A. Altabey, Ahmed Silik, Nabeel S. D. Farhan. Structural Durability & Health Monitoring, EI, 2024, Issue 6, pp. 763-783 (21 pages)
This study introduces an innovative "Big Model" strategy to enhance Bridge Structural Health Monitoring (SHM) using a Convolutional Neural Network (CNN), time-frequency analysis, and finite element analysis. Leveraging ensemble methods, collaborative learning, and distributed computing, the approach effectively manages the complexity and scale of large-scale bridge data. The CNN employs transfer learning, fine-tuning, and continuous monitoring to optimize models for adaptive and accurate structural health assessments, focusing on extracting meaningful features through time-frequency analysis. By integrating Finite Element Analysis, time-frequency analysis, and CNNs, the strategy provides a comprehensive understanding of bridge health. Utilizing diverse sensor data, sophisticated feature extraction, and an advanced CNN architecture, the model is optimized through rigorous preprocessing and hyperparameter tuning. This approach significantly enhances the ability to make accurate predictions, monitor structural health, and support proactive maintenance practices, thereby ensuring the safety and longevity of critical infrastructure.
Keywords: Structural Health Monitoring (SHM); Bridges; Big model; Convolutional Neural Network (CNN); Finite Element Method (FEM)
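The time-frequency feature-extraction step can be illustrated with a minimal magnitude-spectrogram routine; the window and hop sizes are arbitrary assumptions, and the paper's actual pipeline (and CNN stage) is far richer than this sketch:

```python
import numpy as np

def spectrogram(signal, win=64, hop=32):
    """Crude magnitude spectrogram: slide a Hanning window over a 1-D sensor
    signal and take the real FFT of each frame, yielding time-frequency
    features suitable as CNN input. A sketch, not the paper's pipeline."""
    frames = np.array([signal[i:i + win]
                       for i in range(0, len(signal) - win + 1, hop)])
    return np.abs(np.fft.rfft(frames * np.hanning(win), axis=1))
```

A pure tone at 8 cycles per window produces a peak in frequency bin 8 of every frame, which is the kind of stable spectral signature a CNN can learn from.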
Reliability analysis of slope stability by neural network, principal component analysis, and transfer learning techniques (Cited by: 1)
18
Authors: Sheng Zhang, Li Ding, Menglong Xie, Xuzhen He, Rui Yang, Chenxi Tong. Journal of Rock Mechanics and Geotechnical Engineering, SCIE CSCD, 2024, Issue 10, pp. 4034-4045 (12 pages)
The prediction of slope stability is considered one of the critical concerns in geotechnical engineering. Conventional stochastic analysis with spatially variable slopes is time-consuming and highly computation-demanding. To assess slope stability problems with a more desirable computational effort, many machine learning (ML) algorithms have been proposed. However, most ML-based techniques require that the training data be in the same feature space and have the same distribution, and the model may need to be rebuilt when the spatial distribution changes. This paper presents a new ML-based algorithm, which combines a principal component analysis (PCA)-based neural network (NN) and transfer learning (TL) techniques (i.e., PCA-NN-TL) to conduct the stability analysis of slopes with different spatial distributions. Monte Carlo simulation coupled with finite element simulation is first conducted for data acquisition, considering the spatial variability of the cohesive strength or friction angle of soils from eight slopes with the same geometry. The PCA method is incorporated into the neural network algorithm (i.e., PCA-NN) to increase computational efficiency by reducing the input variables. It is found that the PCA-NN algorithm performs well in improving the prediction of slope stability for a given slope in terms of computational accuracy and computational effort when compared with the other two algorithms (i.e., NN and decision trees, DT). Furthermore, the PCA-NN-TL algorithm shows great potential in assessing the stability of slopes even with fewer training data.
Keywords: Slope stability analysis; Monte Carlo simulation; Neural network (NN); Transfer learning (TL)
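The PCA step that shrinks the NN's input dimensionality can be sketched with an SVD-based projection. The number of retained components `k` is whatever captures enough variance; this is an illustrative sketch, not the paper's code:

```python
import numpy as np

def pca_reduce(X, k):
    """Project an (n_samples, n_features) matrix onto its top-k principal
    components, reducing the NN input from n_features to k variables."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # scores on top-k components
```

Because singular values are returned in descending order, the first retained component carries the most variance, the second the next most, and so on.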
Pluggable multitask diffractive neural networks based on cascaded metasurfaces (Cited by: 4)
19
Authors: Cong He, Dan Zhao, Fei Fan, Hongqiang Zhou, Xin Li, Yao Li, Junjie Li, Fei Dong, Yin-Xiao Miao, Yongtian Wang, Lingling Huang. Opto-Electronic Advances, SCIE EI CAS CSCD, 2024, Issue 2, pp. 23-31 (9 pages)
Optical neural networks have significant advantages in terms of power consumption, parallelism, and high computing speed, which has attracted extensive attention in both the academic and engineering communities. They have been considered one of the powerful tools for advancing the fields of image processing and object recognition. However, existing optical system architectures cannot be reconfigured to realize multi-functional artificial intelligence systems simultaneously. To address this issue, we propose pluggable diffractive neural networks (P-DNN), a general paradigm based on cascaded metasurfaces, which can recognize various tasks by switching internal plug-ins. As a proof of principle, the recognition of six types of handwritten digits and six types of fashion items is numerically simulated and experimentally demonstrated in the near-infrared regime. Encouragingly, the proposed paradigm not only improves the flexibility of optical neural networks but also paves a new route toward high-speed, low-power, and versatile artificial intelligence systems.
Keywords: Optical neural networks; Diffractive deep neural networks; Cascaded metasurfaces
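At its simplest, a cascade of phase-only metasurface layers multiplies the complex optical field by exp(i·φ) at each layer. The sketch below omits the free-space diffraction between layers that a real P-DNN simulation would include (e.g., via the angular spectrum method), so it is illustrative only:

```python
import numpy as np

def cascade(field, phase_masks):
    """Pass a complex optical field through cascaded phase-only layers.
    Each mask adds its phase profile; inter-layer diffraction is omitted."""
    for phi in phase_masks:
        field = field * np.exp(1j * phi)
    return field
```

Two identical π/2 masks in series add to a π phase shift, flipping the sign of a uniform input field, which shows how stacked plug-in layers compose multiplicatively.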
Activation Redistribution Based Hybrid Asymmetric Quantization Method of Neural Networks (Cited by: 1)
20
Authors: Lu Wei, Zhong Ma, Chaojie Yang. Computer Modeling in Engineering & Sciences, SCIE EI, 2024, Issue 1, pp. 981-1000 (20 pages)
The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. In order to reduce the complexity and overhead of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range could be skewed and not symmetric. This often comes at the cost of lower accuracy. This paper proposes an activation redistribution-based hybrid asymmetric quantization method for neural networks. The proposed method takes the data distribution into consideration and can resolve the contradiction between quantization accuracy and ease of implementation, balance the trade-off between clipping range and quantization resolution, and thus improve the accuracy of the quantized neural network. The experimental results indicate that the accuracy of the proposed method is 2.02% and 5.52% higher than that of the traditional symmetric quantization method for classification and detection tasks, respectively. The proposed method paves the way for computationally intensive neural network models to be deployed on devices with limited computing resources. Code will be available at https://github.com/ycjcy/Hybrid-Asymmetric-Quantization.
Keywords: Quantization; Neural network; Hybrid asymmetric; Accuracy
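For contrast with the symmetric case, a plain asymmetric (affine) uint8 mapping looks like the sketch below. This illustrates the general scale/zero-point scheme only, not the paper's activation-redistribution method:

```python
import numpy as np

def asym_quantize(x, num_bits=8):
    """Asymmetric (affine) quantization: map [x.min(), x.max()] onto
    [0, 2^b - 1] with a scale and zero-point, so a skewed activation range
    wastes no quantization levels (unlike a symmetric mapping)."""
    qmax = 2 ** num_bits - 1
    scale = (x.max() - x.min()) / qmax
    zero_point = int(round(-x.min() / scale))
    q = np.clip(np.round(x / scale) + zero_point, 0, qmax).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float values from the quantized integers."""
    return (q.astype(np.float64) - zero_point) * scale
```

For the skewed range [-1, 2], the asymmetric mapping uses all 256 levels (zero-point 85), whereas a symmetric mapping over [-2, 2] would leave a quarter of the levels unused.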