The prediction of slope stability is considered one of the critical concerns in geotechnical engineering. Conventional stochastic analysis of spatially variable slopes is time-consuming and highly computation-demanding. To assess slope stability problems with a more desirable computational effort, many machine learning (ML) algorithms have been proposed. However, most ML-based techniques require that the training data be in the same feature space and have the same distribution, and the model may need to be rebuilt when the spatial distribution changes. This paper presents a new ML-based algorithm, which combines the principal component analysis (PCA)-based neural network (NN) and transfer learning (TL) techniques (i.e. PCA-NN-TL) to conduct the stability analysis of slopes with different spatial distributions. Monte Carlo simulation coupled with finite element analysis is first conducted for data acquisition, considering the spatial variability of the cohesive strength or friction angle of soils from eight slopes with the same geometry. The PCA method is incorporated into the neural network algorithm (i.e. PCA-NN) to increase computational efficiency by reducing the input variables. It is found that the PCA-NN algorithm performs well in improving the prediction of slope stability for a given slope in terms of computational accuracy and computational effort when compared with two other algorithms (i.e. NN and decision trees, DT). Furthermore, the PCA-NN-TL algorithm shows great potential in assessing slope stability even with fewer training data.
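To make the PCA-NN-TL pipeline concrete, the following minimal sketch (not the authors' implementation) uses principal component analysis to compress synthetic random-field realisations before a small neural network classifies stability, and then fine-tunes the trained network on a handful of samples from a second slope; all array sizes, labels, and hyper-parameters are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

# Illustrative sketch only: random fields stand in for spatially variable
# cohesion/friction-angle realisations; labels stand in for stable/unstable
# outcomes from Monte Carlo finite-element runs.
rng = np.random.default_rng(0)
X_source = rng.normal(size=(2000, 400))          # 2000 realisations, 400 cells
y_source = (X_source.mean(axis=1) > 0).astype(np.float32)

pca = PCA(n_components=20).fit(X_source)         # reduce inputs before the NN
Z_source = pca.transform(X_source).astype(np.float32)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), 1e-3)

def train(Z, y, epochs):
    Z_t, y_t = torch.from_numpy(Z), torch.from_numpy(y).unsqueeze(1)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(Z_t), y_t)
        loss.backward()
        opt.step()

train(Z_source, y_source, epochs=200)             # PCA-NN on the source slope

# Transfer learning: reuse the trained weights and fine-tune on a much
# smaller dataset from a slope with a different spatial distribution.
X_target = rng.normal(size=(100, 400)) * 1.5
y_target = (X_target.mean(axis=1) > 0).astype(np.float32)
Z_target = pca.transform(X_target).astype(np.float32)
opt = torch.optim.Adam(model.parameters(), 1e-4)   # smaller learning rate
train(Z_target, y_target, epochs=50)
```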
Encrypted traffic plays a crucial role in safeguarding network security and user privacy. However, encrypting malicious traffic can lead to numerous security issues, making the effective classification of encrypted traffic essential. Existing methods for detecting encrypted traffic face two significant challenges. First, relying solely on the original byte information for classification fails to leverage the rich temporal relationships within network traffic. Second, machine learning and convolutional neural network methods lack sufficient network expression capabilities, hindering the full exploration of traffic's potential characteristics. To address these limitations, this study introduces a traffic classification method that utilizes time relationships and a higher-order graph neural network, termed HGNN-ETC. This approach fully exploits the original byte information and chronological relationships of traffic packets, transforming traffic data into a graph structure to provide the model with more comprehensive context information. HGNN-ETC employs an innovative k-dimensional graph neural network to effectively capture the multi-scale structural features of traffic graphs, enabling more accurate classification. We select the ISCXVPN and USTC-TK2016 datasets for our experiments. The results show that, compared with other state-of-the-art methods, our method obtains a better classification effect on different datasets, with an accuracy rate of about 97.00%. In addition, by analyzing the impact of varying input specifications on classification performance, we determine the optimal network data truncation strategy and confirm the model's excellent generalization ability on different datasets.
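The sketch below illustrates only the general idea of treating a traffic flow as a graph of packets and classifying it; it uses plain first-order message passing rather than the k-dimensional higher-order GNN of HGNN-ETC, and the byte features, graph size, and class count are placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch: each traffic flow becomes a graph whose nodes are packets
# (here, toy byte-feature vectors) and whose edges follow packet order.
class SimpleGraphClassifier(nn.Module):
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hidden)
        self.lin2 = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, adj):
        # adj: normalised adjacency with self-loops, shape (n_nodes, n_nodes)
        h = torch.relu(self.lin1(adj @ x))      # one round of neighbour averaging
        h = torch.relu(self.lin2(adj @ h))      # second round
        return self.head(h.mean(dim=0))         # mean-pool nodes into graph logits

def chain_adjacency(n):
    """Adjacency for a packet chain 0-1-2-...-(n-1), row-normalised."""
    a = torch.eye(n)
    idx = torch.arange(n - 1)
    a[idx, idx + 1] = 1.0
    a[idx + 1, idx] = 1.0
    return a / a.sum(dim=1, keepdim=True)

x = torch.randn(12, 64)                # 12 packets, 64-dim byte features (toy)
adj = chain_adjacency(12)
model = SimpleGraphClassifier(64, 32, n_classes=6)
logits = model(x, adj)                 # unnormalised scores per traffic class
print(logits.shape)                    # torch.Size([6])
```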
Wheat is a critical crop, extensively consumed worldwide, and its production enhancement is essential to meet escalating demand. The presence of diseases like stem rust, leaf rust, yellow rust, and tan spot significantly diminishes wheat yield, making the early and precise identification of these diseases vital for effective disease management. With advancements in deep learning algorithms, researchers have proposed many methods for the automated detection of disease pathogens; however, accurately detecting multiple disease pathogens simultaneously remains a challenge. This challenge arises from the scarcity of RGB images for multiple diseases, class imbalance in existing public datasets, and the difficulty in extracting features that discriminate between multiple classes of disease pathogens. In this research, a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data, thereby overcoming the problems of class imbalance and data scarcity. This study proposes a customized architecture of Vision Transformers (ViT), where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks. This paper also proposes a Model Agnostic Meta Learning (MAML)-based ensemble classifier for accurate classification. The proposed model, validated on public datasets for wheat disease pathogen classification, achieved a test accuracy of 99.20% and an F1-score of 97.95%. Compared with existing state-of-the-art methods, the proposed model performs better in terms of accuracy, F1-score, and the number of disease pathogens detected. In the future, more diseases can be included for detection, along with other modalities such as pests and weeds.
Soft materials, with their sensitivity to various external stimuli, exhibit high flexibility and stretchability. Accurate prediction of their mechanical behaviors requires advanced hyperelastic constitutive models incorporating multiple parameters. However, identifying multiple parameters under complex deformations remains a challenge, especially with limited observed data. In this study, we develop a physics-informed neural network (PINN) framework to identify material parameters and predict mechanical fields, focusing on compressible Neo-Hookean materials and hydrogels. To improve accuracy, we utilize scaling techniques to normalize network outputs and material parameters. This framework effectively solves forward and inverse problems, extrapolating continuous mechanical fields from sparse boundary data and identifying unknown mechanical properties. We explore different approaches for imposing boundary conditions (BCs) to assess their impacts on accuracy. To enhance efficiency and generalization, we propose a transfer learning enhanced PINN (TL-PINN), allowing pre-trained networks to quickly adapt to new scenarios. The TL-PINN significantly reduces computational costs while maintaining accuracy. This work holds promise in addressing practical challenges in soft material science and provides insights into soft material mechanics in combination with state-of-the-art experimental methods.
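The following toy example, a 1-D inverse problem rather than the compressible Neo-Hookean formulation, shows the basic PINN mechanics the abstract relies on: a scaled network output and an unknown material-like parameter are optimized jointly against a residual term and sparse data. The TL-PINN step would amount to restarting this loop from the trained state on a new dataset with a smaller learning rate.

```python
import torch
import torch.nn as nn

# Toy inverse PINN (1-D), standing in for the much richer hyperelastic case:
# recover an unknown coefficient k in du/dx = -k*u from sparse data while
# fitting the field u(x).  "u_scale" mimics the output-scaling trick.
torch.manual_seed(0)
k_true, u_scale = 2.0, 1.0
x_data = torch.linspace(0.0, 1.0, 8).unsqueeze(1)
u_data = torch.exp(-k_true * x_data)                    # synthetic observations

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
log_k = torch.nn.Parameter(torch.tensor(0.0))           # unknown parameter (>0)
opt = torch.optim.Adam(list(net.parameters()) + [log_k], lr=1e-3)

x_col = torch.linspace(0.0, 1.0, 100).unsqueeze(1).requires_grad_(True)
for step in range(5000):
    opt.zero_grad()
    u_col = u_scale * net(x_col)
    du_dx = torch.autograd.grad(u_col, x_col, torch.ones_like(u_col),
                                create_graph=True)[0]
    residual = du_dx + torch.exp(log_k) * u_col          # ODE residual
    loss = (residual ** 2).mean() + ((u_scale * net(x_data) - u_data) ** 2).mean()
    loss.backward()
    opt.step()

print(float(torch.exp(log_k)))                           # should approach 2.0
```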
Transfer learning could reduce the time and resources required to train new models and is therefore important for generalized applications of trained machine learning algorithms. In this study, a transfer learning-enhanced convolutional neural network (CNN) was proposed to identify the gross weight and the axle weights of moving vehicles on a bridge. The proposed transfer learning-enhanced CNN model was expected to weigh different bridges based on a small amount of training data and provide high identification accuracy. First, a CNN algorithm for bridge weigh-in-motion (B-WIM) technology was proposed to identify the axle weights and the gross weight of typical two-axle, three-axle, and five-axle vehicles as they crossed the bridge with different loading routes and speeds. Then, the pre-trained CNN model was transferred by fine-tuning to weigh the moving vehicles on another bridge. Finally, the identification accuracy and the amount of training data required were compared between the two CNN models. Results showed that the pre-trained CNN model using transfer learning for B-WIM technology could be successfully used to identify the axle weights and the gross weight of moving vehicles on another bridge while reducing the training data by 63%. Moreover, the recognition accuracy of the pre-trained CNN model using transfer learning was comparable to that of the original model, showing its promising potential in actual applications.
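A minimal sketch of the transfer step, assuming a 1-D CNN that maps a bridge-response time series to a gross weight: the convolutional feature extractor pre-trained on the first bridge is frozen and only the regression head is re-trained on a small dataset from the second bridge. The architecture, the checkpoint file name, and the data are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

# Sketch of the transfer step only: a 1-D CNN maps a bridge-response time
# series to a gross weight.  After pre-training on bridge A (not shown),
# the feature extractor is frozen and only the head is fine-tuned on bridge B.
class WeightCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, x):               # x: (batch, 1, time_steps)
        return self.head(self.features(x))

model = WeightCNN()
# model.load_state_dict(torch.load("bridge_A_pretrained.pt"))  # hypothetical file

for p in model.features.parameters():   # freeze the pre-trained feature extractor
    p.requires_grad = False

opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)
signals_B = torch.randn(32, 1, 1024)     # small bridge-B training set (toy)
weights_B = torch.rand(32, 1) * 40.0     # gross weights in tonnes (toy)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(signals_B), weights_B)
    loss.backward()
    opt.step()
```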
Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning (AutoML). At present, forecasting, whether rooted in machine learning or statistical learning, typically relies on expert input and necessitates substantial manual involvement. This manual effort spans model development, feature engineering, hyper-parameter tuning, and the intricate construction of time series models. The complexity of these tasks renders complete automation unfeasible, as they inherently demand human intervention at multiple junctures. To surmount these challenges, this article proposes leveraging Long Short-Term Memory (LSTM), a variant of Recurrent Neural Networks, which harnesses memory cells and gating mechanisms to facilitate long-term time series prediction. The forecasting accuracy of particular neural network and traditional models can degrade significantly when addressing long-term time-series tasks. Our research demonstrates that this approach outperforms the traditional Autoregressive Integrated Moving Average (ARIMA) method in forecasting long-term univariate time series. ARIMA is a high-quality and competitive model in time series prediction, yet it requires significant preprocessing effort. Using multiple accuracy metrics, we have evaluated both ARIMA and the proposed method on simulated and real time-series data over both short and long horizons. Furthermore, our findings indicate its superiority over alternative network architectures, including Fully Connected Neural Networks, Convolutional Neural Networks, and Non-pooling Convolutional Neural Networks. Our AutoML approach enables non-professionals to attain highly accurate and effective time series forecasting, and can be widely applied to various domains, particularly in business and finance.
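A minimal LSTM forecaster of the kind the abstract leverages might look like the sketch below, where sliding windows of a univariate series are mapped to the next value; the window length, layer sizes, and synthetic series are placeholders, and the ARIMA baseline would be fitted separately (e.g. with statsmodels).

```python
import torch
import torch.nn as nn

# Minimal sketch of an LSTM forecaster: sliding windows of a univariate
# series are mapped to the next value.
def make_windows(series, window):
    xs = torch.stack([series[i:i + window] for i in range(len(series) - window)])
    ys = series[window:]
    return xs.unsqueeze(-1), ys.unsqueeze(-1)     # (N, window, 1), (N, 1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)            # out: (batch, window, hidden)
        return self.head(out[:, -1, :])  # predict from the last time step

t = torch.arange(0, 400, dtype=torch.float32)
series = torch.sin(0.1 * t) + 0.1 * torch.randn(400)   # toy series
x, y = make_windows(series, window=24)

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
```

Long-horizon forecasts can then be produced recursively by feeding each prediction back into the input window.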
This study proposes a novel approach for estimating automobile insurance loss reserves utilizing Artificial Neural Network (ANN) techniques integrated with actuarial data intelligence. The model aims to address the challenges of accurately predicting insurance claim frequencies, severities, and overall loss reserves while accounting for inflation adjustments. Through comprehensive data analysis and model development, this research explores the effectiveness of ANN methodologies in capturing complex nonlinear relationships within insurance data. The study leverages a data set comprising automobile insurance policyholder information, claim history, and economic indicators to train and validate the ANN-based reserving model. Key aspects of the methodology include data preprocessing techniques such as one-hot encoding and scaling, followed by the construction of frequency, severity, and overall loss reserving models using ANN architectures. Moreover, the model incorporates inflation adjustment factors to ensure the accurate estimation of future loss reserves in real terms. Results from the study demonstrate the superior predictive performance of the ANN-based reserving model compared to traditional actuarial methods, with substantial improvements in accuracy and robustness. Furthermore, the model's ability to adapt to changing market conditions and regulatory requirements, such as IFRS17, highlights its practical relevance in the insurance industry. The findings of this research contribute to the advancement of actuarial science and provide valuable insights for insurance companies seeking more accurate and efficient loss reserving techniques. The proposed ANN-based approach offers a promising avenue for enhancing risk management practices and optimizing financial decision-making processes in the automobile insurance sector.
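The preprocessing and frequency-model portion of such a workflow can be sketched with scikit-learn as follows; the column names, toy data, and network sizes are hypothetical, and the severity and overall-reserve models would follow the same pattern with different targets and an inflation adjustment applied downstream.

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.pipeline import Pipeline
from sklearn.neural_network import MLPRegressor

# Sketch of the preprocessing + frequency-model portion only.  The data
# frame and its column names are hypothetical placeholders.
df = pd.DataFrame({
    "vehicle_type": np.random.choice(["sedan", "suv", "truck"], 500),
    "region":       np.random.choice(["north", "south"], 500),
    "driver_age":   np.random.uniform(18, 80, 500),
    "exposure":     np.random.uniform(0.1, 1.0, 500),
    "claim_count":  np.random.poisson(0.2, 500),
})
X, y = df.drop(columns="claim_count"), df["claim_count"]

preprocess = ColumnTransformer([
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["vehicle_type", "region"]),
    ("num", StandardScaler(), ["driver_age", "exposure"]),
])
frequency_model = Pipeline([
    ("prep", preprocess),
    ("ann", MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)),
])
frequency_model.fit(X, y)
expected_counts = frequency_model.predict(X)   # inflation adjustment would be
                                               # applied downstream to severities
```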
This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization. The research background is that, with the increasing amount and complexity of data, traditional data analysis methods have been unable to meet the needs. Research methods include building neural networks and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that the neural network combined with Bayesian analysis and deep learning methods can effectively improve the accuracy and efficiency of data visualization, and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
Physics-informed neural networks are a useful machine learning method for solving differential equations, but they encounter challenges in effectively learning thin boundary layers within singular perturbation problems. To resolve this issue, multi-scale-matching neural networks are proposed to solve singular perturbation problems. Inspired by matched asymptotic expansions, the solution is decomposed into inner solutions for small scales and outer solutions for large scales, corresponding to boundary layers and outer regions, respectively. Moreover, to make the formulation amenable to neural networks, we introduce exponentially stretched variables in the boundary layers to avoid semi-infinite region problems. Numerical results for the thin plate problem validate the proposed method.
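The decomposition can be illustrated on a classic singularly perturbed ODE (not the thin-plate problem of the paper): an outer network in x is added to an inner network evaluated at the exponentially stretched variable exp(-x/eps), and both are trained against the residual and the boundary conditions. Everything below is a rough sketch of that idea.

```python
import torch
import torch.nn as nn

# Classic singularly perturbed ODE: eps*u'' + u' = 1 on (0,1), u(0)=u(1)=0,
# with a boundary layer near x = 0.  The solution is split into an outer
# network in x and an inner network in eta = exp(-x/eps), which maps the
# semi-infinite boundary-layer coordinate onto (0, 1].
eps = 1e-2
outer = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
inner = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(outer.parameters()) + list(inner.parameters()), lr=1e-3)

def u(x):
    eta = torch.exp(-x / eps)
    return outer(x) + inner(eta)

# Uniform collocation points; in practice they would be refined near the layer.
x_col = torch.rand(200, 1).requires_grad_(True)
x_bc = torch.tensor([[0.0], [1.0]])
for _ in range(5000):
    opt.zero_grad()
    u_col = u(x_col)
    du = torch.autograd.grad(u_col, x_col, torch.ones_like(u_col),
                             create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_col, torch.ones_like(du),
                              create_graph=True)[0]
    residual = eps * d2u + du - 1.0
    loss = (residual ** 2).mean() + (u(x_bc) ** 2).mean()
    loss.backward()
    opt.step()
```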
Existing web-based security applications have failed in many situations due to the great intelligence of attackers. Among web applications, Cross-Site Scripting (XSS) is one of the most dangerous assaults, experienced while modifying an organization's or user's information. To avoid these security challenges, this article proposes a novel, all-encompassing combination of machine learning (NB, SVM, k-NN) and deep learning (RNN, CNN, LSTM) frameworks for detecting and defending against XSS attacks with high accuracy and efficiency. Based on this representation, a novel idea for merging stacking ensembles with web applications, termed “hybrid stacking”, is proposed. To implement the aforementioned methods, four distinct datasets, each of which contains both safe and unsafe content, are considered. The hybrid detection method can adaptively identify attacks from the URL, and the defense mechanism inherits the advantages of URL encoding with dictionary-based mapping to improve prediction accuracy, accelerate the training process, and effectively remove unsafe JScript/JavaScript keywords from the URL. The simulation results show that the proposed hybrid model is more efficient than existing detection methods. It produces more than 99.5% accurate XSS attack classification results (accuracy, precision, recall, F1-score, and Receiver Operating Characteristic (ROC)) and is highly resistant to XSS attacks. To ensure the security of the server's information, the proposed hybrid approach is demonstrated in a real-time environment.
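The stacking idea can be approximated with scikit-learn as sketched below, using only the classical learners (NB, SVM, k-NN) over character n-grams of URLs; the deep learners of the paper's hybrid and its URL-encoding defense are omitted, and the two example URLs are purely illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.pipeline import Pipeline

# Rough approximation of the stacking idea with the classical learners only;
# the RNN/CNN/LSTM members of the paper's hybrid are omitted, and the two
# example URLs below are illustrative, not taken from the paper's datasets.
urls = [
    "http://example.com/search?q=shoes",
    "http://example.com/comment?text=<script>alert(1)</script>",
]
labels = [0, 1]                                   # 0 = safe, 1 = XSS payload

stack = Pipeline([
    ("tfidf", TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))),
    ("stack", StackingClassifier(
        estimators=[
            ("nb", MultinomialNB()),
            ("svm", LinearSVC()),
            ("knn", KNeighborsClassifier(n_neighbors=3)),
        ],
        final_estimator=LogisticRegression(),
        cv=2)),
])
# stack.fit(urls, labels)   # with a real labelled corpus of safe/unsafe URLs
```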
The motivation for this study is that the quality of deep fakes is constantly improving, which leads to the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The customized Convolutional Neural Network is a data-augmentation-based CNN model used to generate ‘fake data’ or ‘fake images’. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were made up and the remaining 53 were real. Ten seconds were allotted for each video. There were 318 videos used in all, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deep fake face photos.
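A sketch of the classification stage only, assuming facial landmarks have already been extracted (say 68 (x, y) points per frame, flattened to 136 values) so that each clip becomes a frames-by-136 array for a small CNN to label as real or fake; all shapes and sizes are illustrative.

```python
import torch
import torch.nn as nn

# Classification stage only: per-frame facial landmarks (already extracted
# elsewhere) are stacked into a frames x 136 array per clip and fed to a
# small CNN that outputs a real/fake logit.
class LandmarkCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)))
        self.fc = nn.Sequential(nn.Flatten(), nn.Linear(16 * 4 * 4, 64),
                                nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):                 # x: (batch, 1, n_frames, n_features)
        return self.fc(self.conv(x))      # logit: > 0 means "fake"

clips = torch.randn(4, 1, 300, 136)       # 4 toy clips of landmark tracks
model = LandmarkCNN()
probs = torch.sigmoid(model(clips))       # probability that each clip is fake
```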
Artificial neural networks (ANNs) have led to landmark changes in many fields, but they still differ significantly from the mechanisms of real biological neural networks and face problems such as high computing costs and excessive computing power. Spiking neural networks (SNNs) provide a new approach, combined with brain-like science, to improve the computational energy efficiency, computational architecture, and biological credibility of current deep learning applications. In the early stage of development, poor performance hindered the application of SNNs in real-world scenarios. In recent years, SNNs have made great progress in computational performance and practicability compared with earlier research results, and are continuously producing significant results. Although there are already many pieces of literature on SNNs, there is still a lack of a comprehensive review of SNNs from the perspective of improving performance and practicality as well as incorporating the latest research results. Starting from this issue, this paper elaborates on SNNs along their complete usage process, including network construction, data processing, model training, development, and deployment, aiming to provide more comprehensive and practical guidance to promote the development of SNNs. Therefore, the connotation and development status of SNN computing are reviewed systematically and comprehensively from four aspects: composition structure, data sets, learning algorithms, and software/hardware development platforms. Then the development characteristics of SNNs in intelligent computing are summarized, the current challenges of SNNs are discussed, and the future development directions are prospected. Our research shows that in the fields of machine learning and intelligent computing, SNNs have network scale and performance comparable to ANNs and the ability to handle large datasets and a variety of tasks. The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited, and the development of programming and deployment tools has lowered the threshold for the use of SNNs. SNNs show a broad development prospect for brain-like computing.
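To make the spiking mechanism concrete, the following few lines simulate a single leaky integrate-and-fire neuron, the basic unit most SNN frameworks build on; all constants are arbitrary illustration values.

```python
import numpy as np

# A minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# leaks toward rest, integrates input current, and emits a binary spike
# (then resets) when it crosses a threshold.
dt, tau, v_rest, v_thresh, v_reset = 1.0, 20.0, 0.0, 1.0, 0.0
steps = 200
current = 0.06 * np.ones(steps)           # constant input current (toy value)
v = v_rest
spikes = np.zeros(steps)

for t in range(steps):
    dv = (-(v - v_rest) + current[t] * tau) * dt / tau   # leaky integration
    v += dv
    if v >= v_thresh:                     # spike and reset
        spikes[t] = 1.0
        v = v_reset

print("spike count:", int(spikes.sum()), "mean rate:", spikes.mean())
```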
We present our results of using a machine learning (ML) approach for the solution of the Riemann problem for the Euler equations of fluid dynamics. The Riemann problem is an initial-value problem with piecewise-constant initial data and represents a mathematical model of the shock tube. The solution of the Riemann problem is the building block for many numerical algorithms in computational fluid dynamics, such as finite-volume or discontinuous Galerkin methods. Therefore, a fast and accurate approximation of the solution of the Riemann problem and construction of the associated numerical fluxes is of crucial importance. The exact solution of the shock tube problem is fully described by the intermediate pressure and mathematically reduces to finding a solution of a nonlinear equation. Prior to delving into the complexities of ML for the Riemann problem, we consider a much simpler, yet very informative, problem of learning roots of quadratic equations based on their coefficients. We compare two approaches: (i) Gaussian process (GP) regressions, and (ii) neural network (NN) approximations. Among these approaches, NNs prove to be more robust and efficient, although GP can be appreciably more accurate (by about 30%). We then use our experience with the quadratic equation to apply the GP and NN approaches to learn the exact solution of the Riemann problem from the initial data or coefficients of the gas equation of state (EOS). We compare GP and NN approximations in both regression and classification analyses and discuss the potential benefits and drawbacks of the ML approach.
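The warm-up problem from the abstract, learning a root of x^2 + bx + c = 0 from the coefficients (b, c), can be reproduced in a few lines with scikit-learn; the sampling ranges, model sizes, and train/test split are arbitrary.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.neural_network import MLPRegressor

# Learn the larger real root of x^2 + b*x + c = 0 from (b, c), comparing a
# GP regression with a small NN.  Only real-root cases are kept.
rng = np.random.default_rng(0)
b = rng.uniform(-4, 4, 4000)
c = rng.uniform(-4, 4, 4000)
mask = b**2 - 4*c > 0
X = np.column_stack([b[mask], c[mask]])
y = (-b[mask] + np.sqrt(b[mask]**2 - 4*c[mask])) / 2   # larger root

X_train, y_train = X[:800], y[:800]
X_test, y_test = X[800:1000], y[800:1000]

gp = GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X_train, y_train)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(X_train, y_train)

for name, model in [("GP", gp), ("NN", mlp)]:
    err = np.mean(np.abs(model.predict(X_test) - y_test))
    print(name, "mean abs error:", round(float(err), 4))
```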
Microseism, acoustic emission and electromagnetic radiation (M-A-E) data are usually used for predicting rockburst hazards. However, it is a great challenge to realize the prediction of M-A-E data. In this study, with the aid of a deep learning algorithm, a new method for the prediction of M-A-E data is proposed. In this method, an M-A-E data prediction model is built based on a variety of neural networks after analyzing numerous M-A-E data, and then the M-A-E data can be predicted. The predicted results are highly correlated with the real data collected in the field. Through field verification, the deep learning-based prediction method of M-A-E data provides quantitative prediction data for rockburst monitoring.
Traditional expert-designed branching rules in branch-and-bound (B&B) are static, often failing to adapt to diverse and evolving problem instances. Crafting these rules is labor-intensive and may not scale well with complex problems. Given the frequent need to solve varied combinatorial optimization problems, leveraging statistical learning to auto-tune B&B algorithms for specific problem classes becomes attractive. This paper proposes a graph pointer network model to learn branching rules. Graph features, global features, and historical features are designated to represent the solver state. The graph neural network processes the graph features, while the pointer mechanism assimilates the global and historical features to finally determine the variable on which to branch. The model is trained to imitate the expert strong branching rule by a tailored top-k Kullback-Leibler divergence loss function. Experiments on a series of benchmark problems demonstrate that the proposed approach significantly outperforms the widely used expert-designed branching rules. It also outperforms state-of-the-art machine-learning-based branch-and-bound methods in terms of solving speed and search tree size on all test instances. In addition, the model can generalize to unseen instances and scale to larger instances.
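One plausible reading of the top-k Kullback-Leibler imitation loss is sketched below: both the expert strong-branching scores and the model's logits are restricted to the expert's top-k candidates, converted to distributions, and compared with a KL divergence. This is an illustrative interpretation, not the paper's exact definition.

```python
import torch
import torch.nn.functional as F

# Illustrative top-k KL imitation loss: keep only the expert's top-k
# branching candidates, turn both score vectors into distributions over
# those candidates, and penalise their KL divergence.
def topk_kl_loss(model_logits, expert_scores, k=10):
    k = min(k, expert_scores.shape[-1])
    _, top_idx = expert_scores.topk(k, dim=-1)
    expert_top = expert_scores.gather(-1, top_idx)
    model_top = model_logits.gather(-1, top_idx)
    p_expert = F.softmax(expert_top, dim=-1)          # target distribution
    log_q_model = F.log_softmax(model_top, dim=-1)    # model distribution (log)
    return F.kl_div(log_q_model, p_expert, reduction="batchmean")

# Toy usage: 4 B&B nodes, 30 branching candidates each.
model_logits = torch.randn(4, 30, requires_grad=True)
expert_scores = torch.randn(4, 30)
loss = topk_kl_loss(model_logits, expert_scores, k=5)
loss.backward()
```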
As the demand for high-quality services proliferates, an innovative network architecture, the fully-decoupled RAN (FD-RAN), has emerged for more flexible spectrum resource utilization and lower network costs. However, with the decoupling of uplink and downlink base stations in FD-RAN, the traditional transmission mechanism, which relies on real-time channel feedback, is no longer suitable, as the receiver cannot feed back accurate and timely channel state information to the transmitter. This paper proposes a novel transmission scheme that does not rely on physical-layer channel feedback. Specifically, we design a radio map based complex-valued precoding network (RMCPNet) model, which outputs the base station precoding based on user location. RMCPNet comprises multiple subnets, with each subnet responsible for extracting unique modal features from diverse input modalities. Furthermore, the multimodal embeddings derived from these distinct subnets are integrated within the information fusion layer, culminating in a unified representation. We also develop a specific RMCPNet training algorithm that employs the negative spectral efficiency as the loss function. We evaluate the performance of the proposed scheme on the public DeepMIMO dataset and show that RMCPNet can achieve 16% and 76% performance improvements over the conventional real-valued neural network and the statistical codebook approach, respectively.
The Industrial Internet combines industrial systems with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain. Its highly heterogeneous network structure and diversified application requirements call for the application of network slicing technology. Guaranteeing robust network slicing is essential for the Industrial Internet, but it faces the challenge of complex slice topologies caused by the intricate interaction relationships among the Network Functions (NFs) composing the slice. Existing works have not addressed the strengthening problem of industrial network slicing with regard to its complex network properties. Towards this end, we aim to study this issue by intelligently selecting a subset of the most valuable NFs with the minimum cost to satisfy the strengthening requirements. State-of-the-art AlphaGo-series algorithms and advanced graph neural network technology are combined to build the solution. Simulation results demonstrate the superior performance of our scheme compared to the benchmark schemes.
Recent advances in deep neural networks have shed new light on physics, engineering, and scientific computing. Reconciling the data-centered viewpoint with physical simulation is one of the research hotspots. The physics-informed neural network (PINN) is currently the most general framework and is popular due to the convenience of constructing NNs and its excellent generalization ability. The automatic differentiation (AD)-based PINN model is suitable for homogeneous scientific problems; however, it is unclear how AD can enforce flux continuity across boundaries between cells of different properties, where spatial heterogeneity is represented by grid cells with different physical properties. In this work, we propose a criss-cross physics-informed convolutional neural network (CC-PINN) learning architecture, aiming to learn the solution of parametric PDEs with spatial heterogeneity of physical properties. To achieve the seamless enforcement of flux continuity and the integration of physical meaning into the CNN, a predefined 2D convolutional layer is proposed to accurately express the transmissibility between adjacent cells. The efficacy of the proposed method was evaluated through predictions of several petroleum reservoir problems with spatial heterogeneity and compared against the state-of-the-art PINN through numerical analysis as a benchmark, which demonstrated the superiority of the proposed method over the PINN.
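The idea of a predefined 2D convolutional layer can be illustrated with a Conv2d whose kernel is fixed to a 5-point stencil and excluded from training, so that the layer expresses a flux-balance-like coupling between neighbouring cells; the paper's property-dependent transmissibility weighting is more involved and is not reproduced here.

```python
import torch
import torch.nn as nn

# A Conv2d with a fixed (non-trainable) 5-point stencil: the layer computes
# a discrete Laplacian-like balance between neighbouring grid cells, which a
# physics-informed loss can then drive toward zero.
stencil = torch.tensor([[0.0,  1.0, 0.0],
                        [1.0, -4.0, 1.0],
                        [0.0,  1.0, 0.0]]).reshape(1, 1, 3, 3)

fixed_conv = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
with torch.no_grad():
    fixed_conv.weight.copy_(stencil)
fixed_conv.weight.requires_grad = False        # predefined, not learned

pressure = torch.randn(1, 1, 32, 32, requires_grad=True)  # e.g. a CNN's output
balance = fixed_conv(pressure)                  # cell-wise discrete residual
loss = (balance ** 2).mean()                    # physics term of the loss
loss.backward()                                 # gradients flow to `pressure`
```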
Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture biomarker features of organic matter and provides the key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases the research cost and restrains extensive applications of this method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China. In this way, the mass chromatogram can be automatically interpreted. This research first performs dimensionality reduction for 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e. MI and PMI) that represent the organic matter thermal maturity and parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter settings, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R² values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) developing an efficient technique for geochemical research; (ii) more importantly, demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.
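The supervision pipeline can be sketched as follows: factor analysis compresses the 15 biomarker parameters into two index-like scores standing in for MI and PMI, and a 1-D CNN is trained to predict those scores directly from the chromatogram trace; all arrays are random placeholders.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import FactorAnalysis

# Placeholder data: 200 samples with 15 biomarker parameters and one raw
# chromatogram trace each.  Factor analysis produces two index-like scores
# standing in for MI and PMI, which supervise the CNN.
rng = np.random.default_rng(0)
biomarkers = rng.normal(size=(200, 15))
chromatograms = rng.normal(size=(200, 1, 2048))

labels = FactorAnalysis(n_components=2).fit_transform(biomarkers)

cnn = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=15, stride=4), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=15, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(16, 2))                    # two outputs: maturity and parent-material scores

x = torch.from_numpy(chromatograms).float()
y = torch.from_numpy(labels).float()
opt = torch.optim.Adam(cnn.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(cnn(x), y)
    loss.backward()
    opt.step()
```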
Heat transport has been significantly enhanced by the widespread usage of extended surfaces in various engineering domains. Gas turbine blade cooling, refrigeration, and electronic equipment cooling are a few prevalent applications. Thus, the thermal analysis of extended surfaces has been the subject of significant assessment by researchers. Motivated by this, the present study describes the unsteady thermal dispersal phenomena in a wavy fin in the presence of convective heat transmission. This analysis also emphasizes a novel mathematical model for the transient thermal change in a wavy profiled fin resulting from convection, using the finite difference method (FDM) and a physics-informed neural network (PINN). The time- and space-dependent governing partial differential equation (PDE) for the suggested heat problem has been translated into dimensionless form using the relevant dimensionless terms. The graphs depict the effect of thermal parameters on the fin's thermal profile. The temperature dispersion in the fin decreases as the dimensionless convection-conduction variable rises. The heat dispersion in the fin is decreased by increasing the aspect ratio, whereas the reverse behavior is seen with the change in time. Furthermore, the FDM-PINN results are validated against the outcomes of the FDM.
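An explicit finite-difference sketch of a generic dimensionless transient fin equation, d(theta)/d(tau) = d2(theta)/dX2 - Nc*theta, with a hot base and an insulated tip is given below; the wavy-profile geometry terms and the PINN counterpart used in the paper are not included.

```python
import numpy as np

# Explicit FDM for a generic dimensionless transient fin equation,
#   d(theta)/d(tau) = d2(theta)/dX2 - Nc * theta,
# with theta = 1 at the base (X = 0) and an insulated tip.
Nc = 1.0                      # convection-conduction parameter (illustrative)
nx, dx = 51, 1.0 / 50
dt = 0.2 * dx**2              # within the explicit stability limit dt <= dx^2/2
theta = np.zeros(nx)          # initial dimensionless temperature
theta[0] = 1.0                # base held at the hot-surface temperature

for step in range(5000):
    lap = np.zeros(nx)
    lap[1:-1] = (theta[2:] - 2 * theta[1:-1] + theta[:-2]) / dx**2
    theta[1:-1] += dt * (lap[1:-1] - Nc * theta[1:-1])
    theta[-1] = theta[-2]     # insulated (zero-gradient) tip
    theta[0] = 1.0            # fixed base temperature

print("tip temperature:", round(float(theta[-1]), 4))
```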
基金supported by the National Natural Science Foundation of China(Grant No.52008402)the Central South University autonomous exploration project(Grant No.2021zzts0790).
文摘The prediction of slope stability is considered as one of the critical concerns in geotechnical engineering.Conventional stochastic analysis with spatially variable slopes is time-consuming and highly computation-demanding.To assess the slope stability problems with a more desirable computational effort,many machine learning(ML)algorithms have been proposed.However,most ML-based techniques require that the training data must be in the same feature space and have the same distribution,and the model may need to be rebuilt when the spatial distribution changes.This paper presents a new ML-based algorithm,which combines the principal component analysis(PCA)-based neural network(NN)and transfer learning(TL)techniques(i.e.PCAeNNeTL)to conduct the stability analysis of slopes with different spatial distributions.The Monte Carlo coupled with finite element simulation is first conducted for data acquisition considering the spatial variability of cohesive strength or friction angle of soils from eight slopes with the same geometry.The PCA method is incorporated into the neural network algorithm(i.e.PCA-NN)to increase the computational efficiency by reducing the input variables.It is found that the PCA-NN algorithm performs well in improving the prediction of slope stability for a given slope in terms of the computational accuracy and computational effort when compared with the other two algorithms(i.e.NN and decision trees,DT).Furthermore,the PCAeNNeTL algorithm shows great potential in assessing the stability of slope even with fewer training data.
基金supported in part by the National Key Research and Development Program of China(No.2022YFB4500800)the National Science Foundation of China(No.42071431).
文摘Encrypted traffic plays a crucial role in safeguarding network security and user privacy.However,encrypting malicious traffic can lead to numerous security issues,making the effective classification of encrypted traffic essential.Existing methods for detecting encrypted traffic face two significant challenges.First,relying solely on the original byte information for classification fails to leverage the rich temporal relationships within network traffic.Second,machine learning and convolutional neural network methods lack sufficient network expression capabilities,hindering the full exploration of traffic’s potential characteristics.To address these limitations,this study introduces a traffic classification method that utilizes time relationships and a higher-order graph neural network,termed HGNN-ETC.This approach fully exploits the original byte information and chronological relationships of traffic packets,transforming traffic data into a graph structure to provide the model with more comprehensive context information.HGNN-ETC employs an innovative k-dimensional graph neural network to effectively capture the multi-scale structural features of traffic graphs,enabling more accurate classification.We select the ISCXVPN and the USTC-TK2016 dataset for our experiments.The results show that compared with other state-of-the-art methods,our method can obtain a better classification effect on different datasets,and the accuracy rate is about 97.00%.In addition,by analyzing the impact of varying input specifications on classification performance,we determine the optimal network data truncation strategy and confirm the model’s excellent generalization ability on different datasets.
基金Researchers Supporting Project Number(RSPD2024R 553),King Saud University,Riyadh,Saudi Arabia.
文摘Wheat is a critical crop,extensively consumed worldwide,and its production enhancement is essential to meet escalating demand.The presence of diseases like stem rust,leaf rust,yellow rust,and tan spot significantly diminishes wheat yield,making the early and precise identification of these diseases vital for effective disease management.With advancements in deep learning algorithms,researchers have proposed many methods for the automated detection of disease pathogens;however,accurately detectingmultiple disease pathogens simultaneously remains a challenge.This challenge arises due to the scarcity of RGB images for multiple diseases,class imbalance in existing public datasets,and the difficulty in extracting features that discriminate between multiple classes of disease pathogens.In this research,a novel method is proposed based on Transfer Generative Adversarial Networks for augmenting existing data,thereby overcoming the problems of class imbalance and data scarcity.This study proposes a customized architecture of Vision Transformers(ViT),where the feature vector is obtained by concatenating features extracted from the custom ViT and Graph Neural Networks.This paper also proposes a Model AgnosticMeta Learning(MAML)based ensemble classifier for accurate classification.The proposedmodel,validated on public datasets for wheat disease pathogen classification,achieved a test accuracy of 99.20%and an F1-score of 97.95%.Compared with existing state-of-the-art methods,this proposed model outperforms in terms of accuracy,F1-score,and the number of disease pathogens detection.In future,more diseases can be included for detection along with some other modalities like pests and weed.
基金supported by the National Natural Science Foundation of China(Nos.12172273 and 11820101001)。
文摘Soft materials,with the sensitivity to various external stimuli,exhibit high flexibility and stretchability.Accurate prediction of their mechanical behaviors requires advanced hyperelastic constitutive models incorporating multiple parameters.However,identifying multiple parameters under complex deformations remains a challenge,especially with limited observed data.In this study,we develop a physics-informed neural network(PINN)framework to identify material parameters and predict mechanical fields,focusing on compressible Neo-Hookean materials and hydrogels.To improve accuracy,we utilize scaling techniques to normalize network outputs and material parameters.This framework effectively solves forward and inverse problems,extrapolating continuous mechanical fields from sparse boundary data and identifying unknown mechanical properties.We explore different approaches for imposing boundary conditions(BCs)to assess their impacts on accuracy.To enhance efficiency and generalization,we propose a transfer learning enhanced PINN(TL-PINN),allowing pre-trained networks to quickly adapt to new scenarios.The TL-PINN significantly reduces computational costs while maintaining accuracy.This work holds promise in addressing practical challenges in soft material science,and provides insights into soft material mechanics with state-of-the-art experimental methods.
基金the financial support provided by the National Natural Science Foundation of China(Grant No.52208213)the Excellent Youth Foundation of Education Department in Hunan Province(Grant No.22B0141)+1 种基金the Xiaohe Sci-Tech Talents Special Funding under Hunan Provincial Sci-Tech Talents Sponsorship Program(2023TJ-X65)the Science Foundation of Xiangtan University(Grant No.21QDZ23).
文摘Transfer learning could reduce the time and resources required by the training of new models and be therefore important for generalized applications of the trainedmachine learning algorithms.In this study,a transfer learningenhanced convolutional neural network(CNN)was proposed to identify the gross weight and the axle weight of moving vehicles on the bridge.The proposed transfer learning-enhanced CNN model was expected to weigh different bridges based on a small amount of training datasets and provide high identification accuracy.First of all,a CNN algorithm for bridge weigh-in-motion(B-WIM)technology was proposed to identify the axle weight and the gross weight of the typical two-axle,three-axle,and five-axle vehicles as they crossed the bridge with different loading routes and speeds.Then,the pre-trained CNN model was transferred by fine-tuning to weigh themoving vehicle on another bridge.Finally,the identification accuracy and the amount of training data required were compared between the two CNN models.Results showed that the pre-trained CNN model using transfer learning for B-WIM technology could be successfully used for the identification of the axle weight and the gross weight for moving vehicles on another bridge while reducing the training data by 63%.Moreover,the recognition accuracy of the pre-trained CNN model using transfer learning was comparable to that of the original model,showing its promising potentials in the actual applications.
文摘Long-term time series forecasting stands as a crucial research domain within the realm of automated machine learning(AutoML).At present,forecasting,whether rooted in machine learning or statistical learning,typically relies on expert input and necessitates substantial manual involvement.This manual effort spans model development,feature engineering,hyper-parameter tuning,and the intricate construction of time series models.The complexity of these tasks renders complete automation unfeasible,as they inherently demand human intervention at multiple junctures.To surmount these challenges,this article proposes leveraging Long Short-Term Memory,which is the variant of Recurrent Neural Networks,harnessing memory cells and gating mechanisms to facilitate long-term time series prediction.However,forecasting accuracy by particular neural network and traditional models can degrade significantly,when addressing long-term time-series tasks.Therefore,our research demonstrates that this innovative approach outperforms the traditional Autoregressive Integrated Moving Average(ARIMA)method in forecasting long-term univariate time series.ARIMA is a high-quality and competitive model in time series prediction,and yet it requires significant preprocessing efforts.Using multiple accuracy metrics,we have evaluated both ARIMA and proposed method on the simulated time-series data and real data in both short and long term.Furthermore,our findings indicate its superiority over alternative network architectures,including Fully Connected Neural Networks,Convolutional Neural Networks,and Nonpooling Convolutional Neural Networks.Our AutoML approach enables non-professional to attain highly accurate and effective time series forecasting,and can be widely applied to various domains,particularly in business and finance.
文摘This study proposes a novel approach for estimating automobile insurance loss reserves utilizing Artificial Neural Network (ANN) techniques integrated with actuarial data intelligence. The model aims to address the challenges of accurately predicting insurance claim frequencies, severities, and overall loss reserves while accounting for inflation adjustments. Through comprehensive data analysis and model development, this research explores the effectiveness of ANN methodologies in capturing complex nonlinear relationships within insurance data. The study leverages a data set comprising automobile insurance policyholder information, claim history, and economic indicators to train and validate the ANN-based reserving model. Key aspects of the methodology include data preprocessing techniques such as one-hot encoding and scaling, followed by the construction of frequency, severity, and overall loss reserving models using ANN architectures. Moreover, the model incorporates inflation adjustment factors to ensure the accurate estimation of future loss reserves in real terms. Results from the study demonstrate the superior predictive performance of the ANN-based reserving model compared to traditional actuarial methods, with substantial improvements in accuracy and robustness. Furthermore, the model’s ability to adapt to changing market conditions and regulatory requirements, such as IFRS17, highlights its practical relevance in the insurance industry. The findings of this research contribute to the advancement of actuarial science and provide valuable insights for insurance companies seeking more accurate and efficient loss reserving techniques. The proposed ANN-based approach offers a promising avenue for enhancing risk management practices and optimizing financial decision-making processes in the automobile insurance sector.
文摘This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization.The research background is that with the increasing amount and complexity of data,traditional data analysis methods have been unable to meet the needs.Research methods include building neural networks and deep learning models,optimizing and improving them through Bayesian analysis,and applying them to the visualization of large-scale data sets.The results show that the neural network combined with Bayesian analysis and deep learning method can effectively improve the accuracy and efficiency of data visualization,and enhance the intuitiveness and depth of data interpretation.The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
基金supported by the National Natural Science Foun-dation of China (NSFC) Basic Science Center Program for"Multiscale Problems in Nonlinear Mechanics"(Grant No. 11988102)supported by the National Natural Science Foundation of China (NSFC)(Grant No. 12202451)
文摘Physics-informed neural networks are a useful machine learning method for solving differential equations,but encounter challenges in effectively learning thin boundary layers within singular perturbation problems.To resolve this issue,multi-scale-matching neural networks are proposed to solve the singular perturbation problems.Inspired by matched asymptotic expansions,the solution is decomposed into inner solutions for small scales and outer solutions for large scales,corresponding to boundary layers and outer regions,respectively.Moreover,to conform neural networks,we introduce exponential stretched variables in the boundary layers to avoid semiinfinite region problems.Numerical results for the thin plate problem validate the proposed method.
基金supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MEST)No.2015R1A3A2031159,2016R1A5A1008055.
文摘Existing web-based security applications have failed in many situations due to the great intelligence of attackers.Among web applications,Cross-Site Scripting(XSS)is one of the dangerous assaults experienced while modifying an organization's or user's information.To avoid these security challenges,this article proposes a novel,all-encompassing combination of machine learning(NB,SVM,k-NN)and deep learning(RNN,CNN,LSTM)frameworks for detecting and defending against XSS attacks with high accuracy and efficiency.Based on the representation,a novel idea for merging stacking ensemble with web applications,termed“hybrid stacking”,is proposed.In order to implement the aforementioned methods,four distinct datasets,each of which contains both safe and unsafe content,are considered.The hybrid detection method can adaptively identify the attacks from the URL,and the defense mechanism inherits the advantages of URL encoding with dictionary-based mapping to improve prediction accuracy,accelerate the training process,and effectively remove the unsafe JScript/JavaScript keywords from the URL.The simulation results show that the proposed hybrid model is more efficient than the existing detection methods.It produces more than 99.5%accurate XSS attack classification results(accuracy,precision,recall,f1_score,and Receiver Operating Characteristic(ROC))and is highly resistant to XSS attacks.In order to ensure the security of the server's information,the proposed hybrid approach is demonstrated in a real-time environment.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
基金supported by the National Natural Science Foundation of China(Nos.61974164,62074166,62004219,62004220,and 62104256).
文摘Artificial neural networks(ANNs)have led to landmark changes in many fields,but they still differ significantly fromthemechanisms of real biological neural networks and face problems such as high computing costs,excessive computing power,and so on.Spiking neural networks(SNNs)provide a new approach combined with brain-like science to improve the computational energy efficiency,computational architecture,and biological credibility of current deep learning applications.In the early stage of development,its poor performance hindered the application of SNNs in real-world scenarios.In recent years,SNNs have made great progress in computational performance and practicability compared with the earlier research results,and are continuously producing significant results.Although there are already many pieces of literature on SNNs,there is still a lack of comprehensive review on SNNs from the perspective of improving performance and practicality as well as incorporating the latest research results.Starting from this issue,this paper elaborates on SNNs along the complete usage process of SNNs including network construction,data processing,model training,development,and deployment,aiming to provide more comprehensive and practical guidance to promote the development of SNNs.Therefore,the connotation and development status of SNNcomputing is reviewed systematically and comprehensively from four aspects:composition structure,data set,learning algorithm,software/hardware development platform.Then the development characteristics of SNNs in intelligent computing are summarized,the current challenges of SNNs are discussed and the future development directions are also prospected.Our research shows that in the fields of machine learning and intelligent computing,SNNs have comparable network scale and performance to ANNs and the ability to challenge large datasets and a variety of tasks.The advantages of SNNs over ANNs in terms of energy efficiency and spatial-temporal data processing have been more fully exploited.And the development of programming and deployment tools has lowered the threshold for the use of SNNs.SNNs show a broad development prospect for brain-like computing.
基金This work was performed under the auspices of the National Nuclear Security Administration of the US Department of Energy at Los Alamos National Laboratory under Contract No.DE-AC52-06NA25396The authors gratefully acknowledge the support of the US Department of Energy National Nuclear Security Administration Advanced Simulation and Computing Program.The Los Alamos unlimited release number is LA-UR-19-32257.
文摘We present our results by using a machine learning(ML)approach for the solution of the Riemann problem for the Euler equations of fluid dynamics.The Riemann problem is an initial-value problem with piecewise-constant initial data and it represents a mathematical model of the shock tube.The solution of the Riemann problem is the building block for many numerical algorithms in computational fluid dynamics,such as finite-volume or discontinuous Galerkin methods.Therefore,a fast and accurate approximation of the solution of the Riemann problem and construction of the associated numerical fluxes is of crucial importance.The exact solution of the shock tube problem is fully described by the intermediate pressure and mathematically reduces to finding a solution of a nonlinear equation.Prior to delving into the complexities of ML for the Riemann problem,we consider a much simpler formulation,yet very informative,problem of learning roots of quadratic equations based on their coefficients.We compare two approaches:(i)Gaussian process(GP)regressions,and(ii)neural network(NN)approximations.Among these approaches,NNs prove to be more robust and efficient,although GP can be appreciably more accurate(about 30\%).We then use our experience with the quadratic equation to apply the GP and NN approaches to learn the exact solution of the Riemann problem from the initial data or coefficients of the gas equation of state(EOS).We compare GP and NN approximations in both regression and classification analysis and discuss the potential benefits and drawbacks of the ML approach.
基金supported by the National Natural Science Foundation of China(Grant No.51934007)the Natural Science Foundation of Jiangsu Province,China(Grant No.BK20220691).
文摘Microseism,acoustic emission and electromagnetic radiation(M-A-E)data are usually used for predicting rockburst hazards.However,it is a great challenge to realize the prediction of M-A-E data.In this study,with the aid of a deep learning algorithm,a new method for the prediction of M-A-E data is proposed.In this method,an M-A-E data prediction model is built based on a variety of neural networks after analyzing numerous M-A-E data,and then the M-A-E data can be predicted.The predicted results are highly correlated with the real data collected in the field.Through field verification,the deep learning-based prediction method of M-A-E data provides quantitative prediction data for rockburst monitoring.
基金supported by the Open Project of Xiangjiang Laboratory (22XJ02003)Scientific Project of the National University of Defense Technology (NUDT)(ZK21-07, 23-ZZCX-JDZ-28)+1 种基金the National Science Fund for Outstanding Young Scholars (62122093)the National Natural Science Foundation of China (72071205)。
文摘Traditional expert-designed branching rules in branch-and-bound(B&B) are static, often failing to adapt to diverse and evolving problem instances. Crafting these rules is labor-intensive, and may not scale well with complex problems.Given the frequent need to solve varied combinatorial optimization problems, leveraging statistical learning to auto-tune B&B algorithms for specific problem classes becomes attractive. This paper proposes a graph pointer network model to learn the branch rules. Graph features, global features and historical features are designated to represent the solver state. The graph neural network processes graph features, while the pointer mechanism assimilates the global and historical features to finally determine the variable on which to branch. The model is trained to imitate the expert strong branching rule by a tailored top-k Kullback-Leibler divergence loss function. Experiments on a series of benchmark problems demonstrate that the proposed approach significantly outperforms the widely used expert-designed branching rules. It also outperforms state-of-the-art machine-learning-based branch-and-bound methods in terms of solving speed and search tree size on all the test instances. In addition, the model can generalize to unseen instances and scale to larger instances.
Funding: Supported in part by the National Natural Science Foundation Original Exploration Project of China under Grant 62250004, the National Natural Science Foundation of China under Grant 62271244, the Natural Science Fund for Distinguished Young Scholars of Jiangsu Province under Grant BK20220067, and the Natural Sciences and Engineering Research Council of Canada (NSERC).
Abstract: As the demand for high-quality services proliferates, an innovative network architecture, the fully-decoupled RAN (FD-RAN), has emerged for more flexible spectrum resource utilization and lower network costs. However, with the decoupling of uplink and downlink base stations in FD-RAN, the traditional transmission mechanism, which relies on real-time channel feedback, is no longer suitable because the receiver cannot feed back accurate and timely channel state information to the transmitter. This paper proposes a novel transmission scheme that does not rely on physical-layer channel feedback. Specifically, we design a radio map based complex-valued precoding network (RMCPNet) model, which outputs the base station precoding based on user location. RMCPNet comprises multiple subnets, with each subnet responsible for extracting unique modal features from diverse input modalities. Furthermore, the multimodal embeddings derived from these distinct subnets are integrated within the information fusion layer, culminating in a unified representation. We also develop a specific RMCPNet training algorithm that employs the negative spectral efficiency as the loss function. We evaluate the performance of the proposed scheme on the public DeepMIMO dataset and show that RMCPNet can achieve 16% and 76% performance improvements over the conventional real-valued neural network and the statistical codebook approach, respectively.
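As a rough illustration of training with negative spectral efficiency as the loss, the following single-user MISO sketch maps a 2-D user location to a complex precoding vector; the network size, power normalization, and channel model are assumptions, and the real RMCPNet is multimodal with an information fusion layer.

    # Single-user MISO illustration: location -> complex precoder, trained with
    # the negative spectral efficiency computed from a known (stand-in) channel.
    import torch
    import torch.nn as nn

    N_TX, NOISE = 16, 1e-2

    class LocToPrecoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * N_TX))  # real and imaginary parts

        def forward(self, loc):
            out = self.net(loc)
            w = torch.complex(out[:, :N_TX], out[:, N_TX:])
            # Normalize to the unit transmit power budget.
            return w / w.abs().pow(2).sum(-1, keepdim=True).sqrt()

    def neg_spectral_efficiency(w, h):
        gain = (torch.conj(h) * w).sum(-1).abs().pow(2)
        return -torch.log2(1.0 + gain / NOISE).mean()

    model = LocToPrecoder()
    loc = torch.rand(64, 2)                                 # user positions
    h = torch.randn(64, N_TX, dtype=torch.complex64)        # stand-in channels
    loss = neg_spectral_efficiency(model(loc), h)
    loss.backward()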
Funding: Supported by the National Key R&D Program of China (2022YFB3104200), in part by the National Natural Science Foundation of China (62202386), in part by the Basic Research Programs of Taicang (TC2021JC31), in part by the Fundamental Research Funds for the Central Universities (D5000210817), in part by the Xi'an Unmanned System Security and Intelligent Communications ISTC Center, and in part by the Special Funds for Central Universities Construction of World-Class Universities (Disciplines) and Special Development Guidance (0639022GH0202237 and 0639022SH0201237).
Abstract: The Industrial Internet combines industrial systems with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain. Its highly heterogeneous network structure and diversified application requirements call for the application of network slicing technology. Guaranteeing robust network slicing is essential for the Industrial Internet, but it faces the challenge of complex slice topologies caused by the intricate interaction relationships among the Network Functions (NFs) composing a slice. Existing works have not addressed the strengthening of industrial network slices with respect to these complex network properties. Towards this end, we study this issue by intelligently selecting a subset of the most valuable NFs at minimum cost to satisfy the strengthening requirements. The state-of-the-art AlphaGo family of algorithms is combined with advanced graph neural network technology to build the solution. Simulation results demonstrate the superior performance of our scheme compared with the benchmark schemes.
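Reproducing the AlphaGo-style search with a graph neural network here would be speculative, so the sketch below only formalizes the underlying selection problem with a simple greedy value-per-cost baseline; the NF names, scores, costs, and requirement are illustrative placeholders, not the paper's method or data.

    # Baseline illustration: choose NFs until the required strengthening value is met.
    from dataclasses import dataclass

    @dataclass
    class NF:
        name: str
        value: float   # contribution to slice robustness
        cost: float    # strengthening cost

    def greedy_select(nfs, required_value):
        chosen, total_value, total_cost = [], 0.0, 0.0
        # Highest value-to-cost ratio first.
        for nf in sorted(nfs, key=lambda n: n.value / n.cost, reverse=True):
            if total_value >= required_value:
                break
            chosen.append(nf.name)
            total_value += nf.value
            total_cost += nf.cost
        return chosen, total_cost

    nfs = [NF("UPF", 5.0, 3.0), NF("SMF", 2.0, 1.0), NF("AMF", 4.0, 4.0), NF("NRF", 1.0, 0.5)]
    print(greedy_select(nfs, required_value=7.0))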
Funding: Supported by the National Natural Science Foundation of China (No. 52274048), the Beijing Natural Science Foundation (No. 3222037), the CNPC 14th Five-Year Perspective Fundamental Research Project (No. 2021DJ2104), and the Science Foundation of China University of Petroleum, Beijing (No. 2462021YXZZ010).
Abstract: Recent advances in deep neural networks have shed new light on physics, engineering, and scientific computing. Reconciling the data-centered viewpoint with physical simulation is one of the research hotspots. The physics-informed neural network (PINN) is currently the most general framework and is popular for the convenience of constructing NNs and its excellent generalization ability. The automatic differentiation (AD)-based PINN model is well suited to homogeneous scientific problems; however, it is unclear how AD can enforce flux continuity across boundaries between cells of different properties when spatial heterogeneity is represented by grid cells with different physical properties. In this work, we propose a criss-cross physics-informed convolutional neural network (CC-PINN) learning architecture, aiming to learn the solution of parametric PDEs with spatial heterogeneity of physical properties. To achieve seamless enforcement of flux continuity and to integrate physical meaning into the CNN, a predefined 2D convolutional layer is proposed to accurately express the transmissibility between adjacent cells. The efficacy of the proposed method was evaluated through predictions of several petroleum reservoir problems with spatial heterogeneity and compared against the state-of-the-art PINN as a benchmark through numerical analysis, which demonstrated the superiority of the proposed method over the PINN.
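The sketch below illustrates the flux-continuity idea with harmonic-mean transmissibilities between heterogeneous cells, written with explicit slicing rather than the paper's predefined convolutional layer; the discretization, boundary handling, and the use of a stand-in pressure tensor (in place of a network output) are assumptions.

    # Finite-volume flux balance on a 2D grid with heterogeneous permeability,
    # usable as a physics residual in a CC-PINN-style loss.
    import torch
    import torch.nn.functional as F

    def harmonic_transmissibility(k, dim):
        # Harmonic mean of permeability between each cell and its neighbour along `dim`.
        k_a = k.narrow(dim, 0, k.size(dim) - 1)
        k_b = k.narrow(dim, 1, k.size(dim) - 1)
        return 2.0 * k_a * k_b / (k_a + k_b)

    def flux_divergence(p, k):
        # p, k: (H, W) pressure and permeability fields on a uniform grid.
        tx = harmonic_transmissibility(k, dim=1)      # (H, W-1), vertical faces
        ty = harmonic_transmissibility(k, dim=0)      # (H-1, W), horizontal faces
        flux_x = tx * (p[:, 1:] - p[:, :-1])
        flux_y = ty * (p[1:, :] - p[:-1, :])
        # Accumulate each face flux into the cells on either side of the face.
        return (F.pad(flux_x, (0, 1)) - F.pad(flux_x, (1, 0))
                + F.pad(flux_y, (0, 0, 0, 1)) - F.pad(flux_y, (0, 0, 1, 0)))

    p = torch.rand(32, 32, requires_grad=True)        # stand-in for a network output
    k = torch.exp(torch.randn(32, 32))                # heterogeneous permeability
    residual = flux_divergence(p, k)
    loss = residual.pow(2).mean()                     # steady-flow physics residual
    loss.backward()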
Funding: Financially supported by the China Postdoctoral Science Foundation (Grant No. 2023M730365) and the Natural Science Foundation of Hubei Province of China (Grant No. 2023AFB232).
Abstract: Gas chromatography-mass spectrometry (GC-MS) is an extremely important analytical technique that is widely used in organic geochemistry. It is the only approach to capture biomarker features of organic matter, and it provides the key evidence for oil-source correlation and thermal maturity determination. However, the conventional way of processing and interpreting the mass chromatogram is both time-consuming and labor-intensive, which increases the research cost and restrains extensive applications of this method. To overcome this limitation, a correlation model is developed based on a convolutional neural network (CNN) to link the mass chromatogram and the biomarker features of samples from the Triassic Yanchang Formation, Ordos Basin, China. In this way, the mass chromatogram can be interpreted automatically. This research first performs dimensionality reduction for 15 biomarker parameters via factor analysis and then quantifies the biomarker features using two indexes (i.e. MI and PMI) that represent the organic matter thermal maturity and the parent material type, respectively. Subsequently, training, interpretation, and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter settings, with the mass chromatogram used as the input and the obtained MI and PMI values used for supervision (labels). The optimized model presents high accuracy in automatically interpreting the mass chromatogram, with R² values typically above 0.85 and 0.80 for the thermal maturity and parent material interpretation results, respectively. The significance of this research is twofold: (i) developing an efficient technique for geochemical research; (ii) more importantly, demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.
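A minimal sketch of the supervised setup is shown below: a small 1-D CNN regresses the two indexes (MI, PMI) from a chromatogram trace; the input length, channel count, and architecture are assumptions, not the paper's model.

    # Chromatogram trace -> (MI, PMI) regression with a small 1-D CNN.
    import torch
    import torch.nn as nn

    class ChromatogramCNN(nn.Module):
        def __init__(self, trace_len=2048):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4), nn.ReLU(),
                nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8),
            )
            self.head = nn.Linear(32 * 8, 2)   # outputs: (MI, PMI)

        def forward(self, x):                  # x: (batch, 1, trace_len)
            z = self.features(x).flatten(1)
            return self.head(z)

    model = ChromatogramCNN()
    x = torch.randn(16, 1, 2048)               # synthetic stand-in chromatograms
    mi_pmi = model(x)                          # shape: (16, 2)
    loss = nn.functional.mse_loss(mi_pmi, torch.randn(16, 2))
    loss.backward()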
Funding: Supported by the Researchers Supporting Project number (RSPD2024R526), King Saud University, Riyadh, Saudi Arabia.
Abstract: Heat transport has been significantly enhanced by the widespread usage of extended surfaces in various engineering domains. Gas turbine blade cooling, refrigeration, and electronic equipment cooling are a few prevalent applications. Thus, the thermal analysis of extended surfaces has been the subject of significant assessment by researchers. Motivated by this, the present study describes the unsteady thermal dispersal phenomena in a wavy fin in the presence of convective heat transmission. The analysis also presents a novel mathematical model of the transient thermal change in a wavy profiled fin resulting from convection, solved using the finite difference method (FDM) and a physics-informed neural network (PINN). The time- and space-dependent governing partial differential equation (PDE) for the heat problem has been translated into dimensionless form using the relevant dimensionless terms. The graphs depict the effect of the thermal parameters on the fin's thermal profile. The temperature dispersion in the fin decreases as the dimensionless convection-conduction variable rises. The heat dispersion in the fin is decreased by increasing the aspect ratio, whereas the reverse behavior is seen with the change in time. Furthermore, the FDM-PINN results are validated against the outcomes of the FDM.
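For the FDM side, a simplified explicit sketch of a transient straight-fin equation in dimensionless form is given below; the wavy profile and the paper's exact dimensionless groups are not reproduced, and the parameter values are placeholders.

    # Explicit FDM for d(theta)/d(tau) = d2(theta)/dX2 - Nc^2 * theta,
    # theta(X=0) = 1 at the fin base, insulated (adiabatic) tip.
    import numpy as np

    NX, NC = 101, 1.5                 # grid points, convection-conduction parameter
    dx = 1.0 / (NX - 1)
    dt = 0.4 * dx**2                  # within the explicit stability limit dt <= dx^2 / 2
    theta = np.zeros(NX)
    theta[0] = 1.0                    # dimensionless base temperature

    for step in range(5000):
        lap = (theta[2:] - 2*theta[1:-1] + theta[:-2]) / dx**2
        theta[1:-1] += dt * (lap - NC**2 * theta[1:-1])
        theta[-1] = theta[-2]         # insulated tip condition
    print("dimensionless tip temperature:", theta[-1])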