The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based ...The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based on oxygen balance mechanism (OBM) and deep neural network (DNN) was established for predicting oxygen blowing time in converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using the actual data collected from an integrated steel plant in China, and compared with multiple linear regression model, OBM model, and neural network model including extreme learning machine, back propagation neural network, and DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layer layers, 32-16-8 neurons per hidden layer, and 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with other models. The predicted hit ratio of oxygen consumption volume within the error±300 m^(3)is 96.67%;determination coefficient (R^(2)) and root mean square error (RMSE) are0.6984 and 150.03 m^(3), respectively. The oxygen blow time prediction hit ratio within the error±0.6 min is 89.50%;R2and RMSE are0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.展开更多
The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method in...The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.展开更多
The accurate prediction of the bearing capacity of ring footings,which is crucial for civil engineering projects,has historically posed significant challenges.Previous research in this area has been constrained by con...The accurate prediction of the bearing capacity of ring footings,which is crucial for civil engineering projects,has historically posed significant challenges.Previous research in this area has been constrained by considering only a limited number of parameters or utilizing relatively small datasets.To overcome these limitations,a comprehensive finite element limit analysis(FELA)was conducted to predict the bearing capacity of ring footings.The study considered a range of effective parameters,including clay undrained shear strength,heterogeneity factor of clay,soil friction angle of the sand layer,radius ratio of the ring footing,sand layer thickness,and the interface between the ring footing and the soil.An extensive dataset comprising 80,000 samples was assembled,exceeding the limitations of previous research.The availability of this dataset enabled more robust and statistically significant analyses and predictions of ring footing bearing capacity.In light of the time-intensive nature of gathering a substantial dataset,a customized deep neural network(DNN)was developed specifically to predict the bearing capacity of the dataset rapidly.Both computational and comparative results indicate that the proposed DNN(i.e.DNN-4)can accurately predict the bearing capacity of a soil with an R2 value greater than 0.99 and a mean squared error(MSE)below 0.009 in a fraction of 1 s,reflecting the effectiveness and efficiency of the proposed method.展开更多
Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to sca...Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.展开更多
This study describes improving network security by implementing and assessing an intrusion detection system(IDS)based on deep neural networks(DNNs).The paper investigates contemporary technical ways for enhancing intr...This study describes improving network security by implementing and assessing an intrusion detection system(IDS)based on deep neural networks(DNNs).The paper investigates contemporary technical ways for enhancing intrusion detection performance,given the vital relevance of safeguarding computer networks against harmful activity.The DNN-based IDS is trained and validated by the model using the NSL-KDD dataset,a popular benchmark for IDS research.The model performs well in both the training and validation stages,with 91.30%training accuracy and 94.38%validation accuracy.Thus,the model shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation.Furthermore,for both macro and micro averages across class 0(normal)and class 1(anomalous)data,the study evaluates the model using a variety of assessment measures,such as accuracy scores,precision,recall,and F1 scores.The macro-average recall is 0.9422,the macro-average precision is 0.9482,and the accuracy scores are 0.942.Furthermore,macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model’s ability to precisely identify anomalies precisely.The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved byDNN-based intrusion detection systems,which can significantly improve network security.The study underscores the critical function ofDNN-based IDS in contemporary cybersecurity procedures by setting the foundation for further developments in this field.Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.展开更多
The vector vortex beam(VVB)has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications.However,a VVB is unavoidably a...The vector vortex beam(VVB)has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications.However,a VVB is unavoidably affected by atmospheric turbulence(AT)when it propagates through the free-space optical communication environment,which results in detection errors at the receiver.In this paper,we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT,where a diffractive deep neural network(DDNN)is designed and trained to classify the intensity distribution of the input distorted VVBs,and the horizontal direction of polarization of the input distorted beam is adopted as the feature for the classification through the DDNN.The numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks.The energy distribution percentage remains above 95%from weak to medium AT,and the classification accuracy can remain above 95%for various strengths of turbulence.It has a faster convergence and better accuracy than that based on a convolutional neural network.展开更多
Facial beauty analysis is an important topic in human society.It may be used as a guidance for face beautification applications such as cosmetic surgery.Deep neural networks(DNNs)have recently been adopted for facial ...Facial beauty analysis is an important topic in human society.It may be used as a guidance for face beautification applications such as cosmetic surgery.Deep neural networks(DNNs)have recently been adopted for facial beauty analysis and have achieved remarkable performance.However,most existing DNN-based models regard facial beauty analysis as a normal classification task.They ignore important prior knowledge in traditional machine learning models which illustrate the significant contribution of the geometric features in facial beauty analysis.To be specific,landmarks of the whole face and facial organs are introduced to extract geometric features to make the decision.Inspired by this,we introduce a novel dual-branch network for facial beauty analysis:one branch takes the Swin Transformer as the backbone to model the full face and global patterns,and another branch focuses on the masked facial organs with the residual network to model the local patterns of certain facial parts.Additionally,the designed multi-scale feature fusion module can further facilitate our network to learn complementary semantic information between the two branches.In model optimisation,we propose a hybrid loss function,where especially geometric regulation is introduced by regressing the facial landmarks and it can force the extracted features to convey facial geometric features.Experiments performed on the SCUT-FBP5500 dataset and the SCUT-FBP dataset demonstrate that our model outperforms the state-of-the-art convolutional neural networks models,which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network.To the best of our knowledge,this is the first study to introduce a Vision Transformer into the facial beauty analysis task.展开更多
This study assesses the suitability of convolutional neural networks(CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that co...This study assesses the suitability of convolutional neural networks(CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that compare different CNN configurations and deployed the best-performing architecture to downscale one-month lead seasonal forecasts of June–July–August–September(JJAS) precipitation from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0(NUIST-CFS1.0) for 1982–2020. We also perform hyper-parameter optimization and introduce predictors over a larger area to include information about the main large-scale circulations that drive precipitation over the East Africa region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results show that the CNN-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme precipitation spatial patterns. Besides, CNN-based downscaling yields a much more accurate forecast of extreme and spell indicators and reduces the significant relative biases exhibited by the raw model predictions. Moreover, our results show that CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of East Africa. The results demonstrate the potential usefulness of CNN in downscaling seasonal precipitation predictions over East Africa,particularly in providing improved forecast products which are essential for end users.展开更多
Ore production is usually affected by multiple influencing inputs at open-pit mines.Nevertheless,the complex nonlinear relationships between these inputs and ore production remain unclear.This becomes even more challe...Ore production is usually affected by multiple influencing inputs at open-pit mines.Nevertheless,the complex nonlinear relationships between these inputs and ore production remain unclear.This becomes even more challenging when training data(e.g.truck haulage information and weather conditions)are massive.In machine learning(ML)algorithms,deep neural network(DNN)is a superior method for processing nonlinear and massive data by adjusting the amount of neurons and hidden layers.This study adopted DNN to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data.Before the prediction models were built,principal component analysis(PCA)was employed to reduce the data dimensionality and eliminate the multicollinearity among highly correlated input variables.To verify the superiority of DNN,three ANNs containing only one hidden layer and six traditional ML models were established as benchmark models.The DNN model with multiple hidden layers performed better than the ANN models with a single hidden layer.The DNN model outperformed the extensively applied benchmark models in predicting ore production.This can provide engineers and researchers with an accurate method to forecast ore production,which helps make sound budgetary decisions and mine planning at open-pit mines.展开更多
It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly eval...It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via the classification and regression neural networks. An efficient databasegeneration method is developed for obtaining eight types of free return orbits and then the RD is defined by the orbit’s inclination and right ascension of ascending node(RAAN) at the perilune. A classify neural network and a regression network are trained respectively. The former is built for classifying the type of the RD, and the latter is built for calculating the inclination and RAAN of the RD. The simulation results show that two neural networks are well trained. The classification model has an accuracy of more than 99% and the mean square error of the regression model is less than 0.01°on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models and a recognition tool is built to evaluate whether a lunar site could be reached. The proposed deep learning method shows the superiority in computation efficiency compared with the traditional double two-body model.展开更多
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software w...The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.展开更多
The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the grid connection of wind power on a large scale.Therefore,wind power forecasting plays a key...The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the grid connection of wind power on a large scale.Therefore,wind power forecasting plays a key role in improving the safety and economic benefits of the power grid.This paper proposes a wind power predicting method based on a convolutional graph attention deep neural network with multi-wind farm data.Based on the graph attention network and attention mechanism,the method extracts spatial-temporal characteristics from the data of multiple wind farms.Then,combined with a deep neural network,a convolutional graph attention deep neural network model is constructed.Finally,the model is trained with the quantile regression loss function to achieve the wind power deterministic and probabilistic prediction based on multi-wind farm spatial-temporal data.A wind power dataset in the U.S.is taken as an example to demonstrate the efficacy of the proposed model.Compared with the selected baseline methods,the proposed model achieves the best prediction performance.The point prediction errors(i.e.,root mean square error(RMSE)and normalized mean absolute percentage error(NMAPE))are 0.304 MW and 1.177%,respectively.And the comprehensive performance of probabilistic prediction(i.e.,con-tinuously ranked probability score(CRPS))is 0.580.Thus,the significance of multi-wind farm data and spatial-temporal feature extraction module is self-evident.展开更多
In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a f...In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a focus on the area over the Yellow Sea and the Bohai Sea(32°-42°N,117°-127°E).The objective was to develop an algorithm for fusing and segmenting multi-channel images from geostationary meteorological satellites,specifically for monitoring sea fog in this region.Firstly,the extreme gradient boosting algorithm was adopted to evaluate the data from the 16 channels of the Himawari-8 satellite for sea fog detection,and we found that the top three channels in order of importance were channels 3,4,and 14,which were fused into false color daytime images,while channels 7,13,and 15 were fused into false color nighttime images.Secondly,the simple linear iterative super-pixel clustering algorithm was used for the pixel-level segmentation of false color images,and based on super-pixel blocks,manual sea-fog annotation was performed to obtain fine-grained annotation labels.The deep convolutional neural network D-LinkNet was built on the ResNet backbone and the dilated convolutional layers with direct connections were added in the central part to form a string-and-combine structure with five branches having different depths and receptive fields.Results show that the accuracy rate of fog area(proportion of detected real fog to detected fog)was 66.5%,the recognition rate of fog zone(proportion of detected real fog to real fog or cloud cover)was 51.9%,and the detection accuracy rate(proportion of samples detected correctly to total samples)was 93.2%.展开更多
This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization.The research background is that with the increasing amount and complexity of data,tradit...This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization.The research background is that with the increasing amount and complexity of data,traditional data analysis methods have been unable to meet the needs.Research methods include building neural networks and deep learning models,optimizing and improving them through Bayesian analysis,and applying them to the visualization of large-scale data sets.The results show that the neural network combined with Bayesian analysis and deep learning method can effectively improve the accuracy and efficiency of data visualization,and enhance the intuitiveness and depth of data interpretation.The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.展开更多
Cloud computing technology provides flexible,on-demand,and completely controlled computing resources and services are highly desirable.Despite this,with its distributed and dynamic nature and shortcomings in virtualiz...Cloud computing technology provides flexible,on-demand,and completely controlled computing resources and services are highly desirable.Despite this,with its distributed and dynamic nature and shortcomings in virtualization deployment,the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties.The Intrusion Detection System(IDS)is a specialized security tool that network professionals use for the safety and security of the networks against attacks launched from various sources.DDoS attacks are becoming more frequent and powerful,and their attack pathways are continually changing,which requiring the development of new detection methods.Here the purpose of the study is to improve detection accuracy.Feature Selection(FS)is critical.At the same time,the IDS’s computational problem is limited by focusing on the most relevant elements,and its performance and accuracy increase.In this research work,the suggested Adaptive butterfly optimization algorithm(ABOA)framework is used to assess the effectiveness of a reduced feature subset during the feature selection phase,that was motivated by this motive Candidates.Accurate classification is not compromised by using an ABOA technique.The design of Deep Neural Networks(DNN)has simplified the categorization of network traffic into normal and DDoS threat traffic.DNN’s parameters can be finetuned to detect DDoS attacks better using specially built algorithms.Reduced reconstruction error,no exploding or vanishing gradients,and reduced network are all benefits of the changes outlined in this paper.When it comes to performance criteria like accuracy,precision,recall,and F1-Score are the performance measures that show the suggested architecture outperforms the other existing approaches.Hence the proposed ABOA+DNN is an excellent method for obtaining accurate predictions,with an improved accuracy rate of 99.05%compared to other existing approaches.展开更多
Optical neural networks have significant advantages in terms of power consumption,parallelism,and high computing speed,which has intrigued extensive attention in both academic and engineering communities.It has been c...Optical neural networks have significant advantages in terms of power consumption,parallelism,and high computing speed,which has intrigued extensive attention in both academic and engineering communities.It has been considered as one of the powerful tools in promoting the fields of imaging processing and object recognition.However,the existing optical system architecture cannot be reconstructed to the realization of multi-functional artificial intelligence systems simultaneously.To push the development of this issue,we propose the pluggable diffractive neural networks(P-DNN),a general paradigm resorting to the cascaded metasurfaces,which can be applied to recognize various tasks by switching internal plug-ins.As the proof-of-principle,the recognition functions of six types of handwritten digits and six types of fashions are numerical simulated and experimental demonstrated at near-infrared regimes.Encouragingly,the proposed paradigm not only improves the flexibility of the optical neural networks but paves the new route for achieving high-speed,low-power and versatile artificial intelligence systems.展开更多
For training the present Neural Network(NN)models,the standard technique is to utilize decaying Learning Rates(LR).While the majority of these techniques commence with a large LR,they will decay multiple times over ti...For training the present Neural Network(NN)models,the standard technique is to utilize decaying Learning Rates(LR).While the majority of these techniques commence with a large LR,they will decay multiple times over time.Decaying has been proved to enhance generalization as well as optimization.Other parameters,such as the network’s size,the number of hidden layers,drop-outs to avoid overfitting,batch size,and so on,are solely based on heuristics.This work has proposed Adaptive Teaching Learning Based(ATLB)Heuristic to identify the optimal hyperparameters for diverse networks.Here we consider three architec-tures Recurrent Neural Networks(RNN),Long Short Term Memory(LSTM),Bidirectional Long Short Term Memory(BiLSTM)of Deep Neural Networks for classification.The evaluation of the proposed ATLB is done through the various learning rate schedulers Cyclical Learning Rate(CLR),Hyperbolic Tangent Decay(HTD),and Toggle between Hyperbolic Tangent Decay and Triangular mode with Restarts(T-HTR)techniques.Experimental results have shown the performance improvement on the 20Newsgroup,Reuters Newswire and IMDB dataset.展开更多
The composition control of molten steel is one of the main functions in the ladle furnace(LF)refining process.In this study,a feasible model was established to predict the alloying element yield using principal compon...The composition control of molten steel is one of the main functions in the ladle furnace(LF)refining process.In this study,a feasible model was established to predict the alloying element yield using principal component analysis(PCA)and deep neural network(DNN).The PCA was used to eliminate collinearity and reduce the dimension of the input variables,and then the data processed by PCA were used to establish the DNN model.The prediction hit ratios for the Si element yield in the error ranges of±1%,±3%,and±5%are 54.0%,93.8%,and98.8%,respectively,whereas those of the Mn element yield in the error ranges of±1%,±2%,and±3%are 77.0%,96.3%,and 99.5%,respectively,in the PCA-DNN model.The results demonstrate that the PCA-DNN model performs better than the known models,such as the reference heat method,multiple linear regression,modified backpropagation,and DNN model.Meanwhile,the accurate prediction of the alloying element yield can greatly contribute to realizing a“narrow window”control of composition in molten steel.The construction of the prediction model for the element yield can also provide a reference for the development of an alloying control model in LF intelligent refining in the modern iron and steel industry.展开更多
Microseism,acoustic emission and electromagnetic radiation(M-A-E)data are usually used for predicting rockburst hazards.However,it is a great challenge to realize the prediction of M-A-E data.In this study,with the ai...Microseism,acoustic emission and electromagnetic radiation(M-A-E)data are usually used for predicting rockburst hazards.However,it is a great challenge to realize the prediction of M-A-E data.In this study,with the aid of a deep learning algorithm,a new method for the prediction of M-A-E data is proposed.In this method,an M-A-E data prediction model is built based on a variety of neural networks after analyzing numerous M-A-E data,and then the M-A-E data can be predicted.The predicted results are highly correlated with the real data collected in the field.Through field verification,the deep learning-based prediction method of M-A-E data provides quantitative prediction data for rockburst monitoring.展开更多
Industrial Internet combines the industrial system with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain.Its highly heterogeneous network structu...Industrial Internet combines the industrial system with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain.Its highly heterogeneous network structure and diversified application requirements call for the applying of network slicing technology.Guaranteeing robust network slicing is essential for Industrial Internet,but it faces the challenge of complex slice topologies caused by the intricate interaction relationships among Network Functions(NFs)composing the slice.Existing works have not concerned the strengthening problem of industrial network slicing regarding its complex network properties.Towards this end,we aim to study this issue by intelligently selecting a subset of most valuable NFs with the minimum cost to satisfy the strengthening requirements.State-of-the-art AlphaGo series of algorithms and the advanced graph neural network technology are combined to build the solution.Simulation results demonstrate the superior performance of our scheme compared to the benchmark schemes.展开更多
基金financially supported by the National Natural Science Foundation of China (Nos.51974023 and52374321)the funding of State Key Laboratory of Advanced Metallurgy,University of Science and Technology Beijing,China (No.41620007)。
文摘The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process,which directly affects the tap-to-tap time of converter. In this study, a hybrid model based on oxygen balance mechanism (OBM) and deep neural network (DNN) was established for predicting oxygen blowing time in converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM model and DNN model. Finally, the converter oxygen blowing time was calculated according to the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using the actual data collected from an integrated steel plant in China, and compared with multiple linear regression model, OBM model, and neural network model including extreme learning machine, back propagation neural network, and DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layer layers, 32-16-8 neurons per hidden layer, and 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with other models. The predicted hit ratio of oxygen consumption volume within the error±300 m^(3)is 96.67%;determination coefficient (R^(2)) and root mean square error (RMSE) are0.6984 and 150.03 m^(3), respectively. The oxygen blow time prediction hit ratio within the error±0.6 min is 89.50%;R2and RMSE are0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
文摘The accurate prediction of the bearing capacity of ring footings,which is crucial for civil engineering projects,has historically posed significant challenges.Previous research in this area has been constrained by considering only a limited number of parameters or utilizing relatively small datasets.To overcome these limitations,a comprehensive finite element limit analysis(FELA)was conducted to predict the bearing capacity of ring footings.The study considered a range of effective parameters,including clay undrained shear strength,heterogeneity factor of clay,soil friction angle of the sand layer,radius ratio of the ring footing,sand layer thickness,and the interface between the ring footing and the soil.An extensive dataset comprising 80,000 samples was assembled,exceeding the limitations of previous research.The availability of this dataset enabled more robust and statistically significant analyses and predictions of ring footing bearing capacity.In light of the time-intensive nature of gathering a substantial dataset,a customized deep neural network(DNN)was developed specifically to predict the bearing capacity of the dataset rapidly.Both computational and comparative results indicate that the proposed DNN(i.e.DNN-4)can accurately predict the bearing capacity of a soil with an R2 value greater than 0.99 and a mean squared error(MSE)below 0.009 in a fraction of 1 s,reflecting the effectiveness and efficiency of the proposed method.
基金supported by the National Natural Science Foundation of China-China State Railway Group Co.,Ltd.Railway Basic Research Joint Fund (Grant No.U2268217)the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No.2021YJ183).
文摘Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.
基金Princess Nourah bint Abdulrahman University for funding this project through the Researchers Supporting Project(PNURSP2024R319)funded by the Prince Sultan University,Riyadh,Saudi Arabia.
文摘This study describes improving network security by implementing and assessing an intrusion detection system(IDS)based on deep neural networks(DNNs).The paper investigates contemporary technical ways for enhancing intrusion detection performance,given the vital relevance of safeguarding computer networks against harmful activity.The DNN-based IDS is trained and validated by the model using the NSL-KDD dataset,a popular benchmark for IDS research.The model performs well in both the training and validation stages,with 91.30%training accuracy and 94.38%validation accuracy.Thus,the model shows good learning and generalization capabilities with minor losses of 0.22 in training and 0.1553 in validation.Furthermore,for both macro and micro averages across class 0(normal)and class 1(anomalous)data,the study evaluates the model using a variety of assessment measures,such as accuracy scores,precision,recall,and F1 scores.The macro-average recall is 0.9422,the macro-average precision is 0.9482,and the accuracy scores are 0.942.Furthermore,macro-averaged F1 scores of 0.9245 for class 1 and 0.9434 for class 0 demonstrate the model’s ability to precisely identify anomalies precisely.The research also highlights how real-time threat monitoring and enhanced resistance against new online attacks may be achieved byDNN-based intrusion detection systems,which can significantly improve network security.The study underscores the critical function ofDNN-based IDS in contemporary cybersecurity procedures by setting the foundation for further developments in this field.Upcoming research aims to enhance intrusion detection systems by examining cooperative learning techniques and integrating up-to-date threat knowledge.
基金Project supported by the National Natural Science Foundation of China(Grant Nos.62375140 and 62001249)the Open Research Fund of National Laboratory of Solid State Microstructures(Grant No.M36055).
文摘The vector vortex beam(VVB)has attracted significant attention due to its intrinsic diversity of information and has found great applications in both classical and quantum communications.However,a VVB is unavoidably affected by atmospheric turbulence(AT)when it propagates through the free-space optical communication environment,which results in detection errors at the receiver.In this paper,we propose a VVB classification scheme to detect VVBs with continuously changing polarization states under AT,where a diffractive deep neural network(DDNN)is designed and trained to classify the intensity distribution of the input distorted VVBs,and the horizontal direction of polarization of the input distorted beam is adopted as the feature for the classification through the DDNN.The numerical simulations and experimental results demonstrate that the proposed scheme has high accuracy in classification tasks.The energy distribution percentage remains above 95%from weak to medium AT,and the classification accuracy can remain above 95%for various strengths of turbulence.It has a faster convergence and better accuracy than that based on a convolutional neural network.
基金Shenzhen Science and Technology Program,Grant/Award Number:ZDSYS20211021111415025Shenzhen Institute of Artificial Intelligence and Robotics for SocietyYouth Science and Technology Talents Development Project of Guizhou Education Department,Grant/Award Number:QianJiaoheKYZi[2018]459。
文摘Facial beauty analysis is an important topic in human society.It may be used as a guidance for face beautification applications such as cosmetic surgery.Deep neural networks(DNNs)have recently been adopted for facial beauty analysis and have achieved remarkable performance.However,most existing DNN-based models regard facial beauty analysis as a normal classification task.They ignore important prior knowledge in traditional machine learning models which illustrate the significant contribution of the geometric features in facial beauty analysis.To be specific,landmarks of the whole face and facial organs are introduced to extract geometric features to make the decision.Inspired by this,we introduce a novel dual-branch network for facial beauty analysis:one branch takes the Swin Transformer as the backbone to model the full face and global patterns,and another branch focuses on the masked facial organs with the residual network to model the local patterns of certain facial parts.Additionally,the designed multi-scale feature fusion module can further facilitate our network to learn complementary semantic information between the two branches.In model optimisation,we propose a hybrid loss function,where especially geometric regulation is introduced by regressing the facial landmarks and it can force the extracted features to convey facial geometric features.Experiments performed on the SCUT-FBP5500 dataset and the SCUT-FBP dataset demonstrate that our model outperforms the state-of-the-art convolutional neural networks models,which proves the effectiveness of the proposed geometric regularisation and dual-branch structure with the hybrid network.To the best of our knowledge,this is the first study to introduce a Vision Transformer into the facial beauty analysis task.
基金supported by the National Key Research and Development Program of China (Grant No.2020YFA0608000)the National Natural Science Foundation of China (Grant No. 42030605)the High-Performance Computing of Nanjing University of Information Science&Technology for their support of this work。
文摘This study assesses the suitability of convolutional neural networks(CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that compare different CNN configurations and deployed the best-performing architecture to downscale one-month lead seasonal forecasts of June–July–August–September(JJAS) precipitation from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0(NUIST-CFS1.0) for 1982–2020. We also perform hyper-parameter optimization and introduce predictors over a larger area to include information about the main large-scale circulations that drive precipitation over the East Africa region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results show that the CNN-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme precipitation spatial patterns. Besides, CNN-based downscaling yields a much more accurate forecast of extreme and spell indicators and reduces the significant relative biases exhibited by the raw model predictions. Moreover, our results show that CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of East Africa. The results demonstrate the potential usefulness of CNN in downscaling seasonal precipitation predictions over East Africa,particularly in providing improved forecast products which are essential for end users.
基金This work was supported by the Pilot Seed Grant(Grant No.RES0049944)the Collaborative Research Project(Grant No.RES0043251)from the University of Alberta.
文摘Ore production is usually affected by multiple influencing inputs at open-pit mines.Nevertheless,the complex nonlinear relationships between these inputs and ore production remain unclear.This becomes even more challenging when training data(e.g.truck haulage information and weather conditions)are massive.In machine learning(ML)algorithms,deep neural network(DNN)is a superior method for processing nonlinear and massive data by adjusting the amount of neurons and hidden layers.This study adopted DNN to forecast ore production using truck haulage information and weather conditions at open-pit mines as training data.Before the prediction models were built,principal component analysis(PCA)was employed to reduce the data dimensionality and eliminate the multicollinearity among highly correlated input variables.To verify the superiority of DNN,three ANNs containing only one hidden layer and six traditional ML models were established as benchmark models.The DNN model with multiple hidden layers performed better than the ANN models with a single hidden layer.The DNN model outperformed the extensively applied benchmark models in predicting ore production.This can provide engineers and researchers with an accurate method to forecast ore production,which helps make sound budgetary decisions and mine planning at open-pit mines.
基金supported by the National Natural Science Foundation of China (12072365)the Natural Science Foundation of Hunan Province of China (2020JJ4657)。
文摘It is important to calculate the reachable domain(RD)of the manned lunar mission to evaluate whether a lunar landing site could be reached by the spacecraft. In this paper, the RD of free return orbits is quickly evaluated and calculated via the classification and regression neural networks. An efficient databasegeneration method is developed for obtaining eight types of free return orbits and then the RD is defined by the orbit’s inclination and right ascension of ascending node(RAAN) at the perilune. A classify neural network and a regression network are trained respectively. The former is built for classifying the type of the RD, and the latter is built for calculating the inclination and RAAN of the RD. The simulation results show that two neural networks are well trained. The classification model has an accuracy of more than 99% and the mean square error of the regression model is less than 0.01°on the test set. Moreover, a serial strategy is proposed to combine the two surrogate models and a recognition tool is built to evaluate whether a lunar site could be reached. The proposed deep learning method shows the superiority in computation efficiency compared with the traditional double two-body model.
文摘The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
基金supported by the Science and Technology Project of State Grid Corporation of China(4000-202122070A-0-0-00).
文摘The fluctuation of wind power affects the operating safety and power consumption of the electric power grid and restricts the grid connection of wind power on a large scale.Therefore,wind power forecasting plays a key role in improving the safety and economic benefits of the power grid.This paper proposes a wind power predicting method based on a convolutional graph attention deep neural network with multi-wind farm data.Based on the graph attention network and attention mechanism,the method extracts spatial-temporal characteristics from the data of multiple wind farms.Then,combined with a deep neural network,a convolutional graph attention deep neural network model is constructed.Finally,the model is trained with the quantile regression loss function to achieve the wind power deterministic and probabilistic prediction based on multi-wind farm spatial-temporal data.A wind power dataset in the U.S.is taken as an example to demonstrate the efficacy of the proposed model.Compared with the selected baseline methods,the proposed model achieves the best prediction performance.The point prediction errors(i.e.,root mean square error(RMSE)and normalized mean absolute percentage error(NMAPE))are 0.304 MW and 1.177%,respectively.And the comprehensive performance of probabilistic prediction(i.e.,con-tinuously ranked probability score(CRPS))is 0.580.Thus,the significance of multi-wind farm data and spatial-temporal feature extraction module is self-evident.
基金National Key R&D Program of China(2021YFC3000905)Open Research Program of the State Key Laboratory of Severe Weather(2022LASW-B09)National Natural Science Foundation of China(42375010)。
文摘In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a focus on the area over the Yellow Sea and the Bohai Sea(32°-42°N,117°-127°E).The objective was to develop an algorithm for fusing and segmenting multi-channel images from geostationary meteorological satellites,specifically for monitoring sea fog in this region.Firstly,the extreme gradient boosting algorithm was adopted to evaluate the data from the 16 channels of the Himawari-8 satellite for sea fog detection,and we found that the top three channels in order of importance were channels 3,4,and 14,which were fused into false color daytime images,while channels 7,13,and 15 were fused into false color nighttime images.Secondly,the simple linear iterative super-pixel clustering algorithm was used for the pixel-level segmentation of false color images,and based on super-pixel blocks,manual sea-fog annotation was performed to obtain fine-grained annotation labels.The deep convolutional neural network D-LinkNet was built on the ResNet backbone and the dilated convolutional layers with direct connections were added in the central part to form a string-and-combine structure with five branches having different depths and receptive fields.Results show that the accuracy rate of fog area(proportion of detected real fog to detected fog)was 66.5%,the recognition rate of fog zone(proportion of detected real fog to real fog or cloud cover)was 51.9%,and the detection accuracy rate(proportion of samples detected correctly to total samples)was 93.2%.
文摘This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization.The research background is that with the increasing amount and complexity of data,traditional data analysis methods have been unable to meet the needs.Research methods include building neural networks and deep learning models,optimizing and improving them through Bayesian analysis,and applying them to the visualization of large-scale data sets.The results show that the neural network combined with Bayesian analysis and deep learning method can effectively improve the accuracy and efficiency of data visualization,and enhance the intuitiveness and depth of data interpretation.The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
文摘Cloud computing technology provides flexible,on-demand,and completely controlled computing resources and services are highly desirable.Despite this,with its distributed and dynamic nature and shortcomings in virtualization deployment,the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties.The Intrusion Detection System(IDS)is a specialized security tool that network professionals use for the safety and security of the networks against attacks launched from various sources.DDoS attacks are becoming more frequent and powerful,and their attack pathways are continually changing,which requiring the development of new detection methods.Here the purpose of the study is to improve detection accuracy.Feature Selection(FS)is critical.At the same time,the IDS’s computational problem is limited by focusing on the most relevant elements,and its performance and accuracy increase.In this research work,the suggested Adaptive butterfly optimization algorithm(ABOA)framework is used to assess the effectiveness of a reduced feature subset during the feature selection phase,that was motivated by this motive Candidates.Accurate classification is not compromised by using an ABOA technique.The design of Deep Neural Networks(DNN)has simplified the categorization of network traffic into normal and DDoS threat traffic.DNN’s parameters can be finetuned to detect DDoS attacks better using specially built algorithms.Reduced reconstruction error,no exploding or vanishing gradients,and reduced network are all benefits of the changes outlined in this paper.When it comes to performance criteria like accuracy,precision,recall,and F1-Score are the performance measures that show the suggested architecture outperforms the other existing approaches.Hence the proposed ABOA+DNN is an excellent method for obtaining accurate predictions,with an improved accuracy rate of 99.05%compared to other existing approaches.
基金The authors acknowledge the funding provided by the National Key R&D Program of China(2021YFA1401200)Beijing Outstanding Young Scientist Program(BJJWZYJH01201910007022)+2 种基金National Natural Science Foundation of China(No.U21A20140,No.92050117,No.62005017)programBeijing Municipal Science&Technology Commission,Administrative Commission of Zhongguancun Science Park(No.Z211100004821009)This work was supported by the Synergetic Extreme Condition User Facility(SECUF).
文摘Optical neural networks have significant advantages in terms of power consumption,parallelism,and high computing speed,which has intrigued extensive attention in both academic and engineering communities.It has been considered as one of the powerful tools in promoting the fields of imaging processing and object recognition.However,the existing optical system architecture cannot be reconstructed to the realization of multi-functional artificial intelligence systems simultaneously.To push the development of this issue,we propose the pluggable diffractive neural networks(P-DNN),a general paradigm resorting to the cascaded metasurfaces,which can be applied to recognize various tasks by switching internal plug-ins.As the proof-of-principle,the recognition functions of six types of handwritten digits and six types of fashions are numerical simulated and experimental demonstrated at near-infrared regimes.Encouragingly,the proposed paradigm not only improves the flexibility of the optical neural networks but paves the new route for achieving high-speed,low-power and versatile artificial intelligence systems.
文摘For training the present Neural Network(NN)models,the standard technique is to utilize decaying Learning Rates(LR).While the majority of these techniques commence with a large LR,they will decay multiple times over time.Decaying has been proved to enhance generalization as well as optimization.Other parameters,such as the network’s size,the number of hidden layers,drop-outs to avoid overfitting,batch size,and so on,are solely based on heuristics.This work has proposed Adaptive Teaching Learning Based(ATLB)Heuristic to identify the optimal hyperparameters for diverse networks.Here we consider three architec-tures Recurrent Neural Networks(RNN),Long Short Term Memory(LSTM),Bidirectional Long Short Term Memory(BiLSTM)of Deep Neural Networks for classification.The evaluation of the proposed ATLB is done through the various learning rate schedulers Cyclical Learning Rate(CLR),Hyperbolic Tangent Decay(HTD),and Toggle between Hyperbolic Tangent Decay and Triangular mode with Restarts(T-HTR)techniques.Experimental results have shown the performance improvement on the 20Newsgroup,Reuters Newswire and IMDB dataset.
基金supported by the National Natural Science Foundation of China(No.51974023)State Key Laboratory of Advanced Metallurgy,University of Science and Technology Beijing(No.41621005)。
文摘The composition control of molten steel is one of the main functions in the ladle furnace(LF)refining process.In this study,a feasible model was established to predict the alloying element yield using principal component analysis(PCA)and deep neural network(DNN).The PCA was used to eliminate collinearity and reduce the dimension of the input variables,and then the data processed by PCA were used to establish the DNN model.The prediction hit ratios for the Si element yield in the error ranges of±1%,±3%,and±5%are 54.0%,93.8%,and98.8%,respectively,whereas those of the Mn element yield in the error ranges of±1%,±2%,and±3%are 77.0%,96.3%,and 99.5%,respectively,in the PCA-DNN model.The results demonstrate that the PCA-DNN model performs better than the known models,such as the reference heat method,multiple linear regression,modified backpropagation,and DNN model.Meanwhile,the accurate prediction of the alloying element yield can greatly contribute to realizing a“narrow window”control of composition in molten steel.The construction of the prediction model for the element yield can also provide a reference for the development of an alloying control model in LF intelligent refining in the modern iron and steel industry.
基金supported by the National Natural Science Foundation of China(Grant No.51934007)the Natural Science Foundation of Jiangsu Province,China(Grant No.BK20220691).
文摘Microseism,acoustic emission and electromagnetic radiation(M-A-E)data are usually used for predicting rockburst hazards.However,it is a great challenge to realize the prediction of M-A-E data.In this study,with the aid of a deep learning algorithm,a new method for the prediction of M-A-E data is proposed.In this method,an M-A-E data prediction model is built based on a variety of neural networks after analyzing numerous M-A-E data,and then the M-A-E data can be predicted.The predicted results are highly correlated with the real data collected in the field.Through field verification,the deep learning-based prediction method of M-A-E data provides quantitative prediction data for rockburst monitoring.
基金supported by National Key R&D Program of China(2022YFB3104200)in part by National Natural Science Foundation of China(62202386)+2 种基金in part by Basic Research Programs of Taicang(TC2021JC31)in part by Fundamental Research Funds for the Central Universities(D5000210817)in part by Xi’an Unmanned System Security and Intelligent Communications ISTC Center,and in part by Special Funds for Central Universities Construction of World-Class Universities(Disciplines)and Special Development Guidance(0639022GH0202237 and 0639022SH0201237).
文摘Industrial Internet combines the industrial system with Internet connectivity to build a new manufacturing and service system covering the entire industry chain and value chain.Its highly heterogeneous network structure and diversified application requirements call for the applying of network slicing technology.Guaranteeing robust network slicing is essential for Industrial Internet,but it faces the challenge of complex slice topologies caused by the intricate interaction relationships among Network Functions(NFs)composing the slice.Existing works have not concerned the strengthening problem of industrial network slicing regarding its complex network properties.Towards this end,we aim to study this issue by intelligently selecting a subset of most valuable NFs with the minimum cost to satisfy the strengthening requirements.State-of-the-art AlphaGo series of algorithms and the advanced graph neural network technology are combined to build the solution.Simulation results demonstrate the superior performance of our scheme compared to the benchmark schemes.