The satellite-terrestrial networks possess the ability to transcend geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which is an important development direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites are limited in computing and communication resources, and the channels are time-varying and complex, which makes the extraction of state information a daunting task. Therefore, we explore the dynamic resource management problem of joint computing and communication resource allocation and power control for multi-access edge computing (MEC). To tackle this problem, we transform it into a Markov decision process (MDP) and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state-information features to enhance the training process. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
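The abstract above casts resource management as a Markov decision process. As a minimal illustration of the MDP machinery (not the paper's actual state or action spaces), the toy sketch below runs value iteration on a hypothetical two-state queue model, where rewards are negative delays:

```python
# Value iteration on a toy 2-state MDP (hypothetical example).
# States: 0 = "queue short", 1 = "queue long"; actions: 0 = "compute locally",
# 1 = "offload". P[s][a] = list of (prob, next_state, reward); rewards are
# negative delays, so larger is better.
P = {
    0: {0: [(0.9, 0, -1.0), (0.1, 1, -4.0)],
        1: [(0.7, 0, -2.0), (0.3, 1, -4.0)]},
    1: {0: [(0.2, 0, -3.0), (0.8, 1, -6.0)],
        1: [(0.6, 0, -2.5), (0.4, 1, -5.0)]},
}
gamma = 0.9  # discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(500):  # sweep the Bellman optimality update until converged
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
         for s in P}

# Extract the greedy policy from the converged value function.
policy = {s: max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in P[s][a]))
          for s in P}
```

With these made-up numbers the short-queue state is worth more than the long-queue state, and the optimal policy offloads only when the queue is long.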
In this paper, we investigate a cooperation mechanism for satellite-terrestrial integrated networks. The terrestrial relays act as a supplement to traditional small cells and cooperatively provide seamless coverage for users in densely populated areas. To deal with the dynamic satellite backhaul links and backhaul capacity caused by satellite mobility, the severe co-channel interference in both satellite backhaul links and user links introduced by spectrum sharing, and the differing demands of users as well as the heterogeneous characteristics of terrestrial and satellite backhaul, we propose a joint user association and satellite selection scheme to maximize the total sum rate. The optimization problem is formulated by jointly considering the influence of dynamic backhaul links, individual requirements, and targeted interference management strategies, and is decomposed into two subproblems: user association and satellite selection. The user association is formulated as a nonconvex optimization problem and solved through a low-complexity heuristic scheme that finds the most suitable access point to serve each user. The satellite selection is then resolved based on cooperation among terrestrial relays to maximize the total backhaul capacity under minimum data rate constraints. Finally, simulation results show the effectiveness of the proposed scheme in terms of total sum rate and power efficiency of the terrestrial relays' backhaul.
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than the baseline schemes.
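The offloading decision above builds on the upper confidence bound principle. A minimal UCB1 sketch is shown below with a hypothetical toy setup: the "arms" are candidate servers and rewards stand in for inverse delays, which is not the paper's exact model:

```python
import math, random

# UCB1 sketch for choosing where to offload a task (toy setup: arm means and
# noise are illustrative, not the paper's model). Higher reward is better.
def ucb1(arm_means, horizon, seed=0):
    rng = random.Random(seed)
    n = [0] * len(arm_means)        # pull counts per arm
    total = [0.0] * len(arm_means)  # accumulated rewards per arm
    for t in range(1, horizon + 1):
        if t <= len(arm_means):     # play each arm once first
            a = t - 1
        else:                       # empirical mean + exploration bonus
            a = max(range(len(arm_means)),
                    key=lambda i: total[i] / n[i]
                    + math.sqrt(2 * math.log(t) / n[i]))
        reward = arm_means[a] + rng.uniform(-0.05, 0.05)  # noisy observation
        n[a] += 1
        total[a] += reward
    return n

# Three candidate servers with mean rewards 0.2, 0.5, 0.8:
counts = ucb1([0.2, 0.5, 0.8], horizon=2000)
```

Over the horizon, the bonus term forces every arm to be sampled, while the best server accumulates by far the most pulls.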
The demand for deploying neural networks on resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. To reduce the complexity and overhead of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range is skewed and not symmetric, which often comes at the cost of lower accuracy. This paper proposes an activation redistribution-based hybrid asymmetric quantization method for neural networks. The proposed method takes the data distribution into consideration and can resolve the contradiction between quantization accuracy and ease of implementation, balance the trade-off between clipping range and quantization resolution, and thus improve the accuracy of the quantized neural network. The experimental results indicate that the accuracy of the proposed method is 2.02% and 5.52% higher than that of traditional symmetric quantization for classification and detection tasks, respectively. The proposed method paves the way for computationally intensive neural network models to be deployed on devices with limited computing resources. Code will be available at https://github.com/ycjcy/Hybrid-Asymmetric-Quantization.
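The symmetric-vs-asymmetric trade-off described above can be seen in a toy 8-bit example. The sketch below uses the standard scale/zero-point formulation (illustrative only, not the paper's exact scheme): on a skewed, all-positive range, symmetric quantization wastes half its codes, so the asymmetric mapping has roughly half the step size and a lower reconstruction error.

```python
# Toy 8-bit uniform quantization: symmetric vs asymmetric on a skewed range.
def quantize(xs, qmin, qmax, scale, zero_point):
    out = []
    for x in xs:
        q = max(qmin, min(qmax, round(x / scale) + zero_point))
        out.append((q - zero_point) * scale)  # dequantized value
    return out

def sym_params(xs):              # symmetric: zero_point fixed at 0
    m = max(abs(min(xs)), abs(max(xs)))
    return m / 127.0, 0

def asym_params(xs):             # asymmetric: use the full [min, max] range
    lo, hi = min(xs), max(xs)
    scale = (hi - lo) / 255.0
    return scale, round(-lo / scale)

data = [i / 100.0 for i in range(0, 600)]   # skewed: all values in [0, 6)
s, z = sym_params(data)
sym_err = sum((a - b) ** 2
              for a, b in zip(data, quantize(data, -128, 127, s, z)))
s, z = asym_params(data)
asym_err = sum((a - b) ** 2
               for a, b in zip(data, quantize(data, 0, 255, s, z)))
```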
The amount of oxygen blown into the converter is one of the key parameters for control of the converter blowing process, and it directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in a converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back-propagation neural network, and the DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a learning rate of 0.1 has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m^(3) is 96.67%; the determination coefficient (R^(2)) and root mean square error (RMSE) are 0.6984 and 150.03 m^(3), respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R^(2) and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
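The abstract does not spell out how the OBM and DNN predictions are integrated; one minimal way to sketch the idea is a single blend weight fitted by least squares on validation heats, followed by the volume-to-time conversion via the oxygen supply intensity. All numbers below are made up for illustration.

```python
# Hedged sketch: blend two oxygen-volume predictors with one weight w fitted
# on validation data, then convert volume to blowing time. The paper's actual
# integration method may differ.
def fit_blend_weight(obm_pred, dnn_pred, actual):
    # Least-squares w for: actual ~ w*obm + (1-w)*dnn (1-D closed form).
    num = sum((o - d) * (a - d) for o, d, a in zip(obm_pred, dnn_pred, actual))
    den = sum((o - d) ** 2 for o, d in zip(obm_pred, dnn_pred))
    return num / den

def blow_time(volume_m3, supply_m3_per_min):
    # Blowing time = oxygen consumption volume / oxygen supply intensity.
    return volume_m3 / supply_m3_per_min

obm = [5200.0, 5100.0, 5350.0]   # OBM-predicted volumes (m^3), illustrative
dnn = [5000.0, 4950.0, 5120.0]   # DNN-predicted volumes (m^3), illustrative
act = [5100.0, 5020.0, 5230.0]   # measured volumes (m^3), illustrative

w = fit_blend_weight(obm, dnn, act)
blended = [w * o + (1 - w) * d for o, d in zip(obm, dnn)]
t = blow_time(blended[0], supply_m3_per_min=350.0)
```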
Satellite-Terrestrial integrated Networks (STNs) have been advocated by both academia and industry as a promising network paradigm to achieve service continuity and ubiquity. However, STNs suffer from problems including poor flexibility of the network architecture, low adaptability to dynamic environments, a lack of network intelligence, and low resource utilization. To handle these challenges, a Software-defined Intelligent STN (SISTN) architecture is introduced. Specifically, the hierarchical architecture of the proposal is described, and a distributed deployment scheme for SISTN controllers is proposed to realize agile and effective network management and control. Moreover, three use cases in SISTNs are discussed. Meanwhile, key techniques and their corresponding solutions are presented, followed by the identification of several open issues in SISTNs, including compatibility with existing networks and the trade-off between network flexibility and performance.
The ultra-dense low earth orbit (LEO) integrated satellite-terrestrial network (UDLEO-ISTN) can bring many benefits in terms of wide coverage, high capacity, and strong robustness. Meanwhile, the broadcast and open nature of satellite links also raises many challenges for transmission security, especially eavesdropping defence. How to efficiently exploit the density of LEO satellites and ensure secure communication by leveraging physical layer security with the cooperation of jammers deserves further investigation. To our knowledge, using satellites as jammers in UDLEO-ISTN is still a new problem, since existing works have mainly focused on this issue from the perspective of terrestrial networks. To this end, we study the cooperative secrecy communication problem in UDLEO-ISTN by utilizing several satellites to send jamming signals to the eavesdroppers. An iterative scheme is proposed as our solution to maximize the system secrecy energy efficiency (SEE) by jointly optimizing transmit power allocation and user association. Extensive experimental results verify that the designed optimization scheme can significantly enhance the system SEE and achieve the optimal power allocation and user association strategies.
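The secrecy energy efficiency objective above follows the standard physical-layer-security definitions: the secrecy rate is the gap between the legitimate user's and the eavesdropper's capacities, floored at zero, and SEE divides that by the consumed power. The sketch below uses made-up SINR and power values, not the paper's system model.

```python
import math

# Secrecy rate and secrecy energy efficiency (SEE) for a single link.
def secrecy_rate(sinr_user, sinr_eve):
    # Achievable secrecy rate (bit/s/Hz): capacity gap, floored at zero.
    return max(0.0, math.log2(1 + sinr_user) - math.log2(1 + sinr_eve))

def secrecy_energy_efficiency(sinr_user, sinr_eve, total_power_w):
    # SEE = secrecy rate per unit of consumed power (bit/s/Hz/W).
    return secrecy_rate(sinr_user, sinr_eve) / total_power_w

# Illustrative effect of cooperative jamming: the jammer spends extra power
# but degrades the eavesdropper's SINR from 3.0 to 0.5.
see_no_jam = secrecy_energy_efficiency(7.0, 3.0, total_power_w=2.0)
see_jammed = secrecy_energy_efficiency(7.0, 0.5, total_power_w=2.5)
```

Even with the extra jamming power in the denominator, the SEE improves in this toy case because the secrecy rate gain dominates.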
We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) data set. In binary classification, the accuracy for digits 0 and 4 exceeds 98%. We then compare the three-class classification performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of a quantum deep neural network and a general quantum convolutional neural network.
Vertically oriented carbon structures constructed from low-dimensional carbon materials are ideal frameworks for high-performance thermal interface materials (TIMs). However, improving the interfacial heat-transfer efficiency of vertically oriented carbon structures is a challenging task. Herein, an orthotropic three-dimensional (3D) hybrid carbon network (VSCG) is fabricated by depositing vertically aligned carbon nanotubes (VACNTs) on the surface of a horizontally oriented graphene film (HOGF). The interfacial interaction between the VACNTs and the HOGF is then optimized through an annealing strategy. After regulating the orientation structure of the VACNTs and filling the VSCG with polydimethylsiloxane (PDMS), VSCG/PDMS composites with excellent 3D thermal conduction properties are obtained. The highest in-plane and through-plane thermal conductivities of the composites are 113.61 and 24.37 W m^(-1) K^(-1), respectively. The high contact area of the HOGF and the good compressibility of the VACNTs imbue the VSCG/PDMS composite with low thermal resistance. In addition, the interfacial heat-transfer efficiency of the VSCG/PDMS composite in TIM service was improved by 71.3% compared with that of a state-of-the-art thermal pad. This structural design can potentially realize high-performance TIMs that meet the need for high thermal conductivity and low contact thermal resistance in interfacial heat-transfer processes.
We design a new hybrid quantum-classical convolutional neural network (HQCCNN) model based on parameterized quantum circuits. In this model, we use parameterized quantum circuits (PQCs) to redesign the convolutional layer in classical convolutional neural networks, forming a new quantum convolutional layer that achieves unitary transformations of quantum states and enables the model to extract hidden information from images more accurately. At the same time, we combine the classical fully connected layer with PQCs to form a new hybrid quantum-classical fully connected layer to further improve the classification accuracy. Finally, we use the MNIST dataset to test the potential of the HQCCNN. The results indicate that the HQCCNN performs well on classification problems. In binary classification tasks, the classification accuracy for digits 5 and 7 is as high as 99.71%. In multi-class classification, the accuracy reaches 98.51%. Finally, we compare the performance of the HQCCNN with other models and find that the HQCCNN has better classification performance and faster convergence.
As the demands for massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite-terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme rules out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
With the wide application of drone technology, there is an increasing demand for the detection of radar return signals from drones. Existing detection methods mainly rely on time-frequency domain feature extraction and classical machine learning algorithms for image recognition. These methods suffer from the high dimensionality of image features, which leads to large input data sizes and noise that affects learning. Therefore, this paper proposes to extract time-domain statistical features from drone radar return signals, reducing the feature dimension from 512×4 to 16. However, the downscaled feature data reduces the accuracy of traditional machine learning algorithms, so we propose a new hybrid quantum neural network with signal feature overlay projection (HQNN-SFOP). It reduces the dimensionality of the signal by extracting time-domain statistical features, introduces signal feature overlay projection to enhance the ability of quantum computation to express the signal features, and introduces quantum circuits to improve the neural network's ability to capture the inline relationships of features, thus improving the accuracy and migration generalization ability of drone detection. To validate the effectiveness of the proposed method, we experimented with the MM model, which combines the real parameters of five commercial drones with random drone parameters to generate data simulating a realistic environment. The results show that the method based on time-domain statistical features can extract features at smaller scales and obtain higher accuracy on a dataset with an SNR of 10 dB. On the time-domain feature data set, HQNN-SFOP obtains the highest accuracy compared with other conventional methods. In addition, HQNN-SFOP has good migration generalization ability on the five commercial drones and the random drone data under different SNR conditions. Our method verifies the feasibility and effectiveness of signal detection based on quantum computation and experimentally demonstrates that the advantages of quantum computation for information processing remain valid in the field of signal processing, providing an efficient method for drone detection using radar return signals.
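The dimension reduction described above (512×4 samples down to 16 features) can be sketched with one common choice of time-domain statistics: four per channel (mean, standard deviation, skewness, kurtosis) over four channels gives 16 features. The abstract does not list the exact 16 statistics used, so this is an assumed, illustrative selection.

```python
import math

# Reduce each channel of a 512x4 sample to four time-domain statistics.
def channel_stats(x):
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    std = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * std ** 3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in x) / (n * var ** 2) if std else 0.0
    return [mean, std, skew, kurt]

def extract_features(sample):  # sample: 4 channels of 512 points each
    feats = []
    for channel in sample:
        feats.extend(channel_stats(channel))
    return feats               # 4 channels x 4 stats = 16 features

# Synthetic 512x4 sample (sinusoids of different integer frequencies):
sample = [[math.sin(2 * math.pi * f * t / 512.0) for t in range(512)]
          for f in (3, 7, 13, 29)]
features = extract_features(sample)
```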
The Hybrid Power-line/Visible-light Communication (HPVC) network has been one of the most promising Cooperative Communication (CC) technologies for building smart homes due to its superior communication reliability and hardware efficiency. Current research on HPVC networks focuses on performance analysis and optimization of the Physical (PHY) layer, where the Power Line Communication (PLC) component only serves as a backbone to provide power to Light-Emitting Diode (LED) devices. Designing a Media Access Control (MAC) protocol therefore remains a great challenge, because it must allow both the PLC and Visible Light Communication (VLC) components to carry data transmission, i.e., to achieve true HPVC network CC. To solve this problem, we propose a new HPVC MAC protocol (HPVC MAC) based on Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) by combining the IEEE 802.15.7 and IEEE 1901 standards. First, we add an Additional Assistance (AA) layer to provide channel selection strategies for sensor stations, so that they can complete data transmission on the selected channel via the specified CSMA/CA mechanism. Based on this, we give a detailed working principle of the HPVC MAC, followed by the construction of a joint analytical model for mathematical validation of the HPVC MAC. In the modeling process, the impacts of PHY layer settings (including channel fading types and additive noise features), the CSMA/CA mechanisms of 802.15.7 and 1901, and practical configurations (such as traffic rate and transit buffer size) are comprehensively taken into consideration. Moreover, we prove that the proposed analytical model is solvable. Finally, through extensive simulations, we characterize the HPVC MAC performance under different system parameters and verify the correctness of the corresponding analytical model, with an average error rate of 4.62% between the simulation and analytical results.
A hybrid identification model based on multilayer artificial neural networks (ANNs) and the particle swarm optimization (PSO) algorithm is developed to improve the efficiency of simultaneously identifying the thermal conductivity and effective absorption coefficient of semitransparent materials. For the direct model, the spherical harmonic method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing, emitting, and non-scattering 2D axisymmetric gray medium in the setting of the laser flash method. For the identification part, the temperature field and the incident radiation field at different positions are first chosen as observables. Then, a traditional identification model based on the PSO algorithm is established. Finally, multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process. The results show that, compared with the traditional identification model, the time cost of the hybrid identification model is reduced by about 1000 times. Moreover, the hybrid identification model maintains a high level of accuracy even with measurement errors.
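The PSO search driving the identification above can be sketched minimally as follows, with the coupled conduction-radiation direct model replaced by a simple sphere objective (an assumption for illustration; the real objective would compare simulated and measured temperature/radiation fields):

```python
import random

# Minimal particle swarm optimization sketch on a stand-in objective.
def pso(objective, dim, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)
best, best_val = pso(sphere, dim=2)
```

The hybrid model in the abstract keeps this loop but replaces each expensive `objective` call with a cheap ANN surrogate, which is where the roughly 1000× speed-up comes from.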
With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), in which two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former extracts the spectral features of the nodes well, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the oversmoothing problem, a dense network is proposed in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.
To resolve the contradiction between limited spectrum resources and increasing communication demand, this paper proposes a wireless resource allocation scheme based on the Deep Q Network (DQN) to allocate radio resources in a downlink multi-user cognitive radio (CR) network with slicing. Secondary users (SUs) are multiplexed using non-orthogonal multiple access (NOMA), and the SUs use a hybrid spectrum access mode to improve the spectral efficiency (SE). Considering the demand for multiple services, an enhanced mobile broadband (eMBB) slice and an ultra-reliable low-latency communication (URLLC) slice are established. The proposed scheme maximizes the SE while ensuring Quality of Service (QoS) for the users. This study establishes a mapping relationship between resource allocation and the DQN algorithm in the CR-NOMA network. According to the signal-to-interference-plus-noise ratio (SINR) of the primary users (PUs), the proposed scheme outputs the optimal channel selection and power allocation. The simulation results reveal that the proposed scheme converges faster and obtains higher rewards than the Q-learning scheme. Additionally, the proposed scheme achieves better SE than both the overlay-only and underlay-only modes.
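The Q-learning baseline mentioned above can be sketched in its simplest stateless form: an agent picks one of three channels, observes a noisy spectral-efficiency reward, and updates a Q-table (the paper's DQN replaces the table with a neural network; the rewards below are made-up numbers):

```python
import random

# Stateless epsilon-greedy Q-learning for toy channel selection.
def q_learning_channels(channel_rewards, episodes=3000, alpha=0.1,
                        epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [0.0] * len(channel_rewards)
    for _ in range(episodes):
        if rng.random() < epsilon:                   # explore
            a = rng.randrange(len(q))
        else:                                        # exploit current best
            a = max(range(len(q)), key=lambda i: q[i])
        r = channel_rewards[a] + rng.gauss(0, 0.1)   # noisy SE observation
        q[a] += alpha * (r - q[a])                   # running-average update
    return q

# Three channels with mean spectral efficiencies 1.0, 2.5, 4.0 (illustrative):
q = q_learning_channels([1.0, 2.5, 4.0])
best_channel = max(range(3), key=lambda i: q[i])
```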
This paper addresses the urgent need to detect network security attacks, which have increased significantly in recent years, with high accuracy while avoiding the adverse effects of these attacks. An intrusion detection system should respond seamlessly to attack patterns and approaches. The use of metaheuristic algorithms in attack detection can produce near-optimal solutions at low computational cost, and hybridizing such algorithms can further improve their performance; many studies are now conducted on this topic. In this study, a new hybrid approach using the Gray Wolf Optimizer (GWO) and Moth-Flame Optimization (MFO) algorithms was developed and applied to widely used data sets such as NSL-KDD, UNSW-NB15, and CIC IDS 2017, as well as to various benchmark functions. The ease of hybridizing the GWO algorithm, its simplicity, its ability to perform a global optimal search, and the success of the MFO algorithm in obtaining the best solution suggested that an effective solution could be obtained by combining these two algorithms. For these reasons, the developed hybrid algorithm aims to achieve better results by exploiting the strengths of both the GWO and MFO algorithms. The results show a high level of success on the benchmark functions: the hybrid achieved better results on 12 of the 13 benchmark functions compared. In addition, the success rates obtained according to the evaluation criteria on the different data sets are also remarkable. The classification accuracies of 97.4%, 98.3%, and 99.2% obtained on the NSL-KDD, UNSW-NB15, and CIC IDS 2017 data sets compare favorably with studies in the literature.
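To make the benchmark-function setting concrete, below is a heavily simplified Grey Wolf Optimizer sketch on the sphere function, one of the standard benchmarks. It is an illustrative variant (three frozen leaders per generation, followers averaging their pulls toward each leader), not the paper's hybrid GWO-MFO code:

```python
import random

# Simplified Grey Wolf Optimizer on the sphere benchmark function.
def gwo(objective, dim, n_wolves=15, iters=200, seed=2):
    rng = random.Random(seed)
    wolves = [[rng.uniform(-10, 10) for _ in range(dim)]
              for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=objective)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]  # three leaders
        a = 2.0 - 2.0 * t / iters    # exploration coefficient decays to 0
        for i in range(3, n_wolves):  # followers move toward the leaders
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a
                    C = 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                wolves[i][d] = x / 3.0   # average of the three pulls
    return min(wolves, key=objective)

sphere = lambda x: sum(v * v for v in x)
best = gwo(sphere, dim=2)
```

The hybrid in the abstract would interleave moves like these with MFO's flame-following updates; this sketch only shows the GWO half.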
Many scholars have focused on applying machine learning models to bottom hole pressure (BHP) prediction. However, the complex and uncertain conditions in deep wells make it difficult for traditional intelligent models to capture the spatial and temporal correlations of measurement-while-drilling (MWD) data. In this work, we develop a novel hybrid neural network that integrates a convolutional neural network (CNN) and a gated recurrent unit (GRU) to predict BHP fluctuations more accurately. The CNN structure is used to analyze spatially local dependency patterns, and the GRU structure is used to discover depth-variation trends in the MWD data. To further improve the prediction accuracy, we explore two GRU-based structures, skip-GRU and attention-GRU, which can capture more of the long-term potential periodic correlations in drilling data. The different model structures, tuned by the Bayesian optimization (BO) algorithm, are then compared and analyzed. Results indicate that the hybrid models can extract the spatial-temporal information of the data effectively and predict more accurately than random forests, extreme gradient boosting, a back-propagation neural network, a CNN, and a GRU. The CNN-attention-GRU model with the BO algorithm shows great superiority in prediction accuracy and robustness due to its hybrid network structure and attention mechanism, achieving the lowest mean absolute percentage error of 0.025%. This study provides a reference for extracting spatial and temporal characteristics and guidance for managed pressure drilling in complex formations.
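The GRU branch above rests on two gates, update and reset, blending the previous hidden state with a candidate state. A scalar forward pass of one common GRU formulation is sketched below; the weights are small made-up numbers, not trained values, and real models use learned matrices over vector states:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Scalar GRU cell for clarity: z = update gate, r = reset gate.
def gru_cell(x, h, wz, uz, wr, ur, wh, uh):
    z = sigmoid(wz * x + uz * h)                 # how much to refresh state
    r = sigmoid(wr * x + ur * h)                 # how much past to expose
    h_tilde = math.tanh(wh * x + uh * (r * h))   # candidate new state
    return (1 - z) * h + z * h_tilde             # gated blend of old and new

# Run a short MWD-like scalar sequence through the cell:
h = 0.0
for x in [0.5, 0.8, 0.3, 0.9]:
    h = gru_cell(x, h, wz=1.2, uz=-0.5, wr=0.8, ur=0.4, wh=1.0, uh=0.7)
```

Because `tanh` bounds the candidate and the output is a convex combination, the hidden state stays bounded, which is part of why GRUs track long depth-variation trends more stably than plain recurrent cells.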
Recently, speech enhancement methods based on generative adversarial networks (GANs) have achieved good performance on time-domain noisy signals. However, the training of GANs suffers from problems such as convergence difficulty and model collapse. In this work, an end-to-end speech enhancement model based on the Wasserstein GAN is proposed, with several improvements made to obtain faster convergence and better generated speech quality. Specifically, in the generator's encoder, each convolution layer adopts different kernel sizes to obtain speech coding information at multiple scales; a gated linear unit is introduced to alleviate the vanishing gradient problem as the network depth increases; the gradient penalty of the discriminator is replaced with spectral normalization to accelerate the convergence rate of the model; and a hybrid penalty term composed of L1 regularization and a scale-invariant signal-to-distortion ratio is introduced into the loss function of the generator to improve the quality of the generated speech. The experimental results on both the TIMIT corpus and a Tibetan corpus show that the proposed model improves speech quality significantly and accelerates the convergence of the model.
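The scale-invariant signal-to-distortion ratio (SI-SDR) in the hybrid penalty term has a standard definition: project the estimate onto the reference to get an optimally scaled target, and measure the energy ratio of target to residual in dB. The sketch below uses short made-up vectors, not speech:

```python
import math

# Scale-invariant signal-to-distortion ratio (SI-SDR), in dB.
def si_sdr(reference, estimate):
    dot = sum(r * e for r, e in zip(reference, estimate))
    ref_energy = sum(r * r for r in reference)
    alpha = dot / ref_energy                     # optimal scaling of the target
    target = [alpha * r for r in reference]
    noise = [e - t for e, t in zip(estimate, target)]
    t_energy = sum(v * v for v in target)
    n_energy = sum(v * v for v in noise)
    return 10.0 * math.log10(t_energy / n_energy)

ref = [0.1, 0.4, -0.2, 0.3, -0.1]
good = [0.11, 0.39, -0.21, 0.29, -0.1]   # close estimate -> high SI-SDR
bad = [0.3, -0.1, 0.2, -0.3, 0.2]        # poor estimate -> low SI-SDR
```

Scale invariance is the point of the projection step: multiplying the estimate by any nonzero constant leaves the score unchanged, so the generator is penalized for distortion rather than gain mismatch.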
We propose new hybrid Lagrange neural networks, called LaNets, to predict the numerical solutions of partial differential equations. That is, we embed Lagrange interpolation and small-sample learning into deep neural network frameworks. Concretely, we first perform Lagrange interpolation in front of the deep feedforward neural network. The Lagrange basis functions have a neat structure and strong expressive ability, which makes them suitable as a preprocessing tool for pre-fitting and feature extraction. Second, we introduce small-sample learning into training, which helps guide the model to be corrected quickly. Taking advantage of the theoretical support of traditional numerical methods and the efficient allocation of modern machine learning, LaNets achieve higher predictive accuracy than the state-of-the-art work. The stability and accuracy of the proposed algorithm are demonstrated through a series of classical numerical examples, including the one-dimensional Burgers equation, one-dimensional carburizing diffusion equations, the two-dimensional Helmholtz equation, and the two-dimensional Burgers equation. Experimental results validate the robustness, effectiveness, and flexibility of the proposed algorithm.
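The Lagrange-interpolation preprocessing above rests on the classical basis l_j(x) = prod_{m != j} (x - x_m)/(x_j - x_m). The standalone sketch below demonstrates just the basis and interpolant (in LaNets such basis evaluations would feed the network; how the paper wires them in is not detailed in the abstract):

```python
# Lagrange basis and interpolation on a small node set.
def lagrange_basis(nodes, j, x):
    # l_j(x) = prod over m != j of (x - x_m) / (x_j - x_m)
    out = 1.0
    for m, xm in enumerate(nodes):
        if m != j:
            out *= (x - xm) / (nodes[j] - xm)
    return out

def lagrange_interpolate(nodes, values, x):
    return sum(v * lagrange_basis(nodes, j, x)
               for j, v in enumerate(values))

nodes = [-1.0, 0.0, 1.0, 2.0]
values = [n ** 3 - n for n in nodes]   # samples of f(x) = x^3 - x
# A degree-3 interpolant through 4 nodes reproduces a cubic exactly:
approx = lagrange_interpolate(nodes, values, 0.5)
```

Two properties worth noting: the basis sums to one at every point (partition of unity), and polynomials up to the node count minus one are reproduced exactly, which is what makes the basis a well-behaved pre-fitting layer.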
Funding: supported by the National Key Research and Development Plan (No. 2022YFB2902701) and the Key Natural Science Foundation of Shenzhen (No. JCYJ20220818102209020).
Abstract: Satellite-terrestrial networks can transcend the geographical constraints inherent in traditional communication networks, enabling global coverage and offering users ubiquitous computing power support, which makes them an important development direction for future communications. In this paper, we consider a multi-scenario network model under the coverage of a low earth orbit (LEO) satellite, which can provide computing resources to users in remote areas to improve task processing efficiency. However, LEO satellites have limited computing and communication resources, and the channels are time-varying and complex, which makes the extraction of state information a daunting task. Therefore, we explore the dynamic resource management issue of joint computing and communication resource allocation and power control for multi-access edge computing (MEC). To tackle this issue, we transform it into a Markov decision process (MDP) problem and propose the self-attention based dynamic resource management (SABDRM) algorithm, which effectively extracts state-information features to enhance the training process. Simulation results show that the proposed algorithm effectively reduces the long-term average delay and energy consumption of the tasks.
Funding: supported by the National Natural Science Foundation of China (Nos. 62201593, 62471480, and 62171466).
Abstract: In this paper, we investigate a cooperation mechanism for satellite-terrestrial integrated networks. The terrestrial relays (TRs) act as a supplement to traditional small cells and cooperatively provide seamless coverage for users in densely populated areas. To deal with the dynamic satellite backhaul links and backhaul capacity caused by satellite mobility, the severe co-channel interference in both satellite backhaul links and user links introduced by spectrum sharing, and the differing demands of users as well as the heterogeneous characteristics of terrestrial and satellite backhaul, we propose a joint user association and satellite selection scheme to maximize the total sum rate. The optimization problem is formulated by jointly considering the influence of dynamic backhaul links, individual requirements, and targeted interference management strategies, and is decomposed into two subproblems: user association and satellite selection. The user association is formulated as a nonconvex optimization problem and solved through a low-complexity heuristic scheme that finds the most suitable access point for each user. Then, the satellite selection is resolved based on cooperation among the terrestrial relays to maximize the total backhaul capacity under minimum data rate constraints. Finally, simulation results show the effectiveness of the proposed scheme in terms of total sum rate and power efficiency of the TRs' backhaul.
Funding: supported by the National Key Research and Development Program of China (2018YFC1504502).
Abstract: Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than other baseline schemes.
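The UCB-based offloading step described in this abstract can be sketched as a single-state bandit (a minimal Python illustration, not the paper's algorithm; the server count, delay model, and noise level are invented for the example, and the device-cooperation aspect is omitted):

```python
import math
import random

def ucb_select(counts, mean_rewards, t, c=2.0):
    """UCB1: pick the offloading target (arm) maximizing mean + exploration bonus."""
    for a, n in enumerate(counts):
        if n == 0:                      # try every untested target once
            return a
    scores = [mean_rewards[a] + math.sqrt(c * math.log(t) / counts[a])
              for a in range(len(counts))]
    return max(range(len(scores)), key=scores.__getitem__)

def run_bandit(true_mean_delays, rounds=2000, seed=0):
    """Simulate offloading: the reward is the negative observed delay."""
    rng = random.Random(seed)
    k = len(true_mean_delays)
    counts, means = [0] * k, [0.0] * k
    for t in range(1, rounds + 1):
        a = ucb_select(counts, means, t)
        reward = -(true_mean_delays[a] + rng.gauss(0, 0.1))
        counts[a] += 1
        means[a] += (reward - means[a]) / counts[a]   # incremental mean update
    return counts

counts = run_bandit([1.0, 0.5, 1.5])   # hypothetical servers; index 1 has the lowest delay
print(counts.index(max(counts)))       # the lowest-delay server is chosen most often
```

Over many rounds the exploration bonus shrinks and the learner concentrates its offloading on the lowest-delay target, which is the behavior the abstract relies on when the network state is unknown.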
Funding: the Qian Xuesen Youth Innovation Foundation of China Aerospace Science and Technology Corporation (Grant No. 2022JY51).
Abstract: The demand for adopting neural networks in resource-constrained embedded devices is continuously increasing. Quantization is one of the most promising solutions to reduce computational cost and memory storage on embedded devices. To reduce the complexity and overhead of deploying neural networks on integer-only hardware, most current quantization methods use a symmetric quantization mapping strategy to quantize a floating-point neural network into an integer network. However, although symmetric quantization has the advantage of easier implementation, it is sub-optimal for cases where the range is skewed and not symmetric, which often comes at the cost of lower accuracy. This paper proposes an activation redistribution-based hybrid asymmetric quantization method for neural networks. The proposed method takes the data distribution into consideration and can resolve the contradiction between quantization accuracy and ease of implementation, balance the trade-off between clipping range and quantization resolution, and thus improve the accuracy of the quantized neural network. The experimental results indicate that the accuracy of the proposed method is 2.02% and 5.52% higher than that of the traditional symmetric quantization method for classification and detection tasks, respectively. The proposed method paves the way for computationally intensive neural network models to be deployed on devices with limited computing resources. Code will be available at https://github.com/ycjcy/Hybrid-Asymmetric-Quantization.
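The contrast the abstract draws between symmetric and asymmetric mapping can be illustrated with a plain affine quantizer (a hedged sketch, not the paper's method; the 8-bit range and the sample activations are assumptions):

```python
def asymmetric_quantize(x, num_bits=8):
    """Affine (asymmetric) quantization: q = round(v / scale) + zero_point.

    Unlike a symmetric mapping, the zero point shifts the integer grid so a
    skewed range [min(x), max(x)] uses all 2^b levels instead of wasting half
    of them on values that never occur.
    """
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(x), max(x)
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    q = [min(qmax, max(qmin, round(v / scale) + zero_point)) for v in x]
    dq = [(v - zero_point) * scale for v in q]   # dequantize to check the error
    return q, dq

x = [0.0, 0.1, 0.5, 2.0, 6.0]        # skewed, non-symmetric activations (invented)
q, dq = asymmetric_quantize(x)
err = max(abs(a - b) for a, b in zip(x, dq))
```

Because the grid covers only [0, 6] rather than a symmetric [-6, 6], each step is half as wide and the round-trip error stays within half a quantization step.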
Funding: financially supported by the National Natural Science Foundation of China (Nos. 51974023 and 52374321) and the State Key Laboratory of Advanced Metallurgy, University of Science and Technology Beijing, China (No. 41620007).
Abstract: The amount of oxygen blown into the converter is one of the key parameters for the control of the converter blowing process, which directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including the extreme learning machine, the back propagation neural network, and the DNN. The test results indicate that the hybrid model with 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m^(3) is 96.67%; the determination coefficient (R^(2)) and root mean square error (RMSE) are 0.6984 and 150.03 m^(3), respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R^(2) and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
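The final step of the three-step method, converting a predicted oxygen volume into a blowing time, can be sketched as below (the 0.5 integration weight and all numeric values are invented for illustration; the paper's actual integration of the OBM and DNN predictions is more involved):

```python
def blow_time_minutes(obm_volume, dnn_volume, supply_rate, w=0.5):
    """Combine the mechanism-based (OBM) and data-driven (DNN) volume
    predictions with an assumed weight w, then divide by the oxygen supply
    intensity of the heat to obtain the blowing time in minutes."""
    volume = w * obm_volume + (1 - w) * dnn_volume   # integrated prediction, m^3
    return volume / supply_rate                       # supply_rate in m^3/min

print(round(blow_time_minutes(8200.0, 8400.0, 600.0), 2))   # 13.83
```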
Funding: supported in part by the National Key Research and Development Program of China under Grant 2020YFB1806703; in part by the National Natural Science Foundation of China under Grants 62001053, 61831002, and 61925101; in part by the Young Elite Scientist Sponsorship Program of the China Institute of Communications; and in part by the BUPT Excellent Ph.D. Students Foundation under Grant CX2020106.
Abstract: Satellite-terrestrial integrated networks (STNs) have been advocated by both academia and industry as a promising network paradigm to achieve service continuity and ubiquity. However, STNs suffer from problems including poor flexibility of the network architecture, low adaptability to dynamic environments, a lack of network intelligence, and low resource utilization. To handle these challenges, a software-defined intelligent STN (SISTN) architecture is introduced. Specifically, the hierarchical architecture of the proposal is described, and a distributed deployment scheme for SISTN controllers is proposed to realize agile and effective network management and control. Moreover, three use cases in SISTNs are discussed. Meanwhile, key techniques and their corresponding solutions are presented, followed by the identification of several open issues in SISTNs, including compatibility with existing networks, the trade-off between network flexibility and performance, and so on.
Funding: supported by the National Key R&D Program of China (2022YFB3104200); in part by the National Natural Science Foundation of China (62202386); in part by the Basic Research Programs of Taicang (TC2021JC31); in part by the Fundamental Research Funds for the Central Universities (D5000210817); in part by the Xi'an Unmanned System Security and Intelligent Communications ISTC Center; in part by the Special Funds for Central Universities Construction of World-Class Universities (Disciplines) and Special Development Guidance (0639022GH0202237 and 0639022SH0201237); in part by the Henan Key Scientific Research Program of Higher Education (23B510003, 21A510008, and 21A510009); and in part by the Henan Key Scientific and Technological Projects (212102210553).
Abstract: Ultra-dense low earth orbit (LEO) integrated satellite-terrestrial networks (UDLEO-ISTNs) can bring many benefits in terms of wide coverage, high capacity, and strong robustness. Meanwhile, the broadcast and open nature of satellite links also raises many challenges for transmission security protection, especially for eavesdropping defence. How to efficiently exploit the density of LEO satellites and ensure secure communication by leveraging physical layer security with the cooperation of jammers deserves further investigation. To our knowledge, using satellites as jammers in a UDLEO-ISTN is still a new problem, since existing works have mainly focused on this issue from the perspective of terrestrial networks. To this end, we study in this paper the cooperative secrecy communication problem in UDLEO-ISTNs by utilizing several satellites to send jamming signals to the eavesdroppers. An iterative scheme is proposed as our solution to maximize the system secrecy energy efficiency (SEE) via jointly optimizing transmit power allocation and user association. Extensive experimental results verify that our designed optimization scheme can significantly enhance the system SEE and achieve the optimal power allocation and user association strategies.
Funding: supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: We redesign the parameterized quantum circuit in the quantum deep neural network, construct a three-layer structure as the hidden layer, and then use classical optimization algorithms to train the parameterized quantum circuit, thereby proposing a novel hybrid quantum deep neural network (HQDNN) for image classification. After bilinear interpolation reduces the original image to a suitable size, an improved novel enhanced quantum representation (INEQR) is used to encode it into quantum states as the input of the HQDNN. Multi-layer parameterized quantum circuits are used as the main structure to implement feature extraction and classification. The output results of the parameterized quantum circuits are converted into classical data through quantum measurements and then optimized on a classical computer. To verify the performance of the HQDNN, we conduct binary and three-class classification experiments on the MNIST (Modified National Institute of Standards and Technology) dataset. In the first binary classification, the accuracy for digits 0 and 4 exceeds 98%. We then compare the three-class performance with other algorithms; the results on two datasets show that the classification accuracy is higher than that of the quantum deep neural network and a general quantum convolutional neural network.
Funding: financially supported by the National Natural Science Foundation of China (Grant Nos. 52130303, 52327802, 52303101, 52173078, and 51973158); the China Postdoctoral Science Foundation (2023M732579); the Young Elite Scientists Sponsorship Program by CAST (No. 2022QNRC001); the National Key R&D Program of China (No. 2022YFB3805702); and the Joint Funds of the Ministry of Education (8091B032218).
Abstract: Vertically oriented carbon structures constructed from low-dimensional carbon materials are ideal frameworks for high-performance thermal interface materials (TIMs). However, improving the interfacial heat-transfer efficiency of vertically oriented carbon structures is a challenging task. Herein, an orthotropic three-dimensional (3D) hybrid carbon network (VSCG) is fabricated by depositing vertically aligned carbon nanotubes (VACNTs) on the surface of a horizontally oriented graphene film (HOGF). The interfacial interaction between the VACNTs and HOGF is then optimized through an annealing strategy. After regulating the orientation structure of the VACNTs and filling the VSCG with polydimethylsiloxane (PDMS), VSCG/PDMS composites with excellent 3D thermal conductive properties are obtained. The highest in-plane and through-plane thermal conductivities of the composites are 113.61 and 24.37 W m^(-1) K^(-1), respectively. The high contact area of the HOGF and the good compressibility of the VACNTs imbue the VSCG/PDMS composite with low thermal resistance. In addition, the interfacial heat-transfer efficiency of the VSCG/PDMS composite as a TIM was improved by 71.3% compared to that of a state-of-the-art thermal pad. This new structural design can potentially realize high-performance TIMs that meet the need for high thermal conductivity and low contact thermal resistance in interfacial heat-transfer processes.
Funding: supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: We design a new hybrid quantum-classical convolutional neural network (HQCCNN) model based on parameterized quantum circuits. In this model, we use parameterized quantum circuits (PQCs) to redesign the convolutional layer of classical convolutional neural networks, forming a new quantum convolutional layer that achieves unitary transformations of quantum states and enables the model to extract hidden information from images more accurately. At the same time, we combine the classical fully connected layer with PQCs to form a new hybrid quantum-classical fully connected layer to further improve the classification accuracy. Finally, we use the MNIST dataset to test the potential of the HQCCNN. The results indicate that the HQCCNN performs well on classification problems. In binary classification tasks, the classification accuracy for digits 5 and 7 is as high as 99.71%. In multi-class classification, the accuracy also reaches 98.51%. Finally, we compare the performance of the HQCCNN with other models and find that the HQCCNN has better classification performance and convergence speed.
Abstract: As the demands for massive connections and vast coverage rapidly grow in next-generation wireless communication networks, rate splitting multiple access (RSMA) is considered a promising new access scheme, since it can provide higher efficiency with limited spectrum resources. In this paper, combining spectrum splitting with rate splitting, we propose to allocate resources with traffic offloading in hybrid satellite-terrestrial networks. A novel deep reinforcement learning method is adopted to solve this challenging non-convex problem. However, the never-ending learning process could prohibit its practical implementation. Therefore, we introduce a switch mechanism to avoid unnecessary learning. Additionally, the QoS constraint in the scheme can rule out unsuccessful transmissions. The simulation results validate the energy efficiency performance and the convergence speed of the proposed algorithm.
Funding: supported by the Major Science and Technology Projects of Henan Province, China (Grant No. 221100210600).
Abstract: With the wide application of drone technology, there is an increasing demand for the detection of radar return signals from drones. Existing detection methods mainly rely on time-frequency domain feature extraction and classical machine learning algorithms for image recognition. These methods suffer from the high dimensionality of image features, which leads to large input data sizes and noise that hinders learning. Therefore, this paper proposes to extract time-domain statistical features from drone radar return signals, reducing the feature dimension from 512x4 to 16. However, the downscaled feature data reduce the accuracy of traditional machine learning algorithms, so we propose a new hybrid quantum neural network with signal feature overlay projection (HQNN-SFOP). It reduces the dimensionality of the signal by extracting time-domain statistical features, introduces signal feature overlay projection to enhance the expressive ability of quantum computation on the signal features, and introduces quantum circuits to improve the neural network's ability to capture the inline relationships of features, thereby improving the accuracy and migration generalization ability of drone detection. To validate the effectiveness of the proposed method, we experimented with the MM model, which combines the real parameters of five commercial drones with random drone parameters to generate data simulating a realistic environment. The results show that the method based on time-domain statistical features is able to extract features at smaller scales and obtain higher accuracy on a dataset with an SNR of 10 dB. On the time-domain feature dataset, HQNN-SFOP obtains the highest accuracy compared with other conventional methods. In addition, HQNN-SFOP has good migration generalization ability on the five commercial drones and the random drone data under different SNR conditions. Our method verifies the feasibility and effectiveness of signal detection methods based on quantum computation and experimentally demonstrates that the advantages of quantum computation for information processing remain valid in the field of signal processing, providing a highly efficient method for drone detection using radar return signals.
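The 512x4 to 16 reduction can be sketched with per-channel time-domain statistics (a hedged guess at the feature set: mean, standard deviation, skewness, and kurtosis over four channels happen to yield 16 dimensions; the paper's exact features are not specified here):

```python
import math

def time_domain_features(sig):
    """Summarize one channel of a radar return with four statistics."""
    n = len(sig)
    mu = sum(sig) / n
    var = sum((v - mu) ** 2 for v in sig) / n
    sd = math.sqrt(var) or 1.0            # guard against a constant signal
    skew = sum(((v - mu) / sd) ** 3 for v in sig) / n
    kurt = sum(((v - mu) / sd) ** 4 for v in sig) / n
    return [mu, sd, skew, kurt]

# Four synthetic 512-sample channels stand in for a real 512x4 return.
channels = [[math.sin(0.1 * i + p) for i in range(512)] for p in (0.0, 0.5, 1.0, 1.5)]
vector = [f for ch in channels for f in time_domain_features(ch)]
print(len(vector))   # 16
```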
Funding: supported by the National Natural Science Foundation of China (No. 61772386); the National Key Research and Development Project (No. 2018YFB1305001); and the Fundamental Research Funds for the Central Universities (No. KJ02072021-0119).
Abstract: The Hybrid Power-line/Visible-light Communication (HPVC) network has been one of the most promising Cooperative Communication (CC) technologies for constructing the smart home, due to its superior communication reliability and hardware efficiency. Current research on HPVC networks focuses on the performance analysis and optimization of the Physical (PHY) layer, where the Power Line Communication (PLC) component only serves as the backbone to supply power to Light Emitting Diode (LED) devices. Designing a Media Access Control (MAC) protocol thus remains a great challenge, because it must allow both the PLC and Visible Light Communication (VLC) components to carry data transmission, i.e., to achieve true CC in an HPVC network. To solve this problem, we propose a new HPVC network MAC protocol (HPVC MAC) based on Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA), combining the IEEE 802.15.7 and IEEE 1901 standards. First, we add an Additional Assistance (AA) layer to provide channel selection strategies for sensor stations, so that each can complete data transmission on the selected channel via the specified CSMA/CA mechanism. Based on this, we give a detailed working principle of the HPVC MAC, followed by the construction of a joint analytical model for mathematical validation of the HPVC MAC. In the modeling process, the impacts of the PHY layer settings (including channel fading types and additive noise features), the CSMA/CA mechanisms of 802.15.7 and 1901, and practical configurations (such as traffic rate and transit buffer size) are comprehensively taken into consideration. Moreover, we prove that the proposed analytical model is solvable. Finally, through extensive simulations, we characterize the HPVC MAC performance under different system parameters and verify the correctness of the corresponding analytical model, with an average error rate of 4.62% between the simulation and analytical results.
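The CSMA/CA contention that both components rely on can be sketched as binary exponential backoff (a minimal illustration; the busy probability, window sizes, and retry limit are invented and do not follow the 802.15.7 or 1901 parameter tables):

```python
import random

def csma_ca_attempt(busy_prob, max_backoff_stage=5, cw_min=8, rng=None):
    """One transmission attempt: wait a random backoff, sense the channel,
    and double the contention window after each busy sensing (binary
    exponential backoff). Returns (stage, slots) on success, (None, None)
    when the retry limit is exhausted."""
    rng = rng or random.Random()
    cw = cw_min
    for stage in range(max_backoff_stage + 1):
        slots = rng.randrange(cw)        # random number of idle slots to wait
        if rng.random() > busy_prob:     # channel sensed idle: transmit
            return stage, slots
        cw *= 2                          # busy/collision: widen the window
    return None, None

# With a moderately busy channel, nearly all attempts succeed within the limit.
ok = sum(csma_ca_attempt(0.3, rng=random.Random(s))[0] is not None
         for s in range(100))
print(ok)
```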
Funding: supported by the Fundamental Research Funds for the Central Universities (No. 3122020072) and the Multi-investment Project of Tianjin Applied Basic Research (No. 23JCQNJC00250).
Abstract: A hybrid identification model based on multilayer artificial neural networks (ANNs) and the particle swarm optimization (PSO) algorithm is developed to improve the efficiency of simultaneously identifying the thermal conductivity and effective absorption coefficient of semitransparent materials. For the direct model, the spherical harmonic method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing, emitting, and non-scattering 2D axisymmetric gray medium in the setting of the laser flash method. For the identification part, the temperature field and the incident radiation field at different positions are first chosen as observables. Then, a traditional identification model based on the PSO algorithm is established. Finally, multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process. The results show that, compared with the traditional identification model, the time cost of the hybrid identification model is reduced by about 1000 times. Moreover, the hybrid identification model maintains a high level of accuracy even in the presence of measurement errors.
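The PSO half of the identification loop can be sketched as follows (a generic particle swarm minimizing a stand-in objective; the inertia and acceleration coefficients are common defaults, and the direct model or its ANN surrogate is abstracted as a cheap callable):

```python
import random

def pso_minimize(f, dim, bounds, particles=20, iters=100, seed=1):
    """Minimal PSO: each candidate parameter vector tracks a personal best
    and is pulled toward it and toward the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    V = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in X]
    pval = [f(x) for x in X]
    g = min(range(particles), key=pval.__getitem__)
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration (assumed)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            v = f(X[i])
            if v < pval[i]:
                pbest[i], pval[i] = X[i][:], v
                if v < gval:
                    gbest, gval = X[i][:], v
    return gbest, gval

# "Identify" two parameters by minimizing a residual against a simulated
# measurement; the quadratic objective here is only a placeholder.
best, val = pso_minimize(lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2,
                         dim=2, bounds=(-5.0, 5.0))
```

In the paper's hybrid scheme, `f` would evaluate the ANN surrogate instead of the expensive conduction-radiation solver, which is where the reported thousand-fold speed-up comes from.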
Abstract: With a limited number of labeled samples, hyperspectral image (HSI) classification is a difficult problem in current research. The graph neural network (GNN) has emerged as an approach to semi-supervised classification, and the application of GNNs to hyperspectral images has attracted much attention. However, existing GNN-based methods mainly use a single graph neural network or graph filter to extract HSI features, which does not take full advantage of the variety of graph neural networks (graph filters). Moreover, traditional GNNs suffer from oversmoothing. To alleviate these shortcomings, we introduce a deep hybrid multi-graph neural network (DHMG), where two different graph filters, i.e., the spectral filter and the autoregressive moving average (ARMA) filter, are utilized in two branches. The former extracts the spectral features of the nodes well, and the latter has a good suppression effect on graph noise. The network realizes information interaction between the two branches and takes good advantage of the different graph filters. In addition, to address the oversmoothing problem, a dense network is proposed, in which the local graph features are preserved. The dense structure satisfies the needs of different classification targets presenting different features. Finally, we introduce a GraphSAGE-based network to refine the graph features produced by the deep hybrid network. Extensive experiments on three public HSI datasets strongly demonstrate that the DHMG dramatically outperforms the state-of-the-art models.
Funding: the National Natural Science Foundation of China (Grant No. 61971057).
Abstract: To resolve the contradiction between limited spectrum resources and increasing communication demand, this paper proposes a wireless resource allocation scheme based on a Deep Q Network (DQN) to allocate radio resources in a downlink multi-user cognitive radio (CR) network with slicing. Secondary users (SUs) are multiplexed using non-orthogonal multiple access (NOMA). The SUs use a hybrid spectrum access mode to improve the spectral efficiency (SE). Considering the demand for multiple services, an enhanced mobile broadband (eMBB) slice and an ultra-reliable low-latency communication (URLLC) slice are established. The proposed scheme can maximize the SE while ensuring Quality of Service (QoS) for the users. This study establishes a mapping relationship between resource allocation and the DQN algorithm in the CR-NOMA network. According to the signal-to-interference-plus-noise ratio (SINR) of the primary users (PUs), the proposed scheme outputs the optimal channel selection and power allocation. The simulation results reveal that the proposed scheme converges faster and obtains higher rewards than the Q-learning scheme. Additionally, the proposed scheme achieves better SE than both the overlay-only and underlay-only modes.
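The DQN's action selection over joint (channel, power) choices can be approximated in miniature by single-state Q-learning (a hedged stand-in, not the paper's network; the action space, reward means, and hyperparameters are invented):

```python
import random

def q_learning(n_actions, reward_fn, episodes=3000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning over a single state: explore with
    probability eps, otherwise pick the action with the highest Q value,
    and update Q toward the observed reward."""
    rng = random.Random(seed)
    Q = [0.0] * n_actions
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(n_actions)                    # explore
        else:
            a = max(range(n_actions), key=Q.__getitem__)    # exploit
        r = reward_fn(a, rng)
        Q[a] += alpha * (r - Q[a])
    return Q

# Toy joint action space: 2 channels x 2 power levels. Action 2 yields the
# best mean spectral efficiency under the (abstracted) PU SINR constraint.
mean_se = [1.0, 0.5, 2.0, 1.2]
Q = q_learning(4, lambda a, rng: mean_se[a] + rng.gauss(0, 0.1))
best_action = max(range(4), key=Q.__getitem__)
```

A DQN replaces the Q table with a neural network so the policy can condition on a continuous state such as the PUs' SINR; the update rule and epsilon-greedy selection are the same in spirit.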
Funding: supported by the Kırıkkale University Department of Scientific Research Projects (2022/022).
Abstract: This paper addresses the urgent need to detect network security attacks, which have increased significantly in recent years, with high accuracy while avoiding their adverse effects. An intrusion detection system should respond seamlessly to attack patterns and approaches. The use of metaheuristic algorithms in attack detection can produce near-optimal solutions at low computational cost, and hybridizing such algorithms can further improve their performance; many studies are now conducted on this topic. In this study, a new hybrid approach using the Gray Wolf Optimizer (GWO) and Moth-Flame Optimization (MFO) algorithms was developed and applied to the widely used NSL-KDD, UNSW-NB15, and CIC IDS 2017 datasets, as well as to various benchmark functions. The ease of hybridizing the GWO algorithm, its simplicity, its ability to perform a global optimal search, and the success of the MFO algorithm in obtaining the best solution suggested that an effective solution could be obtained by combining these two algorithms. For these reasons, the developed hybrid algorithm aims to achieve better results by using the strengths of both the GWO and MFO algorithms. The results show a high level of success on the benchmark functions: the hybrid achieved better results on 12 of the 13 benchmark functions compared. In addition, the success rates obtained according to the evaluation criteria on the different datasets are also remarkable: the classification accuracies of 97.4%, 98.3%, and 99.2% obtained on the NSL-KDD, UNSW-NB15, and CIC IDS 2017 datasets, respectively, compare favorably with studies in the literature.
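The GWO half of the hybrid can be sketched as follows (a minimal grey wolf optimizer on a sphere benchmark; the MFO hybridization step is omitted, and the pack size, iteration count, and elitist leader handling are arbitrary choices for the sketch):

```python
import random

def gwo_minimize(f, dim, bounds, wolves=12, iters=120, seed=3):
    """Minimal GWO: the pack moves toward the three best wolves
    (alpha, beta, delta) with a decaying exploration factor."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(wolves)]
    for t in range(iters):
        X.sort(key=f)                       # leaders first (kept elitist here)
        alpha, beta, delta = X[0], X[1], X[2]
        a = 2.0 * (1 - t / iters)           # exploration factor decays 2 -> 0
        for i in range(3, wolves):
            for d in range(dim):
                pos = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    pos += leader[d] - A * abs(C * leader[d] - X[i][d])
                X[i][d] = min(hi, max(lo, pos / 3.0))
    X.sort(key=f)
    return X[0], f(X[0])

# Sphere function, a standard benchmark like those in the paper's comparison.
best, val = gwo_minimize(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```

In the hybrid of the paper, MFO's flame-following update would be interleaved with this pack movement; here only the GWO core is shown.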
Funding: the authors express their appreciation to the National Key Research and Development Project "Key Scientific Issues of Revolutionary Technology" (2019YFA0708300); the Strategic Cooperation Technology Projects of CNPC and CUPB (ZLZX2020-03); the Distinguished Young Foundation of the National Natural Science Foundation of China (52125401); and the Science Foundation of China University of Petroleum, Beijing (2462022SZBH002).
Abstract: Many scholars have focused on applying machine learning models to bottom hole pressure (BHP) prediction. However, the complex and uncertain conditions in deep wells make it difficult to capture the spatial and temporal correlations of measurement-while-drilling (MWD) data with traditional intelligent models. In this work, we develop a novel hybrid neural network that integrates a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) to predict BHP fluctuations more accurately. The CNN structure is used to analyze spatially local dependency patterns, and the GRU structure is used to discover depth-variation trends in the MWD data. To further improve the prediction accuracy, we explore two GRU-based structures, skip-GRU and attention-GRU, which can capture more of the long-term potential periodic correlations in drilling data. The different model structures, tuned by the Bayesian optimization (BO) algorithm, are then compared and analyzed. The results indicate that the hybrid models can extract the spatial-temporal information of the data effectively and predict more accurately than random forests, extreme gradient boosting, the back propagation neural network, the CNN, and the GRU. The CNN-attention-GRU model with the BO algorithm shows great superiority in prediction accuracy and robustness due to its hybrid network structure and attention mechanism, achieving the lowest mean absolute percentage error of 0.025%. This study provides a reference for extracting spatial and temporal characteristics of drilling data and guidance for managed pressure drilling in complex formations.
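The GRU recurrence at the heart of the hybrid model can be traced with a scalar cell (a hedged sketch with untrained, invented parameters; real models use vector states and learned weight matrices):

```python
import math

def gru_cell(x, h, W, U, b):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde.
    W, U, b hold per-gate scalar parameters keyed by 'z', 'r', 'h'."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    z = sig(W['z'] * x + U['z'] * h + b['z'])
    r = sig(W['r'] * x + U['r'] * h + b['r'])
    h_tilde = math.tanh(W['h'] * x + U['h'] * (r * h) + b['h'])
    return (1 - z) * h + z * h_tilde      # convex blend of old and new state

# Scalar toy parameters (assumed, untrained) just to trace the recurrence.
W = {'z': 0.5, 'r': 0.5, 'h': 1.0}
U = {'z': 0.3, 'r': 0.3, 'h': 0.8}
b = {'z': 0.0, 'r': 0.0, 'h': 0.0}
h = 0.0
for x in [0.1, 0.4, -0.2, 0.3]:           # a short MWD-like sequence (invented)
    h = gru_cell(x, h, W, U, b)
```

Because the new state is a gated blend of the old state and a tanh-bounded candidate, the hidden value stays bounded while still tracking the depth-wise trend of the inputs.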
Funding: supported by the National Science Foundation under Grant No. 62066039.
Abstract: Recently, speech enhancement methods based on Generative Adversarial Networks (GANs) have achieved good performance on time-domain noisy signals. However, the training of GANs suffers from problems such as convergence difficulty and model collapse. In this work, an end-to-end speech enhancement model based on the Wasserstein Generative Adversarial Network is proposed, with several improvements made to obtain a faster convergence speed and better generated speech quality. Specifically, in the generator's encoder, each convolution layer adopts different convolution kernel sizes to obtain speech coding information at multiple scales; a gated linear unit is introduced to alleviate the vanishing gradient problem as the network depth increases; the gradient penalty of the discriminator is replaced with spectral normalization to accelerate the convergence rate of the model; and a hybrid penalty term composed of L1 regularization and a scale-invariant signal-to-distortion ratio is introduced into the loss function of the generator to improve the quality of the generated speech. The experimental results on both the TIMIT corpus and a Tibetan corpus show that the proposed model improves speech quality significantly and accelerates the convergence of the model.
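The scale-invariant signal-to-distortion ratio used in the generator's hybrid penalty can be computed as follows (the standard SI-SDR definition in plain Python; the test signals are synthetic and the epsilon floor is an implementation choice):

```python
import math

def si_sdr(estimate, target, eps=1e-8):
    """SI-SDR in dB: project the estimate onto the target, then compare the
    energy of the scaled target component to the energy of the residual."""
    dot = sum(e * t for e, t in zip(estimate, target))
    t_energy = sum(t * t for t in target) + eps
    scale = dot / t_energy
    s = [scale * t for t in target]                   # target component
    e = [est - si for est, si in zip(estimate, s)]    # distortion component
    num = sum(v * v for v in s) + eps
    den = sum(v * v for v in e) + eps
    return 10 * math.log10(num / den)

clean = [math.sin(0.05 * i) for i in range(400)]
noisy = [c + 0.1 * math.cos(0.3 * i) for i, c in enumerate(clean)]
score = si_sdr(noisy, clean)   # around 20 dB for this noise level
```

As a loss term the negative SI-SDR is minimized, which pushes the generator's output toward the clean target regardless of its overall gain.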
Funding: supported by the NSFC (No. 11971296) and the National Key Research and Development Program of China (No. 2021YFA1003004).
Abstract: We propose new hybrid Lagrange neural networks, called LaNets, to predict the numerical solutions of partial differential equations. That is, we embed Lagrange interpolation and small-sample learning into deep neural network frameworks. Concretely, we first perform Lagrange interpolation in front of the deep feedforward neural network. The Lagrange basis function has a neat structure and strong expressive ability, which makes it suitable as a preprocessing tool for pre-fitting and feature extraction. Second, we introduce small-sample learning into training, which helps guide the model to correct itself quickly. Taking advantage of the theoretical support of traditional numerical methods and the efficient allocation of modern machine learning, LaNets achieve higher predictive accuracy than the state-of-the-art work. The stability and accuracy of the proposed algorithm are demonstrated through a series of classical numerical examples, including the one-dimensional Burgers equation, one-dimensional carburizing diffusion equations, the two-dimensional Helmholtz equation, and the two-dimensional Burgers equation. Experimental results validate the robustness, effectiveness, and flexibility of the proposed algorithm.
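The Lagrange-interpolation pre-fitting step can be illustrated in plain Python (a minimal sketch; the node set and the sampled function are invented for the example, and the real LaNets feed the interpolated features into a feedforward network):

```python
def lagrange_basis(nodes, j, x):
    """L_j(x): equals 1 at node j and 0 at every other node."""
    out = 1.0
    for m, xm in enumerate(nodes):
        if m != j:
            out *= (x - xm) / (nodes[j] - xm)
    return out

def lagrange_interpolate(nodes, values, x):
    """Pre-fit a scalar function from a handful of samples, as the LaNets
    preprocessing layer does before the deep feedforward network."""
    return sum(v * lagrange_basis(nodes, j, x) for j, v in enumerate(values))

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]
values = [x ** 2 for x in nodes]                   # sample u(x) = x^2
print(round(lagrange_interpolate(nodes, values, 0.75), 4))   # 0.5625
```

With five nodes the interpolant reproduces any polynomial up to degree four exactly, which is the "pre-fitting" expressiveness the abstract attributes to the Lagrange basis.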