Unbalanced traffic distribution in cellular networks results in congestion and degrades spectrum efficiency. To tackle this problem, we propose an Unmanned Aerial Vehicle (UAV)-assisted wireless network in which the UAV acts as an aerial relay to divert some traffic from the overloaded cell to its adjacent underloaded cell. To fully exploit its potential, we jointly optimize the UAV position, user association, spectrum allocation, and power allocation to maximize the sum-log-rate of all users in two adjacent cells. To tackle the complicated joint optimization problem, we first design a genetic-based algorithm to optimize the UAV position. Then, we simplify the problem by theoretical analysis and devise a low-complexity algorithm based on the branch-and-bound method to obtain the optimal user association and spectrum allocation schemes. We further propose an iterative power allocation algorithm based on sequential convex approximation theory. The simulation results indicate that the proposed UAV-assisted wireless network is superior to the terrestrial network in both utility and throughput, and that the proposed algorithms substantially improve network performance in comparison with the other schemes.
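The sum-log-rate objective mentioned above (proportional fairness) can be sketched in a few lines. The Shannon-rate channel model, the noise level, and all numeric values below are illustrative assumptions; the paper's exact system model is not reproduced here.

```python
import math

def sum_log_rate(bandwidths_hz, powers_w, gains, noise_w=1e-12):
    """Proportional-fair utility: sum of log(rate) over all users.

    Each user's rate follows an assumed Shannon formula
    r_k = B_k * log2(1 + p_k * g_k / N0); inputs are hypothetical.
    """
    utility = 0.0
    for b, p, g in zip(bandwidths_hz, powers_w, gains):
        rate = b * math.log2(1.0 + p * g / noise_w)
        utility += math.log(rate)
    return utility

# Two users in adjacent cells: the log-utility rewards raising the
# weaker user's rate more than boosting an already strong user.
u = sum_log_rate([1e6, 1e6], [0.1, 0.1], [1e-9, 1e-10])
```

The logarithm in the objective is what makes balancing traffic between the overloaded and underloaded cells pay off in utility, not just raw throughput.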
The amount of oxygen blown into the converter is one of the key parameters for control of the converter blowing process, and it directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in the converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including an extreme learning machine, a back propagation neural network, and a DNN. The test results indicate that the hybrid model with 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m^(3) is 96.67%; the determination coefficient (R^(2)) and root mean square error (RMSE) are 0.6984 and 150.03 m^(3), respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R^(2) and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
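The three-step flow described above can be sketched as follows. The weighted-average fusion rule and every numeric value are illustrative assumptions, not the paper's actual integration scheme:

```python
def predict_blow_time(obm_volume, dnn_volume, supply_intensity, w=0.5):
    """Blend the mechanism (OBM) and data-driven (DNN) oxygen-volume
    predictions, then convert volume to blowing time.

    The weighted average is only a stand-in fusion rule and `w` is a
    hypothetical weight; the paper integrates the two models its own way.
    """
    volume = w * obm_volume + (1.0 - w) * dnn_volume   # m^3
    return volume / supply_intensity                    # min, if intensity is m^3/min

t = predict_blow_time(obm_volume=8200.0, dnn_volume=8400.0, supply_intensity=600.0)
```

The last division is the third step of the method: blowing time follows directly once the fused oxygen consumption volume and the heat's supply intensity are known.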
Concrete subjected to fire loads is susceptible to explosive spalling, which can lead to the exposure of reinforcing steel bars to the fire, substantially jeopardizing structural safety and stability. The spalling of fire-loaded concrete is closely related to the evolution of pore pressure and temperature. Conventional analytical methods involve the resolution of complex, strongly coupled multifield equations, necessitating significant computational effort. To rapidly and accurately obtain the distributions of pore pressure and temperature, the Pix2Pix model, celebrated for its capabilities in image generation, is adopted in this work. The open-source dataset used herein features RGB images we generated using a sophisticated coupled model, while the grayscale images encapsulate the 15 principal variables influencing spalling. After conducting a series of tests with different layer configurations, activation functions, and loss functions, a Pix2Pix model suitable for assessing the spalling risk of fire-loaded concrete has been meticulously designed and trained. The applicability and reliability of the Pix2Pix model in concrete parameter prediction are verified by comparing its outcomes with those derived from the strongly coupled THC model. Notably, for practical engineering applications, our findings indicate that utilizing monochrome images as the initial target for analysis yields more dependable results. This work not only offers valuable insights for civil engineers specializing in concrete structures but also establishes a robust methodological approach for researchers seeking to create similar predictive models.
As a new networking paradigm, Software-Defined Networking (SDN) enables us to cope with the limitations of traditional networks. SDN uses a controller that has a global view of the network and switch devices that act as packet forwarding hardware, known as "OpenFlow switches". Since load balancing is essential for distributing workload across servers in data centers, we propose an effective load balancing scheme in SDN using a genetic programming approach, called Genetic Programming based Load Balancing (GPLB). We formulate the problem to find a path: 1) whose bottleneck switch (the switch with the lowest capacity along the path) is the best among the candidate paths, 2) that is the shortest, and 3) that requires the fewest possible operations. To choose the real-time least loaded path, GPLB immediately calculates the integrated load of paths based on the information it receives from the SDN controller. Hence, in this design, the controller periodically sends the load information of each path to the load balancing algorithm, and the load balancing algorithm returns the least loaded path to the controller. In this paper, we use the Mininet emulator and the OpenDaylight controller to evaluate the effectiveness of GPLB. The simulation study of GPLB shows a large improvement in performance metrics, with latency and jitter minimized. GPLB also achieves the maximum throughput in comparison with related works and performs better in heavy traffic situations. The results show that our model performs smartly without introducing further overhead.
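The three path criteria above can be combined into a simple score. This is only a sketch of the idea: the actual fitness function evolved by GPLB's genetic programming is not reproduced here, and the weights `alpha`/`beta` are hypothetical.

```python
def path_score(path_loads, alpha=1.0, beta=0.1):
    """Score a candidate path by its bottleneck switch (the most loaded
    one along the path) and its length; lower is better."""
    bottleneck = max(path_loads)            # worst switch on the path
    return alpha * bottleneck + beta * len(path_loads)

def least_loaded_path(paths):
    """Return the index of the best path among candidate load vectors,
    as the controller would receive from the balancing algorithm."""
    return min(range(len(paths)), key=lambda i: path_score(paths[i]))

# Three candidate paths, each listed as per-switch load fractions.
best = least_loaded_path([[0.9, 0.2], [0.4, 0.5, 0.3], [0.6, 0.6]])
```

In the real design this scoring would run each time the controller pushes fresh per-path load statistics, so the returned index always reflects the current least loaded path.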
In this paper, a sender-initiated protocol that uses a fuzzy logic control method is applied to improve computer network performance by balancing loads among computers. This model devises a sender-initiated protocol for load transfer to achieve load balancing. Groups are formed, and every group has a node called a designated representative (DR). During load transfers, loads are transferred through the DR of each group to achieve load balancing. The simulation results show that the performance of the proposed protocol is better than that of the compared conventional method, and that the protocol is more stable than the method without fuzzy logic control.
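A minimal sketch of a sender-initiated, fuzzy decision of the kind described above. The membership breakpoints, the threshold, and the exact transfer rule are all assumptions for illustration; the paper's fuzzy controller is not specified in the abstract.

```python
def overload_degree(load, low=0.4, high=0.8):
    """Fuzzy membership of 'node is overloaded' in [0, 1], linear
    between the hypothetical breakpoints low and high."""
    if load <= low:
        return 0.0
    if load >= high:
        return 1.0
    return (load - low) / (high - low)

def should_transfer(sender_load, dr_group_load, threshold=0.5):
    """Sender-initiated rule: offload through the group's DR when the
    sender is fuzzily overloaded and the DR's group is less loaded."""
    return overload_degree(sender_load) > threshold and dr_group_load < sender_load

d = should_transfer(0.75, 0.30)
```

The fuzzy degree, rather than a hard cutoff, is what gives such a controller its stability: mildly loaded senders do not flip-flop between transferring and keeping work.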
Energy-efficient data gathering in multi-hop wireless sensor networks was studied, considering that different nodes produce different amounts of data in realistic environments. A novel dominating-set-based clustering protocol (DSCP) was proposed to solve the data gathering problem in this scenario. In DSCP, a node evaluates the potential lifetime of the network (from its local point of view) assuming that it acts as the cluster head, and claims to be a tentative cluster head if it maximizes the potential lifetime. When evaluating the potential lifetime of the network, a node considers not only its remaining energy, but also other factors including its traffic load, the number of its neighbors, and the traffic loads of its neighbors. A tentative cluster head becomes a final cluster head with a probability inversely proportional to the number of tentative cluster heads that cover its neighbors. The protocol terminates in O(n/lg n) steps, and its total message complexity is O(n^2/lg n). Simulation results show that DSCP can effectively prolong the lifetime of multi-hop networks with unbalanced traffic loads. Compared with EECT, the network lifetime is prolonged by 56.6% on average.
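The probabilistic promotion rule above (tentative head becomes final with probability inversely proportional to the number of competing tentative heads) can be sketched directly. The function names are hypothetical:

```python
import random

def final_head_probability(covering_tentative_heads):
    """Probability that a tentative cluster head becomes final:
    inversely proportional to the number of tentative heads that
    cover its neighbors (clamped so an uncontested node keeps p=1)."""
    return 1.0 / max(1, covering_tentative_heads)

def becomes_final(covering_tentative_heads, rng=random.random):
    """One Bernoulli trial with that probability."""
    return rng() < final_head_probability(covering_tentative_heads)

p = final_head_probability(4)   # four competing tentative heads
```

This inverse scaling keeps the expected number of final cluster heads in a contested neighborhood near one, which is what lets the protocol converge quickly.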
Real-time applications based on Wireless Sensor Network (WSN) technologies are quickly increasing due to intelligent surroundings. Among the most significant resources in a WSN are battery power and security. Clustering strategies improve the power factor and secure the WSN environment. Forwarding data in a WSN consumes more electricity. Though numerous clustering methods have been developed to reduce energy consumption, there remains a risk of unequal load balancing, resulting in a decrease in the network's lifetime due to network inequalities and reduced security. These possibilities arise due to the cluster head's limited life span. The cluster heads (CHs) are in charge of all activities and control intra-cluster and inter-cluster interactions. The proposed method uses a lifetime-centric load balancing mechanism (LCLBM) and cluster-based energy optimization using a mobile sink algorithm (CEOMS). LCLBM emphasizes CH selection, system architecture, and optimal distribution of CHs. In addition, LCLBM employs an assistant cluster head (ACH) for load balancing. Power consumption, communication latency, the frequency of failing nodes, high security, and one-way delay are essential variables to consider while evaluating LCLBM. CEOMS chooses a cluster leader based on the influence of these parameters on the energy balance of the WSN. According to the simulated findings, the suggested LCLBM-CEOMS method increases cluster head selection self-adaptability, improves the network's lifetime, decreases data latency, and balances network capacity.
The Internet of Vehicles (IoV) has been widely researched in recent years, and cloud computing has been one of its key technologies. Although cloud computing provides high-performance compute, storage, and networking services, the IoV still suffers from high processing latency, limited mobility support, and weak location awareness. In this paper, we integrate fog computing and software-defined networking (SDN) to address these problems. Fog computing extends computing and storage to the edge of the network, which can decrease latency remarkably while enabling mobility support and location awareness. Meanwhile, SDN provides flexible centralized control and global knowledge of the network. To apply the software-defined cloud/fog networking (SDCFN) architecture in the IoV effectively, we propose a novel SDN-based modified constrained-optimization particle swarm optimization (MPSO-CO) algorithm, which uses the reverse flight of mutation particles and a linearly decreasing inertia weight to enhance the performance of constrained-optimization particle swarm optimization (PSO-CO). The simulation results indicate that the SDN-based MPSO-CO algorithm can effectively decrease latency and improve the quality of service (QoS) in the SDCFN architecture.
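The linearly decreasing inertia weight mentioned above is a standard PSO ingredient and can be sketched as follows. The bounds `w_max`/`w_min` are common PSO defaults, not values stated in the abstract, and the fixed `r1`/`r2` replace the usual random draws only for reproducibility:

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decreasing inertia weight: large early (global
    exploration), small late (local refinement)."""
    return w_max - (w_max - w_min) * t / t_max

def velocity_update(w, v, x, pbest, gbest, c1=2.0, c2=2.0, r1=0.5, r2=0.5):
    """Standard one-dimensional PSO velocity update using the current
    inertia weight; r1/r2 would normally be uniform random numbers."""
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

w_mid = inertia_weight(50, 100)   # halfway through the run
```

MPSO-CO layers its mutation-particle reverse flight on top of this schedule; the sketch shows only the shared inertia-weight mechanism.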
In order to improve turbine internal efficiency and lower manufacturing cost, a new highly loaded rotating blade has been developed. A 3D optimization design method based on an artificial neural network and a genetic algorithm is adopted to construct the blade shape. The blade is stacked by the center of gravity in the radial direction with five sections. For each blade section, independent suction and pressure sides are constructed from the camber line using Bezier curves. Three-dimensional flow analysis is carried out to verify the performance of the new blade. It is found that the new blade improves blade performance by 0.5%. Consequently, it is verified that the new blade is effective in improving turbine internal efficiency and in lowering turbine weight and manufacturing cost by reducing the blade number by about 15%.
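The Bezier-curve construction of each section's suction and pressure sides can be illustrated with de Casteljau's evaluation algorithm. The control points below are purely illustrative, not the paper's blade geometry:

```python
def bezier_point(control_points, t):
    """Evaluate a Bezier curve at parameter t in [0, 1] via de
    Casteljau's algorithm: repeated pairwise linear interpolation
    of the control polygon until one point remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic curve: endpoints are interpolated exactly, the middle
# control point bends the side away from the camber line.
mid = bezier_point([(0.0, 0.0), (0.5, 0.3), (1.0, 0.0)], 0.5)
```

A blade-side curve built this way stays inside the convex hull of its control points, which makes the control polygon a convenient set of optimization variables for the genetic algorithm.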
Concern about the alteration of natural sediment flow caused by water resources development has been raised in many river basins around the world, especially in developing and remote regions where sediment data are poorly gauged or ungauged. Since suspended sediment load (SSL) is predominant, the objectives of this research are to: 1) simulate the monthly average SSL (SSLm) of four catchments using an artificial neural network (ANN); 2) assess the application of the calibrated ANN (Cal-ANN) models in three ungauged catchment representatives (UCRs) before using them to predict the SSLm of three actual ungauged catchments (AUCs) in the Tonle Sap River Basin; and 3) estimate the annual SSL (SSLA) of each AUC with and without dam-reservoirs. The model performance for total load (SSLT) prediction was also investigated because it is important for dam-reservoir management. In simulation, the ANN yielded very satisfactory results, with the determination coefficient (R2) ranging from 0.81 to 0.94 in the calibration stage and 0.63 to 0.87 in the validation stage. The Cal-ANN models also performed well in the UCRs, with R2 ranging from 0.59 to 0.64. From the results of this study, one can estimate the SSLm and SSLT of ungauged catchments with an accuracy of 0.61 in terms of R2 and 34.06% in terms of absolute percentage bias, respectively. The SSLA of the AUCs was found to be between 159,281 and 723,580 t/year. In combination with Brune's method, dam-reservoirs were estimated to reduce SSLA by between 47% and 68%. This result is key information for the sustainable development of such infrastructure.
Cloud computing is a collection of disparate resources or services, a web of massive infrastructures, aimed at achieving maximum utilization with higher availability at minimized cost. One of the most attractive applications of cloud computing is distributed information processing. Security, privacy, energy saving, reliability, and load balancing are the major challenges facing cloud computing and most information technology innovations. Load balancing, the process of redistributing workload among all nodes in a network to improve resource utilization and job response time while avoiding overloading some nodes when others are underloaded or idle, is a major challenge. Thus, this research aims to design a novel load balancing system for a cloud computing environment. The research is based on modifying existing approaches, namely particle swarm optimization (PSO), honeybee, and ant colony optimization (ACO), with mathematical expressions to form a novel approach called P-ACOHONEYBEE. Experiments were conducted on response time and throughput. The response times of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 2791, 2780, 2784, 2767, 2727, and 2599 ms, respectively. The throughputs of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 7451, 7425, 7398, 7357, 7387, and 7482 bps, respectively. It is observed that the P-ACOHONEYBEE approach produces the lowest response time, the highest throughput, and overall improved performance for the 10 nodes. The research helps manage the imbalance drawback by maximizing throughput and reducing response time with scalability and reliability.
To evaluate the nitrogen pollution load in an aquifer, a water and nitrogen balance analysis was conducted over a thirty-five year period at five yearly intervals. First, we established a two-horizon model comprising ...To evaluate the nitrogen pollution load in an aquifer, a water and nitrogen balance analysis was conducted over a thirty-five year period at five yearly intervals. First, we established a two-horizon model comprising a channel/soil horizon, and an aquifer horizon, with exchange of water between the aquifer and river. The nitrogen balance was estimated from the product of nitrogen concentration and water flow obtained from the water balance analysis. The aquifer nitrogen balance results were as follows: 1) In the aquifer horizon, the total nitrogen pollution load potential (NPLP) peaked in the period 1981-1990 at 1800 t·yr-1;following this the NPLP rapidly decreased to about 600 t·yr-1 in the period 2006-2010. The largest NPLP input component of 1000 t·yr-1 in the period 1976-1990 was from farmland. Subsequently, farmland NPLP decreased to only 400 t·yr-1 between 2006 and 2010. The second largest input component, 600 t·yr-1, was effluent from wastewater treatment works (WWTWs) in the period 1986-1990;this also decreased markedly to about 100 t·yr-1 between 2006 and 2010;2) The difference between input and output in the aquifer horizon, used as an index of groundwater pollution, peaked in the period 1986-1990 at about 1200 t·yr-1. This gradually decreased to about 200 t·yr-1 by 2006-2010. 3) The temporal change in NPLP coincided with the nitrogen concentration of the rivers in the study area. In addition, nitrogen concentrations in two test wells were 1.0 mg·l-1 at a depth of 150 m and only 0.25 mg·l-1 at 50 m, suggesting gradual percolation of the nitrogen polluted water deeper in the aquifer.展开更多
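The balance estimate above is the product of concentration and flow; the only subtlety is unit conversion. A minimal sketch, with all numeric inputs hypothetical:

```python
def nitrogen_load_t_per_year(concentration_mg_per_l, flow_m3_per_day):
    """Nitrogen load as concentration x water flow, as in the balance
    analysis. 1 mg/l equals 1 g/m^3, so the product is g/day; convert
    to tonnes per year."""
    grams_per_day = concentration_mg_per_l * flow_m3_per_day
    return grams_per_day * 365.0 / 1e6   # g/day -> t/year

load = nitrogen_load_t_per_year(5.0, 100000.0)   # 5 mg/l in 1e5 m^3/day
```

Summing such terms over each input (farmland leaching, WWTW effluent, river exchange) for a five-year interval yields the NPLP figures reported in the abstract.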
In this study, recurrent networks for downscaling meteorological fields of the ERA-40 re-analysis dataset, with a focus on the meso-scale water balance, were investigated. Two types of recurrent neural networks were used. The first approach couples a recurrent neural network with a distributed watershed model, and the second is a nonlinear autoregressive network with exogenous inputs (NARX), which directly predicts the components of the water balance. The approaches were deployed for a meso-scale catchment area in the Free State of Saxony, Germany. The results show that the coupled approach did not perform as well as the NARX network, although the meteorological output of the coupled approach reaches adequate quality. However, as input for the watershed model, the coupled model generates insufficient daily precipitation sums and predicts too few wet days. Hence the long-term annual cycle of the water balance could not be preserved with acceptable quality, in contrast to the NARX approach. The residual storage change term indicates physical restrictions on the plausibility of the neural networks, whereas the physically based correlations among the components of the water balance were preserved more accurately by the coupled approach.
To improve the security and reliability of a distribution network, several issues, such as the influence of operation constraints, real-time load margin calculation, and online security level evaluation, are of great significance. In this paper, a mathematical model for online assessment of the load capability of a distribution network is established, and a repetitive power flow calculation algorithm is proposed to solve it. With assessment on three levels (the entire distribution network, a sub-area of the network, and a load bus), the security level of the current operation mode and the load transfer capability during an outage are obtained. The results can provide guidelines for prevention control as well as restoration control. Simulation results show that the method is simple and fast, and can be applied to distribution networks of any voltage level while taking all operation constraints into account.
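The repetitive-calculation idea can be sketched in a toy form: grow the load step by step, re-checking the operating constraint each time, until it binds. A real implementation re-runs a full power flow at every step and checks voltage, thermal, and other limits; the single feeder limit here stands in for all of those, and every value is hypothetical:

```python
def load_margin(base_load_mw, feeder_limit_mw, step_mw=1.0):
    """Repetitively increase the load until the constraint would be
    violated; the margin is the extra load the network can still carry."""
    load = base_load_mw
    while load + step_mw <= feeder_limit_mw:
        load += step_mw                     # one more 'power flow' step
    return load - base_load_mw

margin = load_margin(base_load_mw=42.0, feeder_limit_mw=50.0)
```

Running the same loop per sub-area and per load bus gives the three assessment levels mentioned in the abstract.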
In this article, we construct a triangle-growing network with tunable clusters and study social balance dynamics in this network. The built network, which reflects more features of real communities, has more triangle relations than an ordinary randomly growing network. We then apply local triad social dynamics to the built network. The effects of different cluster coefficients and initial states on the final stationary states are discussed. Some new features of the sparse networks are found as well.
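A triangle-growing construction with a tunable clustering knob can be sketched as below. The specific growth rule (attach to a random node, then with probability `p_triangle` also to one of its neighbors, closing a triangle) is an illustrative stand-in for the paper's construction, not a reproduction of it:

```python
import random

def grow_triangle_network(n, p_triangle, seed=0):
    """Grow an n-node network from a single edge. Each new node links to
    a random existing node; with probability p_triangle it also links to
    one of that node's neighbors, closing a triangle. p_triangle tunes
    the cluster coefficient. Edges are stored as sorted (u, v) tuples."""
    rng = random.Random(seed)
    edges = {(0, 1)}
    neighbors = {0: {1}, 1: {0}}
    for new in range(2, n):
        target = rng.randrange(new)
        neighbors.setdefault(new, set())
        edges.add((min(new, target), max(new, target)))
        neighbors[new].add(target); neighbors[target].add(new)
        candidates = neighbors[target] - {new}
        if candidates and rng.random() < p_triangle:
            third = rng.choice(sorted(candidates))
            edges.add((min(new, third), max(new, third)))
            neighbors[new].add(third); neighbors[third].add(new)
    return edges

g = grow_triangle_network(50, 0.8)
```

Raising `p_triangle` toward 1 makes nearly every arrival close a triangle, which is the "more triangle relations than ordinary random growth" property the abstract highlights.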
Accurately predicting the fluid forces acting on the surface of a structure is crucial in engineering design. However, this task becomes particularly challenging in turbulent flow, due to the complex and irregular changes in the flow field. In this study, we propose a novel deep learning method, named mapping network-coordinated stacked gated recurrent units (MSU), for predicting the pressure on a circular cylinder from velocity data. Specifically, our coordinated learning strategy is designed to extract the most critical velocity point for prediction, a process that has not been explored before. In our experiments, MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder. This method significantly reduces the workload of data measurement in practical engineering applications. Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects. Furthermore, the comparison results show that MSU produces more precise predictions, even outperforming models that use all velocity field points. Compared with state-of-the-art methods, MSU shows an average improvement of more than 45% in various indicators such as root mean square error (RMSE). Through comprehensive physical verification, we established that MSU's predictions closely align with pressure field data obtained in real turbulence fields. This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios. The code is available at https://github.com/zhangzm0128/MSU.
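RMSE, the headline indicator above, is worth pinning down since improvement percentages depend on it. A minimal definition (the sample values are arbitrary):

```python
def rmse(predicted, observed):
    """Root mean square error between two equal-length sequences:
    the square root of the mean squared pointwise difference."""
    n = len(predicted)
    return (sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n) ** 0.5

e = rmse([1.0, 2.0, 4.0], [1.0, 2.0, 1.0])
```

For MSU this would be computed over the 100 predicted pressure points against the reference turbulence data, then averaged across time steps.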
The concurrent processing and load capacity of a single server cannot meet the growing demand of users for a variety of services in a campus network system. This paper proposes solving this problem using load balancing techniques based on LVS-NAT. It discusses the key technologies of LVS-NAT, then designs, implements, and tests a campus network service system with LVS-NAT load balancing. The results show that the system effectively improves the concurrent processing and load capacity of the servers and provides a good reference for building an efficient and stable digital campus network system.
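A minimal LVS-NAT configuration of the kind the paper describes can be sketched with `ipvsadm` on the director node. All addresses are placeholders for the campus deployment, and the round-robin scheduler is only one of the schedulers LVS offers:

```shell
# Define one virtual service on the campus VIP, scheduled round-robin (-s rr).
ipvsadm -A -t 10.0.0.10:80 -s rr

# Attach two real servers behind the director; -m selects NAT (masquerading)
# forwarding, the mode this system is built on.
ipvsadm -a -t 10.0.0.10:80 -r 192.168.1.11:80 -m
ipvsadm -a -t 10.0.0.10:80 -r 192.168.1.12:80 -m

# Inspect the virtual service table (numeric output).
ipvsadm -L -n
```

In NAT mode the real servers must route their replies back through the director (typically by using it as their default gateway), which is the main operational constraint of this technique.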
Wireless Mesh Networks (WMNs) are envisioned to support the wired backbone with a wireless Backbone Network (BNet) providing internet connectivity to large-scale areas. With a wide range of internet-oriented applications with different Quality of Service (QoS) requirements, large-scale WMNs should have good scalability and large bandwidth. In this paper, a Load Aware Adaptive Backbone Synthesis (LAABS) algorithm is proposed to automatically balance traffic flow in the WMN. The BNet dynamically splits into smaller sizes or merges into bigger ones according to statistical load information from the Backbone Nodes (BNs). Simulation results show that LAABS generates moderate BNet sizes and converges quickly, thus providing a scalable and stable BNet to facilitate traffic flow.
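The split/merge decision driven by statistical load reports can be sketched as a simple threshold rule. The thresholds and the use of a plain average are hypothetical; the paper's statistics are not detailed in the abstract:

```python
def adapt_backbone(bn_loads, split_threshold=0.8, merge_threshold=0.3):
    """LAABS-style decision sketch: split the backbone when the average
    backbone-node load is high, merge when it is low, else keep size."""
    avg = sum(bn_loads) / len(bn_loads)
    if avg > split_threshold:
        return "split"
    if avg < merge_threshold:
        return "merge"
    return "keep"

action = adapt_backbone([0.9, 0.85, 0.95])   # heavily loaded backbone
```

The gap between the two thresholds provides hysteresis, which is one way such an algorithm avoids oscillating between splitting and merging.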
A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability in the geographic distribution of the Earth's population leads to an uneven service volume distribution of access service. Moreover, the limited resources of satellites are far from sufficient to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales significantly affect the overall network throughput of an LEO satellite network. Then, we propose a multi-region cooperative traffic scheduling algorithm. The algorithm migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network while sacrificing some end-to-end forwarding latency. This algorithm can utilize global satellite resources and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites. Based on the model, we build a system testbed using OMNeT++ to compare the proposed method with existing techniques. The simulations show that our proposed method can reduce the packet loss probability by 30% and improve the resource utilization ratio by 3.69%.
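The core migration step (move low-grade traffic out of a hotspot until the coldspot's spare capacity is used up) can be sketched greedily. Flow encoding and all numbers are illustrative assumptions:

```python
def migrate_low_grade(hotspot_flows, coldspot_capacity):
    """Move the lowest-grade flows from a hotspot region to a coldspot
    region until its spare capacity is exhausted. Flows are
    (priority, demand) pairs; lower priority numbers migrate first."""
    migrated, remaining, spare = [], [], coldspot_capacity
    for prio, demand in sorted(hotspot_flows):   # low-grade first
        if demand <= spare:
            migrated.append((prio, demand))
            spare -= demand
        else:
            remaining.append((prio, demand))
    return migrated, remaining

moved, kept = migrate_low_grade([(1, 4), (2, 3), (3, 5)], coldspot_capacity=7)
```

Migrating only low-grade flows is what lets the scheme trade a bounded amount of extra end-to-end latency for a large gain in overall throughput.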
This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. It is necessary to distribute the various computational tasks to appropriate computing node resources in accordance with task dependencies to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, the availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring the minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In quantifying makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, Q-DRL exhibits significantly enhanced performance in the scheduling of workflow tasks, decreasing the average by 2.27% and 4.71% when compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
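The Q-learning rule at the heart of a Q-DRL-style scheduler can be shown in its tabular form. The paper pairs the rule with deep networks; this tabular sketch, with hypothetical state/action encodings and parameter values, only illustrates the underlying update:

```python
def q_update(q, state, action, reward, next_state, n_actions=2,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning update:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
    In a scheduler, states would encode pending workflow subtasks and
    node loads, and actions would assign a subtask to a node."""
    best_next = max(q.get((next_state, a), 0.0) for a in range(n_actions))
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q[(state, action)]

q = {}
# A negative reward proportional to elapsed makespan discourages slow
# assignments; the first update from an empty table moves Q toward it.
v = q_update(q, state=0, action=1, reward=-3.0, next_state=1)
```

Replacing the table lookup with a neural network over the dependency graph is what turns this rule into the deep variant evaluated in the paper.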
Funding: supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807003; in part by the National Natural Science Foundation of China under Grants 61901381, 62171385, and 61901378; in part by the Aeronautical Science Foundation of China under Grant 2020z073053004; in part by the Foundation of the State Key Laboratory of Integrated Services Networks of Xidian University under Grant ISN21-06; in part by the Key Research Program and Industrial Innovation Chain Project of Shaanxi Province under Grant 2019ZDLGY07-10; and in part by the Natural Science Fundamental Research Program of Shaanxi Province under Grant 2021JM-069.
Funding: financially supported by the National Natural Science Foundation of China (Nos. 51974023 and 52374321) and the funding of the State Key Laboratory of Advanced Metallurgy, University of Science and Technology Beijing, China (No. 41620007).
Abstract: The amount of oxygen blown into the converter is one of the key parameters for controlling the converter blowing process, and it directly affects the tap-to-tap time of the converter. In this study, a hybrid model based on an oxygen balance mechanism (OBM) and a deep neural network (DNN) was established for predicting the oxygen blowing time in a converter. A three-step method was utilized in the hybrid model. First, the oxygen consumption volume was predicted by the OBM model and the DNN model, respectively. Second, a more accurate oxygen consumption volume was obtained by integrating the OBM and DNN models. Finally, the converter oxygen blowing time was calculated from the oxygen consumption volume and the oxygen supply intensity of each heat. The proposed hybrid model was verified using actual data collected from an integrated steel plant in China and compared with a multiple linear regression model, the OBM model, and neural network models including an extreme learning machine, a back-propagation neural network, and a DNN. The test results indicate that the hybrid model with a network structure of 3 hidden layers, 32-16-8 neurons per hidden layer, and a 0.1 learning rate has the best prediction accuracy and stronger generalization ability compared with the other models. The predicted hit ratio of oxygen consumption volume within an error of ±300 m³ is 96.67%; the determination coefficient (R²) and root mean square error (RMSE) are 0.6984 and 150.03 m³, respectively. The oxygen blowing time prediction hit ratio within an error of ±0.6 min is 89.50%; R² and RMSE are 0.9486 and 0.3592 min, respectively. As a result, the proposed model can effectively predict the oxygen consumption volume and oxygen blowing time in the converter.
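The three-step calculation can be sketched as follows. The blending weight and the example numbers are assumptions for illustration, not values from the paper:

```python
def hybrid_oxygen_volume(v_obm_m3, v_dnn_m3, w=0.5):
    """Step 2: integrate the OBM and DNN predictions of oxygen consumption
    volume (m^3). The blending weight w is a hypothetical choice; the paper
    does not disclose its integration rule."""
    return w * v_obm_m3 + (1.0 - w) * v_dnn_m3

def blowing_time_min(volume_m3, supply_intensity_m3_per_min):
    """Step 3: blowing time = oxygen consumption volume / oxygen supply intensity."""
    return volume_m3 / supply_intensity_m3_per_min

# Illustrative heat: OBM predicts 9000 m^3, DNN predicts 9400 m^3,
# supply intensity 600 m^3/min (all invented numbers).
volume = hybrid_oxygen_volume(9000.0, 9400.0)
time_min = blowing_time_min(volume, 600.0)
```

The sketch makes the dependency explicit: any improvement in the integrated volume estimate (step 2) propagates directly into the blowing-time prediction (step 3).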
Funding: the National Natural Science Foundation of China (NSFC) (52178324).
Abstract: Concrete subjected to fire loads is susceptible to explosive spalling, which can lead to the exposure of reinforcing steel bars to the fire, substantially jeopardizing structural safety and stability. The spalling of fire-loaded concrete is closely related to the evolution of pore pressure and temperature. Conventional analytical methods involve the resolution of complex, strongly coupled multifield equations, necessitating significant computational effort. To rapidly and accurately obtain the distributions of pore pressure and temperature, the Pix2Pix model, celebrated for its capabilities in image generation, is adopted in this work. The open-source dataset used herein features RGB images we generated using a sophisticated coupled model, while the grayscale images encapsulate the 15 principal variables influencing spalling. After conducting a series of tests with different layer configurations, activation functions, and loss functions, a Pix2Pix model suitable for assessing the spalling risk of fire-loaded concrete has been meticulously designed and trained. The applicability and reliability of the Pix2Pix model in concrete parameter prediction are verified by comparing its outcomes with those derived from the strongly coupled THC model. Notably, for practical engineering applications, our findings indicate that utilizing monochrome images as the initial target for analysis yields more dependable results. This work not only offers valuable insights for civil engineers specializing in concrete structures but also establishes a robust methodological approach for researchers seeking to create similar predictive models.
Abstract: As a new networking paradigm, Software-Defined Networking (SDN) enables us to cope with the limitations of traditional networks. SDN uses a controller that has a global view of the network, together with switch devices that act as packet-forwarding hardware, known as "OpenFlow switches". Since a load balancing service is essential for distributing workload across servers in data centers, we propose an effective load balancing scheme in SDN using a genetic programming approach, called Genetic Programming based Load Balancing (GPLB). We formulate the problem as finding a path: 1) with the best bottleneck switch, i.e., the one with the lowest capacity among the bottleneck switches of each path; 2) with the shortest length; and 3) requiring the fewest possible operations. To choose the real-time least-loaded path, GPLB immediately calculates the integrated load of paths based on the information it receives from the SDN controller. Hence, in this design, the controller periodically sends the load information of each path to the load balancing algorithm, and the load balancing algorithm returns the least-loaded path to the controller. In this paper, we use the Mininet emulator and the OpenDaylight controller to evaluate the effectiveness of GPLB. The simulation study of GPLB shows a large improvement in performance metrics, with latency and jitter minimized. GPLB also achieves the maximum throughput in comparison with related works and performs better under heavy traffic. The results show that our model performs well without introducing additional overhead.
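The path-selection criteria above resemble a widest-shortest-path rule. A minimal sketch follows; the per-path free-capacity lists are invented for illustration, and the real GPLB fitness additionally weighs operation cost through genetic programming:

```python
def bottleneck(free_capacities):
    """The bottleneck of a path is the link/switch with the least free capacity."""
    return min(free_capacities)

def least_loaded_path(paths):
    """Prefer the path whose bottleneck has the most free capacity;
    break ties by choosing the shorter path (fewer hops)."""
    return max(paths, key=lambda p: (bottleneck(p), -len(p)))
```

For example, between a 2-hop path with free capacities [5, 9] and a 3-hop path with [3, 9, 9], the first wins on its larger bottleneck; between two paths with equal bottlenecks, the shorter one is chosen.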
Abstract: In this paper, a sender-initiated protocol that uses a fuzzy logic control method is applied to improve computer network performance by balancing loads among computers. This new model devises a sender-initiated load-transfer protocol for load balancing. Groups are formed, and every group has a node called a designated representative (DR). During load transfer, loads are transferred through the DR of each group to achieve load balancing. The simulation results show that the performance of the proposed protocol is better than that of the conventional method used for comparison, and that the protocol is more stable than the method without fuzzy logic control.
Funding: Projects (61173169, 61103203) supported by the National Natural Science Foundation of China; Project (NCET-10-0798) supported by the Program for New Century Excellent Talents in University of China; and a project supported by the Post-doctoral Program and the Freedom Explore Program of Central South University, China.
Abstract: Energy-efficient data gathering in multi-hop wireless sensor networks was studied, considering that different nodes produce different amounts of data in realistic environments. A novel dominating-set-based clustering protocol (DSCP) was proposed to solve the data gathering problem in this scenario. In DSCP, a node evaluates the potential lifetime of the network (from its local point of view) assuming that it acts as the cluster head, and claims to be a tentative cluster head if it maximizes the potential lifetime. When evaluating the potential lifetime of the network, a node considers not only its remaining energy, but also other factors including its traffic load, the number of its neighbors, and the traffic loads of its neighbors. A tentative cluster head becomes a final cluster head with a probability inversely proportional to the number of tentative cluster heads that cover its neighbors. The protocol terminates in O(n/lg n) steps, and its total message complexity is O(n²/lg n). Simulation results show that DSCP can effectively prolong the lifetime of the network in multi-hop networks with unbalanced traffic loads. Compared with EECT, the network lifetime is prolonged by 56.6% on average.
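The promotion rule for tentative cluster heads can be sketched as follows; the proportionality constant is an assumption (taken as 1 here), since the abstract only states the inverse relationship:

```python
def promotion_probability(num_covering_tentative_heads, c=1.0):
    """A tentative cluster head becomes a final head with probability
    inversely proportional to the number of tentative heads covering its
    neighbours (constant c is hypothetical); clamp to a valid probability."""
    k = max(1, num_covering_tentative_heads)
    return min(1.0, c / k)
```

The effect is that densely covered regions elect proportionally fewer final heads, which keeps cluster sizes balanced.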
Abstract: Real-time applications based on Wireless Sensor Network (WSN) technologies are quickly increasing due to intelligent surroundings. Among the most significant resources in a WSN are battery power and security. Clustering strategies improve the power factor and secure the WSN environment. Forwarding data in a WSN consumes considerable electricity. Though numerous clustering methods have been developed to manage energy consumption, there is still a risk of unequal load balancing, resulting in a decreased network lifetime due to network inequalities and reduced security. These possibilities arise due to the cluster head's limited life span. The cluster heads (CHs) are in charge of all activities and control intra-cluster and inter-cluster interactions. The proposed method uses a lifetime-centric load balancing mechanism (LCLBM) and cluster-based energy optimization using a mobile sink algorithm (CEOMS). LCLBM emphasizes CH selection, system architecture, and optimal distribution of CHs. In addition, LCLBM is extended with an assistant cluster head (ACH) for load balancing. Power consumption, communication latency, the frequency of failing nodes, high security, and one-way delay are essential variables to consider while evaluating LCLBM. CEOMS chooses a cluster leader based on the influence of the following parameters on the energy balance of WSNs. According to the simulated findings, the suggested LCLBM-CEOMS method increases cluster head selection self-adaptability, improves the network's lifetime, decreases data latency, and balances network capacity.
Funding: supported in part by the National Natural Science Foundation of China (No. 61401331, No. 61401328); the 111 Project in Xidian University of China (B08038); the Hong Kong, Macao and Taiwan Science and Technology Cooperation Special Project (2014DFT10320, 2015DFT10160); the National Science and Technology Major Project of the Ministry of Science and Technology of China (2015zx03002006-003); and the Fundamental Research Funds for the Central Universities (20101155739).
Abstract: The Internet of Vehicles (IoV) has been widely researched in recent years, and cloud computing has been one of its key technologies. Although cloud computing provides high-performance compute, storage, and networking services, the IoV still suffers from high processing latency and limited mobility support and location awareness. In this paper, we integrate fog computing and software-defined networking (SDN) to address these problems. Fog computing extends computing and storage to the edge of the network, which can decrease latency remarkably in addition to enabling mobility support and location awareness. Meanwhile, SDN provides flexible centralized control and global knowledge of the network. In order to apply the software-defined cloud/fog networking (SDCFN) architecture in the IoV effectively, we propose a novel SDN-based modified constrained-optimization particle swarm optimization (MPSO-CO) algorithm, which uses the reverse of the flight of mutation particles and a linearly decreasing inertia weight to enhance the performance of constrained-optimization particle swarm optimization (PSO-CO). The simulation results indicate that the SDN-based MPSO-CO algorithm can effectively decrease the latency and improve the quality of service (QoS) in the SDCFN architecture.
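The linearly decreasing inertia weight mentioned for MPSO-CO is a standard PSO device. A sketch with commonly used bounds (0.9 down to 0.4, an assumption rather than the paper's values):

```python
def inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Linearly decrease the PSO inertia weight from w_max to w_min over
    t_max iterations: large early weights favour exploration, small late
    weights favour exploitation. Bound values are conventional defaults."""
    return w_max - (w_max - w_min) * t / t_max
```

The weight scales the velocity's previous-momentum term in the PSO update, so shrinking it gradually narrows the swarm's search around the best-known region.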
文摘In order to improve turbine internal efficiency and lower manufacturing cost, a new highly loaded rotating blade has been developed. The 3D optimization design method based on artificial neural network and genetic algorithm is adopted to construct the blade shape. The blade is stacked by the center of gravity in radial direction with five sections. For each blade section, independent suction and pressure sides are constructed from the camber line using Bezier curves. Three-dimensional flow analysis is carried out to verify the performance of the new blade. It is found that the new blade has improved the blade performance by 0.5%. Consequently, it is verified that the new blade is effective to improve the turbine internal efficiency and to lower the turbine weight and manufacturing cost by reducing the blade number by about 15%.
Abstract: Concern about the alteration of natural sediment flow caused by water resources development has been raised in many river basins around the world, especially in developing and remote regions where sediment data are poorly gauged or ungauged. Since suspended sediment load (SSL) is predominant, the objectives of this research are to: 1) simulate the monthly average SSL (SSLm) of four catchments using an artificial neural network (ANN); 2) assess the application of the calibrated ANN (Cal-ANN) models in three ungauged catchment representatives (UCRs) before using them to predict the SSLm of three actual ungauged catchments (AUCs) in the Tonle Sap River Basin; and 3) estimate the annual SSL (SSLA) of each AUC with and without dam-reservoirs. The model performance for total load (SSLT) prediction was also investigated because it is important for dam-reservoir management. For model simulation, the ANN yielded very satisfactory results, with the determination coefficient (R²) ranging from 0.81 to 0.94 in the calibration stage and 0.63 to 0.87 in the validation stage. The Cal-ANN models also performed well in the UCRs, with R² ranging from 0.59 to 0.64. From the results of this study, one can estimate the SSLm and SSLT of ungauged catchments with an accuracy of 0.61 in terms of R² and 34.06% in terms of absolute percentage bias, respectively. The SSLA of the AUCs was found to be between 159,281 and 723,580 t/year. In combination with Brune's method, the impact of dam-reservoirs was estimated to reduce SSLA by between 47% and 68%. This result is key information for the sustainable development of such infrastructure.
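Given a Brune-style trap efficiency, the reduction in annual suspended sediment load below a dam is a simple product. The numbers below merely reproduce the reported 47% reduction bound applied to the largest SSLA value; the trap efficiency itself would be read from Brune's curve:

```python
def ssl_below_dam(ssl_annual_t, trap_efficiency):
    """Annual SSL passing the dam (t/year) = inflowing SSL x (1 - trap
    efficiency), where trap efficiency (0..1) comes from Brune's curve."""
    return ssl_annual_t * (1.0 - trap_efficiency)
```

So a catchment delivering 723,580 t/year to a reservoir trapping 47% of its sediment would pass roughly 383,500 t/year downstream.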
Funding: Taif University Researchers Supporting Project number (TURSP-2020/211), Taif University, Taif, Saudi Arabia.
Abstract: Cloud computing is a collection of disparate resources or services, a web of massive infrastructures, aimed at achieving maximum utilization with higher availability at minimized cost. One of the most attractive applications of cloud computing is distributed information processing. Security, privacy, energy saving, reliability, and load balancing are the major challenges facing cloud computing and most information technology innovations. Load balancing is the process of redistributing workload among all nodes in a network to improve resource utilization and job response time while avoiding overloading some nodes when others are underloaded or idle; it remains a major challenge. Thus, this research aims to design a novel load balancing system for a cloud computing environment. The research is based on the modification of existing approaches, namely particle swarm optimization (PSO), honeybee, and ant colony optimization (ACO), combined through a mathematical expression to form a novel approach called P-ACOHONEYBEE. The experiments measured response time and throughput. The response times of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 2791, 2780, 2784, 2767, 2727, and 2599 ms, respectively. The throughputs of honeybee, PSO, SASOS, round-robin, PSO-ACO, and P-ACOHONEYBEE are 7451, 7425, 7398, 7357, 7387, and 7482 bps, respectively. It is observed that the P-ACOHONEYBEE approach produces the lowest response time, the highest throughput, and overall improved performance for the 10 nodes. The research helps manage the imbalance drawback by maximizing throughput and reducing response time, with scalability and reliability.
Abstract: To evaluate the nitrogen pollution load in an aquifer, a water and nitrogen balance analysis was conducted over a thirty-five-year period at five-yearly intervals. First, we established a two-horizon model comprising a channel/soil horizon and an aquifer horizon, with exchange of water between the aquifer and the river. The nitrogen balance was estimated from the product of nitrogen concentration and the water flow obtained from the water balance analysis. The aquifer nitrogen balance results were as follows: 1) In the aquifer horizon, the total nitrogen pollution load potential (NPLP) peaked in the period 1981-1990 at 1800 t·yr⁻¹; following this, the NPLP rapidly decreased to about 600 t·yr⁻¹ in the period 2006-2010. The largest NPLP input component, 1000 t·yr⁻¹ in the period 1976-1990, was from farmland. Subsequently, farmland NPLP decreased to only 400 t·yr⁻¹ between 2006 and 2010. The second largest input component, 600 t·yr⁻¹, was effluent from wastewater treatment works (WWTWs) in the period 1986-1990; this also decreased markedly, to about 100 t·yr⁻¹ between 2006 and 2010. 2) The difference between input and output in the aquifer horizon, used as an index of groundwater pollution, peaked in the period 1986-1990 at about 1200 t·yr⁻¹, and gradually decreased to about 200 t·yr⁻¹ by 2006-2010. 3) The temporal change in NPLP coincided with the nitrogen concentration of the rivers in the study area. In addition, nitrogen concentrations in two test wells were 1.0 mg·l⁻¹ at a depth of 150 m and only 0.25 mg·l⁻¹ at 50 m, suggesting gradual percolation of nitrogen-polluted water deeper into the aquifer.
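The balance terms above are products of concentration and flow. A unit-conversion sketch (the flow value is invented for illustration):

```python
def nitrogen_load_t_per_yr(conc_mg_per_l, flow_m3_per_yr):
    """Nitrogen load in t/yr. Since 1 mg/l equals 1 g/m^3, load in g/yr is
    concentration x flow, and dividing by 1e6 converts grams to tonnes."""
    return conc_mg_per_l * flow_m3_per_yr / 1.0e6
```

For example, water at 1.0 mg·l⁻¹ flowing at 10⁹ m³/yr carries 1000 t of nitrogen per year.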
Funding: supported by the Erasmus Mundus Action 2 Programme of the European Union, the German Weather Service (DWD), and the Czech Hydrological-Meteorological Service (CHMI).
Abstract: In this study, recurrent networks for downscaling meteorological fields of the ERA-40 re-analysis dataset, with a focus on the meso-scale water balance, were investigated. Two types of recurrent neural networks were used: the first approach couples a recurrent neural network with a distributed watershed model, and the second is a nonlinear autoregressive network with exogenous inputs (NARX), which directly predicts the components of the water balance. The approaches were deployed for a meso-scale catchment area in the Free State of Saxony, Germany. The results show that the coupled approach did not perform as well as the NARX network, although its meteorological output already reaches adequate quality. However, the coupled model generates insufficient daily precipitation sums as input for the watershed model, and too few wet days are predicted. Hence, the long-term annual cycle of the water balance could not be preserved with acceptable quality, in contrast to the NARX approach. The residual storage-change term indicates physical limits to the plausibility of the neural networks, whereas the physically based correlations among the components of the water balance were preserved more accurately by the coupled approach.
Abstract: To improve the security and reliability of a distribution network, several issues, such as the influence of operation constraints, real-time load margin calculation, and online security-level evaluation, are of great significance. In this paper, a mathematical model for online assessment of the load capability of a distribution network is established, and a repetitive power flow calculation algorithm is proposed to solve it. With assessment on three levels (the entire distribution network, a sub-area of the network, and a load bus), the security level of the current operation mode and the load transfer capability during an outage are obtained. The results can provide guidelines for prevention control as well as restoration control. Simulation results show that the method is simple and fast, and can be applied to distribution networks of any voltage level while taking into account all of the operation constraints.
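The repetitive power flow idea, scaling the loading until a constraint binds, can be sketched as follows. Here `is_feasible` stands in for a full power flow solution plus voltage and thermal limit checks, and the step size is an assumed discretization:

```python
def load_margin(base_load_mw, is_feasible, step=0.5):
    """Repetitive power-flow sketch: keep increasing the load by `step` (MW)
    and re-running the feasibility check; the last feasible loading level
    approximates the load margin."""
    load = base_load_mw
    while is_feasible(load + step):
        load += step
    return load
```

With a toy constraint that caps total load at 150 MW, starting from 100 MW, the sweep stops exactly at 150 MW; in practice each `is_feasible` call would re-solve the power flow equations.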
Abstract: In this article, we construct a triangle-growing network with tunable clustering and study social balance dynamics in this network. The constructed network, which reflects more features of real communities, has more triangle relations than an ordinary randomly growing network. We then apply local triad social dynamics to the network. The effects of different clustering coefficients and initial states on the final stationary states are discussed. Some new features of the sparse networks are found as well.
Funding: supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (JP22H03643); the Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) (JPMJSP2145); JST through the Establishment of University Fellowships Towards the Creation of Science Technology Innovation (JPMJFS2115); the National Natural Science Foundation of China (52078382); and the State Key Laboratory of Disaster Reduction in Civil Engineering (CE19-A-01).
Abstract: Accurately predicting the fluid forces acting on the surface of a structure is crucial in engineering design. However, this task becomes particularly challenging in turbulent flow, due to the complex and irregular changes in the flow field. In this study, we propose a novel deep learning method, named mapping network-coordinated stacked gated recurrent units (MSU), for predicting pressure on a circular cylinder from velocity data. Specifically, our coordinated learning strategy is designed to extract the most critical velocity point for prediction, a process that has not been explored before. In our experiments, MSU extracts one point from a velocity field containing 121 points and utilizes this point to accurately predict 100 pressure points on the cylinder. This method significantly reduces the workload of data measurement in practical engineering applications. Our experimental results demonstrate that MSU predictions are highly similar to the real turbulent data in both spatio-temporal and individual aspects. Furthermore, the comparison results show that MSU produces more precise results, even outperforming models that use all velocity field points. Compared with state-of-the-art methods, MSU shows an average improvement of more than 45% in various indicators, such as root mean square error (RMSE). Through comprehensive physical verification, we established that MSU's predictions closely align with pressure field data obtained in real turbulence fields. This confirmation underscores the considerable potential of MSU for practical applications in real engineering scenarios. The code is available at https://github.com/zhangzm0128/MSU.
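RMSE, the headline indicator above, is simply the root of the mean squared prediction error; a minimal reference implementation:

```python
import math

def rmse(predicted, observed):
    """Root mean square error between two equal-length sequences."""
    n = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)
```

Unlike mean absolute error, RMSE penalizes large individual deviations quadratically, which is why it is a common choice for pressure-field prediction accuracy.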
Abstract: The concurrent processing and load capacity of a single server cannot meet the growing demand of users for a variety of services in a campus network system. This paper proposes solving this problem with load balancing techniques based on LVS-NAT, discusses the key technologies of LVS-NAT, and designs, implements, and tests a campus network service system using LVS-NAT load balancing. The results show that the system effectively improves the concurrent processing and load capacity of the servers and provides a good reference for building an efficient and stable digital campus network system.
Funding: supported in part by the Natural Science Foundation of Jiangsu Province (No. 06KJA51001).
Abstract: Wireless Mesh Networks (WMNs) are envisioned to support the wired backbone with a wireless Backbone Network (BNet) for providing internet connectivity to large-scale areas. With a wide range of internet-oriented applications with different Quality of Service (QoS) requirements, large-scale WMNs should have good scalability and large bandwidth. In this paper, a Load Aware Adaptive Backbone Synthesis (LAABS) algorithm is proposed to automatically balance the traffic flow in the WMN. The BNet dynamically splits into smaller BNets or merges into a bigger one according to the statistical load information of the Backbone Nodes (BNs). Simulation results show that LAABS generates moderately sized BNets and converges quickly, thus providing a scalable and stable BNet to facilitate traffic flow.
Funding: This work was supported by the National Key R&D Program of China (2021YFB2900604).
Abstract: A low-Earth-orbit (LEO) satellite network can provide full-coverage access services worldwide and is an essential candidate for future 6G networking. However, the large variability in the geographic distribution of the Earth's population leads to an uneven distribution of access service volume, and the limited resources of satellites are far from sufficient to serve the traffic in hotspot areas. To enhance the forwarding capability of satellite networks, we first assess how hotspot areas under different load cases and spatial scales significantly affect the overall network throughput of an LEO satellite network. Then, we propose a multi-region cooperative traffic scheduling algorithm. The algorithm migrates low-grade traffic from hotspot areas to coldspot areas for forwarding, significantly increasing the overall throughput of the satellite network at the cost of some end-to-end forwarding latency. The algorithm can utilize global satellite resources and improve the utilization of network resources. We model the cooperative multi-region scheduling of large-scale LEO satellites and, based on the model, build a system testbed using OMNeT++ to compare the proposed method with existing techniques. The simulations show that our proposed method can reduce the packet loss probability by 30% and improve the resource utilization ratio by 3.69%.
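The migration step can be sketched as a greedy offload; the satellite capacities and traffic amounts below are invented for illustration, and the real algorithm also routes the migrated traffic through the constellation:

```python
def offload_hotspot(hot_load, hot_capacity, low_grade):
    """Migrate just enough low-grade traffic off a hotspot satellite to fit
    within its capacity. Returns (new hotspot load, traffic moved to coldspots);
    migration is capped by the amount of low-grade traffic available."""
    overflow = max(0.0, hot_load - hot_capacity)
    moved = min(overflow, low_grade)
    return hot_load - moved, moved
```

Only low-grade (latency-tolerant) traffic is eligible to move, which is how the scheme trades extra end-to-end latency for overall throughput.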
Funding: funded by the Science and Technology Foundation of State Grid Corporation of China (Grant No. 5108-202218280A-2-397-XG).
Abstract: This paper focuses on the scheduling problem of workflow tasks that exhibit interdependencies. Unlike independent batch tasks, workflows typically consist of multiple subtasks with intrinsic correlations and dependencies. This necessitates distributing the various computational tasks to appropriate computing node resources in accordance with task dependencies, to ensure the smooth completion of the entire workflow. Workflow scheduling must consider an array of factors, including task dependencies, the availability of computational resources, and the schedulability of tasks. Therefore, this paper delves into the distributed graph database workflow task scheduling problem and proposes a workflow scheduling methodology based on deep reinforcement learning (DRL). The method optimizes the maximum completion time (makespan) and response time of workflow tasks, aiming to enhance the responsiveness of workflow tasks while ensuring minimization of the makespan. The experimental results indicate that the Q-learning Deep Reinforcement Learning (Q-DRL) algorithm markedly diminishes the makespan and refines the average response time within distributed graph database environments. In quantifying makespan, Q-DRL achieves mean reductions of 12.4% and 11.9% over the established First-fit and Random scheduling strategies, respectively. Additionally, Q-DRL surpasses the performance of both the DRL-Cloud and Improved Deep Q-learning Network (IDQN) algorithms, with improvements of 4.4% and 2.6%, respectively. With reference to average response time, the Q-DRL approach exhibits significantly enhanced performance in the scheduling of workflow tasks, decreasing the average by 2.27% and 4.71% compared to IDQN and DRL-Cloud, respectively. The Q-DRL algorithm also demonstrates a notable increase in the efficiency of system resource utilization, reducing the average idle rate by 5.02% and 9.30% in comparison to IDQN and DRL-Cloud, respectively. These findings support the assertion that Q-DRL not only upholds a lower average idle rate but also effectively curtails the average response time, thereby substantially improving processing efficiency and optimizing resource utilization within distributed graph database systems.
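The tabular Q-learning update underlying Q-DRL (whose deep variant replaces the table with a neural network) can be sketched as follows; the learning rate and discount factor are common defaults, not the paper's hyperparameters, and the toy states are invented:

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(q[next_state].values()) if q.get(next_state) else 0.0
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Toy two-state example: dispatching subtask 'a0' from state 's0' leads to 's1'.
q = {"s0": {"a0": 0.0}, "s1": {"a0": 1.0}}
q_update(q, "s0", "a0", 0.5, "s1")
```

In the scheduling setting, states would encode pending subtasks and node loads, actions would assign a subtask to a node, and the reward would penalize makespan and response time.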