Deploying service nodes hierarchically at the edge of the network can effectively improve the service quality of offloaded task requests and increase the utilization of resources. In this paper, we study the task scheduling problem in the hierarchically deployed edge cloud. We first formulate the minimization of the service time of scheduled tasks in the edge cloud as a combinatorial optimization problem, and then prove the NP-hardness of the problem. Different from existing work, which mostly designs heuristic approximation-based algorithms or policies to make scheduling decisions, we propose a newly designed scheduling policy, named Joint Neural Network and Heuristic Scheduling (JNNHSP), which combines a neural network-based method with a heuristic-based solution. JNNHSP takes the Sequence-to-Sequence (Seq2Seq) model trained by Reinforcement Learning (RL) as the primary policy and adopts the heuristic algorithm as the auxiliary policy to obtain the scheduling solution, thereby achieving a good balance between the quality and the efficiency of the scheduling solution. In-depth experiments show that, compared with a variety of related policies and optimization solvers, JNNHSP achieves better performance in terms of scheduling error ratio, the degree to which the policy is affected by resource limitations, average service latency, and execution efficiency in a typical hierarchical edge cloud.
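The primary/auxiliary split described above can be sketched as follows: a learned policy proposes an assignment, and a heuristic takes over when the proposal violates resource limits. This is an illustrative sketch only; the function names and the least-loaded heuristic are assumptions, not JNNHSP's actual design.

```python
def feasible(plan, tasks, capacities):
    """Check that the proposed task-to-node plan respects node capacities."""
    loads = [0.0] * len(capacities)
    for demand, node in zip(tasks, plan):
        loads[node] += demand
    return all(load <= cap for load, cap in zip(loads, capacities))

def greedy_schedule(tasks, capacities):
    """Auxiliary heuristic: assign each task to the least-loaded feasible node."""
    loads = [0.0] * len(capacities)
    plan = []
    for demand in tasks:
        candidates = [n for n in range(len(capacities))
                      if loads[n] + demand <= capacities[n]]
        if not candidates:
            return None  # heuristic found no feasible schedule
        node = min(candidates, key=lambda n: loads[n])
        loads[node] += demand
        plan.append(node)
    return plan

def jnnhsp_like_schedule(tasks, capacities, primary_policy):
    """Try the primary (learned) policy first; fall back to the heuristic."""
    plan = primary_policy(tasks, capacities)
    if plan is not None and feasible(plan, tasks, capacities):
        return plan
    return greedy_schedule(tasks, capacities)
```

Here `primary_policy` stands in for the RL-trained Seq2Seq model; any callable with the same signature can be plugged in for experimentation.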
Cloud Computing (CC) provides data storage options as well as computing services to its users through the Internet. On the other hand, cloud users are concerned about security and privacy issues due to the increased number of cyberattacks. Data protection has become an important issue since users' information gets exposed to third parties. Computer networks are exposed to different types of attacks, which have grown extensively alongside novel intrusion methods and hacking tools. Intrusion Detection Systems (IDSs) can be used in a network to manage suspicious activities. These IDSs monitor the activities of the CC environment and decide whether an activity is legitimate (normal) or malicious (intrusive) based on the established system's confidentiality, availability, and integrity of the data sources. In the current study, a Chaotic Metaheuristics with Optimal Multi-Spiking Neural Network-based Intrusion Detection (CMOMSNN-ID) model is proposed to secure the cloud environment. The presented CMOMSNN-ID model involves the Chaotic Artificial Bee Colony Optimization-based Feature Selection (CABC-FS) technique to reduce the curse of dimensionality. In addition, the Multi-Spiking Neural Network (MSNN) classifier, based on a simulation of brain functioning, is applied to resolve pattern classification problems. To fine-tune the parameters of the MSNN model, the Whale Optimization Algorithm (WOA) is employed to boost the classification results. To demonstrate the superiority of the proposed CMOMSNN-ID model, a useful set of simulations was performed. The simulation outcomes infer that the proposed CMOMSNN-ID model accomplishes a superior performance over other models, with a maximum accuracy of 99.20%.
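The abstract does not detail the chaotic component of CABC-FS, but a common ingredient in chaotic metaheuristics is a logistic-map sequence used in place of uniform random draws when seeding candidate solutions. A minimal sketch under that assumption (the `r=4.0` parameter and the 0.5 threshold are illustrative, not from the paper):

```python
def logistic_map(x0, n, r=4.0):
    """Chaotic logistic-map sequence x_{t+1} = r*x_t*(1-x_t); r=4 is the
    fully chaotic regime often used to diversify metaheuristic populations."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_feature_mask(x0, n_features, threshold=0.5):
    """Map the chaotic sequence to a binary feature-selection mask:
    feature i is kept when the i-th chaotic value exceeds the threshold."""
    return [1 if v > threshold else 0 for v in logistic_map(x0, n_features)]
```

Each bee in a chaotic ABC variant could start from such a mask instead of a uniformly random one, trading statistical independence for better coverage of the search space.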
Cloud computing technology provides flexible, on-demand, and fully controlled computing resources and services, which are highly desirable. Despite this, with its distributed and dynamic nature and shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. The Intrusion Detection System (IDS) is a specialized security tool that network professionals use to keep networks safe against attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways are continually changing, requiring the development of new detection methods. The purpose of this study is to improve detection accuracy. Feature Selection (FS) is critical: by focusing on the most relevant elements, the IDS's computational burden is limited while its performance and accuracy increase. In this research work, the suggested Adaptive Butterfly Optimization Algorithm (ABOA) framework is used to assess the effectiveness of reduced feature subsets during the feature selection phase, and accurate classification is not compromised by using the ABOA technique. The design of Deep Neural Networks (DNNs) has simplified the categorization of network traffic into normal and DDoS threat traffic, and the DNN's parameters can be fine-tuned with specially built algorithms to detect DDoS attacks better. Reduced reconstruction error, no exploding or vanishing gradients, and a smaller network are all benefits of the changes outlined in this paper. In terms of performance criteria such as accuracy, precision, recall, and F1-score, the suggested architecture outperforms the other existing approaches. Hence, the proposed ABOA+DNN is an excellent method for obtaining accurate predictions, with an improved accuracy rate of 99.05% compared to other existing approaches.
There is instability in the distributed energy storage cloud-group-end region on the power grid side. In order to avoid large-scale fluctuating charging and discharging in the power grid environment and make the capacitor components show a continuous and stable charging and discharging state, a hierarchical time-sharing configuration algorithm for the distributed energy storage cloud-group-end region on the power grid side, based on a multi-scale and multi-feature convolutional neural network, is proposed. Firstly, a voltage stability analysis model based on the multi-scale and multi-feature convolutional neural network is constructed, and the network is optimized with the Self-Organizing Map (SOM) algorithm to analyze the voltage stability of the cloud-group-end region of distributed energy storage on the grid side under a credibility framework. Then, according to the optimal scheduling objectives and network size, the distributed robust optimal configuration control model is solved under the framework of coordinated optimal scheduling at multiple time scales. Finally, the time-series characteristics of regional power grid load and distributed generation are analyzed, and according to the regional hierarchical time-sharing configuration model of the "cloud", "group", and "end" layers, the grid-side distributed energy storage cloud-group-end regional hierarchical time-sharing configuration algorithm is realized. The experimental results show that, after applying this algorithm, the best grid-side distributed energy storage configuration scheme can be determined, and the stability of the grid-side distributed energy storage cloud-group-end region's layered time-sharing configuration can be improved.
Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors. Among these behaviors, achieving thermal transparency stands out as particularly desirable and intriguing. Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency. In this paper, we introduce an approach based on graph neural networks to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior. Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.
In the railway system, fasteners have the functions of damping, maintaining the track distance, and adjusting the track level. Therefore, routine maintenance and inspection of fasteners are important to ensure the safe operation of track lines. Current assessment methods for fastener tightness include manual observation, acoustic wave detection, and image detection, but they suffer from limitations such as low accuracy and efficiency and susceptibility to interference and misjudgment; accurate, stable, and fast detection methods are lacking. Aiming at the small deformation characteristics and large elastic change of fasteners from full loosening to full tightening, this study proposes a high-precision surface-structured-light technique for fastener detection, fastener deformation feature extraction based on the centerline projection distance, and a fastener tightness regression method based on neural networks. First, the method uses a 3D camera to obtain a fastener point cloud and then segments the elastic rod area based on iterative closest point registration. Principal component analysis is used to calculate the normal vector of the segmented elastic rod surface and to extract the points on the centerline of the elastic rod. Each point is projected onto the upper surface of the bolt to calculate the projection distance. Subsequently, the mapping relationship between the projection distance sequence and fastener tightness is established, and the influence of each parameter on the fastener tightness prediction is analyzed. Finally, a fastener detection scene was set up in the track experimental base, data were collected, and the algorithm was verified. The results showed that the RMSE between the fastener tightness regression value obtained by the algorithm and the actual measured value was 0.2196 mm, a significant improvement over other tightness detection methods, realizing an effective fastener tightness regression.
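The two geometric steps named above, a PCA surface normal and a point-to-plane projection distance, can be sketched compactly with NumPy. This is a generic illustration of those operations, not the paper's implementation; the function names are assumptions.

```python
import numpy as np

def surface_normal(points):
    """PCA-based normal of a roughly planar point patch: the eigenvector of
    the covariance matrix with the smallest eigenvalue (least variance)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigh sorts eigenvalues ascending
    return eigvecs[:, 0]

def projection_distance(point, plane_point, normal):
    """Signed distance from a point (e.g. on the rod centerline) to the plane
    through plane_point with the given normal (e.g. the bolt's upper surface)."""
    n = normal / np.linalg.norm(normal)
    return float(np.dot(point - plane_point, n))
```

Applying `projection_distance` to each extracted centerline point yields the projection-distance sequence that the tightness regression takes as input.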
When validating ice shape calculation software, its accuracy is judged by the proximity between the calculated ice shape and the typical test ice shape. Therefore, determining the typical test ice shape becomes the key task of icing wind tunnel tests. In the icing wind tunnel test of the tail wing model of a large amphibious aircraft, in order to obtain an accurate typical test ice shape, the Romer Absolute Scanner is used to obtain the 3D point cloud data of the ice shape on the tail wing model. Then, the batch-learning self-organizing map (BLSOM) neural network is used to obtain the 2D average ice shape along the model direction from the 3D point cloud data, while its tolerance band is calculated using a probabilistic statistical method. The results show that the combination of the 2D average ice shape and its tolerance band can effectively represent the 3D characteristics of the test ice shape, and can be used as the typical test ice shape for comparative analysis with the calculated ice shape.
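The abstract does not specify the probabilistic statistical method behind the tolerance band; a common construction, sketched here as an assumption, is a mean profile with a mean ± k·std band over aligned 2D sections (k=2 gives roughly 95% coverage under normality):

```python
import numpy as np

def average_shape_with_tolerance(sections, k=2.0):
    """Given aligned 2D ice-shape sections sampled at common stations
    (shape: n_sections x n_stations), return the mean profile and a
    mean +/- k*std tolerance band."""
    sections = np.asarray(sections, dtype=float)
    mean = sections.mean(axis=0)
    std = sections.std(axis=0, ddof=1)  # sample standard deviation
    return mean, mean - k * std, mean + k * std
```

The BLSOM step in the paper serves to produce the aligned 2D sections from raw 3D scans; this sketch only covers the statistics applied afterward.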
In light of the limited efficacy of conventional methods for identifying pavement cracks and the absence of comprehensive depth and location data in two-dimensional photographs, this study presents an intelligent strategy for extracting road cracks that integrates laser point cloud data obtained from a vehicle-mounted system with a panoramic sequence of images. The study employs a vehicle-mounted LiDAR measurement system to acquire laser point cloud and panoramic sequence image data simultaneously. A convolutional neural network is utilized to extract cracks from the panoramic sequence images. The extracted sequence images are then aligned with the laser point cloud, enabling the assignment of RGB information to the vehicle-mounted three-dimensional (3D) point cloud and location information to the two-dimensional (2D) panoramic images. Additionally, a threshold based on the crack elevation change is set to extract the aligned roadway point cloud, from which the three-dimensional data pertaining to the cracks can be acquired. The experimental findings demonstrate that the use of convolutional neural networks yields noteworthy outcomes in the extraction of road cracks, and that point cloud and image alignment enables the extraction of precise crack location data. This approach exhibits superior accuracy compared to conventional methods, and it facilitates rapid and accurate identification and localization of road cracks, thereby playing a crucial role in road maintenance and traffic safety. Consequently, this technique shows significant promise for extensive application in intelligent transportation and urban development.
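The elevation-change thresholding step can be illustrated as follows. The median-based road-surface estimate is an assumption for the sketch; the paper does not state how the reference surface is obtained.

```python
import numpy as np

def extract_crack_points(points, threshold):
    """Keep points whose elevation drops below the local road surface by more
    than `threshold`. The surface is approximated here by the median z of the
    aligned roadway point cloud (an illustrative choice)."""
    points = np.asarray(points, dtype=float)
    surface_z = np.median(points[:, 2])
    mask = (surface_z - points[:, 2]) > threshold  # depressions only
    return points[mask]
```

Because crack pixels identified by the CNN are already mapped onto the point cloud, applying such a depth filter removes false positives that have no measurable depression.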
Cloud computing aims to maximize the benefit of distributed resources and aggregate them to achieve higher throughput to solve large-scale computation problems. In this technology, customers rent the resources and pay only for what they use. Job scheduling is one of the biggest issues in cloud computing. Scheduling users' requests means deciding how to allocate resources to these requests so that the tasks finish in minimum time. The main task of a job scheduling system is to find the best resources for users' jobs, taking into consideration some static and dynamic parameter restrictions of those jobs. In this research, we introduce cloud computing, genetic algorithms, and artificial neural networks, and then review the literature on cloud job scheduling. Many researchers have tried to solve cloud job scheduling using different techniques, most of them artificial intelligence techniques such as genetic algorithms and ant colony optimization, to find the optimal distribution of resources. Unfortunately, there are still open problems in this research area. Therefore, we propose implementing artificial neural networks to optimize job scheduling results in the cloud, as they can find new sets of classifications rather than merely searching within the available set.
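A genetic algorithm for job scheduling, as surveyed above, can be sketched in a few lines: chromosomes map jobs to VMs and fitness is the makespan. This is a generic textbook GA under assumed operators (truncation selection, one-point crossover, point mutation), not a specific method from the reviewed literature.

```python
import random

def makespan(assignment, job_times, n_vms):
    """Completion time of the busiest VM under a job-to-VM assignment."""
    loads = [0.0] * n_vms
    for job, vm in zip(job_times, assignment):
        loads[vm] += job
    return max(loads)

def ga_schedule(job_times, n_vms, pop=30, gens=100, seed=0):
    """Minimal GA: evolve job-to-VM assignments to minimize makespan."""
    rng = random.Random(seed)
    population = [[rng.randrange(n_vms) for _ in job_times] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: makespan(c, job_times, n_vms))
        survivors = population[: pop // 2]           # truncation selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(job_times))   # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                   # point mutation
                child[rng.randrange(len(child))] = rng.randrange(n_vms)
            children.append(child)
        population = survivors + children
    return min(population, key=lambda c: makespan(c, job_times, n_vms))
```

The neural-network alternative proposed in the paper would replace this search with a learned mapping from job features to scheduling decisions.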
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network that meets these demands by deploying low-cost and intelligent multiple distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to the complex structure. In this paper, we introduce a convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in the downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build a 3-layer CNN to capture the environment features as an input state space. We then use the DQN to turn the RRHs on and off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraints and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the nature DQN and the traditional approach.
Detecting underground diseases is crucial for roadbed health monitoring and the maintenance of transport facilities, since, with the rapid development of road traffic, such diseases are closely related to structural health and reliability. Ground penetrating radar (GPR) is widely used to detect road and underground diseases. However, this remains a challenging task with respect to data access from anywhere, transmission security, and data processing in the cloud. Cloud computing can provide scalable and powerful technologies for large-scale storage, processing, and dissemination of GPR data. Combining cloud computing and radar detection technology makes it possible to locate underground diseases quickly and accurately. This paper presents the framework of a ground disease detection system based on cloud computing and proposes an attention region convolutional neural network for object detection in GPR images. Experimental results on precision and recall metrics show that the proposed approach is more efficient than traditional object detection methods for ground disease detection in the cloud-based system.
Tax is very important to the whole country, so a scientific tax predictive model is needed. This paper introduces the theory of the cloud model. On this basis, it presents a cloud neural network and analyzes the main factors that influence tax revenue. It then proposes a tax predictive model based on the cloud neural network, which combines the strengths of the cloud model and the neural network. The experiment and simulation results show the effectiveness of the proposed algorithm.
Cloud computing is a large network infrastructure where users, owners, third parties, authorized users, and customers can access and store their information quickly. The use of cloud computing has brought a rapid increase of information in every field and a need for a centralized location for efficient processing. The cloud is nowadays highly affected by internal threats from users. Sensitive applications such as banking, hospital, and business systems are most likely to be affected by real user threats. An intruder presents as a user and becomes a member of the network; after becoming an insider, they will try to attack or steal sensitive data during information sharing or conversation. A major issue in today's technological development is identifying insider threats in the cloud network. When data are lost, compensating cloud users is difficult; privacy and security are not ensured, and usage of the cloud is then not trusted. Several solutions are available for the external security of the cloud network; however, insider or internal threats also need to be addressed. In this research work, we focus on a solution for identifying an insider attack using artificial intelligence techniques. An insider attack is possible through the nodes of weak users' systems: attackers log in using a weak user ID, connect to the network, and pretend to be a trusted node, after which they can easily attack and hack information as an insider, and identifying them is very difficult. These types of attacks need intelligent solutions. Machine learning approaches are widely used for security issues, but to date the existing methods still fall short of classifying attackers accurately. This information hijacking process is hard to detect, which motivates researchers to provide a solution for internal threats. In our proposed work, we track attackers using user interaction behavior patterns and a deep learning technique. The mouse movements, clicks, and keystrokes of the real user are stored in a database. The deep belief neural network is designed using restricted Boltzmann machines (RBMs) so that each RBM layer communicates with the previous and subsequent layers. The result is evaluated using a Cooja simulator based on the cloud environment. The accuracy and F-measure are highly improved compared with the existing long short-term memory and support vector machine approaches.
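A deep belief network is built by stacking RBM layers, each trained greedily before feeding its hidden activations to the next. The single-layer building block can be sketched with NumPy as a Bernoulli RBM trained by one-step contrastive divergence (CD-1). This is a generic textbook RBM, not the paper's exact architecture; class and method names are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal Bernoulli RBM trained with one-step contrastive divergence."""
    def __init__(self, n_visible, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.rng = rng

    def cd1_step(self, v0, lr=0.1):
        """One CD-1 update on a single binary input vector; returns the
        mean squared reconstruction error for monitoring."""
        h0 = sigmoid(v0 @ self.W + self.b_h)            # hidden probabilities
        h_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h_sample @ self.W.T + self.b_v)    # reconstruction
        h1 = sigmoid(v1 @ self.W + self.b_h)
        self.W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.b_v += lr * (v0 - v1)
        self.b_h += lr * (h0 - h1)
        return float(np.mean((v0 - v1) ** 2))
```

For behavior-pattern data, the visible units would encode binarized features of mouse and keystroke dynamics, and a classifier head on top of the stacked hidden layers would label sessions as trusted or insider.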
The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities, and the high mobility of vehicles renders the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows the distributed cache nodes to cooperate. We then devise the communication link and content caching models to facilitate timely content delivery. Aiming at minimizing transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives its own observations but a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance the hit ratio while reducing transmission delay and cache cost.
In recent decades, cloud computing has played a prominent role in the health care sector, as patient health records are transferred and collected using cloud computing services. Doctors have switched to cloud computing because it provides multiple advantages, including large storage space and easy availability without limitations. This necessitates redesigning the medical field around cloud technology to preserve information about patients' critical diseases, electrocardiogram (ECG) reports, and payment details. The proposed work utilizes a hybrid cloud pattern to share Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) resources over the private and public cloud. The stored data are categorized as significant and non-significant by Artificial Neural Networks (ANNs). The significant data undergo encryption by Lagrange key management, which automatically generates the key and stores it in the hidden layer. Upon receiving a request from a secondary user, the primary user verifies the authentication of the request and transmits the key via Gmail to the secondary user. Once the key matches the key in the hidden layer, the preserved information is shared between the users. Due to the enhanced privacy-preserving key generation, the proposed work prevents the tracking of keys by malicious users. The outcomes reveal that the introduced work provides an improved success rate with reduced computational time.
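The abstract does not detail its Lagrange key management scheme; the classic key-management construction built on Lagrange interpolation is Shamir's secret sharing over a prime field, sketched below as an assumed analogue. The small prime, threshold, and polynomial coefficients are illustrative only.

```python
P = 2087  # small prime field for illustration; real deployments use large primes

def make_shares(secret, k, n, coeffs):
    """Split `secret` into n shares; any k of them reconstruct it. The shares
    are points on a degree-(k-1) polynomial with constant term = secret."""
    poly = [secret] + coeffs[: k - 1]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(poly)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        total = (total + yi * num * pow(den, -1, P)) % P  # modular inverse
    return total
```

In a hybrid-cloud setting such as the one described, shares could be split between the private cloud, the public cloud, and the users, so that no single party can recover the key alone.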
Predicting the usage of container cloud resources has always been an important and challenging problem in improving the performance of cloud resource clusters. We propose an integrated stacking-based prediction method for container cloud resources, built on variational modal decomposition (VMD), permutation entropy (PE), and a long short-term memory (LSTM) neural network, to solve the prediction difficulties caused by the non-stationarity and volatility of resource data. The variational modal decomposition algorithm decomposes the time-series data of cloud resources into intrinsic mode functions and residual components, which resolves the end-effect and modal confusion problems of signal decomposition algorithms. Permutation entropy is used to evaluate the complexity of the intrinsic mode functions, and reconstruction based on similar entropy and low complexity reduces the difficulty of modeling. Finally, we use the LSTM and stacking fusion models to predict and superimpose: the stacking integration model uses Gradient Boosting Regression (GBR), Kernel Ridge Regression (KRR), and Elastic Net Regression (ENet) as primary learners, while the secondary learner adopts the kernel ridge regression method for its solid generalization ability. Experiments on an Amazon public dataset show that, compared with the Holt-Winters, LSTM, and NeuralProphet models, the optimization ranges of multiple evaluation indicators are 0.338∼1.913, 0.057∼0.940, 0.000∼0.017, and 1.038∼8.481 in root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and variance (VAR), showing the method's stability and better prediction accuracy.
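Permutation entropy, used above to rank IMF complexity, is the Shannon entropy of ordinal patterns in sliding windows. A minimal stdlib sketch (the `order=3` default is an illustrative choice):

```python
import math

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy of a 1-D series: Shannon entropy of the ordinal
    patterns of length `order`; 0 for monotonic series, near 1 (normalized)
    for highly irregular ones."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i : i + order]
        # ordinal pattern: index order that sorts the window ascending
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(order)) if normalize else h
```

Low-entropy IMFs can then be merged before the LSTM stage, which is the dimensionality-reduction role PE plays in the pipeline described.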
Point cloud processing, transmission, and semantic segmentation are important analysis tasks in 3D computer vision. The effectiveness of graph neural networks and graph structures for point cloud research has now been established, and graph-based point cloud (GPC) studies continue to emerge; a unified research perspective, framework, and methodology is therefore needed. This survey systematically reviews the application scenarios of GPC research, including registration, denoising, compression, representation learning, classification, segmentation, and detection; summarizes a general framework for GPC research; and proposes a technical roadmap covering the current full scope of GPC studies. Specifically, it presents a layered conceptual taxonomy of GPC research, comprising low-level data processing, mid-level representation learning, and high-level recognition tasks; surveys the GPC models and algorithms in each area, including processing algorithms for static and dynamic point clouds, supervised and unsupervised representation learning models, and traditional or machine-learning-based GPC recognition algorithms; summarizes representative results and their core ideas, such as dynamically updating the nearest-neighbor graph in each layer's feature space, hierarchical and parameter-sharing dynamic point aggregation modules, and combining graph partitioning with graph convolution to improve segmentation accuracy; and compares model performance in terms of overall accuracy (OA), mean accuracy (mAcc), and mean intersection over union (mIoU). Based on an analysis and comparison of existing models and methods, it identifies the main challenges currently facing GPC research, poses corresponding research questions, and outlines future research directions. The proposed GPC research framework is general and broadly applicable, providing later researchers with domain positioning, a technical summary, and a macro perspective for this emerging interdisciplinary field. Point cloud research arose from major advances in detector hardware; its current state shows gaps between theory and practice, with several key problems still to be solved. Meanwhile, the development of point cloud research will push artificial intelligence into a new era.
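The "dynamically updated nearest-neighbor graph" highlighted above (as in dynamic graph CNNs) starts from a plain k-nearest-neighbor graph over the points; in each layer the same construction is re-run in the current feature space. A minimal NumPy sketch of the graph construction:

```python
import numpy as np

def knn_graph(points, k):
    """Edge list (i, j) of a directed k-nearest-neighbour graph over a point
    cloud; dynamic graph CNNs rebuild such a graph per layer, replacing raw
    coordinates with learned features."""
    points = np.asarray(points, dtype=float)
    diff = points[:, None, :] - points[None, :, :]
    dist = np.einsum("ijk,ijk->ij", diff, diff)  # squared pairwise distances
    np.fill_diagonal(dist, np.inf)               # exclude self-loops
    neighbours = np.argsort(dist, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(points)) for j in neighbours[i]]
```

A graph convolution then aggregates each point's features over its `k` edges; the brute-force O(n²) distance matrix here is for clarity, and a KD-tree would be used at scale.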
Funding: Supported by the Scientific and Technological Innovation Project of Chongqing (No. cstc2021jxjl20010) and the Graduate Student Innovation Program of Chongqing University of Technology (No. clgycx-20203166, No. gzlcx20222061, No. gzlcx20223229).
Funding: This research work was funded by Institutional Fund Projects under Grant No. IFPHI-099-120-2020.
Funding: Supported by the State Grid Corporation Limited Science and Technology Project (Contract No. SGCQSQ00YJJS2200380).
Abstract: There is instability in the distributed energy storage cloud-group-end region on the power grid side. In order to avoid large-scale fluctuating charging and discharging in the power grid environment and keep the capacitor components in a continuous and stable charging and discharging state, a hierarchical time-sharing configuration algorithm for the grid-side distributed energy storage cloud-group-end region, based on a multi-scale multi-feature convolutional neural network, is proposed. First, a voltage stability analysis model based on the multi-scale multi-feature convolutional neural network is constructed, and the network is optimized with the Self-Organizing Map (SOM) algorithm to analyze the voltage stability of the grid-side distributed energy storage cloud-group-end region under a credibility framework. Second, according to the optimal scheduling objectives and network size, the distributed robust optimal configuration control model is solved under a framework of coordinated optimal scheduling at multiple time scales. Finally, the time-series characteristics of regional power grid load and distributed generation are analyzed, and the grid-side hierarchical time-sharing configuration algorithm is realized according to the regional hierarchical time-sharing configuration model of the "cloud", "group", and "end" layers. The experimental results show that applying this algorithm yields the best grid-side distributed energy storage configuration scheme and improves the stability of the hierarchical time-sharing configuration.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12035004 and 12320101004) and the Innovation Program of Shanghai Municipal Education Commission (Grant No. 2023ZKZD06).
Abstract: Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors. Among the various thermal transport behaviors, achieving thermal transparency stands out as particularly desirable and intriguing. Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency. In this paper, we introduce an approach based on a graph neural network to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior. Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.
Funding: Supported by the Fundamental Research Funds for the Central Universities of China (Grant No. 2023JBMC014).
Abstract: In the railway system, fasteners serve to damp vibration, maintain the track gauge, and adjust the track level. Routine maintenance and inspection of fasteners are therefore important to ensure the safe operation of track lines. Current assessment methods for fastener tightness include manual observation, acoustic wave detection, and image detection, which suffer from limitations such as low accuracy and efficiency and susceptibility to interference and misjudgment; accurate, stable, and fast detection methods are lacking. Aiming at the small deformation characteristics and large elastic change of fasteners from fully loosened to fully tightened, this study proposes high-precision surface-structured light technology for fastener detection, fastener deformation feature extraction based on the centerline projection distance, and a fastener tightness regression method based on neural networks. First, a 3D camera is used to obtain a fastener point cloud, and the elastic rod area is segmented based on iterative closest point (ICP) registration. Principal component analysis (PCA) is used to calculate the normal vector of the segmented elastic rod surface and extract the points on the centerline of the elastic rod. Each point is projected onto the upper surface of the bolt to calculate the projection distance. Subsequently, the mapping relationship between the projection distance sequence and fastener tightness is established, and the influence of each parameter on the fastener tightness prediction is analyzed. Finally, a fastener detection scene was set up at the track experimental base to collect data and verify the algorithm. The results showed that the RMSE between the regressed fastener tightness and the actual measured value was 0.2196 mm, a significant improvement over other tightness detection methods, realizing effective fastener tightness regression.
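Two of the geometric steps above, the PCA surface normal and the point-to-plane projection distance, can be sketched with NumPy. The toy patch below is a hypothetical planar sample, not data from the paper:

```python
import numpy as np

def surface_normal(points):
    """Estimate the normal of a roughly planar point patch via PCA.

    The normal is the direction of least variance, i.e. the right-singular
    vector of the centered patch with the smallest singular value.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def projection_distance(point, plane_point, plane_normal):
    """Signed distance from a centerline point to the bolt's upper-surface plane."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(np.asarray(point, dtype=float) - plane_point, n))

# Toy patch of points lying exactly on the z = 0 plane.
patch = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.5, 0.5, 0]])
n = surface_normal(patch)                                # approximately (0, 0, +/-1)
d = projection_distance([0.2, 0.3, 2.0], patch[0], n)    # |d| = 2.0
```

In the paper's pipeline the patch would be the segmented elastic-rod surface, and the sequence of such distances along the centerline feeds the tightness regression network.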
Funding: Supported by the AG600 project of AVIC General Huanan Aircraft Industry Co., Ltd.
Abstract: When verifying ice shape calculation software, its accuracy is judged by the proximity between the calculated ice shape and the typical test ice shape; determining the typical test ice shape is therefore the key task of icing wind tunnel tests. In the icing wind tunnel test of the tail wing model of a large amphibious aircraft, the Romer Absolute Scanner was used to obtain 3D point cloud data of the ice shape on the tail wing model in order to obtain an accurate typical test ice shape. Then, the batch-learning self-organizing map (BLSOM) neural network was used to obtain the 2D average ice shape along the model direction from the 3D point cloud data, while its tolerance band was calculated using a probabilistic statistical method. The results show that the combination of the 2D average ice shape and its tolerance band can effectively represent the 3D characteristics of the test ice shape and can be used as the typical test ice shape for comparative analysis with the calculated ice shape.
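The paper's probabilistic tolerance band is not specified in the abstract; a common stand-in is a mean-plus-or-minus-k-standard-deviations band over spanwise profiles. The sketch below makes that assumption (the choice k = 2 and the toy profiles are illustrative, not the paper's method):

```python
import numpy as np

def average_ice_shape(profiles, k=2.0):
    """Collapse spanwise 2D ice profiles into a mean shape plus a tolerance band.

    `profiles` is an (n_sections, n_points) array of ice thickness sampled at
    matching chordwise stations; the band is mean +/- k * std (k is an assumption).
    """
    p = np.asarray(profiles, dtype=float)
    mean = p.mean(axis=0)
    std = p.std(axis=0, ddof=1)
    return mean, mean - k * std, mean + k * std

# Three toy spanwise sections, three chordwise stations each.
profiles = np.array([
    [1.0, 2.0, 3.0],
    [1.2, 2.2, 2.8],
    [0.8, 1.8, 3.2],
])
mean, lower, upper = average_ice_shape(profiles)
```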
Funding: Funded by the National Key R&D Program of China (No. 2021YFB2601200), the National Natural Science Foundation of China (No. 42171416), and the Teacher Support Program for the Pyramid Talent Training Project of Beijing University of Civil Engineering and Architecture (No. JDJQ20200307).
Abstract: In light of the limited efficacy of conventional methods for identifying pavement cracks and the absence of depth and location data in two-dimensional photographs, this study presents an intelligent strategy for extracting road cracks that integrates laser point cloud data from a vehicle-mounted system with a panoramic image sequence. A vehicle-mounted LiDAR measurement system acquires the laser point cloud and panoramic image sequence simultaneously. A convolutional neural network extracts cracks from the panoramic images. The extracted images are then aligned with the laser point cloud, enabling RGB information to be assigned to the vehicle-mounted three-dimensional (3D) point cloud and location information to the two-dimensional (2D) panoramic images. Additionally, a threshold based on the crack elevation change is set to extract the aligned roadway point cloud, from which three-dimensional data on the cracks can be acquired. The experimental findings demonstrate that the convolutional neural network yields noteworthy results in road crack extraction, and that the point cloud and image alignment technique extracts precise crack location data with superior accuracy compared to conventional methods. The approach enables rapid and accurate identification and localization of road cracks, playing a crucial role in road maintenance and traffic safety, and thus holds significant promise for intelligent transportation and urban development.
Abstract: Cloud computing aims to maximize the benefit of distributed resources and aggregate them to achieve higher throughput in order to solve large-scale computation problems. In this technology, customers rent resources and pay only per use. Job scheduling is one of the biggest issues in cloud computing: scheduling users' requests means allocating resources to those requests so that tasks finish in minimum time. The main task of a job scheduling system is to find the best resources for users' jobs, taking into consideration some static and dynamic parameter restrictions of the jobs. In this research, we introduce cloud computing, genetic algorithms, and artificial neural networks, and then review the literature on cloud job scheduling. Many researchers have tried to solve cloud job scheduling using different techniques, most of them artificial intelligence techniques such as genetic algorithms and ant colony optimization, to find the optimal distribution of resources. Unfortunately, there are still open problems in this research area. Therefore, we propose applying artificial neural networks to optimize job scheduling results in the cloud, as they can discover new classifications rather than only searching within the available set.
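The genetic-algorithm approach surveyed above can be sketched for the simplest variant of the problem: assigning tasks to machines to minimize the makespan. The encoding, operators, and all numbers below are illustrative assumptions, not a specific method from the surveyed literature:

```python
import random

def ga_schedule(task_times, n_machines, pop_size=40, generations=120, seed=0):
    """Toy genetic algorithm for cloud job scheduling.

    A chromosome assigns each task to a machine; fitness is the makespan
    (finish time of the busiest machine), which the GA minimizes.
    """
    rng = random.Random(seed)

    def makespan(chrom):
        load = [0.0] * n_machines
        for task, m in zip(task_times, chrom):
            load[m] += task
        return max(load)

    pop = [[rng.randrange(n_machines) for _ in task_times] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)
        survivors = pop[: pop_size // 2]             # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(task_times))  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                   # mutation: reassign one task
                child[rng.randrange(len(child))] = rng.randrange(n_machines)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=makespan)
    return best, makespan(best)

tasks = [4, 7, 2, 5, 3, 6, 1, 8]  # hypothetical task execution times
assignment, span = ga_schedule(tasks, n_machines=3)
```

Here the lower bound on the makespan is sum(tasks)/3 = 12, and a balanced assignment achieving it exists.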
Funding: Supported by the Universiti Tunku Abdul Rahman (UTAR), Malaysia, under UTARRF (IPSR/RMC/UTARRF/2021-C1/T05).
Abstract: The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network that meets these requirements by deploying low-cost, intelligent, distributed antennas known as remote radio heads (RRHs). However, achieving optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to the complex structure. In this paper, we introduce a convolutional neural network-based deep Q-network (CNN-DQN) to balance energy consumption and guarantee the user quality of service (QoS) demand in downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build a 3-layer CNN to capture the environment features as an input state space. We then use the DQN to turn the RRHs on and off dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraints and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct simulations to compare our proposed scheme with the Nature DQN and the traditional approach.
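The RRH on/off decision can be illustrated with a much smaller stand-in than the paper's CNN-DQN: a tabular, bandit-style Q-learner where the state is a discrete demand level, the action is the number of active RRHs, and the reward trades served demand against energy cost. Every number below is an illustrative assumption:

```python
import random

def q_learning_rrh(demand_levels, n_episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Tabular Q-learning sketch of the RRH on/off decision (toy stand-in)."""
    rng = random.Random(seed)
    actions = [0, 1, 2, 3]                  # number of RRHs switched on
    Q = {(s, a): 0.0 for s in demand_levels for a in actions}

    def reward(state, a):
        served = min(state, a)              # each active RRH serves one demand unit
        energy_cost = 0.5 * a               # hypothetical per-RRH energy penalty
        return served - energy_cost

    for _ in range(n_episodes):
        s = rng.choice(demand_levels)
        if rng.random() < eps:              # epsilon-greedy exploration
            a = rng.choice(actions)
        else:
            a = max(actions, key=lambda x: Q[(s, x)])
        r = reward(s, a)
        # One-step (bandit-style) update: no next state in this toy version.
        Q[(s, a)] += alpha * (r - Q[(s, a)])
    return Q

Q = q_learning_rrh(demand_levels=[0, 1, 2, 3])
policy = {s: max([0, 1, 2, 3], key=lambda a: Q[(s, a)]) for s in [0, 1, 2, 3]}
```

The paper replaces the table with a 3-layer CNN over the environment state and a full DQN update over successive states; the reward structure (QoS served minus energy) is the common idea.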
Funding: Supported by the State Key Laboratory of Coal Resources and Safe Mining under Contract SKLCRSM16KFD04, and in part by the Natural Science Foundation of Beijing, China (8162035), the Fundamental Research Funds for the Central Universities (2016QJ04), the Yue Qi Young Scholar Project of CUMTB, and the National Training Program of Innovation and Entrepreneurship for Undergraduates (C201804970).
Abstract: Detecting underground disease is crucial for roadbed health monitoring and the maintenance of transport facilities, since it is closely related to structural health and reliability amid the rapid development of road traffic. Ground penetrating radar (GPR) is widely used to detect road and underground diseases. However, this remains a challenging task due to the demands of ubiquitous data access, transmission security, and data processing in the cloud. Cloud computing can provide scalable and powerful technologies for large-scale storage, processing, and dissemination of GPR data. Combining cloud computing with radar detection technology makes it possible to locate underground disease quickly and accurately. This paper presents the framework of a ground disease detection system based on cloud computing and proposes an attention region convolutional neural network for object detection in GPR images. Experimental results on precision and recall metrics show that the proposed approach is more efficient than traditional object detection methods for ground disease detection in a cloud-based system.
Abstract: Tax is very important to the whole country, so a scientific tax prediction model is needed. This paper introduces the theory of the cloud model. On this basis, it presents a cloud neural network and analyzes the main factors that influence tax revenue. It then proposes a tax prediction model based on the cloud neural network, which combines the strengths of the cloud model and the neural network. The experiment and simulation results show the effectiveness of the proposed algorithm.
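The cloud model the abstract refers to (Li's cloud model) represents a fuzzy concept by three numbers: expectation Ex, entropy En, and hyper-entropy He, and generates "cloud drops" via a two-stage normal sampling. A minimal forward normal cloud generator, with illustrative parameter values, can be sketched as:

```python
import math
import random

def forward_cloud(Ex, En, He, n, seed=0):
    """Forward normal cloud generator (cloud model), a minimal sketch.

    Each drop x is drawn from N(Ex, En'^2) where En' ~ N(En, He^2);
    its certainty degree is mu = exp(-(x - Ex)^2 / (2 * En'^2)).
    """
    rng = random.Random(seed)
    drops = []
    for _ in range(n):
        En_p = rng.gauss(En, He)
        x = rng.gauss(Ex, abs(En_p))
        mu = math.exp(-(x - Ex) ** 2 / (2 * En_p ** 2)) if En_p else 1.0
        drops.append((x, mu))
    return drops

# Hypothetical cloud: expectation 100 (e.g., an indexed revenue figure),
# entropy 5, hyper-entropy 0.5.
drops = forward_cloud(Ex=100.0, En=5.0, He=0.5, n=1000)
mean_x = sum(x for x, _ in drops) / len(drops)
```

In a cloud neural network, such generators typically replace crisp activations or inputs so the model carries fuzziness and randomness through the prediction.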
Abstract: Cloud computing is a large-scale network infrastructure where users, owners, third parties, authorized users, and customers can access and store their information quickly. Cloud computing has enabled the rapid growth of information in every field and created the need for centralized, efficient processing. However, the cloud is nowadays highly affected by internal threats. Sensitive applications in banking, hospitals, and business are especially vulnerable to real-user threats: an intruder presents as a user and becomes a member of the network, then tries to attack or steal sensitive data during information sharing or conversation. A major issue in today's technological development is identifying such insider threats in the cloud network. When data are lost, it is difficult to make amends to the compromised cloud users; privacy and security are not ensured, and the cloud is no longer trusted. Several solutions are available for the external security of the cloud network, but insider (internal) threats still need to be addressed. In this research work, we focus on identifying insider attacks using artificial intelligence techniques. An insider attack is possible through the nodes of weak users' systems: attackers log in with a weak user ID, connect to the network, and pretend to be a trusted node; they can then easily attack and hack information as an insider, and identifying them is very difficult. These types of attacks need intelligent solutions, and machine learning approaches are widely used for such security issues; however, existing approaches still fall short of classifying attackers accurately. This information hijacking problem motivates researchers to provide a solution for internal threats. In our proposed work, we track attackers using user interaction behavior patterns and a deep learning technique. The mouse movements, clicks, and keystrokes of the real user are stored in a database. The deep belief neural network is designed using restricted Boltzmann machines (RBMs), so that each RBM layer communicates with the previous and subsequent layers. The result is evaluated using a Cooja simulator based on a cloud environment. The accuracy and F-measure are considerably improved compared with the existing long short-term memory and support vector machine approaches.
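A deep belief network is stacked from RBM layers trained greedily; the core step is one-step contrastive divergence (CD-1) on a single binary RBM. The sketch below is a toy stand-in (random binary "behavior" patterns replace the paper's binarized mouse/keystroke features):

```python
import numpy as np

def rbm_cd1(data, n_hidden=8, epochs=200, lr=0.1, seed=0):
    """One-step contrastive divergence (CD-1) training of a binary RBM."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = rng.normal(0, 0.1, size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)
    b_h = np.zeros(n_hidden)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(epochs):
        v0 = data
        p_h0 = sigmoid(v0 @ W + b_h)                       # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)  # sample hidden units
        p_v1 = sigmoid(h0 @ W.T + b_v)                     # reconstruction
        p_h1 = sigmoid(p_v1 @ W + b_h)                     # negative phase
        # CD-1 gradient estimates, averaged over the batch.
        W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
        b_v += lr * (v0 - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    recon_error = float(np.mean((data - p_v1) ** 2))
    return W, recon_error

# Toy binary behavior patterns (two clear clusters).
patterns = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]], float)
W, err = rbm_cd1(patterns)
```

A DBN as described in the abstract would stack several such RBMs, feeding each layer's hidden probabilities to the next, then fine-tune the whole stack for the legitimate-vs-intruder classification.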
Funding: Supported by the National Natural Science Foundation of China (62231020, 62101401) and the Youth Innovation Team of Shaanxi Universities.
Abstract: The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities, and the high mobility of vehicles makes the caching environment even more complicated. Performing efficient cooperative caching therefore becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows the distributed cache nodes to cooperate. We then devise a communication link and content caching model to facilitate timely content delivery. Aiming to minimize transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives observations and a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance the hit ratio while reducing transmission delay and cache cost.
Abstract: In recent decades, cloud computing has played a prominent role in the health care sector, as patient health records are transferred and collected using cloud computing services. Doctors have switched to cloud computing because it provides multiple advantages, including ample storage space and easy availability without limitations. This calls for the medical field to be redesigned around cloud technology to preserve information about patients' critical diseases, electrocardiogram (ECG) reports, and payment details. The proposed work utilizes a hybrid cloud pattern to share Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) resources over the private and public cloud. The stored data are categorized as significant and non-significant by Artificial Neural Networks (ANNs). The significant data undergo encryption with Lagrange key management, which automatically generates the key and stores it in the hidden layer. Upon receiving a request from a secondary user, the primary user verifies the authentication of the request and transmits the key via Gmail to the secondary user. Once the key matches the key in the hidden layer, the preserved information is shared between the users. Due to the enhanced privacy-preserving key generation, the proposed work prevents keys from being tracked by malicious users. The outcomes reveal that the introduced work provides an improved success rate with reduced computational time.
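The abstract's "Lagrange key management" is not specified in detail; the standard mechanism such schemes build on is Lagrange interpolation over a finite field, as in Shamir secret sharing. The sketch below shows that mechanism (the modulus and parameters are illustrative assumptions, not the paper's scheme):

```python
import random

PRIME = 2**61 - 1  # Mersenne prime used as the field modulus (assumption)

def make_shares(secret, k, n, seed=0):
    """Split a key into n shares so that any k of them reconstruct it."""
    rng = random.Random(seed)
    # Random degree-(k-1) polynomial with the secret as constant term.
    coeffs = [secret] + [rng.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # Modular inverse via Fermat's little theorem (PRIME is prime).
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=123456789, k=3, n=5)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```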
Funding: Supported by the National Natural Science Foundation of China (No. 62262011) and the Natural Science Foundation of Guangxi (No. 2021JJA170130).
Abstract: Predicting the usage of container cloud resources has always been an important and challenging problem in improving the performance of cloud resource clusters. We propose an integrated stacking prediction method for container cloud resources based on variational mode decomposition (VMD), permutation entropy (PE), and a long short-term memory (LSTM) neural network, to address the prediction difficulties caused by the non-stationarity and volatility of resource data. The VMD algorithm decomposes the cloud resource time-series data into intrinsic mode functions and residual components, avoiding the end-effect and mode-mixing problems of other signal decomposition algorithms. Permutation entropy is used to evaluate the complexity of each intrinsic mode function, and components of similar entropy and low complexity are reconstructed to reduce the difficulty of modeling. Finally, LSTM and stacking fusion models are used to predict and superimpose the components; the stacking ensemble integrates gradient boosting regression (GBR), kernel ridge regression (KRR), and elastic net regression (ENet) as primary learners, while the secondary learner adopts kernel ridge regression for its solid generalization ability. Experiments on an Amazon public data set show that, compared with the Holt-Winters, LSTM, and NeuralProphet models, the method improves root mean square error (RMSE) by 0.338 to 1.913, mean absolute error (MAE) by 0.057 to 0.940, mean absolute percentage error (MAPE) by 0.000 to 0.017, and variance (VAR) by 1.038 to 8.481, demonstrating its stability and better prediction accuracy.
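The permutation entropy used to rank IMF complexity is simple enough to sketch directly: count ordinal patterns of length `order` over sliding windows and take the normalized Shannon entropy of their distribution. The series below are toy inputs:

```python
import math
from itertools import permutations

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy (PE) of a 1-D series.

    A monotone series has a single ordinal pattern (PE = 0); white noise
    approaches the maximum (PE = 1 when normalized).
    """
    counts = {p: 0 for p in permutations(range(order))}
    for i in range(len(series) - order + 1):
        window = series[i : i + order]
        # Ordinal pattern: ranks of the window values (ties broken by position).
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(math.factorial(order)) if normalize else H

pe_trend = permutation_entropy(list(range(50)))  # monotone series
pe_mixed = permutation_entropy([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8, 9, 7, 9])
```

In the paper's pipeline, IMFs with similar PE values are summed before being fed to the LSTM and stacking learners, reducing the number of sub-models.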
Abstract: Point cloud processing, transmission, and semantic segmentation are important analysis tasks in 3D computer vision. The effectiveness of graph neural networks and graph structures for point cloud research has now been established, and graph-based point cloud (GPC) research is emerging rapidly, so a unified research perspective, framework, and methodology need to take shape. This survey systematically reviews the application scenarios of GPC research, including registration, denoising, compression, representation learning, classification, segmentation, and detection; summarizes a general framework for GPC research; and proposes a technical roadmap covering the current full scope of the field. Specifically, it lays out a layered conceptual hierarchy of GPC research, comprising low-level data processing, mid-level representation learning, and high-level recognition tasks; surveys GPC models and algorithms in each area, including processing algorithms for static and dynamic point clouds, supervised and unsupervised representation learning models, and traditional or machine learning GPC recognition algorithms; summarizes representative results and their core ideas, such as dynamically updating the nearest-neighbor graph in each layer's feature space, hierarchical and parameter-shared dynamic point aggregation modules, and combining graph partitioning with graph convolution to improve segmentation accuracy; and compares model performance in terms of overall accuracy (OA), mean accuracy (mAcc), and mean intersection over union (mIoU). Based on the analysis and comparison of existing models and methods, it identifies the main challenges currently facing GPC, poses corresponding research questions, and looks ahead to future research directions. The GPC research framework established here is general and universal, providing later researchers in this new interdisciplinary field with domain positioning, a technical summary, and a macro perspective. Point cloud research arose as a natural consequence of major advances in detector hardware; its current state shows that several challenges remain between theory and practice, with some key problems still to be solved. Meanwhile, the development of point cloud research will push artificial intelligence into a new era.
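The three comparison metrics named above (OA, mAcc, mIoU) are all derived from a class confusion matrix. A minimal sketch with toy labels:

```python
import numpy as np

def segmentation_metrics(pred, gt, n_classes):
    """Compute OA, mAcc, and mIoU from predicted and ground-truth labels."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    conf = np.zeros((n_classes, n_classes), dtype=np.int64)
    for g, p in zip(gt, pred):
        conf[g, p] += 1
    tp = np.diag(conf).astype(float)
    oa = tp.sum() / conf.sum()                          # overall accuracy
    per_class_acc = tp / np.maximum(conf.sum(axis=1), 1)
    macc = per_class_acc.mean()                         # mean class accuracy
    union = conf.sum(axis=1) + conf.sum(axis=0) - tp    # |pred ∪ gt| per class
    miou = (tp / np.maximum(union, 1)).mean()           # mean IoU
    return oa, macc, miou

# Toy 3-class example.
gt   = [0, 0, 1, 1, 2, 2]
pred = [0, 1, 1, 1, 2, 0]
oa, macc, miou = segmentation_metrics(pred, gt, n_classes=3)
```

mIoU is the strictest of the three because false positives also shrink each class's union, which is why segmentation papers report it alongside OA and mAcc.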