A dynamic multi-beam resource allocation algorithm for large low Earth orbit (LEO) constellations based on on-board distributed computing is proposed in this paper. The allocation is a combinatorial optimization process under a series of complex constraints, and it is important for enhancing the match between resources and requirements. A complex algorithm is not practical because LEO on-board resources are limited. The proposed genetic algorithm (GA), based on a two-dimensional individual model and an uncorrelated single paternal inheritance method, is designed to support distributed computation and thereby enhance the feasibility of on-board application. A distributed system composed of eight embedded devices is built to verify the algorithm. A typical scenario is built in the system to evaluate the resource allocation process, the algorithm's mathematical model, the trigger strategy, and the distributed computation architecture. According to the simulation and measurement results, the proposed algorithm can provide an allocation result for more than 1500 tasks in 14 s with a success rate above 91% in a typical scene. The response time is decreased by 40% compared with the conventional GA.
Funding: the National Key Research and Development Program of China (2021YFB2900603) and the National Natural Science Foundation of China (61831008).
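As a rough illustration of why single-parent inheritance suits distributed execution, here is a minimal sketch of a GA whose individuals are two-dimensional beam-by-slot matrices and whose offspring are produced by mutating one parent only, so sub-populations never need to exchange mates. All names, sizes, and the toy fitness are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: a 2-D GA individual (beams x time slots) evolved by
# single-parent variation only (no crossover), the property that makes
# the scheme easy to distribute across workers. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
N_BEAMS, N_SLOTS, N_TASKS = 4, 16, 40

def random_individual():
    # gene [b, t] = task id served by beam b in slot t (-1 = idle)
    return rng.integers(-1, N_TASKS, size=(N_BEAMS, N_SLOTS))

def fitness(ind):
    # toy objective: number of distinct tasks served, a stand-in for the
    # paper's match between resources and requirements
    return len(np.unique(ind[ind >= 0]))

def mutate(parent, rate=0.05):
    child = parent.copy()
    mask = rng.random(parent.shape) < rate
    child[mask] = rng.integers(-1, N_TASKS, size=mask.sum())
    return child

pop = [random_individual() for _ in range(20)]
for _ in range(100):
    # each child inherits from exactly one parent: no pairing needed
    children = [mutate(p) for p in pop]
    pop = sorted(pop + children, key=fitness, reverse=True)[:20]
print("best fitness:", fitness(pop[0]))
```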
The main goal of distribution network (DN) expansion planning is essentially to achieve minimal investment constrained by specified reliability requirements. The reliability-constrained distribution network planning (RcDNP) problem can be cast as an instance of mixed-integer linear programming (MILP), which involves an ultra-heavy computation burden, especially for large-scale DNs. In this paper, we propose a parallel computing based solution method for the RcDNP problem. The RcDNP is decomposed into a backbone grid problem and several lateral grid problems with coordination. A parallelizable augmented Lagrangian algorithm with an acceleration method is then developed to solve the coordinated planning problems. The lateral grid problems are solved in parallel through coordination with the backbone grid planning problem. Gauss-Seidel iteration is adopted on a subset of the convex hull of the feasible region constructed by the decomposition. Under mild conditions, the optimality and convergence of the proposed method are verified. Numerical tests show that the proposed method can significantly reduce solution time and make RcDNP applicable to real-world problems.
Funding: supported in part by the State Grid Science and Technology Program of China (No. 5100-202121561A-0-5-SF).
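For orientation, a generic augmented Lagrangian for such a decomposition and its Gauss-Seidel sweep might look as follows, with x the backbone decision, y_i the lateral decisions, and A_i the coupling constraints; the symbols are illustrative assumptions, not the paper's notation.

```latex
\mathcal{L}_{\rho}\big(x,\{y_i\},\{\lambda_i\}\big)
  = f_0(x) + \sum_i f_i(y_i)
  + \sum_i \lambda_i^{\top}(A_i x - y_i)
  + \frac{\rho}{2}\sum_i \lVert A_i x - y_i \rVert_2^2
```

```latex
x^{k+1} = \arg\min_x \mathcal{L}_{\rho}\big(x,\{y_i^k\},\{\lambda_i^k\}\big),\quad
y_i^{k+1} = \arg\min_{y_i} \mathcal{L}_{\rho}\big(x^{k+1}, y_i, \lambda_i^k\big)
  \ \text{(laterals in parallel)},\quad
\lambda_i^{k+1} = \lambda_i^k + \rho\big(A_i x^{k+1} - y_i^{k+1}\big)
```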
Brain tumors come in various types, each with distinct characteristics and treatment approaches, making manual detection a time-consuming and potentially ambiguous process. Brain tumor detection is a valuable tool for gaining a deeper understanding of tumors and improving treatment outcomes. Machine learning models have become key players in automating brain tumor detection, and gradient descent methods are the mainstream algorithms for solving them. In this paper, we propose a novel distributed proximal stochastic gradient descent approach to solve the L1-smooth Support Vector Machine (SVM) classifier for brain tumor detection. Firstly, the smooth hinge loss is introduced as the loss function of the SVM; it avoids the nondifferentiability at the zero point encountered by the traditional hinge loss during gradient descent optimization. Secondly, L1 regularization is employed to sparsify features and enhance the robustness of the model. Finally, adaptive proximal stochastic gradient descent with momentum (PGD) and distributed adaptive PGD with momentum (DPGD) are proposed and applied to the L1-smooth SVM. Distributed computing is crucial in large-scale data analysis; its value lies in extending algorithms to distributed clusters, enabling more efficient processing of massive amounts of data. The DPGD algorithm leverages Spark, enabling full utilization of a machine's multi-core resources. Owing to the sparsity induced by L1 regularization of the parameters, it exhibits significantly accelerated convergence. From the perspective of loss reduction, DPGD converges faster than PGD. The experimental results show that adaptive PGD with momentum and its variants achieve cutting-edge accuracy and efficiency in brain tumor detection. With pre-trained models, both PGD and DPGD outperform other models, reaching an accuracy of 95.21%.
Funding: the Natural Science Foundation of Ningxia Province (No. 2021AAC03230).
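To make the two ingredients concrete, here is a minimal sketch of one proximal SGD step for an L1-regularized SVM with a smooth hinge loss: a gradient step on the smooth part followed by soft-thresholding, the proximal operator of the L1 term. The quadratic smoothing used below is one common choice; the paper's exact loss, step rule, and momentum terms may differ, and all names are illustrative.

```python
# Hedged sketch: proximal gradient step for an L1-regularized smooth-hinge SVM.
import numpy as np

def smooth_hinge_grad(w, X, y):
    # smoothed hinge: 0 for z>=1, (1-z)^2/2 for 0<=z<1, 1/2-z for z<0
    z = y * (X @ w)
    g = np.where(z < 0, -1.0, np.where(z < 1, z - 1.0, 0.0))  # dloss/dz
    return (g * y) @ X / len(y)

def soft_threshold(w, t):
    # proximal operator of t * ||w||_1
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def pgd_step(w, X, y, eta=0.1, lam=1e-3):
    # gradient step on the smooth part, prox step on the L1 part
    return soft_threshold(w - eta * smooth_hinge_grad(w, X, y), eta * lam)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
w_true = np.zeros(10); w_true[:3] = 1.0        # sparse ground truth
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))
w = np.zeros(10)
for _ in range(300):
    w = pgd_step(w, X, y)
print("nonzero weights:", np.count_nonzero(np.round(w, 4)))
```

The soft-thresholding step is what zeroes out small weights, which is where the sparsity (and the faster convergence the abstract mentions) comes from.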
Task scheduling plays a key role in effectively managing and allocating computing resources to meet various computing tasks in a cloud computing environment. Short execution time and low load imbalance can be challenging for some algorithms in resource scheduling scenarios. In this work, the Hierarchical Particle Swarm Optimization-Evolutionary Artificial Bee Colony Algorithm (HPSO-EABC) is proposed, which hybridizes our Evolutionary Artificial Bee Colony (EABC) and Hierarchical Particle Swarm Optimization (HPSO) algorithms. The HPSO-EABC algorithm incorporates the advantages of both HPSO and EABC. Comprehensive testing, including evaluations of algorithm convergence speed, resource execution time, load balancing, and operational costs, has been done. The results indicate that the EABC algorithm exhibits greater parallelism compared to the Artificial Bee Colony algorithm. Compared with the Particle Swarm Optimization algorithm, the HPSO algorithm not only improves global search capability but also effectively mitigates getting stuck in local optima. As a result, the hybrid HPSO-EABC algorithm demonstrates significant improvements in stability and convergence speed. Moreover, it exhibits enhanced resource scheduling performance in both homogeneous and heterogeneous environments, effectively reducing execution time and cost, which is also verified by ablation experiments.
Funding: jointly supported by the Jiangsu Postgraduate Research and Practice Innovation Project under Grants KYCX22_1030, SJCX22_0283 and SJCX23_0293, and the NUPTSF under Grant NY220201.
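For reference, the canonical PSO velocity and position updates that HPSO builds on are shown below; the hierarchical structure and the evolutionary bee-colony operators are the paper's contributions and are not reproduced here.

```latex
v_i^{k+1} = \omega\, v_i^{k} + c_1 r_1 \big(p_i^{\text{best}} - x_i^{k}\big)
          + c_2 r_2 \big(g^{\text{best}} - x_i^{k}\big),
\qquad x_i^{k+1} = x_i^{k} + v_i^{k+1}
```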
With the development of multi-signal monitoring technology, the analysis and processing of multiple signals has become a hot research subject. Mechanical equipment often works under variable working conditions, and the acquired vibration signals are often non-stationary and nonlinear, which makes them difficult to process with traditional analysis methods. In order to solve the noise reduction problem for multiple signals under variable speed, a COT-DCS method is proposed, combining Computed Order Tracking (COT) based on Chirplet Path Pursuit (CPP) with Distributed Compressed Sensing (DCS). Firstly, the instantaneous frequency (IF) is extracted by CPP, and the speed is obtained by fitting. Then, the speed is used for equal-angle sampling of the time-domain signals, and angle-domain signals are obtained by COT without a tachometer to eliminate the non-stationarity; the angle-domain signals are then compressed and reconstructed by DCS to achieve noise reduction of multiple signals. The accuracy of the CPP method is verified on simulated and experimental signals and compared with some existing IF extraction methods. The COT method also shows good signal stabilization ability in simulation and experiment. Finally, in a comparative test against two other algorithms using four noise reduction indicators, the CPP-based COT-DCS combines the advantages of the two algorithms and shows better noise reduction and stability. This shows that the method is an effective multi-signal noise reduction method.
Funding: the National Natural Science Foundation of Hebei Province, China, under Grant E2020208052.
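The tacholess order-tracking step can be sketched in a few lines: given an IF estimate, integrate it to a shaft-angle curve, then interpolate the time-domain signal onto an equal-angle grid. The IF below is synthetic (the paper obtains it via chirplet path pursuit), and all values are illustrative assumptions.

```python
# Hedged sketch: tacholess computed order tracking (COT) by equal-angle
# resampling. The angle-domain signal has one cycle per 2*pi of shaft
# angle regardless of speed, which removes the speed-induced nonstationarity.
import numpy as np

fs = 1000.0
t = np.arange(0, 5, 1 / fs)
f_inst = 10 + 4 * t                           # assumed IF profile, Hz
x = np.sin(2 * np.pi * (10 * t + 2 * t**2))   # its phase integrates f_inst

theta = 2 * np.pi * np.cumsum(f_inst) / fs    # unwrapped shaft angle, rad
d_theta = np.pi / 32                          # equal-angle step
theta_grid = np.arange(theta[0], theta[-1], d_theta)
x_angle = np.interp(theta_grid, theta, x)     # angle-domain signal
print(len(x_angle), "equal-angle samples")
```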
This work aims to analyse the actions that companies working in large-scale distribution carry out along their value chains to minimise impacts on climate change. Companies operating in this field are aware that acting directly on core processes alone is less effective, and they need to involve the upstream value chain in their carbon reduction strategy. These businesses, in fact, need to focus on indirect GHG (Greenhouse Gas) emissions and depend on how suppliers manage their impacts. In this sector, virtuous companies collaborate with their suppliers on a common path of quantifying and cutting these impacts together. This aspect is particularly relevant in the case of large-scale retailers. However, the process is not immediate, since the supply chain is usually very dense and diverse, for instance adopting various approaches that do not always coincide. In any case, the key step is mapping these suppliers. One of the tools most used for this purpose is the survey: a quick instrument able to reach hundreds of suppliers at the same time and receive fast, standardized responses, which can easily be processed into a comprehensive and harmonized mapping of the results, as the first step for the subsequent implementation of mitigation strategies.
Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making it very difficult to analyze them efficiently. To address this problem, this paper presents a distributed framework that reimplements CoFex, one of the state-of-the-art algorithms, using MapReduce. To do so, an in-depth analysis of CoFex's limitations is conducted from the perspectives of efficiency and memory consumption when applying it to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, the procedure is modified to follow the MapReduce framework so that the prediction task is performed distributively. A series of extensive experiments has been conducted to evaluate the performance of the framework in terms of both efficiency and accuracy. Experimental results demonstrate that the proposed framework improves computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
Funding: supported in part by the National Natural Science Foundation of China (61772493); the CAAI-Huawei MindSpore Open Fund (CAAIXSJLJJ-2020-004B); the Natural Science Foundation of Chongqing, China (cstc2019jcyjjqX0013); the Chongqing Research Program of Technology Innovation and Application (cstc2019jscx-fxydX0024, cstc2019jscx-fxydX0027, cstc2018jszx-cyzdX0041); the Guangdong Province Universities and College Pearl River Scholar Funded Scheme (2019); the Pioneer Hundred Talents Program of Chinese Academy of Sciences; and the Deanship of Scientific Research (DSR) at King Abdulaziz University (G-21-135-38).
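The map/shuffle/reduce pattern the framework follows can be illustrated with a toy pairwise scorer: mappers score candidate protein pairs independently, and a reducer aggregates the per-pair evidence. CoFex's actual feature extraction and the tree-based sequence store are not reproduced here; the shared-k-mer score and all names are illustrative assumptions.

```python
# Hedged sketch: the generic MapReduce shape of distributed PPI prediction.
from collections import defaultdict

def mapper(pair, seqs):
    a, b = pair
    # stand-in feature score: shared 3-mers; CoFex's real features are richer
    kmers = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
    return (pair, len(kmers(seqs[a]) & kmers(seqs[b])))

def reduce_scores(mapped, threshold=2):
    # shuffle/reduce: group scores by pair and emit predicted interactions
    grouped = defaultdict(list)
    for key, score in mapped:
        grouped[key].append(score)
    return {k: max(v) >= threshold for k, v in grouped.items()}

seqs = {"P1": "MKTAYIAKQR", "P2": "MKTAYLLQRD", "P3": "GGGSSS"}
pairs = [("P1", "P2"), ("P1", "P3"), ("P2", "P3")]
mapped = [mapper(p, seqs) for p in pairs]   # map phase: runs in parallel
print(reduce_scores(mapped))                # reduce phase: aggregates
```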
In LEO (Low Earth Orbit) satellite communication systems, the satellite network is made up of a large number of satellites, and the dynamically changing network environment affects the results of distributed computing. In order to improve the fault tolerance rate, a novel public blockchain consensus mechanism that applies a distributed computing architecture in a public network is proposed. Redundant calculation on the blockchain ensures the credibility of the results, and the transactions carrying the calculation results of a task are stored distributively, in sequence, in Directed Acyclic Graphs (DAG). The transactions issued by nodes are connected to form a net, which can quickly provide node reputation evaluation without relying on third parties. Simulations show that the proposed blockchain has the following advantages: 1) the task processing speed of the blockchain can be close to that of the fastest node in the entire blockchain; 2) when the tasks' arrival time intervals and the demanded working nodes (WNs) meet certain conditions, the network can tolerate more than 50% of devices being malicious; 3) whether the number of nodes in the blockchain is increased or reduced, the network stays robust by adjusting the tasks' arrival time interval and the demanded WNs.
Funding: funded in part by the National Natural Science Foundation of China (Grants no. 61772352, 62172061, 61871422); the National Key Research and Development Project (Grants no. 2020YFB1711800 and 2020YFB1707900); the Science and Technology Project of Sichuan Province (Grants no. 2021YFG0152, 2021YFG0025, 2020YFG0479, 2020YFG0322, 2020GFW035, 2020GFW033, 2020YFH0071); the R&D Project of Chengdu City (Grant no. 2019-YF05-01790-GX); and the Central Universities of Southwest Minzu University (Grant no. ZYN2022032).
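To make the redundancy idea concrete, here is a toy majority-vote check over redundant worker results. Note that a plain majority only tolerates fewer than half malicious workers; it is the paper's DAG-based reputation net that pushes tolerance beyond 50% under the stated conditions. All names and numbers below are illustrative assumptions.

```python
# Hedged sketch: redundant task execution with a majority decision, the
# baseline mechanism that makes redundantly computed results trustworthy.
from collections import Counter
import random

random.seed(1)

def run_task(correct_result, n_workers, malicious_ratio):
    results = []
    for _ in range(n_workers):
        honest = random.random() > malicious_ratio
        results.append(correct_result if honest else "forged")
    winner, votes = Counter(results).most_common(1)[0]
    return winner, votes / n_workers        # accepted result and its support

print(run_task("hash-of-correct-result", n_workers=7, malicious_ratio=0.3))
```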
To securely support large-scale intelligent applications, distributed machine learning based on blockchain is an intuitive solution. However, such distributed machine learning is difficult to train because the corresponding optimization solvers converge slowly and place high demands on computing and memory resources. To overcome these challenges, we propose a distributed computing framework for the L-BFGS optimization algorithm based on the variance reduction method; it is a lightweight, low-cost, and parallelized scheme for the model training process. To validate the claims, we have conducted several experiments on multiple classical datasets. Results show that the proposed computing framework steadily accelerates the training process of the solver in both local and distributed modes.
Funding: partly supported by the National Key Basic Research Program of China (2016YFB1000100) and the National Natural Science Foundation of China (No. 61402490).
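The variance reduction ingredient can be sketched on its own: an SVRG-style estimator corrects each stochastic gradient with a control variate anchored at a periodic full-gradient snapshot, which is the kind of low-variance gradient a stochastic L-BFGS scheme needs. The L-BFGS two-loop direction itself is omitted; the least-squares objective and all names are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch: SVRG-style variance-reduced gradient estimation.
import numpy as np

def full_grad(w, X, y):
    return X.T @ (X @ w - y) / len(y)          # least-squares example

def svrg_grad(w, w_snap, g_snap, xi, yi):
    # stochastic gradient at w, corrected by the snapshot control variate
    gi = xi * (xi @ w - yi)
    gi_snap = xi * (xi @ w_snap - yi)
    return gi - gi_snap + g_snap

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
w_true = rng.normal(size=20)
y = X @ w_true
w = np.zeros(20)
for epoch in range(10):
    w_snap, g_snap = w.copy(), full_grad(w, X, y)   # snapshot once per epoch
    for _ in range(500):
        i = rng.integers(len(y))
        w -= 0.005 * svrg_grad(w, w_snap, g_snap, X[i], y[i])
print("parameter error:", np.linalg.norm(w - w_true))
```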
Recently, the wireless distributed computing (WDC) concept has emerged, promising manifold improvements to current wireless technologies. Despite the various expected benefits of this concept, significant drawbacks have been addressed in the open literature. One of WDC's key challenges is the impact of wireless channel quality on the load of distributed computations. Therefore, this research investigates the wireless channel impact on WDC performance when the latter is applied to spectrum sensing in cognitive radio (CR) technology. A trade-off is found between accuracy and computational complexity in spectrum sensing approaches: increasing their accuracy is accompanied by an increase in computational complexity, which results in greater power consumption and processing time. A novel WDC scheme for the cyclostationary feature detection spectrum sensing approach is proposed in this paper and thoroughly investigated. The benefits of the proposed scheme are first presented. Then, the impact of the wireless channel on the proposed scheme is addressed considering two scenarios. In the first scenario, workload matrices are distributed over the wireless channel
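The computation being distributed here, cyclostationary feature detection, rests on estimating the cyclic autocorrelation and looking for features at known cycle frequencies. Below is a discrete estimate on a BPSK-like signal, which shows a feature at the symbol-rate cycle frequency; how the paper partitions this workload across nodes is not reproduced, and all signal parameters are illustrative assumptions.

```python
# Hedged sketch: cyclic autocorrelation estimate for cyclostationary
# feature detection. A feature at alpha = symbol rate indicates a
# cyclostationary signal is present; off-cycle values stay small.
import numpy as np

rng = np.random.default_rng(0)
fs, sym_rate, n = 8000, 500, 8192
bits = rng.choice([-1.0, 1.0], size=n * sym_rate // fs + 1)
x = bits[np.arange(n) * sym_rate // fs] + 0.5 * rng.normal(size=n)

def cyclic_autocorr(x, alpha, lag, fs):
    t = np.arange(len(x) - lag) / fs
    prod = x[lag:] * np.conj(x[:len(x) - lag])
    return np.abs(np.mean(prod * np.exp(-2j * np.pi * alpha * t)))

print(cyclic_autocorr(x, alpha=sym_rate, lag=4, fs=fs))  # on-cycle: large
print(cyclic_autocorr(x, alpha=437.0, lag=4, fs=fs))     # off-cycle: small
```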
This paper presents a distributed optimization strategy for large-scale traffic networks based on fog computing. Different from the traditional cloud-based centralized optimization strategy, the fog-based distributed optimization strategy distributes its computing tasks to individual sub-processors, thus significantly reducing computation time. A traffic model is built, and a series of communication rules between subsystems is set to ensure that the entire transportation network can be globally optimized while each subsystem achieves its local optimization. Finally, this paper numerically simulates the operation of the traffic network by mixed-integer programming and compares the advantages and disadvantages of the two optimization strategies.
Funding: supported by the Natural Science Foundation of China under Grants 61873017 and 61473016, in part by the Beijing Natural Science Foundation under Grant Z180005, and in part by the National Research Foundation of South Africa under Grant 113340 and the Oppenheimer Memorial Trust Grant.
Distributed cryptographic computing systems play an important role, since cryptographic computing is extremely computation-intensive. However, no general cryptographic computing system is available. Grid technology can give efficient computational support to cryptographic applications. Therefore, a general-purpose grid-based distributed computing system called DCCS is put forward in this paper. The architecture of DCCS is briefly described first. The task division policy adopted in DCCS is then presented, and the method for managing subtasks is further discussed in detail. Furthermore, the building and execution process of a computing job is described. Finally, the details of the DCCS implementation under Globus Toolkit 4 are illustrated.
Funding: supported by the National Basic Research Program of China (973 Program, 2004CB318004), the National Natural Science Foundation of China (NSFC 90204016), and the National High Technology Research and Development Program of China (2003AA144030).
In this paper, a short-circuit computation (SCC) procedure for large-scale distribution systems with high penetration of distributed generators based on contemporary technologies is proposed. The procedure is suitable for real-time calculations. The modeling of modern distributed generators differs from the modeling of traditional synchronous and induction generators; hence, SCC procedures founded on the presumption of distribution systems with only traditional generators are not suitable for today's systems. In the work presented in this paper, the improved backward/forward sweep (IBFS) procedure is used to compute the state of the system under a short circuit. Computation results show that the IBFS procedure is much more robust than previous SCC procedures, as it takes into account all distribution system elements, including modern distributed generators.
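For readers unfamiliar with the baseline, one plain backward/forward sweep pass on a radial feeder is sketched below: a backward sweep accumulates branch currents from the leaves toward the source, and a forward sweep updates voltages from the source outward. The paper's improved IBFS and its inverter-based generator models are not reproduced; distributed generators would enter as additional current injections, and all network data are illustrative assumptions.

```python
# Hedged sketch: one classic backward/forward sweep (BFS) load flow on a
# 4-bus radial feeder, the baseline the improved IBFS builds on.
import numpy as np

parent = [-1, 0, 1, 2]                                   # feeder: 0-1-2-3
z = np.array([0, 0.01 + 0.02j, 0.01 + 0.02j, 0.01 + 0.02j])  # branch impedances
s_load = np.array([0, 0.5 + 0.2j, 0.5 + 0.2j, 0.5 + 0.2j])   # bus loads, p.u.
v = np.ones(4, dtype=complex)                            # flat start, bus 0 = slack

for _ in range(20):
    i_inj = np.conj(s_load / v)          # load currents at present voltages
    i_br = i_inj.copy()
    for b in range(3, 0, -1):            # backward sweep: accumulate currents
        i_br[parent[b]] += i_br[b]
    for b in range(1, 4):                # forward sweep: update voltages
        v[b] = v[parent[b]] - z[b] * i_br[b]
print("bus voltage magnitudes:", np.abs(v))
```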
This paper discusses a model of how agents are applied to implement distributed computing in Ada95 and presents a dynamic allocation strategy for distributed computing based on pre-allocation and agents. The aim of this strategy is to realize dynamic equilibrium allocation.
In this paper, we adopt the Java platform to build a multi-tier distributed object enterprise computing model, which provides an open, flexible, robust, and cross-platform standard for enterprise applications of the new generation. On top of this model, we define remote server objects as session or entity objects according to their roles in a distributed application server, which separates information details from business operations for software reuse. A web store system is implemented using this multi-tier distributed object enterprise computing model.
This paper examines planning management problems in a Multiagent-based Distributed Open Computing Environment Model (MDOCEM). First, the meaning of planning management in MDOCEM is introduced; then a formal method to describe the associated task partition problems is presented, and a heuristic algorithm which gives an approximate optimum solution is given. Finally, the task coordination and the integration of execution results are discussed.
Vehicular networks have been envisioned to provide us with numerous interesting services, such as the dissemination of real-time safety warnings and commercial advertisements via car-to-car communication. However, efficient routing is a research challenge due to the highly dynamic nature of these networks, and the availability of connections imposes an additional constraint. Our earlier work in the area of efficient dissemination integrates the advantages of middleware operations with multicast routing to design a framework for distributed routing in vehicular networks. Cloud computing makes use of pools of physical computing resources to meet the requirements of such highly dynamic networks. The solution proposed in this paper applies the principles of cloud computing to our existing framework. The routing protocol works at the network layer for the formation of clouds in specific geographic regions. Simulation results present the efficiency of the model in terms of service discovery, download time, and the queuing delay at the controller nodes.
In large-scale Distributed Virtual Environment (DVE) multimedia systems, one of the key challenges is to preserve causal order delivery of messages in real time and in a distributed way. Most existing causal order control approaches with real-time constraints use vector time as causal control information, which is closely coupled with system scale. As the scale expands, each message is attached with a large amount of control information, which introduces too much network transmission overhead to maintain real-time causal order delivery. In this article, a novel Lightweight Real-Time Causal Order (LRTCO) algorithm is proposed for large-scale DVE multimedia systems. LRTCO predicts and compares the network transmission times of messages so as to select the proper causal control information, whose amount is dynamically adapted to network latency variations and independent of system scale. The control information in LRTCO is effective in preserving causal order delivery of messages and lightweight enough to maintain the real-time property of DVE systems. Experimental results demonstrate that LRTCO incurs low transmission overhead and communication bandwidth, efficiently reduces causal order violations, and improves the scalability of DVE systems.
Funding: supported by the Hunan Provincial Natural Science Foundation of China (Grant No. 2017JJ2016); the Hunan Provincial Education Science 13th Five-Year Plan (Grant No. XJK016BXX001); the Social Science Foundation of Hunan Province (Grant No. 17YBA049); the 2017 Hunan Provincial Higher Education Teaching Reform Research Project (Grant No. 564); the Scientific Research Fund of Hunan Provincial Education Department (Grants No. 16C0269 and 17B046); the Open Foundation for University Innovation Platform of Hunan Province, China (Grant No. 16K013); and the 2011 Collaborative Innovation Center of Big Data for Financial and Economical Asset Development and Utility in Universities of Hunan Province. The authors also thank the anonymous reviewers for their valuable comments and insightful suggestions.
An antenna selection algorithm based on large-scale fading between the transmitter and receiver is proposed for uplink receive antenna selection in distributed multiple-input multiple-output (D-MIMO) systems. By utilizing radio access unit (RAU) selection based on large-scale fading, the proposed algorithm enormously decreases the computational complexity. Based on the characteristics of distributed systems, an improved particle swarm optimization (PSO) is proposed for the antenna selection that follows the RAU selection. In order to better apply the improved PSO algorithm to antenna selection, the general form of the channel capacity is transformed into a binary expression by analyzing the capacity formula. The proposed algorithm can make full use of the advantages of D-MIMO systems and achieve near-optimal performance in terms of channel capacity with low computational complexity.
Funding: supported by the National Natural Science Foundation of China (Nos. 61201086, 61272495), the China Scholarship Council (No. 201506375060), the Planned Science and Technology Project of Guangdong Province (No. 2013B090500007), and the Dongguan Project on the Integration of Industry, Education and Research (No. 2014509102205).
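The "binary expression" idea can be illustrated directly: encode the selection as a 0/1 vector over receive antennas and evaluate the standard MIMO capacity of the selected submatrix, which is exactly the fitness a binary PSO would maximize. Large-scale fading, the RAU pre-selection stage, and the paper's PSO modifications are omitted; the random search below merely exercises the fitness, and all parameters are illustrative assumptions.

```python
# Hedged sketch: capacity of a binary receive-antenna selection,
# C = log2 det(I + (SNR / n_tx) * Hs Hs^H), the usual PSO fitness.
import numpy as np

rng = np.random.default_rng(0)
n_rx, n_tx, snr = 8, 4, 10.0
H = (rng.normal(size=(n_rx, n_tx)) + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2)

def capacity(select):
    # select: binary vector over receive antennas (one PSO particle)
    Hs = H[select.astype(bool), :]
    m = Hs @ Hs.conj().T
    return np.real(np.log2(np.linalg.det(np.eye(len(m)) + (snr / n_tx) * m)))

# stand-in for the PSO search loop: evaluate random binary particles
best = max((rng.integers(0, 2, n_rx) for _ in range(200)), key=capacity)
print(best, capacity(best))
```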
Processing large-scale 3-D gravity data is an important topic in the geophysics field. Many existing inversion methods lack the capacity to process massive data and to be applied in practice. This study applies GPU parallel processing technology to the focusing inversion method, aiming at improving inversion accuracy while speeding up calculation and reducing memory consumption, thus obtaining fast and reliable inversion results for large complex models. In this paper, equivalent storage of the geometric trellis is used to calculate the sensitivity matrix, and the inversion is based on GPU parallel computing technology. The parallel computing program, optimized by reducing data transfer, access restrictions, and instruction restrictions, as well as by latency hiding, greatly reduces memory usage, speeds up calculation, and makes fast inversion of large models possible. By comparing and analyzing the computing speed of the traditional single-thread CPU method and CUDA-based GPU parallel technology, the excellent acceleration performance of GPU parallel computing is verified, which provides ideas for the practical application of theoretical inversion methods that have been restricted by computing speed and computer memory. The model test verifies that the focusing inversion method can overcome the problems of a severe skin effect and ambiguity of geological body boundaries. Moreover, increasing the number of model cells and inversion data can more clearly depict the boundary position of the abnormal body and delineate its specific shape.
Funding: supported by a project of the National Natural Science Foundation of China (No. 41874134).
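For context, one common form of a focusing (minimum-support) inversion objective is sketched below, with G the sensitivity matrix, m the model, d the observed data, W_d a data weighting, and e a small focusing parameter that drives the stabilizer to favor compact bodies with sharp boundaries; the paper's exact functional and weighting may differ.

```latex
\min_{m}\;\; \big\lVert W_d \,(G m - d) \big\rVert_2^2
\;+\; \lambda \sum_{k} \frac{m_k^2}{m_k^2 + e^2}
```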