This paper is concerned with distributed Nash equilibrium seeking strategies under quantized communication. In the proposed seeking strategy, a projection operator is synthesized with a gradient search method to achieve the optimization of players' objective functions while restricting their actions within required non-empty, convex and compact domains. In addition, a leader-following consensus protocol, in which quantized information flows are utilized, is employed for information sharing among players. More specifically, logarithmic quantizers and uniform quantizers are investigated under undirected and connected communication graphs and strongly connected digraphs, respectively. Through Lyapunov stability analysis, it is shown that players' actions can be steered to a neighborhood of the Nash equilibrium with logarithmic and uniform quantizers, and the quantified convergence error depends on the parameter of the quantizer for both undirected and directed cases. A numerical example is given to verify the theoretical results.
Funding: supported by the National Natural Science Foundation of China (NSFC) (62222308, 62173181, 62073171, 62221004), the Natural Science Foundation of Jiangsu Province (BK20200744, BK20220139), the Jiangsu Specially-Appointed Professor program (RK043STP19001), the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), the 1311 Talent Plan of Nanjing University of Posts and Telecommunications, and the Fundamental Research Funds for the Central Universities (30920032203).
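The sketch below illustrates the flavor of such a scheme: a projected-gradient action update in which the values players exchange pass through a logarithmic quantizer. The two-player quadratic game, step size and quantizer density are illustrative assumptions, not the paper's exact algorithm or parameters.

```python
# Illustrative sketch (not the paper's exact algorithm): a projected-gradient
# Nash-seeking update in which the actions players share pass through a
# logarithmic quantizer. The two-player quadratic game, step size and
# quantizer density below are hypothetical placeholders.
import numpy as np

def log_quantize(x, u0=1.0, rho=0.8):
    """Map x to the nearest level u0*rho^i on a geometric grid (sign preserved),
    so the quantization error scales with |x| as in a logarithmic quantizer."""
    if x == 0.0:
        return 0.0
    i = np.round(np.log(abs(x) / u0) / np.log(rho))
    return float(np.sign(x) * u0 * rho ** i)

def project(x, lo, hi):
    """Projection onto the players' compact, convex action set [lo, hi]."""
    return np.clip(x, lo, hi)

# Hypothetical game: f_i(x) = (x_i - a_i)^2 + b * x_1 * x_2, actions in [0, 1].
a = np.array([0.6, 0.3])
b = 0.5
x = np.array([0.0, 1.0])          # initial actions
step = 0.05

for _ in range(500):
    q = np.array([log_quantize(v) for v in x])   # players exchange quantized actions
    grad = np.array([2 * (x[0] - a[0]) + b * q[1],
                     2 * (x[1] - a[1]) + b * q[0]])
    x = project(x - step * grad, 0.0, 1.0)

print("actions near the Nash equilibrium:", np.round(x, 3))
```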
In the past two decades, extensive and in-depth research has been conducted on Time Series InSAR technology with the advancement of high-performance SAR satellites and the accumulation of big SAR data. The introduction of distributed scatterers in Distributed Scatterers InSAR (DS-InSAR) has significantly expanded the application scenarios of InSAR geodetic measurement by increasing the number of measurement points. This study traces the history of DS-InSAR, presents the definition and characteristics of distributed scatterers, and focuses on exploring the relationships and distinctions among proposed algorithms in two crucial steps: statistically homogeneous pixel selection and phase optimization. Additionally, the latest research progress in this field is tracked and possible future development directions are discussed. Through simulation experiments and two real InSAR case studies, the proposed algorithms are compared and verified, and the advantages of DS-InSAR in deformation measurement practice are demonstrated. This work not only offers insights into current trends and focal points for theoretical research on DS-InSAR but also provides practical cases and guidance for applied research.
Funding: National Natural Science Foundation of China (No. 42374013); National Key Research and Development Program of China (Nos. 2019YFC1509201, 2021YFB3900604-03).
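As a concrete illustration of the first of the two steps discussed above, the sketch below selects statistically homogeneous pixels by applying a two-sample Kolmogorov-Smirnov test to amplitude time series in a search window; the synthetic amplitude stack, window size and significance level are assumptions rather than any specific published algorithm.

```python
# A minimal sketch of statistically homogeneous pixel (SHP) selection: pixels in a
# search window whose amplitude time series cannot be distinguished from the
# centre pixel's by a two-sample Kolmogorov-Smirnov test are treated as
# homogeneous. The synthetic amplitude stack, window size and significance level
# are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
n_images, rows, cols = 30, 21, 21
amp = rng.gamma(shape=4.0, scale=1.0, size=(n_images, rows, cols))  # amplitude stack

def shp_mask(stack, r0, c0, half_win=5, alpha=0.05):
    """Boolean mask of pixels statistically homogeneous with (r0, c0)."""
    ref = stack[:, r0, c0]
    mask = np.zeros(stack.shape[1:], dtype=bool)
    for r in range(max(0, r0 - half_win), min(stack.shape[1], r0 + half_win + 1)):
        for c in range(max(0, c0 - half_win), min(stack.shape[2], c0 + half_win + 1)):
            _, p = ks_2samp(ref, stack[:, r, c])
            mask[r, c] = p > alpha      # fail to reject => treat as homogeneous
    return mask

mask = shp_mask(amp, rows // 2, cols // 2)
print("number of SHPs found:", int(mask.sum()))
```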
To improve the resilience of a distribution system against extreme weather, a fuel-based distributed generator (DG) allocation model is proposed in this study. In this model, the DGs are placed at the planning stage. When an extreme event occurs, the controllable generators form temporary microgrids (MGs) to restore the load maximally. Simultaneously, a demand response program (DRP) mitigates the imbalance between the power supply and demand during extreme events. To cope with the fault uncertainty, a robust optimization (RO) method is applied to reduce the long-term investment and short-term operation costs. The optimization is formulated as a tri-level defender-attacker-defender (DAD) framework. At the first level, decision-makers work out the DG allocation scheme; at the second level, the attacker finds the optimal attack strategy with maximum damage; and at the third level, restoration measures, namely distribution network reconfiguration (DNR) and demand response, are performed. The problem is solved by the nested column-and-constraint generation (NC&CG) method and the model is validated using an IEEE 33-node system. Case studies validate the effectiveness and superiority of the proposed model in terms of enhanced resilience and reduced cost.
Funding: supported by the Technology Project of State Grid Jiangsu Electric Power Co., Ltd., China (J2022160, Research on Key Technologies of Distributed Power Dispatching Control for Resilience Improvement of Distribution Networks).
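The tri-level structure described above can be written compactly as a nested min-max-min problem; the notation below is generic and illustrative rather than the paper's exact formulation.

```latex
% Generic defender-attacker-defender form (illustrative notation):
%   y - first-stage DG siting and sizing decisions with investment cost c(y)
%   u - attacker's damage scenario, drawn from an uncertainty set U
%   z - recourse actions (network reconfiguration, demand response) with cost f(y,u,z)
\min_{y \in Y} \Big[ c(y) + \max_{u \in U} \; \min_{z \in Z(y,u)} f(y, u, z) \Big]
```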
In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the most adequate control action for each involved vehicle. Such tasks are here combined into a single framework: the deep reinforcement learning output (action) is translated into a set-point to be tracked by the model predictive controller; conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand it fully exploits deep reinforcement learning capabilities for decision-making purposes; on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps (links, junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally, by considering as operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim of highlighting the main features of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
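To make the MPC layer concrete, the sketch below tracks a set-point (playing the role of the routing decision handed down by the reinforcement-learning unit) with a discrete-time double-integrator vehicle model. The horizon, weights and set-point are assumptions, and constraints are omitted so the finite-horizon problem reduces to a linear solve rather than the paper's constrained design.

```python
# A minimal sketch of an MPC layer tracking a set-point with a double-integrator
# vehicle model. Horizon, weights and the set-point are illustrative; state and
# input constraints are omitted so each step is a small linear-algebra solve.
import numpy as np

dt, N = 0.1, 30                                   # sample time, horizon length
A = np.array([[1.0, dt], [0.0, 1.0]])             # double-integrator dynamics
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])                          # tracking weight
R = 0.1                                           # control effort weight

def mpc_step(x0, setpoint):
    """Return the first control move of the finite-horizon tracking problem."""
    # Build prediction matrices: X = F x0 + G U
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((2 * N, N))
    for k in range(N):
        for j in range(k + 1):
            G[2 * k:2 * k + 2, j:j + 1] = np.linalg.matrix_power(A, k - j) @ B
    ref = np.tile(np.array([setpoint, 0.0]), N)
    W = np.kron(np.eye(N), Q)
    H = G.T @ W @ G + R * np.eye(N)
    g = G.T @ W @ (F @ x0 - ref)
    U = np.linalg.solve(H, -g)
    return U[0]

x = np.array([0.0, 0.0])                           # position, velocity
for _ in range(100):
    u = mpc_step(x, setpoint=5.0)                  # set-point from the DRL layer
    x = A @ x + B.flatten() * u
print("final position/velocity:", np.round(x, 3))
```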
In this study, a novel residential virtual power plant (RVPP) scheduling method that leverages a gate recurrent unit (GRU)-integrated deep reinforcement learning (DRL) algorithm is proposed. In the proposed scheme, the GRU-integrated DRL algorithm guides the RVPP to participate effectively in both the day-ahead and real-time markets, lowering the electricity purchase costs and consumption risks for end-users. The Lagrangian relaxation technique is introduced to transform the constrained Markov decision process (CMDP) into an unconstrained optimization problem, which guarantees that the constraints are strictly satisfied without determining the penalty coefficients. Furthermore, to enhance the scalability of the constrained soft actor-critic (CSAC)-based RVPP scheduling approach, a fully distributed scheduling architecture was designed to enable plug-and-play of the residential distributed energy resources (RDER). Case studies performed on the constructed RVPP scenario validated the performance of the proposed methodology in enhancing the responsiveness of the RDER to power tariffs, balancing the supply and demand of the power grid, and ensuring customer comfort.
Funding: supported by the Sichuan Science and Technology Program (grant number 2022YFG0123).
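A toy rendering of the Lagrangian-relaxation idea is sketched below: the policy objective becomes reward minus a multiplier times the constraint cost, and the multiplier is adjusted by dual ascent until the expected cost meets its budget. The bandit-style policy and all numbers are placeholders, not the paper's GRU-based constrained soft actor-critic.

```python
# Toy sketch of Lagrangian relaxation for a constrained decision problem: the
# objective becomes E[reward - lambda * cost] and lambda follows dual ascent.
# The three-action "policy" and all coefficients are placeholders.
import numpy as np

rewards = np.array([1.0, 2.0, 3.0])      # per-action expected reward
costs = np.array([0.1, 0.5, 1.2])        # per-action expected constraint cost
budget = 0.6                              # constraint: expected cost <= budget

logits = np.zeros(3)
lam, lr_pi, lr_lam = 0.0, 0.5, 0.2

for _ in range(2000):
    probs = np.exp(logits) / np.exp(logits).sum()
    values = rewards - lam * costs
    # policy-gradient step on the Lagrangian for a softmax policy
    adv = values - probs @ values
    logits += lr_pi * probs * adv
    # dual ascent on the multiplier, projected onto lambda >= 0
    lam = max(0.0, lam + lr_lam * (probs @ costs - budget))

probs = np.exp(logits) / np.exp(logits).sum()
print("action probabilities:", np.round(probs, 3), " lambda:", round(float(lam), 3))
print("expected cost:", round(float(probs @ costs), 3))
```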
During faults in a distribution network, the output power of a distributed generation (DG) unit may be uncertain. Moreover, the output currents of distributed power sources are also affected by the output power, resulting in uncertainties in the calculation of the short-circuit current at the time of a fault. Additionally, the impact of such uncertainties on short-circuit currents will increase as more distributed power sources are connected. Thus, it is very important to develop a method for calculating the short-circuit current while considering the uncertainties in a distribution network. In this study, an affine arithmetic algorithm for calculating short-circuit current intervals in distribution networks with distributed power sources while considering power fluctuations is presented. The proposed algorithm includes two stages. In the first stage, normal operations are considered to establish a conservative interval affine optimization model of injection currents in distributed power sources. Constrained by the fluctuation range of distributed generation power at the moment of fault occurrence, the model can then be used to solve for the fluctuation range of injected current amplitudes in distributed power sources. The second stage is implemented after a malfunction occurs. In this stage, an affine optimization model is first established. This model characterizes the short-circuit current interval of a transmission line and is constrained by the fluctuation range of the injected current amplitude of DG during normal operations. Finally, the range of the short-circuit current amplitudes of distribution network lines after a short-circuit fault occurs is predicted. The proposed algorithm obtains an interval range containing accurate results through interval operations. Compared with traditional point-value calculation methods, interval calculation methods can provide more reliable analysis and calculation results. The range of short-circuit current amplitudes obtained by this algorithm is slightly larger than those obtained using the Monte Carlo algorithm and the Latin hypercube sampling algorithm. Therefore, the proposed algorithm has good suitability and does not require iterative calculations, resulting in a significant improvement in computational speed compared to the Monte Carlo algorithm and the Latin hypercube sampling algorithm. Furthermore, the proposed algorithm can provide more reliable analysis and calculation results, improving the safety and stability of power systems.
Funding: This article was supported by the general project "Research on Wind and Photovoltaic Fault Characteristics and Practical Short Circuit Calculation Model" (521820200097) of Jiangxi Electric Power Company.
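A minimal affine-arithmetic sketch of the interval idea is given below: an uncertain quantity is stored as a centre value plus partial deviations attached to noise symbols in [-1, 1], affine operations combine these exactly, and the enclosing interval is recovered from the total radius. The two-source line-current example at the end is purely illustrative.

```python
# Minimal affine arithmetic: value = centre + sum of partial deviations, each tied
# to a noise symbol ranging over [-1, 1]. Addition and scaling stay affine, and the
# enclosing interval follows from the coefficient radius. The DG example is illustrative.
import numpy as np

class Affine:
    def __init__(self, centre, coeffs=None):
        self.c = float(centre)
        self.x = dict(coeffs or {})            # noise symbol -> partial deviation

    @classmethod
    def from_interval(cls, lo, hi, sym):
        return cls((lo + hi) / 2, {sym: (hi - lo) / 2})

    def __add__(self, other):
        if isinstance(other, Affine):
            syms = set(self.x) | set(other.x)
            return Affine(self.c + other.c,
                          {s: self.x.get(s, 0.0) + other.x.get(s, 0.0) for s in syms})
        return Affine(self.c + other, self.x)

    def scale(self, k):
        return Affine(k * self.c, {s: k * v for s, v in self.x.items()})

    def interval(self):
        r = sum(abs(v) for v in self.x.values())
        return self.c - r, self.c + r

# Hypothetical example: two DG injection currents with +/-10% fluctuation feed one line.
i1 = Affine.from_interval(0.9, 1.1, "dg1")
i2 = Affine.from_interval(1.8, 2.2, "dg2")
line = i1.scale(0.6) + i2.scale(0.4) + 0.5      # fixed grid contribution of 0.5
print("line current interval:", tuple(round(v, 3) for v in line.interval()))
```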
The massive integration of high-proportioned distributed photovoltaics into distribution networks poses significant challenges to the flexible regulation capabilities of distribution stations. To accurately assess the flexible regulation capabilities of distribution stations, a multi-temporal and spatial scale regulation capability assessment technique is proposed for distribution station areas with distributed photovoltaics, considering different geographical locations, coverage areas, and response capabilities. Firstly, the multi-temporal scale regulation characteristics and response capabilities of different regulation resources in distribution station areas are analyzed, and a resource regulation capability model is established to quantify the adjustable range of different regulation resources. On this basis, considering the limitations of line transmission capacity, a regulation capability assessment index for distribution stations is proposed to evaluate their regulation capabilities. Secondly, considering different geographical locations and coverage areas, a comprehensive performance index based on electrical distance modularity and active power balance is established, and a cluster division method based on genetic algorithms is proposed to fully leverage the coordination and complementarity among nodes and improve the active power matching degree within clusters. Simultaneously, an economic optimization model with the objective of minimizing the economic cost of the distribution station is established, comprehensively considering the safety constraints of the distribution network and the regulation constraints of resources. This model can provide scientific guidance for the economic dispatch of the distribution station area. Finally, case studies demonstrate that the proposed assessment and optimization methods effectively evaluate the regulation capabilities of distribution stations, facilitate the consumption of distributed photovoltaics, and enhance the economic efficiency of the distribution station area.
Funding: funded by the "Research and Application Project of Collaborative Optimization Control Technology for Distribution Station Area for High Proportion Distributed PV Consumption" (4000-202318079A-1-1-ZN) of the Headquarters of the State Grid Corporation.
In recent years, distributed photovoltaics (DPV) has ushered in a favorable development situation due to the advantages of pollution-free power generation, full utilization of the ground or roof of the installation site, and balancing a large number of loads nearby. However, against the background of large-scale DPV grid connection to the county distribution network, an effective analysis method is needed to analyze its impact on the voltage of the distribution network in the early development stage of DPV. Therefore, a DPV orderly grid-connection method based on the photovoltaics grid-connected order degree (PGOD) is proposed. This method aims to analyze, in an orderly manner, the change of voltage in the distribution network when large-scale DPV will be connected. Firstly, based on the voltage magnitude sensitivity (VMS) index of the photovoltaics permitted grid-connected node and the acceptance of grid-connected node (AoGCN) index of other nodes in the network, the PGOD index is constructed to determine the photovoltaics permitted grid-connected node of the current photovoltaics grid-connected state network. Secondly, a photovoltaics orderly grid-connected model with a continuously updating state is constructed to obtain an orderly DPV grid-connection order. The simulation results illustrate that the photovoltaics grid-connected order determined by this PGOD-based method can effectively analyze the voltage impact of large-scale photovoltaics grid connection, and explore the internal factors and characteristics of the impact.
Funding: supported by the North China Electric Power Research Institute's self-funded science and technology project "Research on Distributed Energy Storage Optimal Configuration and Operation Control Technology for Photovoltaic Promotion in the Entire County" (KJZ2022049).
This paper proposes a novel approach for identifying distributed dynamic loads in the time domain. Using polynomial and modal analysis, the load is transformed into modal space for coefficient identification. This allows the distributed dynamic load, which has a two-dimensional form in terms of time and space, to be identified simultaneously in the form of modal forces, thereby achieving dimensionality reduction. The Impulse-based Force Estimation Algorithm is proposed to identify dynamic loads in the time domain. Firstly, the algorithm establishes a recursion scheme based on the convolution integral, enabling it to identify loads with a long history and rapidly changing forms over time. Secondly, the algorithm introduces a moving mean and polynomial fitting for detrending, enhancing its applicability in load estimation. The methodology thus reconstructs distributed, rather than centralized, dynamic loads on the continuum in the time domain by utilizing the acceleration response. To validate the effectiveness of the method, computational and experimental verification were conducted.
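The detrending step mentioned above can be illustrated with a short sketch: a drifting acceleration-like signal is detrended first with a moving mean and then with a low-order polynomial fit. The synthetic signal, window length and polynomial order are assumptions for illustration, not the paper's experimental data.

```python
# Sketch of detrending by moving mean followed by polynomial fitting, applied to a
# synthetic drifting signal. Signal, window and polynomial order are illustrative.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 10.0, 2001)
signal = np.sin(2 * np.pi * 1.5 * t)                     # "true" response
drift = 0.05 * t**2 - 0.2 * t                            # slow trend to remove
y = signal + drift + 0.05 * rng.standard_normal(t.size)  # measured signal

def moving_mean(x, win=201):
    """Edge-corrected moving mean over a window of `win` samples."""
    kernel = np.ones(win)
    return np.convolve(x, kernel, "same") / np.convolve(np.ones_like(x), kernel, "same")

# Step 1: subtract a moving mean to remove slow drift
y1 = y - moving_mean(y)
# Step 2: remove any residual low-order trend with a polynomial fit
coeffs = np.polyfit(t, y1, deg=3)
y_detrended = y1 - np.polyval(coeffs, t)

rms = float(np.sqrt(np.mean((y_detrended - signal) ** 2)))
print("RMS deviation from the underlying signal:", round(rms, 4))
```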
The scale and complexity of big data are growing continuously, posing severe challenges to traditional data processing methods, especially in the field of clustering analysis. To address this issue, this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update (BDTMCDIncreUpdate), which combines distributed computing, storage technology, and incremental update techniques to provide an efficient and effective means for clustering analysis. Firstly, the original dataset is divided into multiple sub-blocks, and distributed computing resources are utilized to process the sub-blocks in parallel, enhancing efficiency. Then, initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results. When new data arrives, incremental update technology is employed to update the core tensor and factor matrix, ensuring that the clustering model can adapt to changes in the data. Finally, by combining the updated core tensor and factor matrix with historical computational results, refined clustering results are obtained, achieving real-time adaptation to dynamic data. Through experimental simulation on the Aminer dataset, the BDTMCDIncreUpdate method has demonstrated outstanding performance in terms of accuracy (ACC) and normalized mutual information (NMI) metrics, achieving an accuracy rate of 90% and an NMI score of 0.85, outperforming existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios. Therefore, the BDTMCDIncreUpdate method offers an innovative solution for big data analysis, integrating distributed computing, incremental updates, and tensor-based multi-clustering techniques. It not only improves efficiency and scalability in processing large-scale, high-dimensional datasets but has also been validated for effectiveness and accuracy through experiments. This method shows great potential in real-world applications where dynamic data growth is common, and it is of significant importance for advancing the development of data analysis technology.
Funding: sponsored by the National Natural Science Foundation of China (Nos. 61972208, 62102194 and 62102196), the National Natural Science Foundation of China (Youth Project) (No. 62302237), the Six Talent Peaks Project of Jiangsu Province (No. RJFW-111), the China Postdoctoral Science Foundation Project (No. 2018M640509), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (Nos. KYCX22_1019, KYCX23_1087, KYCX22_1027, KYCX23_1087, SJCX24_0339 and SJCX24_0346), the Innovative Training Program for College Students of Nanjing University of Posts and Telecommunications (No. XZD2019116), and the Nanjing University of Posts and Telecommunications College Students Innovation Training Program (Nos. XZD2019116, XYB2019331).
The use of privacy-enhanced facial recognition has increased in response to growing concerns about data security and privacy in the digital age. This trend is spurred by rising demand for face recognition technology in a variety of industries, including access control, law enforcement, surveillance, and internet communication. However, the growing usage of face recognition technology has created serious concerns about data monitoring and user privacy preferences, especially in context-aware systems. In response to these problems, this study provides a novel framework that integrates sophisticated approaches such as Generative Adversarial Networks (GANs), Blockchain, and distributed computing to solve privacy concerns while maintaining exact face recognition. The framework's painstaking design and execution strive to strike a compromise between precise face recognition and protecting personal data integrity in an increasingly interconnected environment. Using cutting-edge tools like Dlib for face analysis, Ray Cluster for distributed computing, and Blockchain for decentralized identity verification, the proposed system provides scalable and secure facial analysis while protecting user privacy. The study's contributions include the creation of a sustainable and scalable solution for privacy-aware face recognition, the implementation of flexible privacy computing approaches based on Blockchain networks, and the demonstration of higher performance over previous methods. Specifically, the proposed StyleGAN model achieves an outstanding accuracy rate of 93.84% while processing high-resolution images from the CelebA-HQ dataset, beating other evaluated models such as Progressive GAN (90.27%), CycleGAN (89.80%), and MGAN (80.80%). With improvements in accuracy, speed, and privacy protection, the framework has great promise for practical use in a variety of fields that need face recognition technology. This study paves the way for future research in privacy-enhanced face recognition systems, emphasizing the significance of using cutting-edge technology to meet rising privacy issues in digital identity.
Distributed generation (DG) technology based on a variety of renewable energy technologies has developed rapidly. A large number of multi-type DG units are connected to the distribution network (DN), resulting in a decline in the stability of DN operation. It is urgent to find a method that can effectively connect multi-energy DG to the DN. Photovoltaic (PV), wind power generation (WPG), fuel cell (FC), and micro gas turbine (MGT) units are considered in this paper. A multi-objective optimization model was established based on the life cycle cost (LCC) of DG, voltage quality, voltage fluctuation, system network loss, power deviation of the tie-line, the DG pollution emission index, and the meteorological index weight of the DN. A multi-objective artificial bee colony algorithm (MOABC) was used to determine the optimal location and capacity of the four kinds of DG connected to the DN, and was compared with three other heuristic algorithms. Simulation tests based on the IEEE 33-node and IEEE 69-node test systems show that, in the IEEE 33-node system, the total voltage deviation, voltage fluctuation, and system network loss of the DN decreased by 49.67%, 7.47% and 48.12%, respectively, compared with the case without DG configuration. In the IEEE 69-node system, the total voltage deviation, voltage fluctuation and system network loss of the DN under the MOABC configuration scheme decreased by 54.98%, 35.93% and 75.17%, respectively, compared with the case without DG configuration, indicating that MOABC can reasonably plan the capacity and location of DG and achieve the maximum trade-off between DG economy and DN operation stability.
This paper addresses a multicircular circumnavigation control for UAVs with desired angular spacing around a nonstationary target. By defining a coordinated error relative to the neighboring angular spacing, under the premise that target information is perfectly accessible to all nodes, a centralized circular enclosing control strategy is derived for multiple UAVs connected by an undirected graph to allow for formation behaviors around the moving target. Besides, to avoid the requirement that the target's states be accessible to each UAV, fixed-time distributed observers are introduced to acquire the state estimates in a fixed-time sense, and the upper bound of the settling time can be determined offline irrespective of initial conditions, greatly relieving the burdensome communication traffic. Then, with the aid of the fixed-time distributed observers, a distributed circular circumnavigation controller is derived to force all UAVs to collaboratively evolve along the preset circles while keeping a desired angular spacing. It is proved via Lyapunov stability analysis that all errors are convergent. Simulations are offered to verify the utility of the proposed protocol.
Funding: supported in part by the National Natural Science Foundation of China under Grant Nos. 62173312, 61922037, 61873115, and 61803348; in part by the National Major Scientific Instruments Development Project under Grant 61927807; in part by the State Key Laboratory of Deep Buried Target Damage under Grant No. DXMBJJ2019-02; in part by the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi under Grant 2020L0266; in part by the Shanxi Province Science Foundation for Youths under Grant No. 201701D221123; in part by the Youth Academic North University of China under Grant No. QX201803; in part by the Program for the Innovative Talents of Higher Education Institutions of Shanxi; in part by the Shanxi "1331 Project" Key Subjects Construction under Grant 1331KSC; and in part by the Shanxi Province Science Foundation for Excellent Youths.
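A simplified kinematic illustration of circumnavigation with prescribed angular spacing is sketched below (a phase-only model, not the paper's fixed-time design): each UAV adjusts its angular rate with a consensus term on the spacing error relative to its graph neighbours, so the phases settle to the desired offsets while all agents keep orbiting. The gains, the ring communication graph and the desired offsets are assumptions.

```python
# Phase-only circumnavigation sketch: agents orbit at a common base rate and use a
# consensus correction on neighbouring angular-spacing errors. Gains, ring graph
# and desired offsets are illustrative assumptions.
import numpy as np

n = 4
desired = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])   # desired phase offsets
theta = np.array([0.3, 0.5, 2.0, 4.0])                        # initial phases (rad)
omega0, k, dt = 0.2, 1.0, 0.02
neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # ring graph

def wrap(a):
    """Wrap angles to [-pi, pi)."""
    return (a + np.pi) % (2 * np.pi) - np.pi

for _ in range(4000):
    err = np.zeros(n)
    for i in range(n):
        for j in neighbours[i]:
            err[i] += wrap((theta[i] - theta[j]) - (desired[i] - desired[j]))
    theta = theta + dt * (omega0 - k * err)

spacing = wrap(np.diff(np.append(theta, theta[0] + 2 * np.pi)))
print("final neighbouring spacings (rad):", np.round(spacing, 3))
```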
We develop a policy of observer-based dynamic event-triggered state feedback control for distributed parameter systems over a mobile sensor-plus-actuator network. It is assumed that the mobile sensing devices that provide spatially averaged state measurements can be used to improve state estimation in the network. For the purpose of decreasing the update frequency of the controller and avoiding unnecessary sampled-data transmission, an efficient dynamic event-triggered control policy is constructed. In an event-triggered system, when an error signal exceeds a specified time-varying threshold, it indicates the occurrence of a typical event. The global asymptotic stability of the event-triggered closed-loop system and the boundedness of the minimum inter-event time can be guaranteed. Based on the linear quadratic optimal regulator, the actuator selects the optimal displacement only when an event occurs. A simulation example is finally used to verify that such a control strategy can enhance the system performance.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62073045).
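A schematic sketch of a dynamic event-triggering rule of the kind described above is given below: the controller is refreshed only when the measurement error exceeds a threshold that combines a state-dependent term and an internal dynamic variable. The scalar plant, gains and trigger parameters are illustrative assumptions, not the paper's distributed-parameter design.

```python
# Dynamic event-triggered feedback on a scalar plant: transmit/update only when
# theta*(e^2 - sigma*x^2) >= eta, with eta evolving as an internal dynamic variable.
# Plant, gains and trigger parameters are placeholders for illustration.
import numpy as np

a, b, k = 0.5, 1.0, 2.0            # unstable plant dx = (a x + b u) dt, u = -k * x_hat
sigma, theta, lam = 0.1, 4.0, 1.0  # trigger parameters and eta dynamics
dt, T = 0.001, 8.0

x, x_hat, eta, updates = 1.0, 1.0, 1.0, 0
for step in range(int(T / dt)):
    e = x_hat - x                              # error since the last triggering instant
    # dynamic event-triggering condition
    if theta * (e * e - sigma * x * x) >= eta:
        x_hat = x                              # sample, transmit and refresh the controller
        updates += 1
        e = 0.0
    eta += dt * (-lam * eta - (e * e - sigma * x * x))
    u = -k * x_hat
    x += dt * (a * x + b * u)

print(f"final |x| = {abs(x):.4f}, controller updates = {updates} "
      f"out of {int(T / dt)} simulation steps")
```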
With the current integration of distributed energy resources into the grid, the structure of distribution networks is becoming more complex. This complexity significantly expands the solution space in the optimization process for network reconfiguration using intelligent algorithms. Consequently, traditional intelligent algorithms frequently suffer from insufficient search accuracy and become trapped in local optima. To tackle this issue, a more advanced particle swarm optimization algorithm is proposed. To address the varying emphases at different stages of the optimization process, a dynamic strategy is implemented to regulate the social and self-learning factors. The Metropolis criterion from the simulated annealing algorithm is introduced to occasionally accept suboptimal solutions, thereby mitigating premature convergence in the population optimization process. The inertia weight is adjusted using the logistic mapping technique to maintain a balance between the algorithm's global and local search abilities. The incorporation of the Pareto principle involves the consideration of network losses and voltage deviations as objective functions, and a fuzzy membership function is employed for selecting the final solution. Simulation analysis is carried out on the reconfiguration of the distribution network, using the IEEE-33 node system and the IEEE-69 node system as examples, in conjunction with the integration of distributed energy resources. The findings demonstrate that, in comparison with other intelligent optimization algorithms, the proposed enhanced algorithm achieves a shorter convergence time and effectively reduces active power losses within the network. Furthermore, it improves the amplitude of node voltages, thereby enhancing the stability of distribution network operations and power supply quality. Additionally, the algorithm exhibits a high level of generality and applicability.
Funding: This research is supported by the Science and Technology Program of Gansu Province (No. 23JRRA880).
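The algorithmic ingredients described above are illustrated in the compact sketch below, exercised on a standard benchmark function rather than a distribution-network power-flow model: the learning factors vary with the iteration count, the inertia weight follows a logistic (chaotic) map, and a Metropolis test occasionally accepts a worse personal best. All coefficients are illustrative assumptions.

```python
# Enhanced-PSO sketch: dynamic learning factors, logistic-map inertia weight and a
# simulated-annealing Metropolis acceptance of worse personal bests. The Rastrigin
# function stands in for the reconfiguration objective; all coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(3)

def rastrigin(x):                                # stand-in objective (e.g. network loss)
    return 10 * x.size + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

dim, n_particles, iters = 5, 30, 300
pos = rng.uniform(-5.12, 5.12, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rastrigin(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()
gbest_val = float(pbest_val.min())
w, temperature = 0.6, 100.0

for k in range(iters):
    w = 4.0 * w * (1.0 - w)                      # logistic map drives the inertia weight
    c1 = 2.5 - 1.5 * k / iters                   # self-learning factor shrinks over time
    c2 = 1.0 + 1.5 * k / iters                   # social factor grows over time
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5.12, 5.12)
    vals = np.array([rastrigin(p) for p in pos])
    for i in range(n_particles):
        delta = vals[i] - pbest_val[i]
        # Metropolis criterion: always accept improvements, sometimes accept worse ones
        if delta < 0 or rng.random() < np.exp(-delta / temperature):
            pbest[i], pbest_val[i] = pos[i].copy(), vals[i]
    idx = int(np.argmin(pbest_val))
    if pbest_val[idx] < gbest_val:               # keep the best solution ever seen
        gbest, gbest_val = pbest[idx].copy(), float(pbest_val[idx])
    temperature *= 0.95                          # annealing schedule

print("best objective value found:", round(gbest_val, 4))
```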
False data injection attack (FDIA) is an attack that affects the stability of a grid cyber-physical system (GCPS) by evading the bad-data detection mechanism. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples to construct a robust detection model. To address this problem, this paper designs an efficient sample generative adversarial model of FDIA attacks at the public-private network edge, which can effectively bypass the detection model to threaten the power grid system. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating the time-aligned data and normal data, which is used to learn the distribution characteristics between normal data and attack data through iterative confrontation. Furthermore, we introduce a Gaussian hybrid distribution matrix by aggregating the network structure of attack data characteristics and normal data characteristics, which can connect and calculate FDIA data with normal characteristics. Finally, efficient FDIA attack samples can be sequentially generated through interactive adversarial learning. Extensive simulation experiments are conducted with IEEE 14-bus and IEEE 118-bus system data, and the results demonstrate that the attack samples generated by the proposed model achieve superior performance compared to state-of-the-art models in terms of attack strength, robustness, and covert capability.
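A bare-bones generative-adversarial sketch of the idea is shown below: a generator learns to produce measurement-like vectors that a discriminator cannot distinguish from "normal" samples. The toy Gaussian data, network sizes and training schedule are placeholders; the paper's ResNet/FCN architecture, time alignment and Gaussian hybrid distribution matrix are not reproduced here.

```python
# Minimal GAN sketch for generating measurement-like samples. Data, network sizes
# and schedule are placeholders, not the paper's ResNet/FCN design.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, z_dim = 8, 4                                       # measurement and noise dimensions
normal_data = torch.randn(2048, dim) * 0.3 + 1.0        # stand-in "normal" measurements

G = nn.Sequential(nn.Linear(z_dim, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = normal_data[torch.randint(0, normal_data.size(0), (64,))]
    fake = G(torch.randn(64, z_dim))
    # discriminator update: push real samples toward 1, generated samples toward 0
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    # generator update: try to fool the discriminator
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

samples = G(torch.randn(1000, z_dim)).detach()
print("generated mean/std:", round(samples.mean().item(), 3), round(samples.std().item(), 3))
```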
This paper presents a game theory-based method for predicting the outcomes of negotiation and group decision-making problems. We propose an extension to the BDM model to address problems where actors' positions are distributed over a position spectrum. We generalize the concept of position in the model to incorporate continuous positions for the actors, enabling them to have more flexibility in defining their targets. We explore different possible functions to study the role of the position function and discuss appropriate distance measures for computing the distance between the positions of actors. To validate the proposed extension, we demonstrate the trustworthiness of our model's performance and interpretation by replicating the results based on data used in earlier studies.
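A small illustrative computation in the spirit of the extended model is sketched below: actors hold continuous positions on a [0, 1] spectrum, a distance-based utility compares positions, and a capability- and salience-weighted aggregate gives a first-cut forecast of the group outcome. The actors, weights and the quadratic distance are hypothetical choices, not data or formulas taken from the paper.

```python
# Illustrative continuous-position computation: distance-based utilities between
# actor positions and a capability/salience-weighted forecast of the outcome.
# All actors and numbers are hypothetical.
import numpy as np

positions = np.array([0.2, 0.45, 0.7, 0.9])   # continuous positions on the spectrum
capability = np.array([0.8, 0.5, 0.9, 0.3])   # relative power of each actor
salience = np.array([0.9, 0.6, 0.7, 1.0])     # how much each actor cares about the issue

def utility(x_i, x_j, power=2):
    """Utility actor i derives from position x_j; closer positions are preferred."""
    return 1.0 - abs(x_i - x_j) ** power

# weighted aggregation as a first-cut forecast of the negotiated outcome
weights = capability * salience
forecast = float(np.sum(weights * positions) / np.sum(weights))

# pairwise utility matrix, useful when evaluating challenges between actors
U = np.array([[utility(xi, xj) for xj in positions] for xi in positions])
print("forecast outcome on the position spectrum:", round(forecast, 3))
print("utility matrix:\n", np.round(U, 3))
```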
Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning and other big data services desirably need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable and low-latency links between worker nodes. However, LMCNs with highly dynamic nodes and long-distance links cannot provide the above conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform that can simulate the network environment of LMCNs and place BDAs in it for performance testing is indispensable. Using STK's APIs and a parallel computing framework, we achieve real-time simulation for thousands of satellite nodes, which are mapped as application nodes through software-defined network (SDN) and container technologies. We elaborate the architecture and mechanism of the simulation platform, and take Starlink and Hadoop as realistic examples for simulations. The results indicate that LMCNs have dynamic end-to-end latency that fluctuates periodically with the constellation movement. Compared with ground data center networks (GDCNs), LMCNs degrade the computing and storage job throughput, which can be alleviated by the use of erasure codes and data flow scheduling of worker nodes.
Quantum error correction is a crucial technology for realizing quantum computers. These computers achieve fault-tolerant quantum computing by detecting and correcting errors using decoding algorithms. Quantum error correction using neural network-based machine learning methods is a promising approach that adapts to physical systems without the need to build noise models. In this paper, we use a distributed decoding strategy, which effectively alleviates the problem of exponential growth of the training set required for neural networks as the code distance of quantum error-correcting codes increases. Our decoding algorithm is based on renormalization group decoding and a recurrent neural network decoder. The recurrent neural network is trained through the ResNet architecture to improve its decoding accuracy. We then test the decoding performance of our distributed strategy decoder, the recurrent neural network decoder, and the classic minimum weight perfect matching (MWPM) decoder for rotated surface codes with different code distances under the circuit noise model; the thresholds of these three decoders are about 0.0052, 0.0051, and 0.0049, respectively. Our results demonstrate that the distributed strategy decoder outperforms the other two decoders, achieving approximately a 5% improvement in decoding efficiency compared to the MWPM decoder and approximately a 2% improvement compared to the recurrent neural network decoder.
We investigate the distributed optimization problem, where a network of nodes works together to minimize a global objective that is a finite sum of their stored local functions. Since nodes exchange optimization parameters through the wireless network, large-scale training models can create communication bottlenecks, resulting in slower training times. To address this issue, CHOCO-SGD was proposed, which allows compressing information with arbitrary precision without reducing the convergence rate for strongly convex objective functions. Nevertheless, most convex functions are not strongly convex (such as logistic regression or Lasso), which raises the question of whether this algorithm can be applied to non-strongly convex functions. In this paper, we provide the first theoretical analysis of the convergence rate of CHOCO-SGD on non-strongly convex objectives. We derive a sufficient condition, which limits the fidelity of compression, to guarantee convergence. Moreover, our analysis demonstrates that within the fidelity threshold, this algorithm can significantly reduce the transmission burden while maintaining the same convergence rate order as its no-compression equivalent. Numerical experiments further validate the theoretical findings by demonstrating that CHOCO-SGD improves communication efficiency while keeping the same convergence rate order. The experiments also show that the algorithm fails to converge with low compression fidelity and in time-varying topologies. Overall, our study offers valuable insights into the potential applicability of CHOCO-SGD for non-strongly convex objectives. Additionally, we provide practical guidelines for researchers seeking to utilize this algorithm in real-world scenarios.
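A simplified rendering of the compressed-gossip mechanism behind CHOCO-SGD is sketched below on a tiny decentralised logistic-regression problem (a convex but not strongly convex objective, matching the setting analysed above): each node keeps a public copy of its iterate, transmits only a top-k compressed difference, and mixes the public copies over a ring graph. The topology, step sizes and compression level are illustrative assumptions, not the tuned values from the paper.

```python
# Simplified compressed-gossip SGD in the spirit of CHOCO-SGD: local gradient step,
# top-k compressed update of each node's public copy, then gossip on the public
# copies. Topology, step sizes and compression level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
n_nodes, dim, n_local = 6, 10, 50

# local logistic-regression data per node
w_true = rng.standard_normal(dim)
A = [rng.standard_normal((n_local, dim)) for _ in range(n_nodes)]
y = [(a @ w_true > 0).astype(float) * 2 - 1 for a in A]

def grad(w, a, labels):
    """Gradient of the average logistic loss on one node's data."""
    z = labels * (a @ w)
    return -(a * (labels / (1 + np.exp(z)))[:, None]).mean(axis=0)

def top_k(v, k=2):
    """Keep only the k largest-magnitude entries (a simple compression operator)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

# doubly stochastic mixing matrix for a ring graph
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i], W[i, (i - 1) % n_nodes], W[i, (i + 1) % n_nodes] = 0.5, 0.25, 0.25

x = np.zeros((n_nodes, dim))        # private iterates
x_hat = np.zeros((n_nodes, dim))    # publicly shared (compressed) copies
eta, gamma = 0.3, 0.2

for t in range(2000):
    for i in range(n_nodes):
        x[i] -= eta * grad(x[i], A[i], y[i])
    for i in range(n_nodes):
        x_hat[i] += top_k(x[i] - x_hat[i])      # publish a compressed difference
    x += gamma * (W @ x_hat - x_hat)            # gossip step on the public copies

consensus_gap = float(np.linalg.norm(x - x.mean(axis=0)))
avg_loss = float(np.mean([np.log(1 + np.exp(-(y[i] * (A[i] @ x.mean(0))))).mean()
                          for i in range(n_nodes)]))
print("consensus gap:", round(consensus_gap, 4), " mean training loss:", round(avg_loss, 4))
```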
基金supported by the National Natural Science Foundation of China (NSFC)(62222308, 62173181, 62073171, 62221004)the Natural Science Foundation of Jiangsu Province (BK20200744, BK20220139)+3 种基金Jiangsu Specially-Appointed Professor (RK043STP19001)the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001)1311 Talent Plan of Nanjing University of Posts and Telecommunicationsthe Fundamental Research Funds for the Central Universities (30920032203)。
文摘This paper is concerned with distributed Nash equi librium seeking strategies under quantized communication. In the proposed seeking strategy, a projection operator is synthesized with a gradient search method to achieve the optimization o players' objective functions while restricting their actions within required non-empty, convex and compact domains. In addition, a leader-following consensus protocol, in which quantized informa tion flows are utilized, is employed for information sharing among players. More specifically, logarithmic quantizers and uniform quantizers are investigated under both undirected and connected communication graphs and strongly connected digraphs, respec tively. Through Lyapunov stability analysis, it is shown that play ers' actions can be steered to a neighborhood of the Nash equilib rium with logarithmic and uniform quantizers, and the quanti fied convergence error depends on the parameter of the quan tizer for both undirected and directed cases. A numerical exam ple is given to verify the theoretical results.
基金National Natural Science Foundation of China(No.42374013)National Key Research and Development Program of China(Nos.2019YFC1509201,2021YFB3900604-03)。
文摘In the past two decades,extensive and in-depth research has been conducted on Time Series InSAR technology with the advancement of high-performance SAR satellites and the accumulation of big SAR data.The introduction of distributed scatterers in Distributed Scatterers InSAR(DS-InSAR)has significantly expanded the application scenarios of InSAR geodetic measurement by increasing the number of measurement points.This study traces the history of DS-InSAR,presents the definition and characteristics of distributed scatterers,and focuses on exploring the relationships and distinctions among proposed algorithms in two crucial steps:statistically homogeneous pixel selection and phase optimization.Additionally,the latest research progress in this field is tracked and the possible development direction in the future is discussed.Through simulation experiments and two real InSAR case studies,the proposed algorithms are compared and verified,and the advantages of DS-InSAR in deformation measurement practice are demonstrated.This work not only offers insights into current trends and focal points for theoretical research on DS-InSAR but also provides practical cases and guidance for applied research.
基金supported by the Technology Project of State Grid Jiangsu Electric Power Co.,Ltd.,China (J2022160,Research on Key Technologies of Distributed Power Dispatching Control for Resilience Improvement of Distribution Networks).
文摘To improve the resilience of a distribution system against extreme weather,a fuel-based distributed generator(DG)allocation model is proposed in this study.In this model,the DGs are placed at the planning stage.When an extreme event occurs,the controllable generators form temporary microgrids(MGs)to restore the load maximally.Simultaneously,a demand response program(DRP)mitigates the imbalance between the power supply and demand during extreme events.To cope with the fault uncertainty,a robust optimization(RO)method is applied to reduce the long-term investment and short-term operation costs.The optimization is formulated as a tri-level defenderattacker-defender(DAD)framework.At the first level,decision-makers work out the DG allocation scheme;at the second level,the attacker finds the optimal attack strategy with maximum damage;and at the third level,restoration measures,namely distribution network reconfiguration(DNR)and demand response are performed.The problem is solved by the nested column and constraint generation(NC&CG)method and the model is validated using an IEEE 33-node system.Case studies validate the effectiveness and superiority of the proposed model according to the enhanced resilience and reduced cost.
文摘In this paper, platoons of autonomous vehicles operating in urban road networks are considered. From a methodological point of view, the problem of interest consists of formally characterizing vehicle state trajectory tubes by means of routing decisions complying with traffic congestion criteria. To this end, a novel distributed control architecture is conceived by taking advantage of two methodologies: deep reinforcement learning and model predictive control. On one hand, the routing decisions are obtained by using a distributed reinforcement learning algorithm that exploits available traffic data at each road junction. On the other hand, a bank of model predictive controllers is in charge of computing the more adequate control action for each involved vehicle. Such tasks are here combined into a single framework:the deep reinforcement learning output(action) is translated into a set-point to be tracked by the model predictive controller;conversely, the current vehicle position, resulting from the application of the control move, is exploited by the deep reinforcement learning unit for improving its reliability. The main novelty of the proposed solution lies in its hybrid nature: on one hand it fully exploits deep reinforcement learning capabilities for decisionmaking purposes;on the other hand, time-varying hard constraints are always satisfied during the dynamical platoon evolution imposed by the computed routing decisions. To efficiently evaluate the performance of the proposed control architecture, a co-design procedure, involving the SUMO and MATLAB platforms, is implemented so that complex operating environments can be used, and the information coming from road maps(links,junctions, obstacles, semaphores, etc.) and vehicle state trajectories can be shared and exchanged. Finally by considering as operating scenario a real entire city block and a platoon of eleven vehicles described by double-integrator models, several simulations have been performed with the aim to put in light the main f eatures of the proposed approach. Moreover, it is important to underline that in different operating scenarios the proposed reinforcement learning scheme is capable of significantly reducing traffic congestion phenomena when compared with well-reputed competitors.
基金supported by the Sichuan Science and Technology Program(grant number 2022YFG0123).
文摘In this study,a novel residential virtual power plant(RVPP)scheduling method that leverages a gate recurrent unit(GRU)-integrated deep reinforcement learning(DRL)algorithm is proposed.In the proposed scheme,the GRU-integrated DRL algorithm guides the RVPP to participate effectively in both the day-ahead and real-time markets,lowering the electricity purchase costs and consumption risks for end-users.The Lagrangian relaxation technique is introduced to transform the constrained Markov decision process(CMDP)into an unconstrained optimization problem,which guarantees that the constraints are strictly satisfied without determining the penalty coefficients.Furthermore,to enhance the scalability of the constrained soft actor-critic(CSAC)-based RVPP scheduling approach,a fully distributed scheduling architecture was designed to enable plug-and-play in the residential distributed energy resources(RDER).Case studies performed on the constructed RVPP scenario validated the performance of the proposed methodology in enhancing the responsiveness of the RDER to power tariffs,balancing the supply and demand of the power grid,and ensuring customer comfort.
基金This article was supported by the general project“Research on Wind and Photovoltaic Fault Characteristics and Practical Short Circuit Calculation Model”(521820200097)of Jiangxi Electric Power Company.
文摘During faults in a distribution network,the output power of a distributed generation(DG)may be uncertain.Moreover,the output currents of distributed power sources are also affected by the output power,resulting in uncertainties in the calculation of the short-circuit current at the time of a fault.Additionally,the impacts of such uncertainties around short-circuit currents will increase with the increase of distributed power sources.Thus,it is very important to develop a method for calculating the short-circuit current while considering the uncertainties in a distribution network.In this study,an affine arithmetic algorithm for calculating short-circuit current intervals in distribution networks with distributed power sources while considering power fluctuations is presented.The proposed algorithm includes two stages.In the first stage,normal operations are considered to establish a conservative interval affine optimization model of injection currents in distributed power sources.Constrained by the fluctuation range of distributed generation power at the moment of fault occurrence,the model can then be used to solve for the fluctuation range of injected current amplitudes in distributed power sources.The second stage is implemented after a malfunction occurs.In this stage,an affine optimization model is first established.This model is developed to characterizes the short-circuit current interval of a transmission line,and is constrained by the fluctuation range of the injected current amplitude of DG during normal operations.Finally,the range of the short-circuit current amplitudes of distribution network lines after a short-circuit fault occurs is predicted.The algorithm proposed in this article obtains an interval range containing accurate results through interval operation.Compared with traditional point value calculation methods,interval calculation methods can provide more reliable analysis and calculation results.The range of short-circuit current amplitude obtained by this algorithm is slightly larger than those obtained using the Monte Carlo algorithm and the Latin hypercube sampling algorithm.Therefore,the proposed algorithm has good suitability and does not require iterative calculations,resulting in a significant improvement in computational speed compared to the Monte Carlo algorithm and the Latin hypercube sampling algorithm.Furthermore,the proposed algorithm can provide more reliable analysis and calculation results,improving the safety and stability of power systems.
基金funded by the“Research and Application Project of Collaborative Optimization Control Technology for Distribution Station Area for High Proportion Distributed PV Consumption(4000-202318079A-1-1-ZN)”of the Headquarters of the State Grid Corporation.
文摘Themassive integration of high-proportioned distributed photovoltaics into distribution networks poses significant challenges to the flexible regulation capabilities of distribution stations.To accurately assess the flexible regulation capabilities of distribution stations,amulti-temporal and spatial scale regulation capability assessment technique is proposed for distribution station areas with distributed photovoltaics,considering different geographical locations,coverage areas,and response capabilities.Firstly,the multi-temporal scale regulation characteristics and response capabilities of different regulation resources in distribution station areas are analyzed,and a resource regulation capability model is established to quantify the adjustable range of different regulation resources.On this basis,considering the limitations of line transmission capacity,a regulation capability assessment index for distribution stations is proposed to evaluate their regulation capabilities.Secondly,considering different geographical locations and coverage areas,a comprehensive performance index based on electrical distance modularity and active power balance is established,and a cluster division method based on genetic algorithms is proposed to fully leverage the coordination and complementarity among nodes and improve the active power matching degree within clusters.Simultaneously,an economic optimization model with the objective of minimizing the economic cost of the distribution station is established,comprehensively considering the safety constraints of the distribution network and the regulation constraints of resources.This model can provide scientific guidance for the economic dispatch of the distribution station area.Finally,case studies demonstrate that the proposed assessment and optimization methods effectively evaluate the regulation capabilities of distribution stations,facilitate the consumption of distributed photovoltaics,and enhance the economic efficiency of the distribution station area.
基金supported by North China Electric Power Research Institute’s Self-Funded Science and Technology Project“Research on Distributed Energy Storage Optimal Configuration and Operation Control Technology for Photovoltaic Promotion in the Entire County”(KJZ2022049).
文摘In recent years,distributed photovoltaics(DPV)has ushered in a good development situation due to the advantages of pollution-free power generation,full utilization of the ground or roof of the installation site,and balancing a large number of loads nearby.However,under the background of a large-scale DPV grid-connected to the county distribution network,an effective analysis method is needed to analyze its impact on the voltage of the distribution network in the early development stage of DPV.Therefore,a DPV orderly grid-connected method based on photovoltaics grid-connected order degree(PGOD)is proposed.This method aims to orderly analyze the change of voltage in the distribution network when large-scale DPV will be connected.Firstly,based on the voltagemagnitude sensitivity(VMS)index of the photovoltaics permitted grid-connected node and the acceptance of grid-connected node(AoGCN)index of other nodes in the network,thePGODindex is constructed to determine the photovoltaics permitted grid-connected node of the current photovoltaics grid-connected state network.Secondly,a photovoltaics orderly grid-connected model with a continuous updating state is constructed to obtain an orderly DPV grid-connected order.The simulation results illustrate that the photovoltaics grid-connected order determined by this method based on PGOD can effectively analyze the voltage impact of large-scale photovoltaics grid-connected,and explore the internal factors and characteristics of the impact.
文摘This paper proposes a novel approach for identifying distributed dynamic loads in the time domain.Using polynomial andmodal analysis,the load is transformed intomodal space for coefficient identification.This allows the distributed dynamic load with a two-dimensional form in terms of time and space to be simultaneously identified in the form of modal force,thereby achieving dimensionality reduction.The Impulse-based Force Estimation Algorithm is proposed to identify dynamic loads in the time domain.Firstly,the algorithm establishes a recursion scheme based on convolution integral,enabling it to identify loads with a long history and rapidly changing forms over time.Secondly,the algorithm introduces moving mean and polynomial fitting to detrend,enhancing its applicability in load estimation.The aforementioned methodology successfully accomplishes the reconstruction of distributed,instead of centralized,dynamic loads on the continuum in the time domain by utilizing acceleration response.To validate the effectiveness of the method,computational and experimental verification were conducted.
基金sponsored by the National Natural Science Foundation of China(Nos.61972208,62102194 and 62102196)National Natural Science Foundation of China(Youth Project)(No.62302237)+3 种基金Six Talent Peaks Project of Jiangsu Province(No.RJFW-111),China Postdoctoral Science Foundation Project(No.2018M640509)Postgraduate Research and Practice Innovation Program of Jiangsu Province(Nos.KYCX22_1019,KYCX23_1087,KYCX22_1027,KYCX23_1087,SJCX24_0339 and SJCX24_0346)Innovative Training Program for College Students of Nanjing University of Posts and Telecommunications(No.XZD2019116)Nanjing University of Posts and Telecommunications College Students Innovation Training Program(Nos.XZD2019116,XYB2019331).
文摘The scale and complexity of big data are growing continuously,posing severe challenges to traditional data processing methods,especially in the field of clustering analysis.To address this issue,this paper introduces a new method named Big Data Tensor Multi-Cluster Distributed Incremental Update(BDTMCDIncreUpdate),which combines distributed computing,storage technology,and incremental update techniques to provide an efficient and effective means for clustering analysis.Firstly,the original dataset is divided into multiple subblocks,and distributed computing resources are utilized to process the sub-blocks in parallel,enhancing efficiency.Then,initial clustering is performed on each sub-block using tensor-based multi-clustering techniques to obtain preliminary results.When new data arrives,incremental update technology is employed to update the core tensor and factor matrix,ensuring that the clustering model can adapt to changes in data.Finally,by combining the updated core tensor and factor matrix with historical computational results,refined clustering results are obtained,achieving real-time adaptation to dynamic data.Through experimental simulation on the Aminer dataset,the BDTMCDIncreUpdate method has demonstrated outstanding performance in terms of accuracy(ACC)and normalized mutual information(NMI)metrics,achieving an accuracy rate of 90%and an NMI score of 0.85,which outperforms existing methods such as TClusInitUpdate and TKLClusUpdate in most scenarios.Therefore,the BDTMCDIncreUpdate method offers an innovative solution to the field of big data analysis,integrating distributed computing,incremental updates,and tensor-based multi-clustering techniques.It not only improves the efficiency and scalability in processing large-scale high-dimensional datasets but also has been validated for its effectiveness and accuracy through experiments.This method shows great potential in real-world applications where dynamic data growth is common,and it is of significant importance for advancing the development of data analysis technology.
文摘The use of privacy-enhanced facial recognition has increased in response to growing concerns about data securityand privacy in the digital age. This trend is spurred by rising demand for face recognition technology in a varietyof industries, including access control, law enforcement, surveillance, and internet communication. However,the growing usage of face recognition technology has created serious concerns about data monitoring and userprivacy preferences, especially in context-aware systems. In response to these problems, this study provides a novelframework that integrates sophisticated approaches such as Generative Adversarial Networks (GANs), Blockchain,and distributed computing to solve privacy concerns while maintaining exact face recognition. The framework’spainstaking design and execution strive to strike a compromise between precise face recognition and protectingpersonal data integrity in an increasingly interconnected environment. Using cutting-edge tools like Dlib for faceanalysis,Ray Cluster for distributed computing, and Blockchain for decentralized identity verification, the proposedsystem provides scalable and secure facial analysis while protecting user privacy. The study’s contributions includethe creation of a sustainable and scalable solution for privacy-aware face recognition, the implementation of flexibleprivacy computing approaches based on Blockchain networks, and the demonstration of higher performanceover previous methods. Specifically, the proposed StyleGAN model has an outstanding accuracy rate of 93.84%while processing high-resolution images from the CelebA-HQ dataset, beating other evaluated models such asProgressive GAN 90.27%, CycleGAN 89.80%, and MGAN 80.80%. With improvements in accuracy, speed, andprivacy protection, the framework has great promise for practical use in a variety of fields that need face recognitiontechnology. This study paves the way for future research in privacy-enhanced face recognition systems, emphasizingthe significance of using cutting-edge technology to meet rising privacy issues in digital identity.
文摘Distribution generation(DG)technology based on a variety of renewable energy technologies has developed rapidly.A large number of multi-type DG are connected to the distribution network(DN),resulting in a decline in the stability of DN operation.It is urgent to find a method that can effectively connect multi-energy DG to DN.photovoltaic(PV),wind power generation(WPG),fuel cell(FC),and micro gas turbine(MGT)are considered in this paper.A multi-objective optimization model was established based on the life cycle cost(LCC)of DG,voltage quality,voltage fluctuation,system network loss,power deviation of the tie-line,DG pollution emission index,and meteorological index weight of DN.Multi-objective artificial bee colony algorithm(MOABC)was used to determine the optimal location and capacity of the four kinds of DG access DN,and compared with the other three heuristic algorithms.Simulation tests based on IEEE 33 test node and IEEE 69 test node show that in IEEE 33 test node,the total voltage deviation,voltage fluctuation,and system network loss of DN decreased by 49.67%,7.47%and 48.12%,respectively,compared with that without DG configuration.In the IEEE 69 test node,the total voltage deviation,voltage fluctuation and system network loss of DN in the MOABC configuration scheme decreased by 54.98%,35.93%and 75.17%,respectively,compared with that without DG configuration,indicating that MOABC can reasonably plan the capacity and location of DG.Achieve the maximum trade-off between DG economy and DN operation stability.
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. 62173312, 61922037, 61873115, and 61803348; in part by the National Major Scientific Instruments Development Project under Grant No. 61927807; in part by the State Key Laboratory of Deep Buried Target Damage under Grant No. DXMBJJ2019-02; in part by the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi under Grant No. 2020L0266; in part by the Shanxi Province Science Foundation for Youths under Grant No. 201701D221123; in part by the Youth Academic North University of China under Grant No. QX201803; in part by the Program for the Innovative Talents of Higher Education Institutions of Shanxi; in part by the Shanxi "1331 Project" Key Subjects Construction under Grant No. 1331KSC; and in part by the Shanxi Province Science Foundation for Excellent Youths.
Abstract: This paper addresses multicircular circumnavigation control for UAVs with desired angular spacing around a nonstationary target. By defining a coordinated error relative to the neighboring angular spacing, and under the premise that the target information is perfectly accessible to all nodes, a centralized circular enclosing control strategy is first derived for multiple UAVs connected by an undirected graph, allowing formation behaviors around the moving target. Then, to remove the requirement that the target's states be accessible to each UAV, fixed-time distributed observers are introduced to acquire the state estimates within a fixed time, whose upper bound on the settling time can be determined offline irrespective of the initial conditions, greatly reducing the communication burden. With the aid of these fixed-time distributed observers, a distributed circular circumnavigation controller is derived that forces all UAVs to collaboratively evolve along the preset circles while keeping the desired angular spacing. Lyapunov stability analysis shows that all errors converge. Simulations are provided to verify the utility of the proposed protocol.
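The following kinematic sketch, under assumed gains and a constant-velocity target, illustrates how regulating the angular gaps to a neighboring UAV drives the group toward equal spacing on a circle around a moving target. The target state is taken as known here, whereas the paper estimates it with fixed-time distributed observers; all parameter values are illustrative.

```python
# Minimal kinematic sketch (assumed gains and dynamics, not the paper's controller):
# three UAVs orbit a moving target on a preset circle while a consensus-like law
# on the angular gaps enforces equal angular spacing.
import numpy as np

def simulate(steps=2000, dt=0.01):
    n, radius, w0, k_theta = 3, 10.0, 0.5, 1.0
    target = np.array([0.0, 0.0])
    v_target = np.array([0.3, 0.1])                      # constant target velocity
    theta = np.array([0.0, 0.4, 1.0])                    # UAV phase angles on the circle
    spacing = 2 * np.pi / n                              # desired equal angular spacing
    for _ in range(steps):
        target = target + v_target * dt
        # coordinated error: gap to the next UAV minus the desired spacing
        err = np.mod(np.roll(theta, -1) - theta, 2 * np.pi) - spacing
        theta = theta + (w0 + k_theta * err) * dt        # adjust each angular rate
    pos = target + radius * np.column_stack((np.cos(theta), np.sin(theta)))
    gaps = np.mod(np.roll(theta, -1) - theta, 2 * np.pi)
    return pos, gaps

positions, gaps = simulate()
print("final angular gaps (rad):", np.round(gaps, 3))    # each gap approaches 2*pi/3
```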
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 62073045).
Abstract: We develop an observer-based dynamic event-triggered state feedback control policy for distributed parameter systems over a mobile sensor-plus-actuator network. It is assumed that the mobile sensing devices, which provide spatially averaged state measurements, can be used to improve state estimation in the network. To decrease the update frequency of the controller and avoid unnecessary transmission of sampled data, an efficient dynamic event-triggered control policy is constructed. In an event-triggered system, an event occurs when an error signal exceeds a specified time-varying threshold. The global asymptotic stability of the event-triggered closed-loop system and the boundedness of the minimum inter-event time are guaranteed. Based on the linear quadratic optimal regulator, the actuator selects the optimal displacement only when an event occurs. A simulation example is finally used to verify that such a control strategy can enhance system performance.
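A minimal sketch of a dynamic event-triggered loop on an assumed scalar plant is shown below: the control input is refreshed only when the sampling error exceeds a time-varying threshold that includes an internal dynamic variable. The plant, gains, and trigger parameters are illustrative and are not taken from the paper.

```python
# Minimal sketch (assumed scalar plant and trigger parameters): a dynamic
# event-triggered state-feedback loop. A new sample is transmitted only when the
# sampling error violates a threshold that mixes the current state and an
# internal dynamic variable eta.
import numpy as np

def run(T=10.0, dt=1e-3):
    a, b, k = 0.5, 1.0, 2.0              # unstable plant x' = a*x + b*u, feedback u = -k*x_hat
    sigma, theta, lam = 0.05, 4.0, 1.0   # trigger parameters (assumed)
    x, x_hat, eta = 1.0, 1.0, 1.0        # state, last sampled state, dynamic variable
    events = 0
    for _ in range(int(T / dt)):
        e = x_hat - x                                    # sampling error
        # dynamic trigger: event when theta*e^2 >= sigma*x^2 + eta
        if theta * e * e >= sigma * x * x + eta:
            x_hat = x                                    # transmit a new sample
            events += 1
            e = 0.0
        u = -k * x_hat
        x += (a * x + b * u) * dt                        # plant update (Euler step)
        eta += (-lam * eta + sigma * x * x - theta * e * e) * dt
        eta = max(eta, 0.0)
    return x, events

state, n_events = run()
print(f"final state {state:.4f} after {n_events} triggering events")
```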
Funding: This research is supported by the Science and Technology Program of Gansu Province (No. 23JRRA880).
Abstract: With the ongoing integration of distributed energy resources into the grid, the structure of distribution networks is becoming more complex. This complexity significantly expands the solution space of the network reconfiguration optimization problem solved by intelligent algorithms. Consequently, traditional intelligent algorithms frequently suffer from insufficient search accuracy and become trapped in local optima. To tackle this issue, an improved particle swarm optimization algorithm is proposed. To address the varying emphases at different stages of the optimization process, a dynamic strategy is implemented to regulate the social and self-learning factors. The Metropolis criterion of the simulated annealing algorithm is introduced to occasionally accept suboptimal solutions, thereby mitigating premature convergence of the population. The inertia weight is adjusted using a logistic mapping to maintain a balance between the algorithm's global and local search abilities. Following the Pareto principle, network losses and voltage deviations are taken as objective functions, and a fuzzy membership function is employed to select the final solution. Simulation analysis of distribution network reconfiguration with integrated distributed energy resources is carried out on the IEEE 33-node and IEEE 69-node systems. The findings demonstrate that, in comparison with other intelligent optimization algorithms, the proposed enhanced algorithm converges in a shorter time and effectively reduces the active power losses within the network. Furthermore, it raises the node voltage magnitudes, thereby improving the stability of distribution network operation and the power supply quality. Additionally, the algorithm exhibits a high level of generality and applicability.
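The sketch below, on a generic toy objective, combines the three ingredients described above: a logistic-map inertia weight, stage-dependent learning factors, and Metropolis acceptance of occasionally worse personal bests. The test function and all parameter values are assumptions for illustration, not the paper's reconfiguration model.

```python
# Minimal sketch (toy objective, assumed parameters): improved PSO with a
# logistic-map inertia weight, dynamic learning factors, and Metropolis acceptance.
import math
import random

def sphere(x):
    return sum(v * v for v in x)

def improved_pso(dim=10, n=30, iters=200, seed=0):
    random.seed(seed)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)[:]
    z = 0.37                                             # logistic-map state
    for t in range(iters):
        z = 4.0 * z * (1.0 - z)                          # logistic mapping
        w = 0.4 + 0.5 * z                                # chaotic inertia weight
        c1 = 2.5 - 2.0 * t / iters                       # self-learning factor shrinks
        c2 = 0.5 + 2.0 * t / iters                       # social factor grows
        temp = 1.0 / (1.0 + t)                           # annealing temperature
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            delta = sphere(pos[i]) - sphere(pbest[i])
            # Metropolis criterion: sometimes accept a worse personal best
            if delta < 0 or random.random() < math.exp(-delta / temp):
                pbest[i] = pos[i][:]
            if sphere(pbest[i]) < sphere(gbest):
                gbest = pbest[i][:]
    return sphere(gbest)

print("best objective value:", improved_pso())
```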
Funding: Supported in part by the Natural Science Foundation of Shanghai (20ZR1421600) and the Research Fund of the Guangxi Key Lab of Multi-Source Information Mining & Security (MIMS21-M-02).
Abstract: A false data injection attack (FDIA) affects the stability of a grid cyber-physical system (GCPS) by evading its bad-data detection mechanism. Existing FDIA detection methods usually employ complex neural network models to detect FDIA attacks. However, they overlook the fact that FDIA attack samples at public-private network edges are extremely sparse, making it difficult for neural network models to obtain sufficient samples to construct a robust detection model. To address this problem, this paper designs an efficient generative adversarial model of FDIA attack samples at the public-private network edge, which can effectively bypass the detection model and thereby threaten the power grid. A generative adversarial network (GAN) framework is first constructed by combining residual networks (ResNet) with fully connected networks (FCN). Then, a sparse adversarial learning model is built by integrating the time-aligned data and normal data, and it learns the distribution characteristics of normal data and attack data through iterative confrontation. Furthermore, we introduce a Gaussian mixture distribution matrix that aggregates the network structure of the attack-data and normal-data characteristics, so that FDIA data with normal characteristics can be connected and computed. Finally, efficient FDIA attack samples are generated sequentially through interactive adversarial learning. Extensive simulation experiments are conducted with IEEE 14-bus and IEEE 118-bus system data, and the results demonstrate that the attack samples generated by the proposed model outperform those of state-of-the-art models in terms of attack strength, robustness, and covert capability.
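As a structural illustration, the sketch below trains a small GAN whose generator uses residual blocks and whose discriminator is a fully connected network, on synthetic stand-ins for normal measurement vectors. Dimensions, data, and hyperparameters are assumed; the paper's sparse adversarial learning and Gaussian mixture components are omitted.

```python
# Minimal sketch (synthetic data, assumed dimensions): ResNet-style generator plus
# FCN discriminator trained adversarially on stand-in "normal" measurement vectors.
import torch
import torch.nn as nn

DIM, NOISE = 54, 16                                      # assumed measurement vector size

class ResBlock(nn.Module):
    def __init__(self, width):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(width, width), nn.ReLU(),
                                 nn.Linear(width, width))
    def forward(self, x):
        return torch.relu(x + self.net(x))               # residual connection

generator = nn.Sequential(nn.Linear(NOISE, 64), nn.ReLU(),
                          ResBlock(64), ResBlock(64), nn.Linear(64, DIM))
discriminator = nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(),
                              nn.Linear(64, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(200):
    real = 0.9 + 0.05 * torch.randn(64, DIM)             # stand-in for normal measurements
    fake = generator(torch.randn(64, NOISE))
    # discriminator update: separate real from generated samples
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: try to fool the discriminator
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final discriminator/generator loss:", float(d_loss), float(g_loss))
```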
Abstract: This paper presents a game theory-based method for predicting the outcomes of negotiation and group decision-making problems. We propose an extension to the BDM model to address problems where actors' positions are distributed over a position spectrum. We generalize the concept of position in the model to incorporate continuous positions for the actors, giving them more flexibility in defining their targets. We explore different candidate functions to study the role of the position function and discuss appropriate distance measures for computing the distance between actors' positions. To validate the proposed extension, we demonstrate the trustworthiness of the model's performance and interpretation by replicating results based on data used in earlier studies.
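A minimal sketch of the continuous-position idea is given below: actors hold positions on a normalized spectrum, the group outcome is forecast as a capability-and-salience weighted mean in the usual BDM spirit, and two candidate distance functions are compared. The actors, weights, and numbers are purely illustrative and not taken from the paper.

```python
# Minimal sketch (invented actors and weights): continuous positions, a weighted
# mean forecast, and two candidate distance measures between positions.
def forecast(actors):
    """Weighted mean position; weight = capability * salience."""
    total_w = sum(a["capability"] * a["salience"] for a in actors)
    return sum(a["capability"] * a["salience"] * a["position"] for a in actors) / total_w

def distance(p, q, kind="absolute"):
    """Distance between two positions on a normalized [0, 1] spectrum."""
    if kind == "absolute":
        return abs(p - q)
    if kind == "quadratic":                              # penalizes large gaps more
        return (p - q) ** 2
    raise ValueError(kind)

actors = [
    {"name": "A", "position": 0.20, "capability": 0.9, "salience": 0.8},
    {"name": "B", "position": 0.55, "capability": 0.6, "salience": 0.9},
    {"name": "C", "position": 0.90, "capability": 0.4, "salience": 0.5},
]
outcome = forecast(actors)
print("predicted outcome:", round(outcome, 3))
for a in actors:
    print(a["name"], "abs dist:", round(distance(a["position"], outcome), 3),
          "quad dist:", round(distance(a["position"], outcome, "quadratic"), 3))
```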
Funding: Supported by the National Natural Science Foundation of China (Nos. 62271165, 62027802, 62201307), the Guangdong Basic and Applied Basic Research Foundation (No. 2023A1515030297), the Shenzhen Science and Technology Program (ZDSYS20210623091808025), the Stable Support Plan Program (GXWD20231129102638002), and the Major Key Project of PCL (No. PCL2024A01).
Abstract: Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning, and other big data services increasingly require onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs with highly dynamic nodes and long-distance links cannot provide these conditions, which makes the performance of OBDP hard to measure intuitively. To bridge this gap, a multidimensional simulation platform is indispensable, one that can reproduce the network environment of LMCNs and run BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software defined networking (SDN) and container technologies. We elaborate the architecture and mechanisms of the simulation platform, and take Starlink and Hadoop as realistic examples for simulation. The results indicate that LMCNs exhibit dynamic end-to-end latency that fluctuates periodically with the constellation's movement. Compared to ground data center networks (GDCNs), LMCNs degrade the computing and storage job throughput, which can be alleviated by the use of erasure codes and data flow scheduling among worker nodes.
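The sketch below, assuming idealized circular orbits at an arbitrary altitude, computes the propagation latency between two LEO satellites over one orbital period and exhibits the kind of periodic fluctuation noted above; the full platform obtains such dynamics from STK at constellation scale.

```python
# Minimal sketch (idealized circular orbits, assumed altitude and phasing): periodic
# inter-satellite propagation latency over one orbital period.
import numpy as np

R_E, ALT, C = 6371e3, 550e3, 299_792_458.0               # Earth radius, altitude, light speed
MU = 3.986e14                                            # Earth's gravitational parameter
r = R_E + ALT
T = 2 * np.pi * np.sqrt(r ** 3 / MU)                     # orbital period

def position(t, raan=0.0, phase=0.0, incl=0.0):
    """Position on a circular orbit with given inclination, RAAN and phase offset."""
    ang = 2 * np.pi * t / T + phase
    x, y, z = r * np.cos(ang), r * np.sin(ang), 0.0
    # rotate the orbital plane: inclination about the x-axis, then RAAN about the z-axis
    y, z = y * np.cos(incl) - z * np.sin(incl), y * np.sin(incl) + z * np.cos(incl)
    x, y = x * np.cos(raan) - y * np.sin(raan), x * np.sin(raan) + y * np.cos(raan)
    return np.array([x, y, z])

latency_ms = []
for t in np.linspace(0, T, 200):
    p1 = position(t)                                                   # satellite in plane 1
    p2 = position(t, raan=np.pi / 6, phase=np.pi / 8, incl=np.deg2rad(53))  # plane 2
    latency_ms.append(np.linalg.norm(p1 - p2) / C * 1e3)

print(f"latency range over one period: {min(latency_ms):.2f}-{max(latency_ms):.2f} ms")
```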
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2021MF049, ZR2022LLZ012, and ZR2021LLZ001).
Abstract: Quantum error correction is a crucial technology for realizing quantum computers, which achieve fault-tolerant quantum computing by detecting and correcting errors with decoding algorithms. Quantum error correction using neural network-based machine learning methods is a promising approach that adapts to physical systems without the need to build noise models. In this paper, we use a distributed decoding strategy, which effectively alleviates the exponential growth of the training set required by neural networks as the code distance of quantum error-correcting codes increases. Our decoding algorithm combines renormalization group decoding with a recurrent neural network decoder, and the recurrent neural network is trained through a ResNet architecture to improve its decoding accuracy. We then test the decoding performance of our distributed-strategy decoder, the recurrent neural network decoder, and the classic minimum weight perfect matching (MWPM) decoder for rotated surface codes with different code distances under the circuit noise model; the thresholds of these three decoders are about 0.0052, 0.0051, and 0.0049, respectively. Our results demonstrate that the distributed-strategy decoder outperforms the other two, achieving approximately a 5% improvement in decoding efficiency compared with the MWPM decoder and approximately a 2% improvement compared with the recurrent neural network decoder.
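As a structural illustration only, the sketch below defines a recurrent decoder that maps a sequence of syndrome measurements to a logical error class and trains it on random placeholder data. The renormalization-group partitioning and ResNet-based training pipeline of the actual decoder are omitted, and all dimensions are assumed.

```python
# Minimal sketch (random placeholder data, assumed dimensions): skeleton of a
# recurrent decoder that reads syndrome rounds and predicts a logical error class.
import torch
import torch.nn as nn

class RecurrentDecoder(nn.Module):
    def __init__(self, syndrome_bits=24, hidden=64, n_classes=4):
        super().__init__()
        self.rnn = nn.GRU(syndrome_bits, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)          # logical I/X/Z/Y classes

    def forward(self, syndromes):                          # shape (batch, rounds, bits)
        _, h = self.rnn(syndromes)
        return self.head(h[-1])

decoder = RecurrentDecoder()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# placeholder training data: random syndrome histories and random labels
syndromes = torch.randint(0, 2, (128, 5, 24)).float()
labels = torch.randint(0, 4, (128,))
for _ in range(50):
    opt.zero_grad()
    loss = loss_fn(decoder(syndromes), labels)
    loss.backward()
    opt.step()
print("training loss on placeholder data:", float(loss))
```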
Funding: Supported in part by the Shanghai Natural Science Foundation under Grant 22ZR1407000.
Abstract: We investigate the distributed optimization problem, where a network of nodes works together to minimize a global objective that is a finite sum of their locally stored functions. Since nodes exchange optimization parameters over a wireless network, large-scale training models can create communication bottlenecks and slow down training. To address this issue, CHOCO-SGD was proposed, which allows information to be compressed with arbitrary precision without reducing the convergence rate for strongly convex objective functions. Nevertheless, most convex functions are not strongly convex (such as logistic regression or Lasso), which raises the question of whether this algorithm can be applied to non-strongly convex functions. In this paper, we provide the first theoretical analysis of the convergence rate of CHOCO-SGD on non-strongly convex objectives. We derive a sufficient condition, which limits the fidelity of compression, to guarantee convergence. Moreover, our analysis demonstrates that within this fidelity threshold, the algorithm can significantly reduce the transmission burden while maintaining the same order of convergence rate as its uncompressed counterpart. Numerical experiments further validate the theoretical findings by showing that CHOCO-SGD improves communication efficiency while keeping the same order of convergence rate. The experiments also show that the algorithm fails to converge with low compression fidelity and in time-varying topologies. Overall, our study offers valuable insights into the applicability of CHOCO-SGD to non-strongly convex objectives and provides practical guidelines for researchers seeking to use this algorithm in real-world scenarios.
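A minimal sketch of the compressed gossip mechanism underlying CHOCO-SGD is given below, using a ring topology, top-k sparsification, and a synthetic logistic-regression objective (convex but not strongly convex). Step sizes, the consensus parameter, and the compression level are illustrative and not tuned.

```python
# Minimal sketch (ring topology, top-k compression, synthetic data): each node
# transmits only a sparsified difference between its iterate and its public copy,
# then performs a gossip step on the public copies, as in CHOCO-style schemes.
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def choco_sgd(n=8, dim=20, steps=400, lr=0.05, gamma=0.4, k=4, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(n, 50, dim))                    # local features per node
    y = (rng.random((n, 50)) < 0.5).astype(float)        # local labels
    x = np.zeros((n, dim))                               # local iterates
    x_hat = np.zeros((n, dim))                           # public (compressed) copies
    W = np.zeros((n, n))                                 # doubly stochastic ring mixing matrix
    for i in range(n):
        W[i, i], W[i, (i - 1) % n], W[i, (i + 1) % n] = 0.5, 0.25, 0.25

    def grad(i, xi):                                     # logistic loss gradient (convex, not strongly convex)
        p = 1.0 / (1.0 + np.exp(-A[i] @ xi))
        return A[i].T @ (p - y[i]) / len(y[i])

    for _ in range(steps):
        x = x - lr * np.array([grad(i, x[i]) for i in range(n)])          # local SGD step
        q = np.array([top_k(x[i] - x_hat[i], k) for i in range(n)])       # compressed messages
        x_hat = x_hat + q                                 # all nodes update the public copies
        x = x + gamma * (W @ x_hat - x_hat)               # gossip step on public copies
    return np.max(np.linalg.norm(x - x.mean(axis=0), axis=1))

print("max disagreement across nodes:", round(choco_sgd(), 4))
```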