Funding: Supported by the Shandong Provincial Key Research and Development Program of China (2021CXGC010107, 2020CXGC010107), the Shandong Provincial Natural Science Foundation of China (ZR2020KF035), and the New 20 Project of Higher Education of Jinan, China (202228017).
Abstract: Blockchain technology, with its attributes of decentralization, immutability, and traceability, has emerged as a powerful catalyst for enhancing traditional industries by optimizing business processes. However, transaction performance and scalability have become the main challenges hindering the widespread adoption of blockchain. Because it cannot meet the demands of high-frequency trading, blockchain cannot be adopted in many scenarios. To improve transaction capacity, researchers have proposed several scaling technologies, including lightning networks, directed acyclic graph technology, state channels, and sharding mechanisms, among which sharding emerges as a promising scaling technology. Nevertheless, excessive cross-shard transactions and uneven shard workloads prevent the sharding mechanism from achieving its expected aim. This paper proposes a graph-based sharding scheme for public blockchains to efficiently balance the transaction distribution. By mitigating cross-shard transactions and evening out workloads among shards, the scheme reduces transaction confirmation latency and enhances the transaction capacity of the blockchain. Therefore, the scheme supports high-frequency transactions as well as better blockchain scalability. Experimental results show that the scheme effectively reduces the cross-shard transaction ratio to a range of 35%-56% and significantly decreases the transaction confirmation latency to 6 s in a blockchain with no more than 25 shards.
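As a rough, hypothetical illustration of the general idea behind graph-based sharding (not the paper's actual algorithm), the Python sketch below treats accounts as vertices and transactions as edges, then greedily places each account on the shard that already holds most of its counterparties, subject to a soft load cap, and reports the resulting cross-shard transaction ratio. The transaction list and shard count are made up.

```python
# Minimal sketch of graph-based transaction sharding (illustrative only;
# not the paper's algorithm). Accounts are vertices, transactions are edges.
from collections import defaultdict

def shard_accounts(transactions, num_shards):
    # Build an undirected account graph weighted by transaction count.
    adj = defaultdict(lambda: defaultdict(int))
    for sender, receiver in transactions:
        adj[sender][receiver] += 1
        adj[receiver][sender] += 1

    accounts = sorted(adj, key=lambda a: -sum(adj[a].values()))  # heavy accounts first
    capacity = len(accounts) / num_shards * 1.1                  # soft balance cap (+10%)
    assignment, load = {}, [0] * num_shards

    for acct in accounts:
        # Score each shard by how many of this account's transactions it already holds.
        scores = [0.0] * num_shards
        for nbr, w in adj[acct].items():
            if nbr in assignment:
                scores[assignment[nbr]] += w
        # Prefer the highest-scoring shard that is not over the capacity cap.
        candidates = [s for s in range(num_shards) if load[s] < capacity] or list(range(num_shards))
        best = max(candidates, key=lambda s: (scores[s], -load[s]))
        assignment[acct] = best
        load[best] += 1

    cross = sum(1 for s, r in transactions if assignment[s] != assignment[r])
    return assignment, cross / max(len(transactions), 1)

# Hypothetical usage: toy transaction list of (sender, receiver) pairs, 2 shards.
txs = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "e"), ("e", "f"), ("a", "d")]
mapping, cross_ratio = shard_accounts(txs, num_shards=2)
print(mapping, f"cross-shard ratio = {cross_ratio:.2f}")
```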
Funding: Supported by the Japan Society for the Promotion of Science, KAKENHI Grant No. 23H00475.
Abstract: Inverse and direct piezoelectric coupling and circuit coupling are widely observed in advanced electro-mechanical systems such as piezoelectric energy harvesters. Existing strongly coupled analysis methods based on direct numerical modeling of this phenomenon can be classified into partitioned or monolithic formulations. Each formulation has its advantages and disadvantages, and the choice depends on the characteristics of each coupled problem. This study proposes a new option: a coupled analysis strategy that combines the best features of the existing formulations, namely the hybrid partitioned-monolithic method. The analysis of inverse piezoelectricity and the monolithic analysis of the direct piezoelectric and circuit interaction are strongly coupled using a partitioned iterative hierarchical algorithm. For a typical benchmark problem of a piezoelectric energy harvester, this research compares the results from the proposed method with those from the conventional strongly coupled partitioned iterative method, discussing accuracy, stability, and computational cost. The proposed hybrid concept is effective for coupled multi-physics problems with various coupling conditions.
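To make the partitioned-iterative side of such a strategy concrete, the following sketch runs a block Gauss-Seidel fixed-point loop on a small, made-up two-field linear system, with one block standing in for the structural solver and the other for a monolithic piezoelectric-circuit solver. It illustrates only the coupling loop, not the paper's harvester model; all matrices and vectors are assumptions.

```python
# Minimal sketch of a partitioned iterative (block Gauss-Seidel) coupling loop on a
# toy linear two-field problem; the paper's structural / piezoelectric-circuit solvers
# are replaced here by small dense blocks purely for illustration.
import numpy as np

# Hypothetical monolithic system  [A B; C D] [x; y] = [f; g],
# where x plays the role of the structural field and y the electric/circuit field.
A = np.array([[4.0, 1.0], [1.0, 3.0]]);  B = np.array([[0.5, 0.0], [0.0, 0.5]])
C = np.array([[0.3, 0.0], [0.0, 0.3]]);  D = np.array([[2.0, 0.4], [0.4, 2.0]])
f = np.array([1.0, 2.0]);                g = np.array([0.5, 1.5])

def partitioned_iteration(tol=1e-10, max_iter=100):
    x = np.zeros(2); y = np.zeros(2)
    for k in range(max_iter):
        x_new = np.linalg.solve(A, f - B @ y)      # "structure" solve with frozen y
        y_new = np.linalg.solve(D, g - C @ x_new)  # "piezo-circuit" solve with updated x
        if max(np.linalg.norm(x_new - x), np.linalg.norm(y_new - y)) < tol:
            return x_new, y_new, k + 1
        x, y = x_new, y_new
    return x, y, max_iter

x, y, iters = partitioned_iteration()
# Compare with the monolithic (fully implicit) solve of the same system.
K = np.block([[A, B], [C, D]]); rhs = np.concatenate([f, g])
print(iters, np.allclose(np.concatenate([x, y]), np.linalg.solve(K, rhs)))
```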
Funding: Supported by the Key Program of the National Natural Science Foundation of China (No. U23A202579), the National Natural Science Foundation of China (Nos. 42277187, 42007276, 41972297), and the Natural Science Foundation of Hebei Province (No. D2021202002).
Abstract: The presence of horizontal layered rocks in tunnel engineering significantly impacts the stability and strength of the surrounding rock mass, leading to floor heave in the tunnel. This study focused on preparing layered specimens of rock-like material with varying layer thickness to investigate the failure behaviors of tunnel floors. The results indicate that a thin-layered rock mass exhibits weak interlayer bonding, causing rock layers near the surface to buckle and break upwards when subjected to horizontal squeezing. With an increase in layer thickness, the failure mode transitions from upward buckling to shear failure along the plane, leading to a noticeable reduction in floor heave deformation. The primary cause of significant floor heave deformation is upward buckling failure. To address this issue, the study proposes installing a partition wall in the middle of the floor to mitigate heave deformation of the rock layers. The results demonstrate that the partition wall has a considerable stabilizing effect on the floor, reducing the zone of buckling failure and minimizing floor heave deformation. It is crucial for the partition wall to be sufficiently high to prevent buckling failure and ensure stability. Simulation calculations on an engineering example confirm that implementing a partition wall can effectively reduce floor heave and enhance the stability of the tunnel floor.
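As a rough back-of-the-envelope aid (not the paper's experimental or numerical model), classical Euler buckling suggests why thin layers over long unsupported spans buckle first and why a mid-span partition wall helps: the critical horizontal stress scales with the square of the layer thickness and inversely with the square of the span, so halving the span roughly quadruples the buckling resistance. The material values below are hypothetical.

```python
# Back-of-the-envelope illustration (not the paper's model): treating a floor rock
# layer as a slender elastic strut of thickness t over an unsupported span L, the
# classical Euler buckling stress (pinned ends) is sigma_cr = pi^2 * E * t^2 / (12 * L^2).
# A mid-span partition wall roughly halves L and thus raises sigma_cr about fourfold.
import math

def euler_buckling_stress(E, t, L):
    """Critical horizontal stress (Pa) for upward buckling of a layer of thickness t (m)
    over span L (m), Young's modulus E (Pa), per unit width, pinned-pinned ends."""
    return math.pi**2 * E * t**2 / (12.0 * L**2)

E = 5e9        # hypothetical Young's modulus of the rock-like material, Pa
t = 0.05       # layer thickness, m
for L in (2.0, 1.0):   # full floor span vs. span halved by a partition wall
    print(f"L = {L} m: sigma_cr = {euler_buckling_stress(E, t, L) / 1e6:.2f} MPa")
```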
Funding: This paper was supported by the National Natural Science Foundation of China (Grant No. 61972261), the Natural Science Foundation of Guangdong Province (No. 2023A1515011667), the Key Basic Research Foundation of Shenzhen (No. JCYJ20220818100205012), and the Basic Research Foundation of Shenzhen (No. JCYJ20210324093609026).
Abstract: Random sample partition (RSP) is a newly developed big data representation and management model for big data approximate computation problems. Academic research and practical applications have confirmed that RSP is an efficient solution for big data processing and analysis. However, a challenge in implementing RSP is determining an appropriate sample size for RSP data blocks. While a large sample size increases the burden of big data computation, a small size leads to insufficient distribution information in RSP data blocks. To address this problem, this paper presents a novel density estimation-based method (DEM) to determine the optimal sample size for RSP data blocks. First, a theoretical sample size is calculated from the multivariate Dvoretzky-Kiefer-Wolfowitz (DKW) inequality using the fixed-point iteration (FPI) method. Second, a practical sample size is determined by minimizing the validation error of a kernel density estimator (KDE) constructed on RSP data blocks for an increasing sample size. Finally, a series of experiments is conducted to validate the feasibility, rationality, and effectiveness of DEM. Experimental results show that (1) the iteration function of the FPI method is convergent for calculating the theoretical sample size from the multivariate DKW inequality; (2) the KDE constructed on RSP data blocks with the sample size determined by DEM yields a good approximation of the probability density function (p.d.f.); and (3) DEM provides more accurate sample sizes than existing sample size determination methods from the perspective of p.d.f. estimation. This demonstrates that DEM is a viable approach to the sample size determination problem for big data RSP implementation.
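The sketch below illustrates the fixed-point flavor of the theoretical sample-size step. It assumes one multivariate DKW-type bound of the form P(sup|F_n - F| > eps) <= d*(n+1)*exp(-2*n*eps^2) purely for demonstration; whether this is the exact bound and iteration used by DEM is an assumption, and the numbers are hypothetical.

```python
# Illustrative sketch of a fixed-point iteration for a theoretical sample size from a
# multivariate DKW-type bound. Setting d*(n+1)*exp(-2*n*eps^2) equal to a risk level
# alpha puts n on both sides and suggests the iteration
#   n <- ln(d * (n + 1) / alpha) / (2 * eps^2).
# This specific bound and iteration are assumptions made here for illustration.
import math

def theoretical_sample_size(d, eps, alpha, n0=100.0, tol=0.5, max_iter=1000):
    """Fixed-point iteration for the smallest n with d*(n+1)*exp(-2*n*eps^2) <= alpha."""
    n = n0
    for _ in range(max_iter):
        n_next = math.log(d * (n + 1.0) / alpha) / (2.0 * eps**2)
        if abs(n_next - n) < tol:
            return math.ceil(n_next)
        n = n_next
    return math.ceil(n)

# Hypothetical numbers: 10-dimensional data, tolerance eps = 0.05, risk alpha = 0.05.
print(theoretical_sample_size(d=10, eps=0.05, alpha=0.05))
```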
Funding: The Japan Society for the Promotion of Science, KAKENHI Grant Nos. 20H04199 and 23H00475.
Abstract: In this study, we propose an algorithm selection method based on coupling strength for the partitioned analysis of structure-piezoelectric-circuit coupling, which involves two types of coupling: inverse and direct piezoelectric coupling, and direct piezoelectric and circuit coupling. In the proposed method, implicit and explicit formulations are used for strong and weak coupling, respectively. Three feasible partitioned algorithms are generated, namely (1) a strongly coupled algorithm that uses a fully implicit formulation for both types of coupling, (2) a weakly coupled algorithm that uses a fully explicit formulation for both types of coupling, and (3) a partially strongly coupled and partially weakly coupled algorithm that uses an implicit formulation and an explicit formulation for the two types of coupling, respectively. Numerical examples using a piezoelectric energy harvester, which is a typical structure-piezoelectric-circuit coupling problem, demonstrate that the proposed method selects the most cost-effective algorithm.
Funding: Funded by the National Natural Science Foundation of China (52077004) and the Anhui Electric Power Company of the State Grid (52120021N00L).
Abstract: Aiming at the problem that the traditional short-circuit current calculation method is not applicable when Distributed Generation (DG) is connected to the distribution network, this paper proposes a short-circuit current partitioning calculation method that considers the degree of voltage drop at the grid-connected point of the DG. Firstly, the output characteristics of DG during low-voltage ride-through are analyzed, and the equivalent output model of DG in the fault state is obtained. Secondly, by studying the post-fault network voltage distribution in distribution networks under different DG penetration rates, the degree of voltage drop at the grid-connected point of the DG is used as a partition index to partition the distribution network. Then, iterative computation is performed within each partition, and data are transferred between partitions through split nodes to realize fast partitioned calculation of the short-circuit current for distribution networks with a high proportion of DG access, which addresses the long iteration time and large calculation error of the traditional short-circuit current calculation. Finally, a 62-node real distribution network model containing a high proportion of DG access is constructed in MATLAB/Simulink, and the simulation verifies the effectiveness of the proposed short-circuit current partitioning calculation method; its calculation speed is improved by 48.35% compared with the global iteration method.
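For illustration only, the sketch below combines a commonly used low-voltage-ride-through current-injection rule with a simple voltage-drop-based grouping of DG buses; the constants, thresholds, and bus names are assumptions and do not reproduce the paper's DG model or partition index.

```python
# Illustrative sketch only (the paper's DG model, thresholds, and partition rule are not
# reproduced here): a common low-voltage-ride-through rule injects reactive current in
# proportion to the voltage dip once the terminal voltage falls below about 0.9 pu, and the
# depth of the dip at each DG grid-connection point can then be used as a simple index
# for grouping buses into partitions before partition-wise iteration.
def lvrt_equivalent_current(u_pu, i_rated=1.0, k=1.5):
    """Assumed LVRT rule: reactive current i_q = k*(0.9 - u) (capped at rated), u in pu."""
    if u_pu >= 0.9:
        return 0.0
    return min(k * (0.9 - u_pu) * i_rated, i_rated)

def partition_by_voltage_drop(dg_voltages, deep=0.5, shallow=0.8):
    """Group DG buses by how far their post-fault voltage has dropped (thresholds assumed)."""
    groups = {"deep-drop": [], "moderate-drop": [], "shallow-drop": []}
    for bus, u in dg_voltages.items():
        if u < deep:
            groups["deep-drop"].append(bus)
        elif u < shallow:
            groups["moderate-drop"].append(bus)
        else:
            groups["shallow-drop"].append(bus)
    return groups

# Hypothetical post-fault voltages (pu) at DG grid-connection buses.
dg_u = {"bus12": 0.35, "bus27": 0.72, "bus44": 0.93}
print(partition_by_voltage_drop(dg_u))
print({b: round(lvrt_equivalent_current(u), 2) for b, u in dg_u.items()})
```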
Funding: Funded by the National Natural Science Foundation of China (42371022, 42030501, 41877148).
Abstract: The critical zone (CZ) plays a vital role in sustaining biodiversity and humanity. However, flux quantification within the CZ, particularly in terms of subsurface hydrological partitioning, remains a significant challenge. This study quantified subsurface hydrological partitioning in an alpine mountainous area and highlighted the important role of lateral flow in this process. Precipitation entering the soil was usually partitioned into two parts: an increase in soil water content (SWC) and lateral flow out of the soil pit. It was found that 65%–88% of precipitation contributed to lateral flow. The second most common partitioning class showed an increase in SWC caused by both precipitation and lateral flow into the soil pit. In this case, lateral flow contributed 43%–74% of the SWC increase, notably more than the increase caused by precipitation. On alpine meadows, lateral flow out of the soil pit occurred when the shallow soil was wetter than field capacity. This result highlights the need for three-dimensional simulation between soil layers in Earth system models (ESMs). During the evapotranspiration process, significant differences were observed in the classification of subsurface hydrological partitioning among vegetation types. Because of the tangled and aggregated fine roots in the surface soil of alpine meadows, the majority of subsurface responses involved lateral flow, which provided 98%–100% of evapotranspiration (ET). On grassland, there was a high probability (0.87) that ET was entirely provided by lateral flow. The main reason that previous research underestimated transpiration derived from soil water dynamics was the neglect of lateral root water uptake. Furthermore, there was a probability of 0.12 that ET was entirely provided by a decrease in SWC on grassland. In this case, there was a high probability (0.98) that soil water responses occurred only at layer 2 (10–20 cm), because grass roots are mainly distributed in this soil layer and grasses often used their deep roots for water uptake during ET. To improve the estimation of soil water dynamics and ET, we established a random forest (RF) model to simulate lateral flow and then corrected the Community Land Model (CLM). The RF model demonstrated good performance and led to significant improvements in the CLM simulation. These findings enhance our understanding of subsurface hydrological partitioning and emphasize the importance of considering lateral flow in ESMs and hydrological research.
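A minimal sketch of the RF-plus-correction idea is given below using synthetic data; the predictor names, the synthetic lateral-flow target, and the correction step are all hypothetical stand-ins, not the paper's observational setup or CLM coupling.

```python
# Illustrative sketch (not the paper's setup) of fitting a random forest to predict a
# lateral-flow term from soil-moisture observations and then using it to adjust a
# land-model estimate. Features, units, and the correction form are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors: precipitation (mm), antecedent shallow SWC, SWC change at 10-20 cm.
X = np.column_stack([
    rng.gamma(2.0, 5.0, n),
    rng.uniform(0.1, 0.4, n),
    rng.normal(0.0, 0.02, n),
])
# Synthetic "true" lateral flow for the demo (in practice it would come from water-balance data).
y = 0.7 * X[:, 0] * (X[:, 1] > 0.25) - 50.0 * X[:, 2] + rng.normal(0.0, 1.0, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out events:", round(rf.score(X_te, y_te), 3))

# Hypothetical correction: blend the predicted lateral-flow term into a CLM-style ET estimate.
clm_et = rng.gamma(2.0, 1.5, X_te.shape[0])         # stand-in for uncorrected model ET
corrected_et = clm_et + 0.1 * rf.predict(X_te)      # assumed correction form, for illustration
print("mean corrected ET (arbitrary units):", round(float(corrected_et.mean()), 2))
```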
Funding: Supported in part by the Natural Science Foundation of China under Grant 51991385 and Grant 52177046.
Abstract: Here, we introduce a partitioned design method oriented toward airgap harmonics for permanent magnet vernier (PMV) motors. The method uses airgap flux harmonics as an effective bridge between the torque design region and the torque performance. To illustrate the efficacy of this method, a partitioned design PMV motor is presented and compared with the initial design. Firstly, the torque design region of the rotor is divided into a torque enhancement region and a ripple reduction region. Meanwhile, the main harmonics that generate output torque are chosen and enhanced based on torque harmonic optimization. Moreover, the harmonics that generate torque ripple are selected and reduced based on torque harmonic optimization. Finally, the torque performance of the partitioned PMV motor is assessed using the finite element method. Through the purposeful design of these two regions, the output torque is strengthened while torque ripple is effectively suppressed, verifying the effectiveness and reasonability of the proposed design method.
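The sketch below illustrates the kind of harmonic bookkeeping such a partitioned design relies on: decompose a (here synthetic) airgap flux-density waveform into spatial harmonics and tag which orders would be targeted for torque enhancement versus ripple reduction. The waveform and the set of working pole-pair orders are assumptions for demonstration, not the paper's motor model.

```python
# Illustrative airgap-harmonic decomposition (not the paper's motor model): sample a
# synthetic airgap flux-density waveform around the circumference, take its spatial FFT,
# and separate assumed "working" harmonic orders (torque enhancement) from the rest (ripple).
import numpy as np

N = 720                                      # samples around the airgap circumference
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)

# Hypothetical airgap flux density: a fundamental, a vernier-modulated harmonic,
# and some parasitic content that mainly contributes to torque ripple.
B = (0.9 * np.sin(2 * theta)                 # working harmonic, pole-pair order 2
     + 0.4 * np.sin(22 * theta + 0.3)        # working harmonic, order 22 (modulation)
     + 0.1 * np.sin(6 * theta)               # parasitic order 6
     + 0.05 * np.sin(18 * theta))            # parasitic order 18

spectrum = np.fft.rfft(B) / (N / 2.0)        # amplitude per spatial order
orders = np.arange(spectrum.size)
working_orders = {2, 22}                     # orders assumed to produce average torque

for k in orders[np.abs(spectrum) > 0.02]:
    role = "enhance (torque)" if int(k) in working_orders else "suppress (ripple)"
    print(f"order {int(k):3d}: amplitude {abs(spectrum[k]):.3f}  -> {role}")
```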
Funding: Supported by the State Grid Corporation of China project "Research and Application of Key Technologies for Active Power Control in Regional Power Grid with High Penetration of Distributed Renewable Generation" (5108-202316044A-1-1-ZN).
Abstract: With the large-scale development and utilization of renewable energy, industrial flexible loads, as a kind of load-side resource with strong regulation ability, provide new opportunities for research on the renewable energy consumption problem in power systems. This paper proposes a two-layer active power optimization model based on industrial flexible loads and power grid partitioning, aiming to alleviate the line over-limit problem caused by renewable energy consumption in power grids with a high proportion of renewable energy and to achieve safe, stable, and economical grid operation. Firstly, according to an evaluation index of the renewable energy consumption characteristics of line active power, the power grid is divided into several partitions, and the inter-zone tie lines are taken as the optimization objects. Then, on the basis of the partitioning, a two-layer active power optimization model considering the power constraints of industrial flexible loads is established. The upper-layer model optimizes the planned power of the inter-zone tie lines under the constraint of the minimum peak-valley difference within a day; the lower-layer model optimizes the regional source-load dispatching plan of each resource in each partition under the constraint of the minimum operation cost of the partition, so as to reduce the line over-limit phenomenon caused by renewable energy consumption and save the electricity cost of industrial flexible loads. Finally, simulation experiments verify that the proposed model can effectively mobilize industrial flexible loads to participate in power grid operation and improve the economical and stable operation of the power grid.
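As a toy illustration of the upper layer only (the paper's full model, data, and constraints are richer), the sketch below chooses planned tie-line powers that minimize the daily peak-valley difference, subject to per-period bounds implied by an assumed flexible-load adjustment range and a fixed daily energy exchange, formulated as a small linear program. The load profile and flexibility range are hypothetical.

```python
# Toy sketch of the upper-layer idea: plan tie-line power p_t per period to minimize the
# daily peak-valley difference, with per-period bounds from an assumed flexible-load range
# and a fixed daily energy exchange. Solved as a small LP with scipy.
import numpy as np
from scipy.optimize import linprog

T = 24
net_load = 50 + 20 * np.sin(np.linspace(0, 2 * np.pi, T, endpoint=False))  # hypothetical MW profile
flex = 15.0                                  # assumed +/- adjustment from flexible loads, MW
lo, hi = net_load - flex, net_load + flex
energy = float(net_load.sum())               # keep the daily exchanged energy unchanged

# Decision vector x = [p_1..p_T, z_max, z_min]; minimize z_max - z_min.
c = np.concatenate([np.zeros(T), [1.0, -1.0]])
A_ub = np.zeros((2 * T, T + 2)); b_ub = np.zeros(2 * T)
A_ub[:T, :T] = np.eye(T);  A_ub[:T, T] = -1.0       # p_t - z_max <= 0
A_ub[T:, :T] = -np.eye(T); A_ub[T:, T + 1] = 1.0    # z_min - p_t <= 0
A_eq = np.concatenate([np.ones(T), [0.0, 0.0]]).reshape(1, -1)
b_eq = [energy]
bounds = [(lo[t], hi[t]) for t in range(T)] + [(None, None), (None, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("peak-valley difference:", round(res.x[T] - res.x[T + 1], 2), "MW")
```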
Funding: This research was supported by the National Natural Science Foundation of China (Nos. 41230210 and 41204074), the Science Foundation of the Education Department of Yunnan Province (No. 2013Z152), and Statoil Company (Contract No. 4502502663).
Abstract: We propose a symplectic partitioned Runge-Kutta (SPRK) method with eighth-order spatial accuracy based on the extended Hamiltonian system of the acoustic wave equation. Known as the eighth-order NSPRK method, this technique uses an eighth-order accurate nearly analytic discrete (NAD) operator to discretize high-order spatial differential operators and employs a second-order SPRK method to discretize temporal derivatives. The stability criteria and numerical dispersion relations of the eighth-order NSPRK method are derived by a semi-analytical method and are tested by numerical experiments. We also show the differences in numerical dispersion between the eighth-order NSPRK method and conventional numerical methods such as the fourth-order NSPRK method, the eighth-order Lax-Wendroff correction (LWC) method, and the eighth-order staggered-grid (SG) method. The results show that the ability of the eighth-order NSPRK method to suppress numerical dispersion is clearly superior to that of the conventional numerical methods. In the same computational environment, to eliminate visible numerical dispersion, the eighth-order NSPRK is approximately 2.5 times faster than the fourth-order NSPRK and 3.4 times faster than the fourth-order SPRK, and its memory requirement is only approximately 47.17% of that of the fourth-order NSPRK method and 49.41% of that of the fourth-order SPRK method, indicating the highest computational efficiency. Modeling examples for models such as the two-layer heterogeneous model and the Marmousi model show that the wavefields generated by the eighth-order NSPRK method are very clear, with no visible numerical dispersion. These numerical experiments illustrate that the eighth-order NSPRK method can effectively suppress numerical dispersion when coarse grids are adopted. Therefore, this method can greatly decrease computer memory requirements and accelerate forward modeling. In general, the eighth-order NSPRK method has great potential value for seismic exploration and seismology research.
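To show the partitioned, symplectic structure of the time integrator in its simplest form, the sketch below applies a second-order SPRK (leapfrog/Störmer-Verlet) update to the 1-D acoustic wave equation with a plain second-order central difference in space; it does not use the paper's eighth-order NAD operator, and the grid and velocity values are arbitrary.

```python
# Minimal sketch of symplectic partitioned Runge-Kutta time stepping for the 1-D acoustic
# wave equation written as a Hamiltonian system u_t = v, v_t = c^2 u_xx. A second-order
# SPRK (Stormer-Verlet) update is shown with a basic second-order central difference in
# space; this is NOT the paper's eighth-order nearly analytic discrete operator.
import numpy as np

nx, dx, c = 400, 5.0, 2000.0          # grid points, spacing (m), velocity (m/s)
dt = 0.4 * dx / c                     # time step satisfying a conservative CFL-type limit
x = np.arange(nx) * dx
u = np.exp(-((x - x.mean()) / (10 * dx)) ** 2)   # smooth initial pulse (displacement-like field)
v = np.zeros(nx)                                  # its time derivative (momentum-like field)

def laplacian(f):
    """Second-order central difference with fixed (Dirichlet) ends."""
    lap = np.zeros_like(f)
    lap[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
    return lap

for _ in range(1000):
    # Partitioned, staggered update: half-step in v, full step in u, half-step in v.
    v += 0.5 * dt * c**2 * laplacian(u)
    u += dt * v
    v += 0.5 * dt * c**2 * laplacian(u)

print("max |u| after 1000 steps:", float(np.abs(u).max()))
```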