Assessment of past-climate simulations of regional climate models (RCMs) is important for understanding the reliability of RCMs when used to project future regional climate. Here, we assess the performance and discuss possible causes of biases in a WRF-based RCM with a grid spacing of 50 km, named WRFG, from the North American Regional Climate Change Assessment Program (NARCCAP) in simulating wet season precipitation over the Central United States for a period when observational data are available. The RCM reproduces key features of the precipitation distribution during late spring to early summer, although it tends to underestimate the magnitude of precipitation. This dry bias is partly due to the model's lack of skill in simulating nocturnal precipitation, related to the absence of eastward-propagating convective systems in the simulation. Inaccuracy in reproducing the large-scale circulation and environmental conditions is another contributing factor. The overly weak simulated pressure gradient between the Rocky Mountains and the Gulf of Mexico results in weaker southerly winds between them, reducing the transport of warm, moist air from the Gulf to the Central Great Plains. The simulated low-level horizontal convergence fields are also less favorable for upward motion, and hence for the development of moist convection, than in the NARR. Therefore, a careful examination of an RCM's deficiencies and identification of the sources of its errors are important when using the RCM to project precipitation changes under future climate scenarios.
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions in order to optimize the binary variables Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
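As a minimal illustration of the frequent-itemset idea in the abstract above (not TELSO's actual mining procedure, which handles multi-variable itemsets and combines them with the swarm update), the sketch below mines single variable positions that are frequently non-zero across an elite set of particle masks. The function name and toy data are hypothetical:

```python
import numpy as np

def mine_frequent_positions(elite_masks, min_support=0.5):
    """Mine variable positions that are frequently non-zero across elite
    particle masks (a 1-itemset simplification of the frequent-itemset idea).
    elite_masks: (n_particles, n_vars) binary array of Mask vectors."""
    support = elite_masks.mean(axis=0)          # fraction of elites with a 1
    return np.flatnonzero(support >= min_support)

# toy demo: 4 elite masks over 6 decision variables
masks = np.array([[1, 0, 1, 0, 0, 1],
                  [1, 0, 1, 0, 1, 0],
                  [1, 0, 0, 0, 0, 1],
                  [1, 0, 1, 0, 0, 0]])
frequent = mine_frequent_positions(masks, min_support=0.75)
```

The mined positions would then seed new Mask vectors, biasing the swarm toward variable combinations that good particles already share.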
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of the problem implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, in the sparse setting, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify the non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This enhancement dramatically improves the efficacy of frequent pattern mining: selecting candidate sets is no longer based on the number of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the relationships between non-zero variables. This methodology aids in identifying sparse distributions that can expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
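The association-rule step above can be pictured with a standard confidence measure over binary solution masks: how often is variable set B non-zero given that variable set A is non-zero? The sketch below is a generic confidence computation, not this paper's specific mining routine; all names and data are illustrative:

```python
import numpy as np

def rule_confidence(masks, antecedent, consequent):
    """Confidence of the rule 'antecedent non-zero -> consequent non-zero'
    over a population of binary solution masks (rows = solutions)."""
    has_a = np.all(masks[:, antecedent] == 1, axis=1)
    has_ab = has_a & np.all(masks[:, consequent] == 1, axis=1)
    return has_ab.sum() / max(has_a.sum(), 1)   # guard against empty support

# toy population: 4 solutions over 4 variables
masks = np.array([[1, 1, 0, 0],
                  [1, 1, 0, 1],
                  [1, 0, 0, 0],
                  [0, 1, 1, 0]])
conf = rule_confidence(masks, antecedent=[0], consequent=[1])
```

High-confidence rules between non-zero positions suggest variable groups that should be switched on together when generating new candidate solutions.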
Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle filter methods. A shape vector and a correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
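The paper's improved resampling strategy is not spelled out in the abstract; for orientation, the sketch below shows plain systematic resampling, a common low-variance baseline that such strategies typically start from. All names and weights are illustrative:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: one jittered comb of N equally spaced positions
    is swept over the cumulative weights, so each particle is selected a
    number of times roughly proportional to its weight."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(0)
weights = np.array([0.1, 0.2, 0.3, 0.4])   # normalized particle weights
idx = systematic_resample(weights, rng)    # indices of surviving particles
```

A practical improvement over this baseline, in the spirit of the abstract, is to resample only when the effective sample size drops below a threshold, which cuts cost when the initial particle count is huge.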
This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
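The abstract does not name its online identification algorithm; a standard workhorse for this kind of streaming parameter estimation is recursive least squares (RLS), sketched below on a hypothetical two-parameter linear load model. The model, data, and forgetting factor are assumptions for illustration only:

```python
import numpy as np

def rls_step(theta, P, x, y, lam=0.99):
    """One recursive-least-squares update. theta: parameter estimate,
    P: inverse correlation matrix, x: regressor, y: measurement,
    lam: forgetting factor (lam=1 means no forgetting)."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    theta = theta + k * (y - x @ theta)  # correct by prediction error
    P = (P - np.outer(k, Px)) / lam      # update inverse correlation
    return theta, P

# identify y = 2*x0 - 1*x1 from noiseless streaming samples
rng = np.random.default_rng(1)
theta = np.zeros(2)
P = np.eye(2) * 1000.0                   # large P0 = weak prior
true = np.array([2.0, -1.0])
for _ in range(200):
    x = rng.standard_normal(2)
    theta, P = rls_step(theta, P, x, x @ true, lam=1.0)
```

Each new measurement costs O(d²) for d parameters, which is what makes online identification of many loads tractable in real time.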
The large-scale multi-objective optimization algorithm (LSMOA) based on the grouping of decision variables is an advanced method for handling high-dimensional decision variables. However, in practical problems the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization effects; hence, a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among the different decision variables. If the size of a group surpasses a set threshold, that group undergoes weighted overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives involved for the various decision variables. The decision variable with the strongest interaction in the group is identified and set aside, the remaining variables are reclassified into subgroups, and finally that strongest-interacting variable is added to each subgroup. MOEAWOD minimizes the interactivity between different groups and maximizes the interactivity of decision variables within groups, which contributes to convergence and diversity exploration across the groups. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method; compared with the other algorithms, it remains at an advantage.
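The perturbation-based interaction detection described above resembles the well-known differential grouping test: variables i and j interact if perturbing i changes the objective differently depending on whether j was also perturbed. Below is a minimal sketch under that assumption (the weighting and overlapping steps of MOEAWOD are not reproduced; all names are illustrative):

```python
import itertools
import numpy as np

def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    """Differential-grouping style test: i and j interact if the effect of
    perturbing i changes when j is also perturbed."""
    xi, xj, xij = x.copy(), x.copy(), x.copy()
    xi[i] += delta; xj[j] += delta
    xij[i] += delta; xij[j] += delta
    d1 = f(xi) - f(x)      # effect of perturbing i alone
    d2 = f(xij) - f(xj)    # effect of perturbing i after perturbing j
    return abs(d1 - d2) > eps

def group_variables(f, n, delta=1.0):
    """Group variables into connected components of the interaction graph,
    using a simple union-find."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    x = np.zeros(n)
    for i, j in itertools.combinations(range(n), 2):
        if interacts(f, x, i, j, delta):
            parent[find(i)] = find(j)
    groups = {}
    for v in range(n):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# separable structure: x0*x1 interact, x2 is independent
f = lambda x: x[0] * x[1] + x[2] ** 2
groups = group_variables(f, 3)
```

The pairwise test costs O(n²) evaluations, which is exactly why overly large groups (and the overlapping-regrouping remedy above) matter in practice.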
With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus measurement process, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the network into subgroups. The weight of each decision maker and each subgroup is computed from comprehensive network weights and trust weights. In the consensus improvement process, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement. Based on the trust relationships among DMs, preferences are modified and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows the subgroups to be reconstructed and updated during the adjustment process but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method, and a comparison with previous studies highlights its superiority in solving the LGDM problem.
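One concrete ingredient above is combining network weights with trust weights into a single weight per DM. The paper's "comprehensive network weights" are not specified in the abstract, so the sketch below uses normalized degree centrality as a stand-in assumption; the adjacency matrix, trust scores, and mixing parameter are all illustrative:

```python
import numpy as np

def dm_weights(adj, trust, alpha=0.5):
    """Blend a network-based weight (here: normalized degree centrality,
    an assumption) with a trust weight for each decision maker."""
    degree = adj.sum(axis=1)
    net_w = degree / degree.sum()
    trust_w = np.asarray(trust, float)
    trust_w = trust_w / trust_w.sum()
    w = alpha * net_w + (1 - alpha) * trust_w
    return w / w.sum()

# 4 DMs; DM2 is the best-connected and also highly trusted
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]])
w = dm_weights(adj, trust=[0.9, 0.6, 0.8, 0.2], alpha=0.5)
```

Subgroup weights can then be formed by summing member weights within each Louvain community.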
We introduce a factorized Smith method (FSM) for solving large-scale high-ranked J-Stein equations within the banded-plus-low-rank structure framework. To effectively reduce both computational complexity and storage requirements, we develop techniques including deflation and shift, and partial truncation and compression, and we redesign the residual computation and termination condition. Numerical examples demonstrate that the FSM outperforms the Smith method implemented with a hierarchical HODLR-structured toolkit in terms of CPU time.
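For orientation, the plain (unfactorized) Smith iteration that the FSM builds on can be written in a few lines for the standard Stein equation X = A X Aᵀ + Q; it converges when the spectral radius of A is below 1. This is a reference sketch, not the paper's J-Stein variant or its banded-plus-low-rank factorization:

```python
import numpy as np

def smith_stein(A, Q, iters=20):
    """Squared Smith iteration for X = A X A^T + Q: each pass doubles the
    number of terms of the series X = sum_i A^i Q (A^T)^i."""
    X = Q.copy()
    M = A.copy()
    for _ in range(iters):
        X = X + M @ X @ M.T
        M = M @ M
    return X

A = np.array([[0.5, 0.1],
              [0.0, 0.3]])          # spectral radius < 1
Q = np.eye(2)
X = smith_stein(A, Q)
residual = np.linalg.norm(X - A @ X @ A.T - Q)
```

The factorized variant keeps X in low-rank factored form instead of as a dense matrix, which is what makes the large-scale, high-ranked setting tractable.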
Background: A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods: A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results: Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long time spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures on augmented reality devices, particularly low-performance devices, was also investigated. Conclusions: The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure of and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
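The tone-mapping step above is, at its core, histogram equalization: map each attribute value through the data's own empirical CDF so that the output occupies [0, 1] roughly uniformly, which spreads visual contrast across the whole value range. A minimal version (the paper's statistical variant may differ in detail; names and data are illustrative):

```python
import numpy as np

def equalize(values, bins=256):
    """Histogram-equalization tone mapping: send each value to the empirical
    CDF of its own bin, yielding an approximately uniform output on [0, 1]."""
    hist, edges = np.histogram(values, bins=bins)
    cdf = np.cumsum(hist).astype(float)
    cdf /= cdf[-1]
    # digitize against the interior edges -> bin index in 0..bins-1
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return cdf[idx]

rng = np.random.default_rng(2)
data = rng.exponential(scale=3.0, size=10_000)   # skewed attribute values
mapped = equalize(data)
```

The mapped values can then be fed to any fixed color ramp without the usual problem of skewed data collapsing into one color band.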
Eye diagnosis is a method for inspecting systemic diseases and syndromes by observing the eyes. With the development of intelligent diagnosis in traditional Chinese medicine (TCM), artificial intelligence (AI) can improve the accuracy and efficiency of eye diagnosis. However, research on intelligent eye diagnosis still faces many challenges, including the lack of standardized and precisely labeled data, of multi-modal information analysis, and of AI models for syndrome differentiation. The widespread application of AI models in medicine provides new insights and opportunities for research on intelligent eye diagnosis. This study elaborates on three key technologies of AI models in the intelligent application of TCM eye diagnosis and explores their implications for the field. First, a database for eye diagnosis was established based on self-supervised learning, to address the lack of standardized and precisely labeled data. Next, cross-modal understanding and generation with deep neural network models addresses the lack of multi-modal information analysis. Last, data-driven models for eye diagnosis tackle the absence of syndrome differentiation models. In summary, research on intelligent eye diagnosis has great potential to benefit from the surge of AI model applications.
The deformation and fracture evolution mechanisms of the strata overlying mines mined using sublevel caving were studied via numerical simulations. Moreover, an expression for the normal force acting on the side face of a steeply dipping superimposed cantilever beam in the surrounding rock was deduced based on limit equilibrium theory. The results show the following: (1) surface displacement above metal mines with steeply dipping discontinuities shows significant step characteristics, and (2) the behavior of the strata as they fail exhibits superimposition characteristics. Generally, failure first occurs in certain superimposed strata slightly far from the goaf. Subsequently, with the continued downward excavation of the orebody, the superimposed strata become damaged both upwards away from and downwards toward the goaf. This process continues until the deep part of the steeply dipping superimposed strata forms a large-scale deep fracture plane that connects with the goaf. The deep fracture plane generally makes an angle of 12°-20° with the normal to the steeply dipping discontinuities. The constant outward transfer of strata movement, caused by the constant outward failure of the superimposed strata, makes the scope of the strata movement in these mines larger than expected. The strata in metal mines with steeply dipping discontinuities mainly show flexural toppling failure. However, the steeply dipping structural strata near the goaf mainly exhibit shear slipping failure, in which case the mechanical model used to describe them can be simplified by treating them as steeply dipping superimposed cantilever beams. By taking the steeply dipping superimposed cantilever beam that first experiences failure as the key stratum, the failure scope of the strata (and criteria for the stability of metal mines with steeply dipping discontinuities mined using sublevel caving) can be obtained via iterative computations from the key stratum, moving downward toward and upward away from the goaf.
The financial aspects of large-scale engineering construction projects profoundly influence their success. Strengthening cost control and establishing a scientific financial evaluation system can enhance a project's economic benefits, minimize unnecessary costs, and provide decision-makers with a robust financial foundation. Additionally, implementing an effective cash flow control mechanism and conducting a comprehensive assessment of potential project risks can ensure financial stability and mitigate the risk of fund shortages. Developing a practical and feasible fundraising plan, along with stringent fund management practices, can prevent fund wastage and optimize fund utilization efficiency. These measures not only facilitate smooth project progression and improve project management efficiency but also enhance the project's economic and social outcomes.
The global energy transition is a widespread phenomenon that requires international exchange of experiences and mutual learning. Germany's success in the first phase of its energy transition can be attributed to its adoption of smart energy technology and its implementation of electricity futures and spot marketization, which enabled multiple spatial-temporal energy complementarities and overall grid balance through energy conversion and reconversion technologies. While China can draw on Germany's experience to inform its own energy transition, its 11-fold higher annual electricity consumption requires a distinct approach. We recommend a clean energy system based on smart sector coupling (ENSYSCO) as a suitable pathway for achieving sustainable energy in China, given that renewable energy is expected to supply 85% of China's energy production by 2060, requiring significant future electricity storage capacity. Nonetheless, renewable energy storage remains a significant challenge. We propose four large-scale underground energy storage methods based on ENSYSCO to address this challenge while considering China's national conditions. These proposals have culminated in pilot projects for large-scale underground energy storage in China, which we believe is a necessary choice for achieving carbon neutrality in China and enabling efficient and safe grid integration of renewable energy within the framework of ENSYSCO.
This paper investigates wireless communication with a novel antenna array architecture, termed the modular extremely large-scale array (XL-array), in which an extremely large number of array elements are regularly mounted on a shared platform with horizontally and vertically interlaced modules. Each module consists of a moderate, flexible number of array elements with an inter-element distance typically on the order of the signal wavelength, while different modules are separated by a relatively large inter-module distance for convenience of practical deployment. By accurately modelling the signal amplitudes and phases, as well as the projected apertures across all modular elements, we analyse the near-field signal-to-noise ratio (SNR) performance of modular XL-array communications. Based on non-uniform spherical wave (NUSW) modelling, a closed-form SNR expression is derived in terms of key system parameters, such as the overall modular array size, the distances between adjacent modules along all dimensions, and the user's three-dimensional (3D) location. In addition, as the number of modules in different dimensions grows without bound, the asymptotic SNR scaling laws are revealed. Furthermore, we show that our proposed near-field modelling and performance analysis include the results for existing array architectures and models as special cases, e.g., the collocated XL-array architecture, the uniform plane wave (UPW) based far-field modelling, and the one-dimensional modular extremely large-scale uniform linear array (XL-ULA). Extensive simulation results are presented to validate our findings.
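The near-field intuition above can be checked numerically with a much-simplified model: under matched filtering, per-element contributions add in power, so SNR scales like the sum of 1/r² over all elements, with each element's distance to the user computed individually (spherical wave) rather than shared (plane wave). The sketch below drops the projected-aperture and phase terms of the paper's NUSW model; geometry and units are illustrative:

```python
import numpy as np

def nusw_snr(element_xy, user_xyz, gamma=1.0):
    """Toy near-field SNR: sum of per-element inverse-square power terms,
    each with its own element-to-user distance. gamma lumps transmit power
    and noise; projected apertures are ignored for simplicity."""
    ex, ey = element_xy[:, 0], element_xy[:, 1]
    ux, uy, uz = user_xyz
    r2 = (ex - ux) ** 2 + (ey - uy) ** 2 + uz ** 2
    return gamma * np.sum(1.0 / r2)

# modular layout: two 4-element modules on the y-axis with a large gap
module = np.arange(4) * 0.05                 # intra-module spacing ~ wavelength
y = np.concatenate([module, module + 2.0])   # inter-module distance 2 m
elements = np.stack([np.zeros_like(y), y], axis=1)
snr_near = nusw_snr(elements, user_xyz=(0.0, 1.0, 5.0))
snr_far = nusw_snr(elements, user_xyz=(0.0, 1.0, 50.0))
```

Even this stripped-down model reproduces the qualitative point that per-element distances, not a single shared distance, govern the SNR once the user is in the near field of the overall aperture.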
Considering the large-diameter effect of piles, the influence of different pile-soil analysis methods on the design of monopile foundations for offshore wind turbines has become an urgent problem to be solved. Three different pile-soil models were used to study a large 10 MW monopile wind turbine. By modeling the three approaches in the SACS software, this paper analyzes the motion response of the overall structure under wind and wave conditions. Under the given working conditions, it is concluded that under wind alone the rigid connection method yields the smallest mean tower-top x-displacement, and under waves alone it yields the smallest standard deviation. The results obtained by the p-y curve method are the most conservative.
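For readers unfamiliar with the p-y curve method named above: it replaces the soil along the pile with nonlinear springs whose resistance p grows with lateral deflection y and saturates at an ultimate value. A commonly used API-style form for sand is a hyperbolic tangent; the sketch below uses that form with hypothetical parameter values (this is an illustration of the method's idea, not the paper's calibrated curves):

```python
import numpy as np

def py_sand(y, pu, k, depth, A=0.9):
    """API-style p-y curve for sand: soil resistance p (per unit length)
    versus lateral deflection y, saturating at A*pu.
    pu: ultimate resistance, k: initial modulus of subgrade reaction."""
    return A * pu * np.tanh(k * depth * y / (A * pu))

# hypothetical inputs: pu in kN/m, k in kN/m^3, depth in m, y in m
p_small = py_sand(0.001, pu=100.0, k=5000.0, depth=2.0)
p_large = py_sand(0.005, pu=100.0, k=5000.0, depth=2.0)
```

The initial slope k·depth makes shallow springs softer than deep ones, which is one reason p-y results tend to be conservative for very large-diameter monopiles.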
CO₂ electroreduction (CO₂ER) to high value-added chemicals is considered a promising technology for achieving sustainable carbon neutralization. Thanks to progressive research in recent years aimed at the design and understanding of catalytic materials and electrolyte systems, CO₂ER performance (current density, selectivity, stability, CO₂ conversion, etc.) has continually increased. Unfortunately, relatively little attention has been paid to large-scale CO₂ electrolyzers, which stand as one obstacle, alongside series-parallel integration, to the practical application of this infant technology. In this review, the latest progress on the structures of low-temperature CO₂ electrolyzers and on scale-up studies is systematically overviewed. The influence of the CO₂ electrolyzer configuration, including the flow channel design, the gas diffusion electrode (GDE), and the ion exchange membrane (IEM), on CO₂ER performance is further discussed. This review could provide inspiration for the design of large-scale CO₂ electrolyzers and thereby accelerate the industrial application of CO₂ER technology.
In order to improve the ductility of commercial WE43 alloy and reduce its cost, a Mg-3Y-2Gd-1Nd-0.4Zr alloy with a low rare-earth content was developed and prepared by sand casting with a differential pressure casting system. Its microstructure, mechanical properties, and fracture behavior in the as-cast, solution-treated, and as-aged states were evaluated. The aged alloy exhibits excellent comprehensive mechanical properties owing to the fine, dense, plate-shaped β′ precipitates formed on prismatic habit planes during aging at 200 °C for 192 h following solution treatment at 500 °C for 24 h. Its ultimate tensile strength, yield strength, and elongation reach 319±10 MPa, 202±2 MPa, and 8.7±0.3% at ambient temperature, and 230±4 MPa, 155±1 MPa, and 16.0±0.5% at 250 °C. The fracture mode of the as-aged alloy transitions from cleavage at room temperature to quasi-cleavage and ductile fracture at a test temperature of 300 °C. The properties of large-scale components fabricated using the developed Mg-3Y-2Gd-1Nd-0.4Zr alloy are better than those of commercial WE43 alloy, suggesting that the newly developed alloy is a good candidate for fabricating large, complex, thin-walled components.
Traditional models for semantic segmentation of point clouds primarily target smaller scales. However, in real-world applications, point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption than other sampling methods. Nevertheless, random sampling can result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient architecture designed for directly processing large-scale point clouds. At the core of this network is the combination of random sampling with a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position encoding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position encoding and cross-fusion self-attention pooling, thereby reducing the impact of the information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv. These findings underscore the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.
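The random sampling that the pipeline above is built around is trivial to implement, which is precisely its appeal: O(1) work per kept point and no neighborhood queries, at the cost of possibly dropping informative points (the loss the CFSA module is designed to compensate for). A minimal sketch with illustrative shapes:

```python
import numpy as np

def random_sample(points, features, n_out, rng):
    """Uniform random subsampling of a point cloud and its per-point
    features, as used by large-scale pipelines such as RandLA-Net."""
    idx = rng.choice(points.shape[0], size=n_out, replace=False)
    return points[idx], features[idx]

rng = np.random.default_rng(3)
pts = rng.standard_normal((100_000, 3)).astype(np.float32)   # xyz
feats = rng.standard_normal((100_000, 8)).astype(np.float32) # per-point features
sub_pts, sub_feats = random_sample(pts, feats, n_out=25_000, rng=rng)
```

By contrast, farthest-point sampling needs O(N) distance updates per kept point, which is what makes it impractical at the million-point scale.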
We examine the possibility of applying the baryonic acoustic oscillation reconstruction method to improve the neutrino mass Σm_ν constraint. Thanks to the Gaussianization achieved by the process, we demonstrate that the reconstruction algorithm can improve the measurement accuracy by roughly a factor of two. On the other hand, the reconstruction process itself becomes a source of systematic error. While the algorithm is supposed to produce the displacement field from a density distribution, various approximations cause the reconstructed output to deviate on intermediate scales. Nevertheless, it is still possible to benefit from this Gaussianized field, provided we carefully calibrate the "transfer function" between the reconstruction output and the theoretical displacement divergence using simulations. The limitation of this approach is then set by the numerical stability of this transfer function. With an ensemble of simulations, we show that this systematic error can become comparable to the statistical uncertainties for a DESI-like survey and can be safely neglected for other, less ambitious surveys.
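The "transfer function" calibration above is conventionally estimated mode by mode as the ratio of the cross-power between the reconstructed and theoretical fields to the auto-power of the theoretical field. A 1D toy version (the paper works with 3D fields binned in k; names and data here are illustrative):

```python
import numpy as np

def transfer_function(recon, theory):
    """Per-mode transfer function T(k) = <recon(k) theory*(k)> / <|theory(k)|^2>
    for two real 1D fields, via the real FFT."""
    rk = np.fft.rfft(recon)
    tk = np.fft.rfft(theory)
    return (rk * np.conj(tk)).real / np.abs(tk) ** 2

rng = np.random.default_rng(4)
theory = rng.standard_normal(1024)   # stand-in for displacement divergence
recon = 0.8 * theory                 # toy reconstruction damping every mode
T = transfer_function(recon, theory)
```

In practice T(k) is averaged over an ensemble of simulations, and its run-to-run scatter is exactly the numerical-stability limitation the abstract refers to.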
The strict, high-standard requirements for the safety and stability of major engineering systems make large-scale finite element modal analysis a tough challenge. At the same time, systematic analysis of the entire large structure of these engineering systems is extremely meaningful in practice. This article proposes a multilevel hierarchical parallel algorithm for large-scale finite element modal analysis to reduce the loss of parallel computational efficiency when heterogeneous multicore distributed-storage computers are used. Based on two-level partitioning and four transformation strategies, the proposed algorithm not only improves the memory access rate through sparsely distributed storage of large amounts of data but also reduces the solution time by reducing the scale of the generalized characteristic equations (GCEs). Moreover, a multilevel hierarchical parallelization approach is introduced during the computation to separate the communication between nodes, within nodes, between heterogeneous core groups (HCGs), and inside HCGs, by mapping computing tasks to the various hardware layers. This method can efficiently achieve load balancing at the different layers and significantly improve the communication rate through hierarchical communication. It can therefore enhance the efficiency of parallel large-scale finite element modal analysis by fully exploiting the architectural characteristics of heterogeneous multicore clusters. Finally, typical numerical experiments were used to validate the correctness and efficiency of the proposed method. A parallel modal analysis of a cross-river tunnel with over ten million degrees of freedom (DOFs) was then performed, using ten thousand core processors, to verify the feasibility of the algorithm.
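The two-level partitioning idea above (split work across nodes, then again across the compute units inside each node) can be sketched in a few lines. This toy version assigns contiguous DOF ranges and balances only by count; the paper's partitioner works on the mesh/matrix structure, so everything here is illustrative:

```python
def balanced_ranges(start, stop, parts):
    """Split [start, stop) into `parts` contiguous ranges whose sizes
    differ by at most one."""
    n = stop - start
    q, r = divmod(n, parts)
    out, s = [], start
    for i in range(parts):
        e = s + q + (1 if i < r else 0)
        out.append((s, e))
        s = e
    return out

def two_level_partition(n_dofs, n_nodes, cores_per_node):
    """Level 1: split DOFs across nodes; level 2: split each node's share
    across its cores, giving load balance at both hardware layers."""
    return [balanced_ranges(a, b, cores_per_node)
            for a, b in balanced_ranges(0, n_dofs, n_nodes)]

plan = two_level_partition(10, n_nodes=2, cores_per_node=2)
```

Keeping each core's range contiguous mirrors the hierarchical communication pattern: cores within a node exchange only boundary data, and only nodes exchange across the network.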
Funding: Supported by the Scientific Research Project of Xiang Jiang Lab (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (ZC23112101-10), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJ-Z03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), in which most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions in order to optimize the binary variable Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations in the data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that yield better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
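The frequent-itemset idea in the TELSO abstract can be illustrated with a small sketch. This is not the authors' code, only a minimal reading of the idea under stated assumptions: treat each elite particle's binary Mask as a transaction, mine the positions that are frequently non-zero across elites, and seed a new Mask from them. The function names, data, and support threshold are all hypothetical.

```python
# Illustrative sketch (not the paper's algorithm): mine frequently co-occurring
# non-zero positions from the masks of the best particles, then build a new
# mask from those frequent positions.
from collections import Counter

def mine_frequent_positions(masks, min_support):
    """Return positions whose non-zero count across masks >= min_support."""
    counts = Counter()
    for mask in masks:
        counts.update(i for i, bit in enumerate(mask) if bit)
    return {i for i, c in counts.items() if c >= min_support}

def build_mask(n_vars, frequent):
    """Binary mask with ones only at the frequently successful positions."""
    return [1 if i in frequent else 0 for i in range(n_vars)]

elite_masks = [  # masks of particles with the best objective values (toy data)
    [1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1],
    [1, 1, 1, 0, 0],
]
freq = mine_frequent_positions(elite_masks, min_support=2)
new_mask = build_mask(5, freq)
print(new_mask)  # → [1, 0, 1, 0, 0]; positions 0 and 2 appear in all elites
```

In the actual optimizer this mining step would be interleaved with the swarm update, so that promising non-zero positions guide the Mask layer while Dec is optimized separately.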
Funding: Supported by the Open Project of Xiangjiang Laboratory (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28, ZK21-07), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (CX20230074), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJZ03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of these problems means a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, because the problems are sparse, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify the non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This enhancement dramatically improves the efficacy of frequent pattern mining: candidate sets are selected not by the quantity of non-zero variables they contain but by a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
Abstract: Accurate positioning is one of the essential requirements for numerous applications of remote sensing data, especially in the event of a noisy or unreliable satellite signal. Toward this end, we present a novel framework for aircraft geo-localization over a large range that requires only a downward-facing monocular camera, an altimeter, a compass, and an open-source Vector Map (VMAP). The algorithm combines matching and particle-filter methods. A shape vector and the correlation between two building contour vectors are defined, and a coarse-to-fine building vector matching (CFBVM) method is proposed for the matching stage, in which the original matching results are described by a Gaussian mixture model (GMM). Subsequently, an improved resampling strategy is designed to reduce the computational expense of a huge number of initial particles, and a credibility indicator is designed to avoid localization mistakes in the particle-filter stage. An experimental evaluation of the approach based on flight data is provided. On a flight at a height of 0.2 km over a flight distance of 2 km, the aircraft is geo-localized in a reference map of 11,025 km² using 0.09 km² aerial images without any prior information. The absolute localization error is less than 10 m.
Funding: Supported by the State Grid Science & Technology Project (5100-202114296A-0-0-00).
Abstract: This article introduces the concept of load aggregation, which involves a comprehensive analysis of loads to acquire their external characteristics for the purpose of modeling and analyzing power systems. Online identification is a computer-based approach to data collection, processing, and system identification, commonly used for adaptive control and prediction. This paper proposes a method for dynamically aggregating large-scale adjustable loads to support high proportions of new energy integration, aiming to study the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction methods. The experiment selected 300 central air conditioners as the research subject and analyzed their regulation characteristics, economic efficiency, and comfort. The experimental results show that as the adjustment time of the air conditioners increases from 5 minutes to 35 minutes, the stable adjustment quantity during the adjustment period decreases from 28.46 to 3.57, indicating that air-conditioning loads can be controlled over a long period and have better adjustment effects in the short term. Overall, the experimental results demonstrate that analyzing the aggregation characteristics of regional large-scale adjustable loads using online identification techniques and feature extraction algorithms is effective.
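The abstract does not specify which online identification algorithm is used. Recursive least squares (RLS) over a first-order load model is one common choice for this kind of problem, sketched below purely as an illustration: the model y[k] = a·y[k-1] + b·u[k], the forgetting factor, and all names are my assumptions, not the paper's method.

```python
# Generic recursive least-squares (RLS) sketch for online identification of a
# first-order aggregate-load model y[k] = a*y[k-1] + b*u[k].
def rls_identify(ys, us, lam=0.99):
    theta = [0.0, 0.0]                      # estimates of [a, b]
    P = [[1e6, 0.0], [0.0, 1e6]]            # large initial covariance
    for k in range(1, len(ys)):
        phi = [ys[k - 1], us[k]]            # regressor at step k
        # gain K = P*phi / (lam + phi' P phi)
        Pphi = [P[0][0]*phi[0] + P[0][1]*phi[1],
                P[1][0]*phi[0] + P[1][1]*phi[1]]
        denom = lam + phi[0]*Pphi[0] + phi[1]*Pphi[1]
        K = [Pphi[0]/denom, Pphi[1]/denom]
        err = ys[k] - (theta[0]*phi[0] + theta[1]*phi[1])
        theta = [theta[0] + K[0]*err, theta[1] + K[1]*err]
        # covariance update: P = (P - K*phi'P) / lam
        P = [[(P[i][j] - K[i]*Pphi[j])/lam for j in range(2)] for i in range(2)]
    return theta

# Synthetic data from a known system a = 0.8, b = 0.5
ys, us = [0.0], [0.0]
for k in range(1, 200):
    u = 1.0 if k % 20 < 10 else 0.0         # square-wave control input
    us.append(u)
    ys.append(0.8*ys[-1] + 0.5*u)
a_hat, b_hat = rls_identify(ys, us)
print(round(a_hat, 2), round(b_hat, 2))  # → 0.8 0.5
```

With noiseless data the estimates recover the true parameters; on measured load data the forgetting factor lets the estimator track slowly drifting aggregate characteristics.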
Funding: Supported in part by the Central Government Guides Local Science and Technology Development Funds (Grant No. YDZJSX2021A038), in part by the National Natural Science Foundation of China (Grant No. 61806138), and in part by the China University Industry-University-Research Collaborative Innovation Fund (Future Network Innovation Research and Application Project) (Grant 2021FNA04014).
Abstract: The large-scale multi-objective optimization algorithm (LSMOA), based on the grouping of decision variables, is an advanced method for handling high-dimensional decision variables. However, in practical problems the interactions among decision variables are intricate, leading to large group sizes and suboptimal optimization effects; hence, a large-scale multi-objective optimization algorithm based on weighted overlapping grouping of decision variables (MOEAWOD) is proposed in this paper. Initially, the decision variables are perturbed and categorized into convergence and diversity variables; subsequently, the convergence variables are subdivided into groups based on the interactions among different decision variables. If the size of a group surpasses the set threshold, that group undergoes weighted overlapping grouping. Specifically, the interaction strength is evaluated based on the interaction frequency and the number of objectives involved among the decision variables. The decision variable with the highest interaction in the group is identified and set aside, and the remaining variables are then reclassified into subgroups. Finally, the decision variable with the strongest interaction is added to each subgroup. MOEAWOD minimizes the interactivity between different groups and maximizes the interactivity of decision variables within groups, which contributes to optimizing the directions of convergence and diversity exploration in the different groups. MOEAWOD was tested on 18 benchmark large-scale optimization problems, and the experimental results demonstrate the effectiveness of our method; compared with the other algorithms, it retains an advantage.
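Grouping methods like the one in this abstract rest on detecting whether two decision variables interact. The sketch below shows the standard differential-grouping style test that such methods build on: x_i and x_j interact if moving x_j changes the effect that moving x_i has on the objective. This is a generic illustration, not MOEAWOD's exact frequency-and-objective weighting scheme.

```python
# Generic pairwise interaction test behind decision-variable grouping:
# compare the effect of perturbing x_i before and after perturbing x_j.
def interacts(f, x, i, j, delta=1.0, eps=1e-9):
    base = f(x)
    xi = x[:];  xi[i] += delta          # move x_i only
    xj = x[:];  xj[j] += delta          # move x_j only
    xij = xi[:]; xij[j] += delta        # move both
    d1 = f(xi) - base                   # effect of moving x_i alone
    d2 = f(xij) - f(xj)                 # effect of moving x_i after x_j moved
    return abs(d1 - d2) > eps           # differing effects => interaction

# Toy objective: coupled term x0*x1 plus a separable term x2**2
f = lambda x: x[0]*x[1] + x[2]**2
x0 = [0.0, 0.0, 0.0]
print(interacts(f, x0, 0, 1))  # True: x0 and x1 are coupled
print(interacts(f, x0, 0, 2))  # False: x0 and x2 are separable
```

Running this test over all pairs yields the interaction structure from which groups (and, in MOEAWOD, overlapping subgroups) are assembled.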
Funding: The work was supported by the Humanities and Social Sciences Fund of the Ministry of Education (No. 22YJA630119), the National Natural Science Foundation of China (No. 71971051), and the Natural Science Foundation of Hebei Province (No. G2021501004).
Abstract: With the development of big data and social computing, large-scale group decision making (LGDM) is now merging with social networks. Using social network analysis (SNA), this study proposes an LGDM consensus model that considers the trust relationships among decision makers (DMs). In the consensus-measurement process, the social network is constructed according to the social relationships among DMs, and the Louvain method is introduced to partition the network into subgroups. The weight of each decision maker and each subgroup is computed from comprehensive network weights and trust weights. In the consensus-improvement process, a feedback mechanism with four identification rules and two direction rules is designed to guide the improvement. Based on the trust relationships among DMs, preferences are modified and the corresponding social network is updated to accelerate consensus. Compared with previous research, the proposed model not only allows subgroups to be reconstructed and updated during the adjustment process but also improves the accuracy of the adjustment through the feedback mechanism. Finally, an example analysis is conducted to verify the effectiveness and flexibility of the proposed method, and comparison with previous studies highlights its superiority in solving the LGDM problem.
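The weight-combination and consensus-measurement steps described above can be sketched numerically. The snippet below (Louvain clustering omitted) is an assumed, simplified reading, not the paper's exact model: each DM's weight mixes a network term and a trust term, and the consensus level is one minus the weighted mean distance to the group preference. The 0.5/0.5 mix and all values are hypothetical.

```python
# Simplified trust-weighted consensus measurement for LGDM (illustrative only).
def consensus(preferences, centrality, trust, mix=0.5):
    # combined weight per DM, normalized to sum to 1
    raw = [mix*c + (1 - mix)*t for c, t in zip(centrality, trust)]
    s = sum(raw)
    w = [r/s for r in raw]
    # weighted group preference
    group = sum(wi*p for wi, p in zip(w, preferences))
    # consensus level in [0, 1] for preferences scaled to [0, 1]
    return 1 - sum(wi*abs(p - group) for wi, p in zip(w, preferences))

prefs = [0.8, 0.7, 0.75, 0.2]   # DM 4 is the outlier
cent  = [0.3, 0.3, 0.3, 0.1]    # network centralities (toy values)
trust = [0.9, 0.8, 0.85, 0.2]   # trust received from peers (toy values)
level = consensus(prefs, cent, trust)
print(round(level, 3))
```

In a feedback mechanism, DMs whose distance to the group preference drags this level below a threshold would be identified and guided to adjust their preferences toward trusted neighbors.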
Funding: Supported partly by the NSF of China (Grant No. 11801163), the NSF of Hunan Province (Grant Nos. 2021JJ50032, 2023JJ50164 and 2023JJ50165), and the Degree & Postgraduate Reform Project of Hunan University of Technology and Hunan Province (Grant Nos. JGYB23009 and 2024JGYB210).
Abstract: We introduce a factorized Smith method (FSM) for solving large-scale high-ranked J-Stein equations within the banded-plus-low-rank structure framework. To effectively reduce both computational complexity and storage requirements, we develop techniques including deflation and shift, partial truncation and compression, and a redesign of the residual computation and termination condition. Numerical examples demonstrate that the FSM outperforms the Smith method implemented with a hierarchical HODLR structured toolkit in terms of CPU time.
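For readers unfamiliar with the underlying recurrence, the basic (squared) Smith iteration is easy to show in the scalar case of the Stein equation x − a·x·a = q with |a| < 1, whose solution is the series Σ a²ᵏ·q = q/(1 − a²). The FSM in the abstract adds factorization, deflation/shift, and truncation for large banded-plus-low-rank problems; the sketch below shows only the basic recurrence.

```python
# Squared Smith iteration for the scalar Stein equation x - a*x*a = q, |a| < 1.
# Each pass doubles the number of series terms: x_{k+1} = x_k + A_k x_k A_k,
# A_{k+1} = A_k^2 (here scalars stand in for the matrix factors).
def smith(a, q, tol=1e-14, max_iter=60):
    x, ak = q, a
    for _ in range(max_iter):
        step = ak * x * ak      # A_k X_k A_k^T term (scalar case)
        x += step
        ak *= ak                # square the iterate: doubles the series length
        if abs(step) < tol:
            break
    return x

x = smith(0.5, 1.0)
print(x)  # converges to q/(1 - a*a) = 4/3
```

In the matrix setting each iterate is kept in factored low-rank form, which is where the partial truncation and compression steps mentioned in the abstract become essential.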
Abstract: Background A task assigned to space exploration satellites involves detecting the physical environment within a certain region of space. However, space detection data are complex and abstract, and are not conducive to researchers' visual perception of the evolution and interaction of events in the space environment. Methods A time-series dynamic data sampling method for large-scale space was proposed to sample detection data in space and time, and the correspondences between data location features and other attribute features were established. A tone-mapping method based on statistical histogram equalization was proposed and applied to the final attribute feature data. The visualization process was optimized for rendering by merging materials, reducing the number of patches, and performing other operations. Results Sampling, feature extraction, and uniform visualization were achieved for detection data of complex types, long duration spans, and uneven spatial distributions. The real-time visualization of large-scale spatial structures using augmented reality devices, particularly low-performance devices, was also investigated. Conclusions The proposed visualization system can reconstruct the three-dimensional structure of a large-scale space, express the structure of and changes in the spatial environment using augmented reality, and assist in intuitively discovering spatial environmental events and evolutionary rules.
Funding: National Natural Science Foundation of China (82274265 and 82274588) and the Hunan University of Traditional Chinese Medicine Research Unveiled Marshal Programs (2022XJJB003).
Abstract: Eye diagnosis is a method for inspecting systemic diseases and syndromes by observing the eyes. With the development of intelligent diagnosis in traditional Chinese medicine (TCM), artificial intelligence (AI) can improve the accuracy and efficiency of eye diagnosis. However, research on intelligent eye diagnosis still faces many challenges, including the lack of standardized and precisely labeled data, of multi-modal information analysis, and of AI models for syndrome differentiation. The widespread application of AI models in medicine provides new insights and opportunities for research on intelligent eye diagnosis. This study elaborates on three key technologies of AI models in the intelligent application of TCM eye diagnosis and explores their implications. First, a database for eye diagnosis was established based on self-supervised learning to address the lack of standardized and precisely labeled data. Next, cross-modal understanding and generation with deep neural network models were developed to address the lack of multi-modal information analysis. Last, data-driven models for eye diagnosis were built to tackle the absence of syndrome-differentiation models. In summary, research on intelligent eye diagnosis has great potential for application amid the surge of AI model development.
Funding: Financial support for this work was provided by the Youth Fund Program of the National Natural Science Foundation of China (No. 42002292), the General Program of the National Natural Science Foundation of China (No. 42377175), and the General Program of the Hubei Provincial Natural Science Foundation, China (No. 2023AFB631).
Abstract: The deformation and fracture evolution mechanisms of the strata overlying mines mined using sublevel caving were studied via numerical simulations. Moreover, an expression for the normal force acting on the side face of a steeply dipping superimposed cantilever beam in the surrounding rock was deduced based on limit equilibrium theory. The results show the following: (1) surface displacement above metal mines with steeply dipping discontinuities shows significant step characteristics, and (2) the behavior of the strata as they fail exhibits superimposition characteristics. Generally, failure first occurs in certain superimposed strata slightly far from the goaf. Subsequently, with the constant downward excavation of the orebody, the superimposed strata become damaged both upwards away from and downwards toward the goaf. This process continues until the deep part of the steeply dipping superimposed strata forms a large-scale deep fracture plane that connects with the goaf. The deep fracture plane generally makes an angle of 12°-20° with the normal to the steeply dipping discontinuities. The constant outward transfer of strata movement, caused by the constant outward failure of the superimposed strata, makes the scope of the strata movement in these mines larger than expected. The strata in metal mines with steeply dipping discontinuities mainly show flexural toppling failure. However, the steeply dipping structural strata near the goaf mainly exhibit shear slipping failure, in which case the mechanical model used to describe them can be simplified by treating them as steeply dipping superimposed cantilever beams. By taking the steeply dipping superimposed cantilever beam that first experiences failure as the key stratum, the failure scope of the strata (and criteria for the stability of metal mines with steeply dipping discontinuities mined using sublevel caving) can be obtained via iterative computations from the key stratum, moving downward toward and upward away from the goaf.
Abstract: The financial aspects of large-scale engineering construction projects profoundly influence their success. Strengthening cost control and establishing a scientific financial evaluation system can enhance the project's economic benefits, minimize unnecessary costs, and provide decision-makers with a robust financial foundation. Additionally, implementing an effective cash-flow control mechanism and conducting a comprehensive assessment of potential project risks can ensure financial stability and mitigate the risk of fund shortages. Developing a practical and feasible fundraising plan, along with stringent fund management practices, can prevent fund wastage and optimize fund utilization efficiency. These measures not only facilitate smooth project progression and improve project management efficiency but also enhance the project's economic and social outcomes.
Funding: Henan Institute for Chinese Development Strategy of Engineering & Technology (No. 2022HENZDA02) and the Science & Technology Department of Sichuan Province (No. 2021YFH0010).
Abstract: The global energy transition is a widespread phenomenon that requires international exchange of experiences and mutual learning. Germany's success in the first phase of its energy transition can be attributed to its adoption of smart energy technology and its implementation of electricity futures and spot marketization, which enabled multiple energy spatial-temporal complementarities and overall grid balance through energy conversion and reconversion technologies. While China can draw on Germany's experience to inform its own energy transition efforts, its 11-fold higher annual electricity consumption requires a distinct approach. We recommend a clean energy system based on smart sector coupling (ENSYSCO) as a suitable pathway for achieving sustainable energy in China, given that renewable energy is expected to supply 85% of China's energy production by 2060, requiring significant future electricity storage capacity. Nonetheless, renewable energy storage remains a significant challenge. We propose four large-scale underground energy storage methods based on ENSYSCO to address this challenge while considering China's national conditions. These proposals have culminated in pilot projects for large-scale underground energy storage in China, which we believe is a necessary choice for achieving carbon neutrality in China and enabling efficient and safe grid integration of renewable energy within the framework of ENSYSCO.
Funding: Supported by the National Key R&D Program of China under Grant number 2019YFB1803400, the National Natural Science Foundation of China under Grant number 62071114, and the Fundamental Research Funds for the Central Universities of China under Grant numbers 3204002004A2 and 2242022k30005.
Abstract: This paper investigates wireless communication with a novel antenna-array architecture, termed the modular extremely large-scale array (XL-array), where an extremely large number of array elements are regularly mounted on a shared platform in horizontally and vertically interlaced modules. Each module consists of a moderate, flexible number of array elements with an inter-element distance typically on the order of the signal wavelength, while different modules are separated by a relatively large inter-module distance for convenience of practical deployment. By accurately modelling the signal amplitudes and phases, as well as the projected apertures across all modular elements, we analyse the near-field signal-to-noise ratio (SNR) performance of modular XL-array communications. Based on non-uniform spherical wave (NUSW) modelling, a closed-form SNR expression is derived in terms of key system parameters, such as the overall modular array size, the distances between adjacent modules along all dimensions, and the user's three-dimensional (3D) location. In addition, as the number of modules in different dimensions increases without bound, the asymptotic SNR scaling laws are revealed. Furthermore, we show that our proposed near-field modelling and performance analysis include the results for existing array architectures and models as special cases, e.g., the collocated XL-array architecture, uniform plane wave (UPW) based far-field modelling, and the one-dimensional modular extremely large-scale uniform linear array (XL-ULA). Extensive simulation results are presented to validate our findings.
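The contrast between NUSW near-field modelling and the UPW far-field approximation can be made concrete with a toy computation. The sketch below is an assumption-laden illustration, not the paper's closed-form result: it sums a free-space power gain 1/(4πr²) per element using each element's exact distance to the user (NUSW-style), and compares that with lumping all elements at the array center (UPW-style). Geometry, spacings, and names are all hypothetical.

```python
# Toy near-field vs far-field gain comparison for a modular linear array:
# n_modules modules of n_per_module elements at spacing d, modules spaced D apart.
import math

def element_positions(n_modules, n_per_module, d, D):
    pos = []
    for m in range(n_modules):
        start = m * D
        pos += [start + i * d for i in range(n_per_module)]
    return pos

def nearfield_gain(pos, user_x, user_z):
    # distinct spherical distance per element (non-uniform spherical wave idea)
    return sum(1.0 / (4 * math.pi * ((p - user_x)**2 + user_z**2))
               for p in pos)

pos = element_positions(n_modules=4, n_per_module=8, d=0.05, D=2.0)
near = nearfield_gain(pos, user_x=3.0, user_z=5.0)
# far-field (UPW-style) approximation: every element lumped at the array center
center = sum(pos) / len(pos)
far = len(pos) / (4 * math.pi * ((center - 3.0)**2 + 5.0**2))
print(near, far)  # the two diverge as the user approaches the large array
```

Shrinking user_z (or growing the array) widens the gap between the two numbers, which is the regime where the paper's near-field analysis matters.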
Funding: Financially supported by the Open Research Fund of the Hunan Provincial Key Laboratory of Key Technology on Hydropower Development (Grant No. PKLHD202003), the National Natural Science Foundation of China (Grant Nos. 52071058 and 51939002), the National Natural Science Foundation of Liaoning Province (Grant No. 2022-KF-18-01), and the Fundamental Research Funds for the Central Universities (Grant No. DUT20ZD219).
Abstract: Considering the large-diameter effect of piles, the influence of different pile-soil analysis methods on the design of monopile foundations for offshore wind turbines has become an urgent problem to be solved. Three different pile-soil models were used to study a large 10 MW monopile wind turbine. By building the three models in the SACS software, this paper analyzed the motion response of the overall structure under wind and wave conditions. For the given working conditions, this paper concludes that under wind alone, the mean tower-top x-displacement of the rigid-connection method is the smallest, and under waves alone its standard deviation is the smallest. The results obtained by the p-y curve method are the most conservative.
Funding: Supported by the National Key R&D Program of China (2020YFA0710200), the National Natural Science Foundation of China (21838010, 22122814), the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2018064), the State Key Laboratory of Multiphase Complex Systems, Institute of Process Engineering, Chinese Academy of Sciences (No. MPCS-2022-A-03), and the Innovation Academy for Green Manufacture Institute, Chinese Academy of Sciences (IAGM2020C14).
Abstract: CO₂ electroreduction (CO₂ER) to high value-added chemicals is considered a promising technology for achieving sustainable carbon neutralization. Thanks to progressive research in recent years aimed at the design and understanding of catalytic materials and electrolyte systems, CO₂ER performance (current density, selectivity, stability, CO₂ conversion, etc.) has continually increased. Unfortunately, relatively little attention has been paid to large-scale CO₂ electrolyzers, which stand as one obstacle, alongside series-parallel integration, challenging the practical application of this infant technology. In this review, the latest progress on the structures of low-temperature CO₂ electrolyzers and on scale-up studies is systematically surveyed. The influence of CO₂ electrolyzer configurations, such as the flow-channel design, the gas diffusion electrode (GDE), and the ion exchange membrane (IEM), on CO₂ER performance is further discussed. The review could provide inspiration for the design of large-scale CO₂ electrolyzers so as to accelerate the industrial application of CO₂ER technology.
Funding: This work was funded by the National Natural Science Foundation of China (Nos. U2037601 and 52074183). The authors appreciate Ge Chen, Wenbin Zou and Shiwei Wang for preparing the alloys, Wenyu Liu and Xuehao Zheng from ZKKF (Beijing) Science & Technology Co., Ltd for the TEM measurements, Gert Wiese and Petra Fischer for the SEM and hardness measurements, and Yunting Li from the Instrument Analysis Center of Shanghai Jiao Tong University (China) for SEM measurements. Lixiang Yang also gratefully thanks the China Scholarship Council (201906230111) for awarding a fellowship to support his study stay at Helmholtz-Zentrum Geesthacht.
Abstract: In order to improve the ductility of commercial WE43 alloy and reduce its cost, a Mg-3Y-2Gd-1Nd-0.4Zr alloy with a low rare-earth content was developed and prepared by sand casting with a differential pressure casting system. Its microstructure, mechanical properties, and fracture behaviors in the as-cast, solution-treated, and as-aged states were evaluated. The aged alloy exhibits excellent comprehensive mechanical properties owing to the fine, dense, plate-shaped β′ precipitates formed on prismatic habits during aging at 200 °C for 192 h after solution treatment at 500 °C for 24 h. Its ultimate tensile strength, yield strength, and elongation reach 319±10 MPa, 202±2 MPa, and 8.7±0.3% at ambient temperature, and 230±4 MPa, 155±1 MPa, and 16.0±0.5% at 250 °C. The fracture mode of the as-aged alloy changes from cleavage at room temperature to quasi-cleavage and ductile fracture at the test temperature of 300 °C. The properties of large-scale components fabricated using the developed Mg-3Y-2Gd-1Nd-0.4Zr alloy are better than those of commercial WE43 alloy, suggesting that the newly developed alloy is a good candidate for fabricating large, complex, thin-walled components.
Funding: Funded by the National Natural Science Foundation of China Youth Project (61603127).
Abstract: Traditional models for semantic segmentation of point clouds primarily focus on smaller scales. However, in real-world applications, point clouds often exhibit larger scales, leading to heavy computational and memory requirements. The key to handling large-scale point clouds lies in leveraging random sampling, which offers higher computational efficiency and lower memory consumption than other sampling methods. Nevertheless, random sampling can result in the loss of crucial points during the encoding stage. To address these issues, this paper proposes the cross-fusion self-attention network (CFSA-Net), a lightweight and efficient architecture designed to process large-scale point clouds directly. At the core of this network is the incorporation of random sampling alongside a local feature extraction module based on cross-fusion self-attention (CFSA). This module effectively integrates long-range contextual dependencies between points by employing hierarchical position encoding (HPC). Furthermore, it enhances the interaction between each point's coordinates and feature information through cross-fusion self-attention pooling, enabling the acquisition of more comprehensive geometric information. Finally, a residual optimization (RO) structure is introduced to extend the receptive field of individual points by stacking hierarchical position encoding and cross-fusion self-attention pooling, thereby reducing the impact of the information loss caused by random sampling. Experimental results on the Stanford Large-Scale 3D Indoor Spaces (S3DIS), Semantic3D, and SemanticKITTI datasets demonstrate the superiority of this algorithm over advanced approaches such as RandLA-Net and KPConv, underscoring the excellent performance of CFSA-Net in large-scale 3D semantic segmentation.
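The random-sampling step that this abstract (like RandLA-Net before it) builds on is simple to sketch. The snippet below is a generic illustration of the trade-off being made, not CFSA-Net's code: uniform subsampling runs in roughly linear time, whereas alternatives such as farthest-point sampling cost far more, at the price of possibly dropping sparse but important points, which is the loss the HPC/CFSA modules are designed to compensate for.

```python
# Uniform random downsampling of a point cloud to a fixed budget of points.
import random

def random_sample(points, n_keep, seed=0):
    """Cheap O(n) subsampling; may discard points from sparse structures."""
    rng = random.Random(seed)
    if n_keep >= len(points):
        return list(points)
    return rng.sample(points, n_keep)

# Toy cloud of 100k (x, y, z) points, downsampled to a 4096-point budget
cloud = [(float(i), float(i % 7), 0.0) for i in range(100000)]
kept = random_sample(cloud, 4096)
print(len(kept))  # → 4096
```

A fixed seed makes the subsampling reproducible across training runs; in practice the sampling is re-drawn per encoder stage.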
Funding: Supported by science research grants from the China Manned Space Project (No. CMS-CSST-2021-B01); the World Premier International Research Center Initiative (WPI), MEXT, Japan; the Ontario Research Fund: Research Excellence Program (ORF-RE); the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding reference numbers RGPIN-2019-067, CRD 523638-201, 555585-20]; the Canadian Institute for Advanced Research (CIFAR); the Canadian Foundation for Innovation (CFI); the National Natural Science Foundation of China (NSFC, Grant No. 11929301); the Simons Foundation; Thoth Technology Inc.; and the Alexander von Humboldt Foundation. The Niagara supercomputers at the SciNet HPC Consortium are funded by the Canada Foundation for Innovation, the Government of Ontario, the Ontario Research Fund (Research Excellence), and the University of Toronto.
Abstract: We examine the possibility of applying the baryonic acoustic oscillation reconstruction method to improve the neutrino mass Σm_ν constraint. Thanks to the Gaussianization of the process, we demonstrate that the reconstruction algorithm could improve the measurement accuracy by roughly a factor of two. On the other hand, the reconstruction process itself becomes a source of systematic error. While the algorithm is supposed to produce the displacement field from a density distribution, various approximations cause the reconstructed output to deviate on intermediate scales. Nevertheless, it is still possible to benefit from this Gaussianized field, given that we can carefully calibrate the "transfer function" between the reconstruction output and the theoretical displacement divergence from simulations. The limitation of this approach is then set by the numerical stability of this transfer function. With an ensemble of simulations, we show that such systematic error could become comparable to the statistical uncertainties for a DESI-like survey and can be safely neglected for other, less ambitious surveys.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 11772192).
Abstract: The strict, high-standard requirements for the safety and stability of major engineering systems make large-scale finite element modal analysis a tough challenge, and realizing the systematic analysis of the entire large structure of these engineering systems is extremely meaningful in practice. This article proposes a multilevel hierarchical parallel algorithm for large-scale finite element modal analysis to reduce the loss of parallel computational efficiency when heterogeneous multicore distributed-storage computers are used for such analyses. Based on two-level partitioning and four-transformation strategies, the proposed algorithm not only improves the memory access rate through the sparsely distributed storage of a large amount of data but also reduces the solution time by reducing the scale of the generalized characteristic equations (GCEs). Moreover, a multilevel hierarchical parallelization approach is introduced into the computational procedure to separate the communication of inter-node, intra-node, heterogeneous core group (HCG), and intra-HCG levels by mapping computing tasks to the various hardware layers. This method can efficiently achieve load balancing at the different layers and significantly improve the communication rate through hierarchical communication. Therefore, it can enhance the efficiency of parallel computing of large-scale finite element modal analysis by fully exploiting the architectural characteristics of heterogeneous multicore clusters. Finally, typical numerical experiments were used to validate the correctness and efficiency of the proposed method. Then a parallel modal analysis of a cross-river tunnel with over ten million degrees of freedom (DOFs) was performed, and ten thousand core processors were applied to verify the feasibility of the algorithm.