Due to the restricted satellite payloads in LEO mega-constellation networks (LMCNs), remote sensing image analysis, online learning, and other big data services urgently need onboard distributed processing (OBDP). In existing technologies, the efficiency of big data applications (BDAs) in distributed systems hinges on stable, low-latency links between worker nodes. However, LMCNs, with their highly dynamic nodes and long-distance links, cannot provide these conditions, which makes the performance of OBDP hard to measure directly. To bridge this gap, a multidimensional simulation platform is indispensable: one that can simulate the network environment of LMCNs and place BDAs in it for performance testing. Using STK's APIs and a parallel computing framework, we achieve real-time simulation of thousands of satellite nodes, which are mapped to application nodes through software-defined networking (SDN) and container technologies. We elaborate the architecture and mechanisms of the simulation platform, and take Starlink and Hadoop as realistic examples for simulation. The results indicate that LMCNs have dynamic end-to-end latency that fluctuates periodically with the constellation's movement. Compared to ground data center networks (GDCNs), LMCNs degrade computing and storage job throughput, which can be alleviated by the use of erasure codes and data-flow scheduling among worker nodes.
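The dominant component of the dynamic end-to-end latency described above is propagation delay over long inter-satellite links. As a minimal sketch (not part of the paper's platform; the coordinates and helper name are invented for illustration), the delay between two satellite positions can be computed as follows:

```python
# Minimal sketch (not the paper's platform): one-way propagation delay of an
# inter-satellite link from two ECEF position vectors. Names are illustrative.
import math

C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def link_latency_ms(pos_a_km, pos_b_km):
    """One-way propagation delay in milliseconds between two positions (km)."""
    dist = math.dist(pos_a_km, pos_b_km)
    return dist / C_KM_PER_S * 1000.0

# Two satellites at ~550 km altitude, roughly 1000 km apart along-track:
print(round(link_latency_ms((6921.0, 0.0, 0.0), (6848.6, 997.4, 0.0)), 3), "ms")
```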
Three-dimensional normal grain growth was simulated using a Potts model Monte Carlo algorithm. The quasi-stationary grain size distribution obtained from the simulation agreed well with the experimental result for pure iron. The Weibull function with parameter β = 2.77 and the Yu-Liu function with parameter ν = 2.71 fit the quasi-stationary grain size distribution well. The grain volume distribution decreases exponentially with increasing grain volume. The distribution of grain boundary area has a peak at S/⟨S⟩ = 0.5, where S is the boundary area of a grain and ⟨S⟩ is the mean boundary area of all grains in the system. The lognormal function fits the face number distribution well, and the peak of the face number distribution is at f = 10. The mean radius of f-faced grains is not proportional to the face number, but follows a curve that is convex upward. In 2D cross-sections, both the perimeter law and the Aboav-Weaire law are observed to hold.
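For readers unfamiliar with the method, a single Potts-model Monte Carlo update for grain growth looks roughly like the sketch below; the lattice representation and zero-temperature acceptance rule are simplifying assumptions, not the authors' code:

```python
# Minimal sketch of one Potts-model Monte Carlo update for grain growth.
# Zero-temperature Metropolis variant: flips that raise the boundary
# energy are rejected. Illustrative only; not the paper's implementation.
import random

def potts_step(spins, neighbors):
    """spins: dict site -> grain id; neighbors: dict site -> list of sites."""
    site = random.choice(list(spins))
    nbr_spins = [spins[n] for n in neighbors[site]]
    candidate = random.choice(nbr_spins)          # propose a neighboring grain id
    # Boundary energy = number of unlike neighbor pairs at this site.
    e_old = sum(s != spins[site] for s in nbr_spins)
    e_new = sum(s != candidate for s in nbr_spins)
    if e_new <= e_old:                            # accept if energy does not increase
        spins[site] = candidate
```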
The process of entrainment-mixing between cumulus clouds and the ambient air is important for the development of cumulus clouds. Accurately obtaining the entrainment rate (λ) is particularly important for its parameterization within the overall cumulus parameterization scheme. In this study, an improved bulk-plume method is proposed that solves the equations of two conserved variables simultaneously to calculate λ for cumulus clouds in a large-eddy simulation. The results demonstrate that the improved bulk-plume method is more reliable than the traditional bulk-plume method, because λ, as calculated from the improved method, falls within the range of λ values obtained from the traditional method using different conserved variables. The probability density functions of λ for all data, different times, and different heights are well fitted by a log-normal distribution, which supports the stochastic entrainment process assumed in previous studies. Further analysis demonstrates that λ is related more closely to the vertical velocity than to other thermodynamic/dynamical properties; thus, the vertical velocity is recommended as the primary influencing factor for the parameterization of λ in the future. The results of this study enhance the theoretical understanding of λ and its influencing factors and shed new light on the development of λ parameterization.
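For context, the traditional bulk-plume method diagnoses λ from the vertical gradient of a single conserved variable; its standard form (the notation here is illustrative, not necessarily the paper's) is:

```latex
% Traditional bulk-plume relation for a conserved variable \phi
% (subscript c: cloud mean, e: environment); notation is illustrative.
\frac{\partial \phi_c}{\partial z} = -\lambda\,(\phi_c - \phi_e)
\quad\Longrightarrow\quad
\lambda = -\frac{1}{\phi_c - \phi_e}\,\frac{\partial \phi_c}{\partial z}
```

The improved method, as described in the abstract, writes this relation for two conserved variables at once and solves the resulting pair of equations simultaneously, rather than committing to a single variable.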
The Makran accretionary wedge has the smallest subduction angle of any accretionary prism in the world. The factors controlling the spacing and morphological development of its deep thrust faults, as well as the formation mechanism of its shallow normal faults, remain unclear. Meanwhile, the factors affecting the continuity of fault planes must be comprehensively discussed. Clarifying the development characteristics and deformation mechanisms of the Makran accretionary wedge is crucial for effectively guiding the exploration of gas hydrate deposits in the area. This study interprets seismic data to identify typical structures in the Makran accretionary wedge, including deep imbricate thrust faults, shallow small normal faults, wedge-shaped piggyback basins, mud diapirs with fuzzy and disorderly reflection characteristics, décollements with a northward tilt of 1°–2°, and large seamounts. Physical simulation experiments are performed to comprehensively analyze the plane views, sections, and slices of the wedge. Results reveal that the spacing and shapes of the thrust faults in the deep parts of the Makran accretionary wedge are controlled by the bottom décollement. The uplift of thrust-fault-related folds and the upwelling of mud diapirs primarily contribute to the formation of small normal faults in the shallow part of the area. The mud diapirs originate from plastic material at the bottom, and those developed near the trench are larger. Seamounts and mud diapirs break the continuity of the fault plane distribution.
With the ongoing integration of distributed energy resources into the grid, the structure of distribution networks is becoming more complex. This complexity significantly expands the solution space in the optimization process for network reconfiguration using intelligent algorithms. Consequently, traditional intelligent algorithms frequently suffer from insufficient search accuracy and become trapped in local optima. To tackle this issue, an improved particle swarm optimization algorithm is proposed. To address the varying emphases at different stages of the optimization process, a dynamic strategy is implemented to regulate the social and self-learning factors. The Metropolis criterion of the simulated annealing algorithm is introduced to occasionally accept suboptimal solutions, thereby mitigating premature convergence in the population optimization process. The inertia weight is adjusted using the logistic mapping technique to maintain a balance between the algorithm's global and local search abilities. Following the Pareto principle, network losses and voltage deviations are considered as objective functions, and a fuzzy membership function is employed to select the results. Simulation analysis of distribution network reconfiguration, with distributed energy resources integrated, is carried out on the IEEE 33-node and IEEE 69-node systems. The findings demonstrate that, in comparison to other intelligent optimization algorithms, the proposed enhanced algorithm converges faster and effectively reduces active power losses within the network. Furthermore, it raises node voltage amplitudes, thereby improving the stability of distribution network operation and power supply quality. Additionally, the algorithm exhibits a high level of generality and applicability.
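A compressed sketch of the two modifications named above, a logistic-map update for the inertia weight and Metropolis acceptance of worse solutions, might look as follows; the parameter values and function boundaries are illustrative assumptions, not the authors' implementation:

```python
# Sketch: logistic-map inertia weight + Metropolis acceptance inside PSO.
# Illustrative only; constants and interfaces are assumptions.
import math, random

def logistic_weight(w, w_min=0.4, w_max=0.9, mu=4.0):
    """Chaotic inertia weight: iterate the logistic map on a value in (0, 1)."""
    z = (w - w_min) / (w_max - w_min)
    z = mu * z * (1.0 - z)
    return w_min + z * (w_max - w_min)

def metropolis_accept(f_new, f_old, temperature):
    """Accept a worse solution with probability exp(-Δf / T) (minimization)."""
    if f_new <= f_old:
        return True
    return random.random() < math.exp(-(f_new - f_old) / temperature)
```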
This study aims to understand the impact of operating conditions, especially the initial operation temperature (T_ini) set in a high temperature range, on the temperature profile of the interface between the polymer electrolyte membrane (PEM) and the catalyst layer at the cathode (i.e., the reaction surface) in a single cell of a polymer electrolyte fuel cell (PEFC). A 1D multi-plate heat transfer model, based on separator temperature data measured with a thermograph in a power generation experiment, was developed to evaluate the reaction surface temperature (T_react). In addition, to validate the proposed heat transfer model, T_react obtained from the model was compared with that from a 3D numerical simulation using the CFD software COMSOL Multiphysics, which solves the continuity equation, the Brinkman equation, the Maxwell-Stefan equation, and the Butler-Volmer equation as well as the heat transfer equation. The temperature gap between the results obtained by the 1D heat transfer model and those obtained by the 3D numerical simulation is below approximately 0.5 K. The simulation results show that the change in the molar concentrations of O2 and H2O from inlet to outlet becomes more even as T_ini increases, due to the lower performance of the O2 reduction reaction. The change in current density from inlet to outlet likewise becomes more even as T_ini increases, and the current density itself is smaller, due to the increases in ohmic over-potential and concentration over-potential. It is revealed that the change in T_react from inlet to outlet is more even with the increase in T_ini, irrespective of the heat transfer model. This is because the heat generated by power generation is lower at higher T_ini, due to the lower performance of the O2 reduction reaction.
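A 1D multi-plate model of this kind typically reduces to a series thermal-resistance calculation from a measured surface temperature; the sketch below shows the general idea with made-up layer values and is not the authors' model:

```python
# Sketch: 1D steady conduction through stacked plates (series thermal
# resistances). Layer thicknesses/conductivities are made-up placeholders.
def interface_temperature(t_surface, heat_flux, layers):
    """Walk inward from a measured surface temperature toward the heat source.
    layers: list of (thickness_m, conductivity_W_per_mK) between the
    measured surface and the interface of interest."""
    t = t_surface
    for thickness, k in layers:
        t += heat_flux * (thickness / k)  # ΔT = q'' * R for each plate
    return t

# Separator surface at 353.15 K, 5000 W/m² flux through two plates:
print(interface_temperature(353.15, 5000.0, [(2e-3, 20.0), (2e-4, 0.25)]))
```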
This work aims to analyse the actions that companies working in large-scale distribution carry out along their value chains to minimise impacts on climate change. Companies operating in this field are aware that acting directly on core processes alone is less effective, and that they need to involve the upstream value chain in their carbon reduction strategy. These businesses, in fact, need to focus on indirect GHG (greenhouse gas) emissions, which depend on how suppliers manage their impacts. In this sector, virtuous companies collaborate with their suppliers on a common path of quantifying and cutting these impacts together. This aspect is particularly relevant in the case of large-scale retailers. However, the process is not immediate, since the supply chain is usually very dense and diverse, with suppliers adopting various approaches that do not always coincide. In any case, the key step is mapping these suppliers. One of the tools most used for this purpose is the survey: a quick instrument able to reach hundreds of suppliers at the same time and receive fast, standardized responses, which can easily be processed into a comprehensive and harmonized mapping of the results, as the first step toward the subsequent implementation of mitigation strategies.
A solution scheme is proposed in this paper for an existing real-time dynamic hybrid testing (RTDHT) system to simulate large-scale finite element (FE) numerical substructures. The analysis of the FE numerical substructure is split into a response analysis task and a signal generation task, executed in real time on two different target computers. One target computer implements the response analysis task, in which a large time-step is used to solve the FE substructure; the other implements the signal generation task, in which an interpolation program generates control signals at a small time-step to meet the input demands of the controller. With this strategy, the scale of the FE numerical substructure simulation can be increased significantly. The proposed scheme is first verified on two FE numerical substructure models with 98 and 1240 degrees of freedom (DOFs). Thereafter, RTDHTs of a single frame-foundation structure are implemented, in which the foundation, treated as the numerical substructure, is simulated by the FE model with 1240 DOFs. Good agreement is obtained between the results of the RTDHT and those of the FE analysis in ABAQUS.
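The two-rate coupling described above amounts to upsampling a coarse-step response history to the controller's fine time-step. A minimal sketch using simple linear interpolation follows; the step sizes and signal are invented, and real systems often use polynomial extrapolation to hide latency:

```python
# Sketch: generate fine-step control signals from a coarse-step FE response
# by linear interpolation. Step sizes and the response are illustrative.
import numpy as np

dt_coarse, dt_fine = 0.01, 0.001            # FE solve step vs. controller step
t_coarse = np.arange(0, 0.1 + dt_coarse, dt_coarse)
resp = np.sin(2 * np.pi * 5 * t_coarse)     # placeholder for the FE response

t_fine = np.arange(0, t_coarse[-1] + dt_fine, dt_fine)
signal = np.interp(t_fine, t_coarse, resp)  # control signal at the fine rate
print(len(t_coarse), "->", len(t_fine), "samples")
```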
To address the problems of unreliable data transmission, poor stability, lack of support for complex data types, and tight coupling between data transmission and exchange, a high-level method based on the advanced message queuing protocol (AMQP) is proposed to integrate a naval distributed tactical training simulation system, after careful consideration of the information exchange features of current military combat systems. The transfer layer, traditionally based on the user datagram protocol, is implemented by the publish-subscribe scheme of message middleware. By creating a message model to standardize the message structure, an integration architecture is formulated to resolve the potential information security risks arising from inconsistent data types and to expedite data transmission. Meanwhile, a communication model based on AMQP is put forward, which sits at the center of the whole transmission framework and is responsible for reliably transferring battlefield data among subsystems. Experiments show that the method can accurately deliver large amounts of data to subscribers without error or loss, and achieves excellent real-time performance of data exchange.
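As a point of reference, a topic-based AMQP publish over message middleware looks roughly like the sketch below, written with the pika client against a generic broker; the exchange name, routing key, and payload are invented placeholders, not the paper's message model:

```python
# Sketch: publish one battlefield-data message to an AMQP topic exchange.
# Uses the pika client; broker address, exchange, and routing key are
# illustrative placeholders, not the paper's configuration.
import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="tactical", exchange_type="topic", durable=True)

payload = json.dumps({"track_id": 42, "lat": 31.2, "lon": 121.5, "speed_kn": 18.0})
channel.basic_publish(
    exchange="tactical",
    routing_key="sensor.radar.track",     # subscribers bind with patterns, e.g. "sensor.#"
    body=payload.encode(),
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
connection.close()
```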
The simulation of a class of delay stochastic systems with distributed parameters is discussed. Difference schemes for the numerical computation of such delay stochastic systems are obtained. The precision of the difference schemes and their efficiency in simulating delay stochastic systems with distributed parameters are analyzed. Examples are given to illustrate the application of the method.
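The abstract does not spell out its schemes, but the simplest difference scheme for a scalar delay stochastic equation, an Euler-Maruyama step with a lagged term, conveys the flavor; the equation and coefficients below are invented for illustration and omit the distributed-parameter (spatial) dimension:

```python
# Sketch: Euler-Maruyama scheme for a scalar delay SDE
#   dX(t) = [a X(t) + b X(t - tau)] dt + sigma dW(t),  X(t) = 1 for t <= 0.
# Equation and coefficients are invented for illustration.
import math, random

a, b, sigma, tau, dt, T = -1.0, 0.5, 0.2, 1.0, 0.01, 5.0
lag = int(round(tau / dt))
n = int(round(T / dt))

x = [1.0] * (lag + 1)                     # history buffer covering [-tau, 0]
for k in range(n):
    x_now, x_lag = x[-1], x[-1 - lag]
    dw = random.gauss(0.0, math.sqrt(dt)) # Brownian increment
    x.append(x_now + (a * x_now + b * x_lag) * dt + sigma * dw)
print("X(T) ≈", x[-1])
```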
We analyze a large-scale molecular dynamics simulation of work hardening in a model system of a ductile solid. Under tensile loading, we observe the emission of thousands of dislocations from two sharp cracks. The dislocations interact in a complex way, revealing three fundamental mechanisms of work hardening in this ductile material. These are (1) dislocation cutting processes, jog formation, and the generation of trails of point defects; (2) activation of secondary slip systems by Frank-Read and cross-slip mechanisms; and (3) formation of sessile dislocations such as Lomer-Cottrell locks. We report the discovery of a new class of point defects, referred to as trails of partial point defects, which could play an important role in situations where partial dislocations dominate plasticity. Another important result of the present work is the rediscovery of the Fleischer mechanism of cross-slip of partial dislocations, which was theoretically proposed more than 50 years ago and is now, for the first time, confirmed by atomistic simulation. On the typical time scale of molecular dynamics simulations, the dislocations self-organize into a complex sessile defect topology. Our analysis illustrates numerous mechanisms formerly only conjectured in textbooks and observed indirectly in experiments. It is the first time that such a rich set of fundamental phenomena has been revealed in a single computer simulation and its dynamical evolution studied. The present study exemplifies the simulation and analysis of the complex nonlinear dynamics of a many-particle system during failure using ultra-large-scale computing.
Protein-protein interactions are of great significance for understanding the functional mechanisms of proteins. With the rapid development of high-throughput genomic technologies, massive protein-protein interaction (PPI) data have been generated, making efficient analysis very difficult. To address this problem, this paper presents a distributed framework that reimplements one of the state-of-the-art algorithms, CoFex, using MapReduce. To do so, an in-depth analysis of its limitations is conducted from the perspectives of efficiency and memory consumption when it is applied to large-scale PPI data analysis and prediction. Respective solutions are then devised to overcome these limitations. In particular, we adopt a novel tree-based data structure to reduce the heavy memory consumption caused by the huge sequence information of proteins. After that, the procedure is modified to follow the MapReduce framework so that the prediction task is performed distributively. A series of extensive experiments has been conducted to evaluate the performance of our framework in terms of both efficiency and accuracy. Experimental results demonstrate that the proposed framework improves computational efficiency by more than two orders of magnitude while retaining the same high accuracy.
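In MapReduce terms, a port of this kind splits feature extraction over protein records in the map phase and merges per-feature counts in the reduce phase. A toy sketch follows; it is not CoFex, and the simple 3-mer features and in-process shuffle stand in for the real framework:

```python
# Toy MapReduce-style pipeline: count 3-mer features across protein
# sequences. Not CoFex; just the map/shuffle/reduce shape of such a port.
from collections import defaultdict

def map_phase(record):
    """Emit (kmer, 1) pairs for one protein sequence."""
    _, seq = record
    for i in range(len(seq) - 2):
        yield seq[i:i + 3], 1

def reduce_phase(key, values):
    return key, sum(values)

records = [("P1", "MKVLAT"), ("P2", "MKVAAT")]
shuffled = defaultdict(list)                       # the framework's shuffle step
for rec in records:
    for k, v in map_phase(rec):
        shuffled[k].append(v)
print(dict(reduce_phase(k, vs) for k, vs in shuffled.items()))
```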
To make calculation more efficient in practical hydraulic simulations, an improved algorithm is proposed and applied to a practical water distribution network. The methodology was developed by extending the traditional loop-equation theory with the efficiency advantages of graph theory. The use of the spanning tree technique from graph theory makes the proposed algorithm efficient in calculation and simple to code. The algorithms for topological generation and practical implementation are presented in detail in this paper. In an application to a practical urban system, CPU time and memory consumption were reduced while accuracy was greatly enhanced compared with existing methods.
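The spanning-tree idea referenced above: edges outside a spanning tree (the chords) each close exactly one independent loop, and the loop equations are written over those loops. A minimal sketch with networkx on a made-up 4-node pipe network (illustrative, not the paper's algorithm):

```python
# Sketch: derive independent loops of a small pipe network from a spanning
# tree. The 4-node network is made up; each chord closes one loop.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])  # pipes

tree = nx.minimum_spanning_tree(G)
chords = [e for e in G.edges() if not tree.has_edge(*e)]
print("chords (one independent loop each):", chords)
print("independent loops:", nx.cycle_basis(G))
```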
In the large-scale distributed hardware-in-the-loop radar simulation system based on HLA, a new solution of processing after acquisition is proposed, which separates the software subsystem from the hardware jammer subsystem by means of a response database, so as to solve the problem that the software subsystem cannot meet the real-time needs of the hardware, with very little additional code. The data completeness and feasibility of this solution are also discussed.
In order to improve the efficiency of the data distribution management service in distributed interactive simulation based on the high level architecture (HLA), and to reduce network traffic and save system resources, approaches to multicast grouping in HLA-based distributed interactive simulation are discussed, and a new dynamic multicast grouping approach is proposed. This approach is based on the current publication and subscription regions during the simulation. The results of simulation experiments show that this approach can significantly reduce the message overhead while using fewer multicast groups.
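The core of region-based grouping is an overlap test between publication and subscription regions. A toy 1D sketch of assigning subscribing federates to a publisher's multicast group follows; the region layout and names are invented, not the paper's algorithm:

```python
# Toy sketch: group subscribers with a publisher when their 1D routing-space
# regions overlap. Region layout and names are invented for illustration.
def overlaps(a, b):
    """Closed-interval overlap test on (lo, hi) regions."""
    return a[0] <= b[1] and b[0] <= a[1]

publication = {"pub1": (0.0, 4.0), "pub2": (6.0, 9.0)}
subscription = {"fedA": (3.0, 5.0), "fedB": (7.0, 8.0), "fedC": (4.5, 5.5)}

groups = {
    pub: [fed for fed, region in subscription.items() if overlaps(p_region, region)]
    for pub, p_region in publication.items()
}
print(groups)   # {'pub1': ['fedA'], 'pub2': ['fedB']}
```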
The dynamics of secondary large-scale structures in electron-temperature-gradient (ETG) turbulence is investigated based on gyrofluid simulations in sheared slab geometry. It is found that the structural bifurcation to zonal-flow-dominated or streamer-like states depends on the spectral anisotropy of the turbulent ETG fluctuations, which is governed by the magnetic shear. The turbulent electron transport is suppressed by enhanced zonal flows. However, it remains low even if the streamer is formed in ETG turbulence with strong shear. It is shown that the low transport may be related to the secondary excitation of a poloidal long-wavelength mode due to the beat wave of the most unstable components, or to a modulation instability. This large-scale structure with a low frequency and a long wavelength may saturate, or at least contribute to the saturation of ETG fluctuations, through poloidal mode coupling. The result suggests a low fluctuation level in ETG turbulence.
Simulation has become the evaluation method of choice for many areas of distributed computing research. Simulation has been applied successfully to modeling small and large complex systems and understanding their behavior, especially in the area of distributed systems and parallel environments. The aim of this research is a qualitative study and analysis of simulation on a single server and in a distributed environment, identifying the related issues and comparing the two.
This paper investigates large-scale distributed system design. It looks at features and main design considerations, and provides the Netflix API, Cassandra, and Oracle as examples of such systems. Moreover, the paper investigates the challenges of designing, developing, deploying, and maintaining such systems with regard to the features presented. Finally, the paper discusses aspects of available solutions and current practices for the challenges that large-scale distributed systems face.
Evapotranspiration (ET) is the key to the water cycle process and an important factor in studying near-surface water and heat balance. Accurately estimating ET is significant for hydrology, meteorology, ecology, agriculture, etc. This paper simulates ET in the Madu River Basin of the Three Gorges Reservoir Area of China during 2009-2018 based on the Soil and Water Assessment Tool (SWAT) model, which was calibrated and validated using the MODIS (Moderate-resolution Imaging Spectroradiometer)/Terra Net ET 8-Day L4 Global 500 m SIN Grid (MOD16A2) dataset and measured ET. Two calibration strategies, lumped calibration (LC) and spatially distributed calibration (SDC), were used. The basin was divided into 34 sub-basins, and the coefficient of determination (R²) and Nash-Sutcliffe efficiency coefficient (NSE) of each sub-basin were greater than 0.6 in both the calibration and validation periods; R² and NSE were higher in the validation period than in the calibration period. Compared with the measured ET, the accuracy of the model on the daily scale is R² = 0.704 and NSE = 0.759 (SDC results). The simulation accuracy of LC and SDC at the sub-basin scale was R² = 0.857 and R² = 0.862 (monthly) and R² = 0.227 and R² = 0.404 (annually), respectively; at the whole-basin scale it was R² = 0.902 and R² = 0.900 (monthly) and R² = 0.507 and R² = 0.519 (annually), respectively. The model performed acceptably, and SDC performed best, indicating that remote sensing data can be used for SWAT model calibration. During 2009-2018, ET generally increased in the Madu River Basin (SDC results, 7.21 mm/yr), with a multiyear average of 734.37 mm/yr. The annual ET change rate of the sub-basins was relatively low upstream and downstream. Linear correlation analysis between ET and meteorological factors shows that, on the monthly scale, precipitation, solar radiation, and daily maximum and minimum temperature were significantly correlated with ET; annually, solar radiation and wind speed had a moderate correlation with ET. The correlation between maximum temperature and ET is best on the monthly scale (Pearson correlation coefficient R = 0.945), which may mean that the increasing ET originates from increasing temperature (global warming). However, the sub-basins near the Shennongjia Nature Reserve in the upstream area have a negative ET change rate, meaning that ET decreases in these sub-basins and indicating that the 'Evaporation Paradox' exists there. This study explored the potential of remote-sensing-based ET data for hydrological model calibration and provides a decision-making reference for water resource management in the Madu River Basin.
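For reference, the two goodness-of-fit scores quoted throughout this abstract can be computed as in the sketch below; the formulas are standard, while the data series are invented placeholders:

```python
# Sketch: the two calibration metrics used above, on placeholder data.
# NSE = 1 - Σ(obs - sim)² / Σ(obs - mean(obs))²;  R² is the squared
# Pearson correlation between observed and simulated series.
import numpy as np

obs = np.array([61.0, 75.2, 98.4, 120.1, 88.3])   # invented monthly ET, mm
sim = np.array([58.7, 80.0, 95.1, 115.6, 92.2])

nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
r2 = np.corrcoef(obs, sim)[0, 1] ** 2
print(f"NSE = {nse:.3f}, R² = {r2:.3f}")
```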