Performance models provide insightful perspectives to predict performance and to propose optimization guidance. Although there has been much research, pinpointing the bottlenecks of various memory access patterns and achieving highly accurate prediction for both regular and irregular programs across hardware configurations remain nontrivial. This work proposes a novel model called process-RAM-feedback (PRF) to quantify the overhead of computation and data transmission time on general-purpose multi-core processors. The PRF model predicts single-core instruction cost with a directed acyclic graph (DAG), and the transmission time of memory accesses between each level of the memory hierarchy through a newly designed cache simulator. Using performance modeling and a feedback optimization method, this paper applies the PRF model to analyze and optimize convolution, sparse matrix-vector multiplication, and Sn-sweep as case studies, covering typical regular kernels as well as irregular and data-dependent ones. Through the PRF model, it obtains optimization guidance for various sparsity structures, algorithm designs, and instruction-set support on different data sizes.
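The DAG-based instruction cost idea in the abstract above can be illustrated with a minimal sketch: model single-core cost as the longest (critical) path through an instruction dependence DAG with per-instruction latencies. The tiny DAG and latency table here are hypothetical, not from the paper.

```python
from functools import lru_cache

# Hypothetical per-instruction latencies (cycles).
latency = {"load": 4, "mul": 3, "add": 1, "store": 4}

# Dependence DAG: each instruction maps to the instructions it waits on.
deps = {
    "load_a": [], "load_b": [],
    "mul": ["load_a", "load_b"],
    "add": ["mul"],
    "store": ["add"],
}
kind = {"load_a": "load", "load_b": "load",
        "mul": "mul", "add": "add", "store": "store"}

@lru_cache(maxsize=None)
def finish(instr):
    """Earliest cycle at which `instr` completes: all dependencies
    must finish before it can start."""
    start = max((finish(d) for d in deps[instr]), default=0)
    return start + latency[kind[instr]]

# Critical path length = a lower bound on single-core execution time.
critical_path = max(finish(i) for i in deps)
```

Here the two loads overlap, so the bound is load + mul + add + store = 12 cycles rather than the 16-cycle serial sum.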
Stone mastic asphalt (SMA) has not been widely used in the pavement industry, and there are no detailed design specifications for this type of asphalt. Consequently, long-term behavior properties of this pavement type are not widely accessible, and no performance model has been established for SMA. The main purpose of this study was to combine expert experience (via a Markov-chain process) with data from field experiments to propose a model for SMA performance using a Bayesian approach. Integrating these sources yielded a well-organized method for developing a performance model for SMA pavements, for which long-term data are lacking. Finally, a linear performance model was established to calculate SMA service life. The service life of SMA can be predicted explicitly from the developed performance model, which has been validated against a new set of data.
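The Markov-chain part of the approach above can be sketched as follows: pavement condition is a discrete state, a yearly transition matrix (here with made-up probabilities, not the paper's expert-elicited values) propagates the state distribution, and service life is the first year the expected condition index falls below a threshold.

```python
# Illustrative Markov-chain pavement deterioration sketch.
# States 0..4 run from "excellent" to "failed"; all numbers are hypothetical.

def step(dist, P):
    """One-year transition: new_j = sum_i dist_i * P[i][j]."""
    n = len(dist)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def expected_condition(dist, ratings):
    return sum(d * r for d, r in zip(dist, ratings))

P = [
    [0.80, 0.20, 0.00, 0.00, 0.00],
    [0.00, 0.75, 0.25, 0.00, 0.00],
    [0.00, 0.00, 0.70, 0.30, 0.00],
    [0.00, 0.00, 0.00, 0.65, 0.35],
    [0.00, 0.00, 0.00, 0.00, 1.00],  # absorbing "failed" state
]
ratings = [5, 4, 3, 2, 1]            # condition index per state

dist = [1.0, 0.0, 0.0, 0.0, 0.0]     # new pavement: all mass in state 0
service_life = None
for year in range(1, 41):
    dist = step(dist, P)
    if service_life is None and expected_condition(dist, ratings) < 2.5:
        service_life = year
```

In the paper the transition probabilities would come from expert judgment and be updated with field data via the Bayesian approach; this sketch only shows the propagation mechanics.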
Today, in the field of computer networks, new services have been developed on the Internet and on intranets, including mail servers, database management, audio, video, and the Apache web server itself. The number of solutions for this server is therefore growing continuously; these services are becoming more and more complex and expensive, without fully meeting users' needs. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics such as throughput, resource utilization, and response time of a system through measurement and modeling by simulation. Finally, we present a simple queueing model of an Apache web server that reasonably represents the behavior of a saturated web server, built with the Simulink model in Matlab (Matrix Laboratory) and incorporating sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations conducted in our tests.
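For the simple queueing view of a web server described above, the classic closed-form M/M/1 results give the same metrics (utilization, mean response time, throughput) analytically. This is a hedged sketch of that standard model, not the paper's Simulink implementation, and it omits the sporadic-traffic component.

```python
# Minimal M/M/1 sketch of a web server: Poisson arrivals at rate
# `arrival_rate` (req/s), exponential service at rate `service_rate`.

def mm1_metrics(arrival_rate, service_rate):
    """Return (utilization, mean response time, throughput) for a
    stable M/M/1 queue; a saturated server (rho >= 1) is rejected."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable: arrival rate >= service rate")
    rho = arrival_rate / service_rate
    w = 1.0 / (service_rate - arrival_rate)  # mean time in system, W = 1/(mu - lambda)
    return rho, w, arrival_rate              # throughput equals lambda when stable

# 80 req/s offered load against a 100 req/s server.
rho, w, x = mm1_metrics(80.0, 100.0)
```

At 80% utilization the mean response time is already 5x the bare 10 ms service time, which is the qualitative saturation behavior the paper's simulation captures.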
In this paper, we present a novel approach to modeling user request patterns in the World Wide Web. Instead of focusing on user traffic for web pages, we capture user interaction at the object level of the web pages. Our framework model consists of three sub-models: one for user file access, one for web pages, and one for storage servers. Web pages are assumed to consist of objects of different types and sizes, characterized using several categories: articles, media, and mosaics. The model is implemented with a discrete event simulation and then used to investigate the performance of our system over a variety of model parameters. Our performance measure of choice is mean response time, and by varying the composition of web pages across our categories, we find that the framework model captures a wide range of conditions that serve as a basis for generating a variety of user request patterns. In addition, we establish a set of parameters that can be used as base cases. One goal of this research is for the framework model to be general enough that its parameters can be varied to serve as input for investigating other distributed applications that require the generation of user request access patterns.
Since the launch of the Google Earth Engine (GEE) cloud platform in 2010, it has been widely used, leading to a wealth of valuable information. However, the potential of GEE for forest resource management has not been fully exploited. To extract dominant woody plant species, GEE combined Sentinel-1 (S1) and Sentinel-2 (S2) data with National Forest Resources Inventory (NFRI) and topographic data, resulting in a 10 m resolution multimodal geospatial dataset for subtropical forests in southeast China. Spectral and texture features, red-edge bands, and vegetation indices of the S1 and S2 data were computed. A hierarchical model obtained information on forest distribution and area and on the dominant woody plant species. The results suggest that combining data from the S1 winter and S2 yearly ranges enhances accuracy in forest distribution and area extraction compared to using either data source independently. Similarly, for dominant woody species recognition, using S1 winter data and S2 data across all four seasons was accurate. Including terrain factors and removing spatial correlation from NFRI sample points further improved the recognition accuracy. The optimal forest extraction achieved an overall accuracy (OA) of 97.4% and a map-level image classification efficacy (MICE) of 96.7%; OA and MICE were 83.6% and 80.7% for dominant species extraction, respectively. The high accuracy and efficacy values indicate that the hierarchical recognition model based on multimodal remote sensing data performed extremely well for extracting information about dominant woody plant species. Visualizing the results in the GEE application allows an intuitive display of forest and species distribution, offering significant convenience for forest resource monitoring.
Building a post-layout simulation performance model is essential in closing the design loop of analog circuits, but it is a challenging task because of the high-dimensional space and expensive simulation cost. To facilitate efficient modeling, this paper proposes a Global Mapping Model Fusion (GMMF) technique. The key idea of GMMF is to reuse the schematic-level model trained by an Artificial Neural Network (ANN) algorithm and combine it with a few mapping coefficients to build the post-layout simulation model. Furthermore, differential evolution, an efficient global optimization algorithm, is applied to determine the optimal mapping coefficients with few samples. In GMMF, only a small number of mapping coefficients are unknown, so the number of post-layout samples needed is significantly reduced. To enhance the practical utility of the proposed GMMF technique, two specific mapping relations, i.e., linear or weakly nonlinear, and nonlinear, are carefully considered in this paper. We conduct experiments on two topologies of a two-stage operational amplifier and a comparator in different commercial processes. All simulation data for modeling are obtained from a parametric design framework. A more than 5x runtime speedup is achieved over ANN without surrendering any accuracy.
This paper presents a new model to study the static performance of a GaN metal epitaxial-semiconductor field effect transistor (MESFET) based on the metal-semiconductor interface state of the Schottky junction. The I-V characteristics of the MESFET under different channel lengths and different operating regimes (pinched off or not) have been obtained with our model, which strictly depends on the electrical parameters, such as the drain-gate capacitance Cgd, the source-gate capacitance Cgs, the transconductance, and the conductance. To determine the accuracy of our model, root-mean-square (RMS) errors were calculated; in the experiment, the experimental data agree with our model. Also, the minimum value of the electrical parameter has been calculated to obtain the maximum cut-off frequency for the GaN MESFET.
This paper examines the performance of an atmospheric general circulation model (AGCM) developed at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics, Institute of Atmospheric Physics (LASG/IAP). It is a spectral model truncated at R42 (2.8125° long x 1.66° lat) resolution with nine vertical levels, referred to hereafter as R42L9/LASG. It is also the new version of the atmospheric component model R15L9 of the global ocean-atmosphere-land system (GOALS/LASG). A 40-year simulation in which the model is forced with the climatological monthly mean sea surface temperature is compared with the 40-year (1958-97) U.S. National Centers for Environmental Prediction (NCEP) global reanalysis and the 22-year (1979-2000) Xie-Arkin monthly precipitation climatology. The mean DJF and JJA geographical distributions of precipitation, sea level pressure, 500-hPa geopotential height, 850-hPa and 200-hPa zonal wind, and other fields averaged over the last 30 years of the R42L9 integration are analyzed. Results show that the model reproduces the observed basic patterns well, particularly precipitation over the East Asian region. Comparing the new model with R15L9/LASG, the old version with coarse resolution (nearly 7.5° long x 4.5° lat), shows an obvious improvement in the simulation of regional climate, especially precipitation. The weaknesses of the simulation and future improvements of the model are also discussed.
In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmentary), we used non-continuous calibration periods to obtain more independent streamflow data for calibrating the SIMHYD (simple hydrology) model. Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures. The particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff models. Randomly sampled data series of different lengths, ranging from one year to ten years, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia, with daily precipitation, potential evapotranspiration, and streamflow data, were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. Most humid catchments require less calibration data to obtain good performance and stable parameter values, and the model performs better in humid and semi-humid catchments than in arid catchments. Our results may have useful implications for efficiently using limited observation data for hydrological model calibration in different climates.
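The two calibration metrics named above have standard closed forms, which can be sketched directly (the streamflow series below is made up for illustration):

```python
# Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance
# to the variance of the observations (1.0 = perfect fit, <= 0 = no
# better than predicting the observed mean).

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

# Percentage water balance error: relative bias in total simulated volume.
def water_balance_error_pct(obs, sim):
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

obs = [1.0, 2.0, 3.0, 4.0]   # hypothetical observed streamflow
sim = [1.1, 1.9, 3.2, 3.8]   # hypothetical simulated streamflow
```

A calibration run like the PSO search described above would maximize `nse` while keeping `water_balance_error_pct` near zero.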
The landing buffer is an important problem in research on bionic locust jumping robots, and different modes of landing and buffering can significantly affect the dynamic performance of the buffering process. Based on experimental observation, the different modes of landing and buffering are determined, which include different numbers of landing legs and different motion modes of the legs in the buffering process. A bionic locust mechanism is then established, in which springs replace the leg muscles to achieve a buffering effect. To reveal the dynamic performance of the bionic locust mechanism in the buffering process, a dynamic model is established for the different modes of landing and buffering; in particular, to analyze the buffering process conveniently, an equivalent vibration dynamic model of the mechanism is proposed. Given the support forces of the ground on the leg links, which can be obtained from the dynamic model, the spring forces of the legs and the impact resistance of each leg are the important parameters affecting buffering performance, and evaluation principles for buffering performance are proposed according to these parameters. Based on the dynamic model and these evaluation principles, the buffering performances in the different modes of landing and buffering are analyzed and compared on a horizontal plane and an inclined plane. The results show that the mechanism obtains better dynamic performance when the ends of the legs slide. This study offers basic theory for buffering dynamics and the evaluation of landing buffer performance, and it establishes a theoretical basis for further studies and engineering applications.
Evaluating the adaptability of a cantilever boring machine (CBM) through in-depth analysis of tunnel excavation data and rock mass parameters is a prerequisite for mechanical design and efficient excavation in underground space engineering. This paper presents a case study of a tunnelling performance prediction method for CBMs in sedimentary hard-rock tunnels in Karst landforms, using tunneling data and surrounding rock parameters. The uniaxial compressive strength (UCS), rock integrity factor (Kv), basic quality index ([BQ]), rock quality designation (RQD), Brazilian tensile strength (BTS), and brittleness index (BI) were introduced to construct a performance prediction database based on the hard-rock tunnels of Guiyang Metro Line 1 and Line 3, from which a performance prediction model for the cantilever boring machine was established. A deep belief network (DBN) was then introduced into the performance prediction model, and the model's reliability was verified against engineering data. The study showed that the influence of the surrounding rock parameters on the tunneling performance of the cantilever boring machine ranks as UCS > [BQ] > BTS > RQD > Kv > BI. The performance prediction model shows that the instantaneous cutting rate (ICR) correlates well with the surrounding rock parameters, and the model's accuracy depends on the reliability of the construction data. Predictions for the limestone and dolomite sections of Line 3 based on the DBN performance prediction model show that the measured and predicted ICR are consistent and that the built performance prediction model is reliable. The research results provide a theoretical reference for the applicability analysis and mechanical selection of cantilever boring machines for hard rock tunnels.
Because radiation belt electrons can pose a potential threat to the safety of satellites orbiting in space, it is of great importance to develop a reliable model that can predict the highly dynamic variations in outer radiation belt electron fluxes. In the present study, we develop a forecast model of radiation belt electron fluxes based on a data assimilation method, combining Van Allen Probes measurements with three-dimensional radiation belt numerical simulations. Our forecast model covers the entire outer radiation belt with a high temporal resolution (1 hour) and a spatial resolution of 0.25 L over a wide range of both electron energy (0.1-5.0 MeV) and pitch angle (5°-90°). On the basis of this model, we forecast hourly electron fluxes for the next 1, 2, and 3 days during an intense geomagnetic storm and evaluate the corresponding prediction performance. Our model can reasonably predict the storm-time evolution of radiation belt electrons with high prediction efficiency (up to ~0.8-1). The best prediction performance is found for ~0.3-3 MeV electrons at L = ~3.25-4.5, and it extends to higher L and lower energies with increasing pitch angle. Our results demonstrate that the developed forecast model can be a powerful tool to predict the spatiotemporal changes in outer radiation belt electron fluxes, with both scientific significance and practical implications.
The general-purpose processor (GPP) is an important platform for the fast Fourier transform (FFT), due to its flexibility, reliability, and practicality. FFT is a representative application intensive in both computation and memory access, so optimizing the FFT performance of a GPP also benefits the performance of many other applications. To facilitate the analysis of FFT, this paper proposes a theoretical model of FFT processing. The model gives a tight lower bound on the runtime of FFT on a GPP and guides architecture optimization for GPPs as well. Based on the model, two theorems on the optimization of architecture parameters are deduced, concerning the lower bounds on register number and memory bandwidth. Experimental results on different processor architectures (including the Intel Core i7 and Godson-3B) validate the performance model. These investigations were adopted in the development of Godson-3B, an industrial GPP: the optimization techniques deduced from our performance model improve FFT performance by about 40%, while incurring only 0.8% additional area cost. Consequently, Godson-3B solves the 1024-point single-precision complex FFT in 0.368 μs with about 40 W power consumption and, as far as we know, has the highest performance-per-watt in complex FFT among processors. This work could benefit the optimization of other GPPs as well.
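The flavor of such a runtime lower bound can be shown with a generic roofline-style sketch: take the larger of the compute-bound time (using the standard ~5 N log2 N flop estimate for a complex FFT) and the memory-bound time. This is not the paper's model, which is considerably more detailed, and the peak-flops and bandwidth figures below are hypothetical.

```python
import math

def fft_runtime_lower_bound(n, peak_flops, mem_bw_bytes, bytes_per_point=8):
    """Roofline-style lower bound on FFT runtime (seconds).
    n: FFT size; peak_flops: peak flop/s; mem_bw_bytes: bytes/s;
    bytes_per_point: size of one complex point (8 B = single precision)."""
    flops = 5.0 * n * math.log2(n)                 # standard FFT flop estimate
    compute_time = flops / peak_flops
    # the data must cross the memory interface at least once each way
    memory_time = 2.0 * n * bytes_per_point / mem_bw_bytes
    return max(compute_time, memory_time)

# Hypothetical machine: 100 Gflop/s peak, 20 GB/s memory bandwidth.
t = fft_runtime_lower_bound(1024, 100e9, 20e9)
```

For this hypothetical machine the 1024-point FFT is memory-bound (the bandwidth term dominates the compute term), which mirrors the paper's emphasis on memory bandwidth as a limiting architecture parameter.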
Evaluating the performance of individual features of WiMAX technology is a topic of widespread discussion. Currently, there is no quantitative way of measuring WiMAX technology so that wireless operators can meet their design objectives. This paper outlines a set of design criteria for WiMAX and provides a decision-making aid that ranks the importance of the criteria using the Analytic Hierarchy Process (AHP). This ranking should sufficiently reflect market expectations of the relative importance of the various design criteria. A model integrating AHP priorities with enhanced Data Envelopment Analysis (DEA) is the basis for formulating a technological value in a simple, comparable format. A case study shows how this technological value is used to evaluate a three-year network deployment plan. In the future, this model could be extended to WiMAX equipment suppliers for the purpose of validating performance targets of individual criteria and enhancing supplier roadmaps for future network development.
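The AHP priority computation mentioned above reduces to finding the principal eigenvector of a pairwise comparison matrix, which a short power iteration can sketch. The 3-criterion matrix below is a made-up example, not the paper's WiMAX criteria.

```python
# AHP priorities via power iteration on a pairwise comparison matrix.
# M[i][j] states how much more important criterion i is than criterion j;
# reciprocity requires M[j][i] = 1 / M[i][j].

def ahp_priorities(M, iters=100):
    n = len(M)
    w = [1.0 / n] * n                 # start from a uniform weight vector
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]        # renormalize so weights sum to 1
    return w

# Hypothetical judgments: A is 3x as important as B, 5x as important as C.
M = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 0.5, 1.0],
]
weights = ahp_priorities(M)
```

In the paper these weights would then feed the DEA stage as the relative importance of each design criterion.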
Seismic isolation effectively reduces seismic demands on building structures by isolating the superstructure from ground vibrations during earthquakes. However, isolation strategies give less attention to acceleration-sensitive systems and equipment. Meanwhile, as the isolation layer's displacement grows, the stiffness and frequency of traditional rolling and sliding isolation bearings increase, potentially causing self-centering and resonance concerns. As a result, a new conical pendulum bearing has been designed for acceleration-sensitive equipment to increase self-centering capacity, and additional viscous dampers are incorporated to enhance system damping. Moreover, a theoretical formulation for conical pendulum bearings is supplied to analyze the device's dynamic parameters, and shake table experiments are used to determine the proposed device's isolation efficiency under various conditions. According to the test results, the newly proposed devices show remarkable isolation performance in minimizing both acceleration and displacement responses. Finally, a numerical model of the isolation system is provided for further research, and its accuracy is demonstrated by the aforementioned experiments.
To improve the power consumption of parallel applications at runtime, modern processors provide frequency scaling and power limiting capabilities. In this work, a runtime strategy is proposed to maximize energy savings under a given performance degradation. Machine learning techniques were utilized to develop performance models that provide accurate performance prediction under changes in operating core-uncore frequency. Experiments performed on a node (28 cores) of a modern computing platform showed significant energy savings of as much as 26%, with performance degradation as low as 5%, under the proposed strategy compared with execution in the unlimited power case.
We compare the ability of coupled global climate models from phases 5 and 6 of the Coupled Model Intercomparison Project (CMIP5 and CMIP6, respectively) in simulating the temperature and precipitation climatology and interannual variability over China for the period 1961-2005 and the climatological East Asian monsoon for the period 1979-2005. All 92 models are able to simulate the geographical distribution of the above variables reasonably well. Compared with the earlier CMIP5 models, current CMIP6 models have nationally weaker cold biases, a similar nationwide overestimation of precipitation, a weaker underestimation of the southeast-northwest precipitation gradient, a comparable overestimation of the spatial variability of the interannual variability, and a similar underestimation of the strength of the winter monsoon over northern Asia. Pairwise comparison indicates that models have improved from CMIP5 to CMIP6 for climatological temperature, precipitation, and the winter monsoon, but display little improvement for interannual temperature and precipitation variability and the summer monsoon. The ability of the models relates to their horizontal resolutions in certain aspects. Both the multi-model arithmetic mean and the median display similar skill and outperform most of the individual models in all considered aspects.
Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually gives up performance in order to guarantee robustness, so an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to achieve high yaw rate tracking performance with guaranteed robustness. The proposed algorithm inherits the good model tracking ability of IMC control and guarantees robustness to model uncertainties. In order to separate the model tracking design from the robustness design, the improved 2 degree of freedom (DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of a double lane change maneuver and of crosswind disturbances are conducted to evaluate the robust control algorithm, on the basis of a nonlinear vehicle simulation model with a magic formula tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which can enhance vehicle stability and handling regardless of variations in the vehicle model parameters and external crosswind interference. The contradiction between performance and robustness of the active steering control algorithm is resolved, and higher control performance with robustness to model uncertainties is obtained.
Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared to the decline of the entire Arctic multiyear sea ice, changes in newly formed sea ice carry more thermodynamic and dynamic information on Arctic atmosphere-ocean-ice interaction and northern mid-high latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project (CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach, which gives equal weight to each model, essentially assumes that all models are independent and equally plausible, contradicting the large interdependencies in the ensemble and the discrepancies in the models' performance in reproducing observations. Therefore, instead of using the arithmetic mean of well-performing models or of all available models for projections, as in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble according to their performance and independence, to provide more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice, but we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme. Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario. Thereafter, it may decrease (or remain stable) if Arctic warming crosses a threshold (or is extensively constrained).
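A performance-and-independence weighting scheme of the kind described above can be sketched in a few lines: each model's weight grows with its closeness to observations and shrinks with its similarity to other ensemble members. The functional form below follows the commonly used ClimWIP-style exponential kernels, and all distances and shape parameters are illustrative, not from the study.

```python
import math

def model_weights(perf_dist, indep_dist, sigma_q, sigma_i):
    """perf_dist[i]: model i's distance to observations;
    indep_dist[i][j]: distance between models i and j;
    sigma_q, sigma_i: shape parameters for the two kernels."""
    n = len(perf_dist)
    w = []
    for i in range(n):
        num = math.exp(-(perf_dist[i] / sigma_q) ** 2)       # performance term
        den = 1.0 + sum(math.exp(-(indep_dist[i][j] / sigma_i) ** 2)
                        for j in range(n) if j != i)          # independence penalty
        w.append(num / den)
    s = sum(w)
    return [x / s for x in w]

perf = [0.1, 0.5, 0.1]            # model 1 sits farther from observations
indep = [[0.00, 1.00, 0.01],      # models 0 and 2 are near-duplicates
         [1.00, 0.00, 1.00],
         [0.01, 1.00, 0.00]]
w = model_weights(perf, indep, sigma_q=0.5, sigma_i=0.5)
```

Note how the near-duplicate pair (models 0 and 2) shares its weight through the denominator, so a cluster of similar models cannot dominate the projection the way it would under model democracy.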
Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on aquaculture in the coastal seas of Korea and Japan. Information on its expected movement tracks and the boundaries of its influence is very useful and important for establishing an effective reduction plan. In general, this information is supported by a red-tide (a.k.a. algal bloom) model. The performance of the model is highly dependent on the accuracy of its parameters, which are the coefficients of functions approximating the biological growth and loss patterns of C. polykrikoides. These parameters have been estimated using bioassay data composed of growth-limiting-factor and net-growth-rate value pairs. In the case of C. polykrikoides, the estimated parameters differ according to the dataset used, because the bioassay data are abundant compared to other algal species. Parameters estimated from one specific dataset can be viewed as locally optimized, because they are adjusted only to that dataset; when another dataset is used, the estimation error can be considerable. In this study, the parameters are estimated from all available datasets rather than any single one, and can thus be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the experimental and estimated rate values. Quantitative error analysis shows that the root-mean-square errors of the global parameters are approximately 25%-50% smaller than those of the local parameters; in addition, bias is removed completely in the case of the globally estimated parameters. These parameter sets can be used as the reference default values of a red-tide model because they are optimal and representative. However, additional tuning of the parameters using in-situ monitoring data is still highly recommended, because the bioassay data have limitations in representing in-situ coastal conditions.
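The local-versus-global estimation idea above can be sketched by fitting one response curve to all bioassay datasets pooled together, minimizing the squared error over every point rather than over a single dataset. The linear growth-rate model and the two "bioassay" datasets below are made up for illustration; the paper's growth and loss functions are more elaborate.

```python
def fit_line_pooled(datasets):
    """Ordinary least-squares fit of rate = a*factor + b over all
    (factor, rate) points pooled across the given datasets."""
    pts = [p for d in datasets for p in d]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def rmse(dataset, a, b):
    """Root-mean-square error of the fitted line on one dataset."""
    return (sum((y - (a * x + b)) ** 2 for x, y in dataset) / len(dataset)) ** 0.5

# Two hypothetical bioassays: (growth-limiting factor, net growth rate).
lab1 = [(10.0, 0.2), (20.0, 0.4), (30.0, 0.6)]
lab2 = [(10.0, 0.3), (20.0, 0.5), (30.0, 0.7)]
a, b = fit_line_pooled([lab1, lab2])
```

The pooled fit splits the difference between the two labs' offsets, which is exactly the bias-removal effect the abstract reports for the globally estimated parameters.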
Funding: Supported by the National Key Research and Development Program of China (Nos. 2017YFB0202105, 2016YFB0201305, 2016YFB0200803, 2016YFB0200300) and the National Natural Science Foundation of China (Nos. 61521092, 91430218, 31327901, 61472395, 61432018).
Abstract: Performance models provide insightful perspectives for predicting performance and proposing optimization guidance. Although there has been much research, pinpointing the bottlenecks of various memory access patterns and achieving highly accurate predictions for both regular and irregular programs across hardware configurations remain non-trivial. This work proposes a novel model called process-RAM-feedback (PRF) to quantify the overhead of computation and data transmission time on general-purpose multi-core processors. The PRF model predicts the instruction cost on a single core with a directed acyclic graph (DAG), and the transmission time of memory accesses between levels of the memory hierarchy with a newly designed cache simulator. Using performance modeling and a feedback optimization method, this paper applies the PRF model to analyze and optimize convolution, sparse matrix-vector multiplication, and Sn-sweep as case studies, covering typical regular kernels as well as irregular and data-dependent ones. Through the PRF model, it obtains optimization guidance for various sparsity structures, algorithm designs, and instruction-set support at different data sizes.
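The abstract does not detail the cache simulator itself; as a rough illustration of the idea, a single-level LRU simulator can replay an address trace and count the misses that must be served by the next memory level, from which a transmission-time estimate follows. All cache sizes and the bandwidth figure below are hypothetical, not the paper's configuration.

```python
from collections import OrderedDict

def simulate_lru(addresses, num_lines, line_size=64):
    """Replay an address trace against one LRU cache level; return (hits, misses)."""
    cache = OrderedDict()            # cache-line tag -> None, kept in LRU order
    hits = misses = 0
    for addr in addresses:
        tag = addr // line_size
        if tag in cache:
            hits += 1
            cache.move_to_end(tag)   # mark as most recently used
        else:
            misses += 1
            cache[tag] = None
            if len(cache) > num_lines:
                cache.popitem(last=False)   # evict the least recently used line
    return hits, misses

# Sequential 8-byte reads over 8 cache lines: 8 cold misses, 56 reuse hits.
trace = list(range(0, 512, 8))
hits, misses = simulate_lru(trace, num_lines=16)

# Transmission-time estimate for this level: bytes missed / downstream bandwidth.
t_transfer = misses * 64 / 25.6e9    # seconds, assuming 25.6 GB/s to the next level
```

Feeding the same trace through simulators for each level of the hierarchy, and summing the per-level transfer times, is the general shape of the memory-side cost the PRF model combines with the DAG-based compute cost.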
Abstract: Stone mastic asphalt (SMA) has not been widely used in the pavement industry, and there are no detailed design specifications for this type of asphalt. Long-term behavior data for this pavement type are therefore not widely accessible, and no performance model has been established for SMA. The main purpose of this study was to combine expert experience (via a Markov-chain process) with data from field experiments to propose a model of SMA performance using a Bayesian approach. Combining these sources yielded a well-organized method for developing a performance model for SMA pavements, for which long-term data were not available. Finally, a linear performance model was established to calculate the SMA service life. The service life of SMA can be predicted explicitly from the developed performance model, which has been validated using a new set of data.
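The abstract gives only the model's form ("linear") and the two information sources. As a sketch of the Bayesian idea, a conjugate normal update can combine an expert prior on the annual deterioration rate (standing in for the Markov-chain elicitation) with field measurements; every number below is hypothetical.

```python
import numpy as np

# Prior on the annual deterioration rate b (condition-index points/year),
# e.g. elicited from an expert Markov-chain transition model (invented values).
mu0, var0 = -2.0, 0.5 ** 2
sigma2 = 1.0                      # assumed field-measurement noise variance

# Field observations: (pavement age in years, condition drop from the initial value)
ages = np.array([2.0, 4.0, 6.0, 8.0])
drops = np.array([-4.2, -8.1, -11.8, -16.3])

# Conjugate normal update for the slope of the zero-intercept model y = b * t
prec_post = 1.0 / var0 + (ages @ ages) / sigma2
mu_post = (mu0 / var0 + (ages @ drops) / sigma2) / prec_post

# Service life: years until the condition drops by a failure threshold of, say, 20 points
service_life = 20.0 / -mu_post
```

The posterior slope blends the expert prior with the data in proportion to their precisions, which is exactly why the approach works for a pavement type without long-term records: with few observations the prior dominates, and field data gradually take over.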
Abstract: Today, in the field of computer networks, new services have been developed on the Internet and intranets, including mail servers, database management, audio, video, and the Apache web server itself. The number of solutions for this server is therefore growing continuously; these services are becoming more and more complex and expensive without fulfilling users' needs. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that determines the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics such as throughput, resource utilization, and response time of a system through measurement and modeling by simulation. Finally, we present a simple queue model of an Apache web server that reasonably represents the behavior of a saturated web server, built with the Simulink model in Matlab (Matrix Laboratory), and that also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations we conducted.
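The Simulink model itself is not reproduced in the abstract; the textbook M/M/1 queue is a minimal stand-in for "a simple queue model of an Apache web server" under Poisson arrivals and exponential service. The request rates below are illustrative only.

```python
def mm1_metrics(lam, mu):
    """Mean response time and queue length of an M/M/1 queue (requires lam < mu)."""
    if lam >= mu:
        raise ValueError("server saturated: arrival rate must stay below service rate")
    rho = lam / mu                 # server utilization
    W = 1.0 / (mu - lam)           # mean response time (waiting + service)
    L = rho / (1.0 - rho)          # mean number of requests in the system
    return rho, W, L

# 80 requests/s arriving at a server that can service 100 requests/s
rho, W, L = mm1_metrics(lam=80.0, mu=100.0)
```

The formula makes the saturation behavior in the abstract concrete: as `lam` approaches `mu`, `W` and `L` blow up, which is the regime the paper's simulation studies.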
Abstract: In this paper, we present a novel approach to modeling user request patterns on the World Wide Web. Instead of focusing on user traffic for web pages, we capture user interaction at the object level of the web pages. Our framework model consists of three sub-models: one for user file access, one for web pages, and one for storage servers. Web pages are assumed to consist of objects of different types and sizes, characterized using several categories: articles, media, and mosaics. The model is implemented as a discrete event simulation and then used to investigate the performance of our system over a variety of model parameters. Our performance measure of choice is mean response time; by varying the composition of web pages through our categories, we find that our framework model captures a wide range of conditions that serve as a basis for generating a variety of user request patterns. In addition, we are able to establish a set of parameters that can be used as base cases. One goal of this research is for the framework model to be general enough that its parameters can be varied to serve as input for investigating other distributed applications that require generated user request access patterns.
Funding: Supported by the National Technology Extension Fund of Forestry, "Forest Vegetation Carbon Storage Monitoring Technology Based on Watershed Algorithm" ([2019]06), and the Fundamental Research Funds for the Central Universities (No. PTYX202107).
Abstract: Since the launch of the Google Earth Engine (GEE) cloud platform in 2010, it has been widely used, leading to a wealth of valuable information. However, the potential of GEE for forest resource management has not been fully exploited. To extract dominant woody plant species, GEE combined Sentinel-1 (S1) and Sentinel-2 (S2) data with National Forest Resources Inventory (NFRI) and topographic data, resulting in a 10 m resolution multimodal geospatial dataset for subtropical forests in southeast China. Spectral and texture features, red-edge bands, and vegetation indices of the S1 and S2 data were computed. A hierarchical model obtained information on forest distribution and area and on the dominant woody plant species. The results suggest that combining S1 winter data with S2 data over the whole year enhances the accuracy of forest distribution and area extraction compared to using either data source independently. Similarly, for dominant woody species recognition, using S1 winter data and S2 data across all four seasons proved accurate. Including terrain factors and removing spatial correlation from the NFRI sample points further improved recognition accuracy. The optimal forest extraction achieved an overall accuracy (OA) of 97.4% and a map-level image classification efficacy (MICE) of 96.7%; OA and MICE were 83.6% and 80.7% for dominant species extraction, respectively. The high accuracy and efficacy values indicate that the hierarchical recognition model based on multimodal remote sensing data performed extremely well in extracting information about dominant woody plant species. Visualizing the results in a GEE application allows an intuitive display of forest and species distributions, offering significant convenience for forest resource monitoring.
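The vegetation indices mentioned in the abstract reduce to simple band ratios; for example, NDVI from the Sentinel-2 NIR and red bands, and a red-edge variant. The reflectance values below are made up for illustration, not real pixels.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, e.g. from Sentinel-2 B8 (NIR) and B4 (red)."""
    return (nir - red) / (nir + red)

def ndre(nir, red_edge):
    """Red-edge NDVI variant using a Sentinel-2 red-edge band (e.g. B5)."""
    return (nir - red_edge) / (nir + red_edge)

# Illustrative surface-reflectance values on a 0-1 scale (invented)
v = ndvi(nir=0.45, red=0.05)   # dense canopy gives a high NDVI
```

In a GEE workflow these per-pixel ratios are typically computed with `normalizedDifference` over image collections; the arithmetic is identical to the scalar version shown here.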
Funding: Supported by the National Key Technology Research and Development Program (Nos. 2018YFB2202701 and 2019YFB2205003), the National Major Research Program of the Ministry of Science and Technology of China (No. 2016YFA0201903), and the Science and Technology Program of the Beijing Science and Technology Commission (No. Z201100004220003).
Abstract: Building a post-layout simulation performance model is essential to closing the design loop of analog circuits, but it is a challenging task because of the high-dimensional design space and expensive simulation cost. To facilitate efficient modeling, this paper proposes a Global Mapping Model Fusion (GMMF) technique. The key idea of GMMF is to reuse the schematic-level model trained by an Artificial Neural Network (ANN) algorithm and combine it with a few mapping coefficients to build the post-layout simulation model. Furthermore, differential evolution, an efficient global optimization algorithm, is applied to determine the optimal mapping coefficients from few samples. In GMMF, only a small number of mapping coefficients are unknown, so the number of post-layout samples needed is significantly reduced. To enhance the practical utility of the proposed technique, two specific mapping relations, i.e., linear or weakly nonlinear and nonlinear, are carefully considered in this paper. We conduct experiments on two topologies, a two-stage operational amplifier and a comparator, in different commercial processes. All the simulation data for modeling are obtained from a parametric design framework. A more than 5× runtime speedup is achieved over ANN without surrendering any accuracy.
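As a sketch of GMMF's linear mapping case, a post-layout model can be built by fitting just two coefficients that map schematic-level predictions onto a handful of post-layout samples. Here the pre-trained schematic model is reduced to given numbers, the data are invented, and a plain least-squares solve stands in for the paper's differential-evolution search.

```python
import numpy as np

# Schematic-level model predictions for four design points (hypothetical values
# standing in for a trained ANN's outputs)
sch = np.array([1.0, 1.5, 2.0, 2.5])
# Matching post-layout simulation results for the same points (also invented)
post = np.array([0.85, 1.32, 1.78, 2.25])

# Linear mapping post ≈ a * sch + b: only two coefficients to determine,
# which is why so few expensive post-layout samples are needed.
A = np.vstack([sch, np.ones_like(sch)]).T
(a, b), *_ = np.linalg.lstsq(A, post, rcond=None)

def fused_model(x_schematic):
    """Post-layout prediction reusing the schematic-level model's output."""
    return a * x_schematic + b
```

The design choice is the point: the expensive high-dimensional learning stays at the schematic level, and only a low-dimensional correction is fit against post-layout data.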
Funding: Project supported by the Taiyuan Institute of Technology School Foundation.
Abstract: This paper presents a new model for studying the static performance of a GaN metal-semiconductor field-effect transistor (MESFET), based on the metal-semiconductor interface states of the Schottky junction. The I-V characteristics of the MESFET for different channel lengths and different operating regimes (pinched off or not) are obtained with our model, which depends strictly on the electrical parameters, such as the drain-gate capacitance Cgd, the source-gate capacitance Cgs, the transconductance, and the conductance. To determine the accuracy of our model, root-mean-square (RMS) errors were calculated; the experimental data agree with our model. In addition, the minimum values of the electrical parameters were calculated to obtain the maximum cut-off frequency for the GaN MESFET.
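The RMS error used for validation is the usual root-mean-square deviation between measured and modeled current points; the drain-current samples below are illustrative, not real GaN MESFET data.

```python
import math

def rms_error(measured, modeled):
    """Root-mean-square error between measured and modeled I-V points."""
    n = len(measured)
    return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, modeled)) / n)

# Illustrative drain-current samples in mA (invented)
i_meas = [1.0, 2.1, 3.0, 3.9]
i_model = [1.1, 2.0, 3.1, 4.0]
err = rms_error(i_meas, i_model)
```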
Abstract: This paper examines the performance of an atmospheric general circulation model (AGCM) developed at the State Key Laboratory of Numerical Modeling for Atmospheric Sciences and Geophysical Fluid Dynamics, Institute of Atmospheric Physics (LASG/IAP). It is a spectral model truncated at R42 (2.8125° lon × 1.66° lat) resolution with nine vertical levels, referred to hereafter as R42L9/LASG; it is also the new version of the atmospheric component model R15L9 of the global ocean-atmosphere-land system (GOALS/LASG). A 40-year simulation in which the model is forced with the climatological monthly mean sea surface temperature is compared with the 40-year (1958-97) U.S. National Centers for Environmental Prediction (NCEP) global reanalysis and the 22-year (1979-2000) Xie-Arkin monthly precipitation climatology. The mean DJF and JJA geographical distributions of precipitation, sea level pressure, 500-hPa geopotential height, 850-hPa and 200-hPa zonal wind, and other fields, averaged over the last 30 years of the R42L9 integration, are analyzed. Results show that the model reproduces the observed basic patterns well, particularly precipitation over the East Asian region. Comparing the new model with R15L9/LASG, the old version with coarse resolution (nearly 7.5° lon × 4.5° lat), shows an obvious improvement in the simulation of regional climate, especially precipitation. The weaknesses of the simulation and future improvements of the model are also discussed.
Funding: Supported by the National Basic Research Program of China (973 Program, Grant No. 2010CB951102), the National Supporting Plan Program of China (Grants No. 2007BAB28B01 and 2008BAB42B03), the National Natural Science Foundation of China (Grant No. 50709042), and the Regional Water Theme in the Water for a Healthy Country Flagship.
Abstract: In order to assess the effects of calibration data series length on the performance and optimal parameter values of a hydrological model in ungauged or data-limited catchments (where data are non-continuous and fragmentary), we used non-continuous calibration periods to obtain more independent streamflow data for calibrating the SIMHYD (simple hydrology) model. Nash-Sutcliffe efficiency and percentage water balance error were used as performance measures, and the particle swarm optimization (PSO) method was used to calibrate the rainfall-runoff model. Randomly sampled data series of different lengths, ranging from one year to ten years, were used to study the impact of calibration data series length. Fifty-five relatively unimpaired catchments located all over Australia, with daily precipitation, potential evapotranspiration, and streamflow data, were tested to obtain more general conclusions. The results show that longer calibration data series do not necessarily result in better model performance. In general, eight years of data are sufficient to obtain steady estimates of model performance and parameters for the SIMHYD model. It is also shown that most humid catchments require less calibration data to obtain good performance and stable parameter values, and the model performs better in humid and semi-humid catchments than in arid ones. Our results may have useful and interesting implications for the efficient use of limited observation data in hydrological model calibration across climates.
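The two performance measures named in the abstract have standard definitions, sketched below on made-up streamflow series.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean flow."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def water_balance_error(obs, sim):
    """Percentage bias of the total simulated streamflow volume."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 100.0 * (sim.sum() - obs.sum()) / obs.sum()

# Invented daily streamflow values (e.g. mm/day) for illustration
q_obs = [3.0, 5.0, 10.0, 4.0, 2.0]
q_sim = [2.5, 5.5, 9.0, 4.5, 2.5]
```

A calibration such as the paper's PSO search would maximize `nse` (and penalize `water_balance_error`) over the model's parameter space for each sampled calibration period.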
Funding: Supported by the National Natural Science Foundation of China (Grant 51375035) and the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant 20121102110021).
Abstract: Landing buffering is an important problem in research on bionic locust jumping robots, and the different landing and buffering modes can significantly affect the dynamic performance of the buffering process. Based on experimental observation, the different landing and buffering modes are determined, covering different numbers of landing legs and different leg motion modes during buffering. A bionic locust mechanism is then established, in which springs replace the leg muscles to achieve a buffering effect. To reveal the dynamic performance of the buffering process of the bionic locust mechanism, a dynamic model is established for the different landing and buffering modes; in particular, to analyze the buffering process conveniently, an equivalent vibration dynamic model of the mechanism is proposed. Given the support forces of the ground on the leg links, which can be obtained from the dynamic model, the spring forces of the legs and the impact resistance of each leg are the important parameters affecting buffering performance, and evaluation principles for buffering performance are proposed based on these parameters. Using the dynamic model and these evaluation principles, the buffering performance is analyzed and compared across landing and buffering modes on a horizontal plane and on an inclined plane. The results show that the mechanism obtains better dynamic performance when the ends of the legs slide. This study offers basic theory for buffering dynamics and for evaluating landing buffer performance, and it establishes a theoretical basis for further studies and engineering applications.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52178393), the Science and Technology Innovation Team of the Shaanxi Innovation Capability Support Plan (Grant No. 2020TD005), and the Science and Technology Innovation Project of China Railway Construction Bridge Engineering Bureau Group Co., Ltd. (Grant No. DQJ-2020-B07).
Abstract: Evaluating the adaptability of a cantilever boring machine (CBM) through in-depth analysis of tunnel excavation data and rock mass parameters is a prerequisite for mechanical design and efficient excavation in underground space engineering. This paper presents a case study of a tunnelling performance prediction method for CBMs in sedimentary hard-rock tunnels in karst landforms, using tunnelling data and surrounding rock parameters. The uniaxial compressive strength (UCS), rock integrity factor (Kv), basic quality index ([BQ]), rock quality designation (RQD), Brazilian tensile strength (BTS), and brittleness index (BI) were introduced to construct a performance prediction database based on the hard-rock tunnels of Guiyang Metro Lines 1 and 3, from which a performance prediction model for the cantilever boring machine was established. A deep belief network (DBN) was then introduced into the performance prediction model, and the model's reliability was verified against engineering data. The study shows that the surrounding rock parameters influence the tunnelling performance of the cantilever boring machine in the order UCS > [BQ] > BTS > RQD > Kv > BI. The performance prediction model shows that the instantaneous cutting rate (ICR) correlates well with the surrounding rock parameters, and the model's accuracy depends on the reliability of the construction data. Predictions for the limestone and dolomite sections of Line 3 based on the DBN performance prediction model show that the measured and predicted ICR are consistent and that the model is reliable. These results provide a theoretical reference for the applicability analysis and mechanical selection of cantilever boring machines for hard-rock tunnels.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 42025404, 42188101, and 42241143), the National Key R&D Program of China (Grant Nos. 2022YFF0503700 and 2022YFF0503900), the B-type Strategic Priority Program of the Chinese Academy of Sciences (Grant No. XDB41000000), and the Fundamental Research Funds for the Central Universities (Grant No. 2042022kf1012).
Abstract: Because radiation belt electrons can pose a potential threat to the safety of satellites orbiting in space, it is of great importance to develop a reliable model that can predict the highly dynamic variations in outer radiation belt electron fluxes. In the present study, we develop a forecast model of radiation belt electron fluxes based on the data assimilation method, using Van Allen Probe measurements combined with three-dimensional radiation belt numerical simulations. Our forecast model covers the entire outer radiation belt with a high temporal resolution (1 hour) and a spatial resolution of 0.25 L over a wide range of both electron energy (0.1-5.0 MeV) and pitch angle (5°-90°). On the basis of this model, we forecast hourly electron fluxes for the next 1, 2, and 3 days during an intense geomagnetic storm and evaluate the corresponding prediction performance. Our model can reasonably predict the storm-time evolution of radiation belt electrons with high prediction efficiency (up to ~0.8-1). The best prediction performance is found for ~0.3-3 MeV electrons at L = ~3.25-4.5, and it extends to higher L and lower energies with increasing pitch angle. Our results demonstrate that the forecast model developed here can be a powerful tool for predicting the spatiotemporal changes in outer radiation belt electron fluxes, with both scientific significance and practical implications.
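The paper's assimilation is three-dimensional, but the core analysis step of any data assimilation scheme can be sketched in scalar form as a Kalman-style blend of a model forecast with an observation, weighted by their uncertainties. The flux values and variances below are invented for illustration.

```python
def assimilate(x_model, p_model, y_obs, r_obs):
    """Scalar Kalman-style analysis step blending a model forecast with an observation."""
    k = p_model / (p_model + r_obs)            # Kalman gain: trust ratio
    x_analysis = x_model + k * (y_obs - x_model)
    p_analysis = (1.0 - k) * p_model           # analysis uncertainty shrinks
    return x_analysis, p_analysis

# Hypothetical log10 electron flux: forecast 4.0 (variance 0.04),
# satellite-style observation 4.4 (variance 0.01)
x_a, p_a = assimilate(4.0, 0.04, 4.4, 0.01)
```

Because the observation here is four times more precise than the forecast, the analysis lands much closer to the observation; repeating this step each hour as new measurements arrive is what keeps the forecast model anchored to reality.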
Funding: Supported by the National Science and Technology Major Project under Grant Nos. 2009ZX01028-002-003, 2009ZX01029-001-003, and 2010ZX01036-001-002, and the National Natural Science Foundation of China under Grant Nos. 61050002, 61003064, and 60921002.
Abstract: The general-purpose processor (GPP) is an important platform for the fast Fourier transform (FFT), owing to its flexibility, reliability, and practicality. The FFT is a representative application intensive in both computation and memory access, so optimizing FFT performance on a GPP also benefits many other applications. To facilitate the analysis of the FFT, this paper proposes a theoretical model of FFT processing. The model gives a tight lower bound on the runtime of the FFT on a GPP and also guides architecture optimization for GPPs. Based on the model, two theorems on the optimization of architecture parameters are deduced, concerning lower bounds on register count and memory bandwidth. Experimental results on different processor architectures (including the Intel Core i7 and Godson-3B) validate the performance model. The above investigations were adopted in the development of Godson-3B, an industrial GPP: the optimization techniques deduced from our performance model improve FFT performance by about 40% while incurring only 0.8% additional area cost. Consequently, Godson-3B solves a 1024-point single-precision complex FFT in 0.368 μs with about 40 W of power consumption, and to our knowledge has the highest performance per watt in complex FFT among processors. This work could benefit the optimization of other GPPs as well.
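The paper's bound itself is not given in the abstract; a common roofline-style sketch, using the standard 5N log2 N flop count for a complex FFT, shows how such a lower bound can be formed as the larger of the compute limit and the memory limit. The hardware figures below are hypothetical, not Godson-3B's.

```python
import math

def fft_runtime_lower_bound(n, peak_flops, mem_bw, bytes_per_point=8):
    """Roofline-style lower bound: the run can beat neither compute nor memory limits."""
    flops = 5.0 * n * math.log2(n)            # standard complex-FFT operation count
    bytes_moved = 2.0 * n * bytes_per_point   # at least one read and one write pass
    return max(flops / peak_flops, bytes_moved / mem_bw)

# 1024-point single-precision complex FFT (8 bytes/point) on a hypothetical
# core with 100 GFLOPS peak and 50 GB/s memory bandwidth
t = fft_runtime_lower_bound(1024, peak_flops=100e9, mem_bw=50e9)
```

Which of the two terms dominates tells you whether registers/ALUs or memory bandwidth is the binding constraint, which is the same question the paper's two architecture-parameter theorems address.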
Abstract: Evaluating the performance of individual features of WiMAX technology is a topic of widespread discussion. Currently, there is no quantitative way of measuring WiMAX technology that lets wireless operators meet their design objectives. This paper outlines a set of design criteria for WiMAX and provides a decision-making aid that ranks the importance of the criteria using the Analytic Hierarchy Process (AHP). This ranking should sufficiently reflect market expectations of the relative importance of the various design criteria. A model integrating AHP priorities with enhanced Data Envelopment Analysis (DEA) is the basis for formulating a technological value in a simple, comparable format. A case study shows how this technological value is used to evaluate a three-year network deployment plan. In the future, this model could be extended to WiMAX equipment suppliers for validating performance targets of individual criteria and enhancing supplier roadmaps for future network development.
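The AHP ranking step works from a reciprocal pairwise-comparison matrix; the geometric-mean method below is one standard way to extract the priority vector. The three criteria and the judgment values are invented for illustration, not the paper's.

```python
import math

def ahp_priorities(M):
    """Priority vector of a pairwise-comparison matrix via the geometric-mean method."""
    n = len(M)
    gm = [math.prod(row) ** (1.0 / n) for row in M]   # row geometric means
    s = sum(gm)
    return [g / s for g in gm]                        # normalize to sum to 1

# Three hypothetical WiMAX design criteria: coverage, throughput, cost.
# M[i][j] = how much more important criterion i is than j (reciprocal matrix).
M = [[1.0,  2.0, 4.0],
     [0.5,  1.0, 2.0],
     [0.25, 0.5, 1.0]]
w = ahp_priorities(M)
```

For a perfectly consistent matrix like this one, the geometric-mean method agrees with the principal-eigenvector method AHP classically prescribes; in practice one would also check the consistency ratio before trusting the weights.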
Funding: Supported by the Scientific Research Fund of the Institute of Engineering Mechanics, CEA, under Grant Nos. 2019A03 and 2021D12, and the National Key R&D Program of China under No. 2018YFC1504404.
Abstract: Seismic isolation effectively reduces seismic demands on building structures by isolating the superstructure from ground vibrations during earthquakes. However, isolation strategies give less attention to acceleration-sensitive systems and equipment. Meanwhile, as the isolation layer's displacement grows, the stiffness and frequency of traditional rolling and sliding isolation bearings increase, potentially causing self-centering and resonance concerns. As a result, a new conical pendulum bearing has been selected for acceleration-sensitive equipment to increase self-centering capacity, and additional viscous dampers are incorporated to enhance system damping. The theoretical formulation of conical pendulum bearings is supplied to analyze the device's dynamic parameters, and shake table experiments are used to determine the proposed device's isolation efficiency under various conditions. According to the test results, the newly proposed devices show remarkable isolation performance in minimizing both acceleration and displacement responses. Finally, a numerical model of the isolation system is provided for further research, and its accuracy is demonstrated by the aforementioned experiments.
Abstract: To reduce the power consumption of parallel applications at runtime, modern processors provide frequency scaling and power limiting capabilities. In this work, a runtime strategy is proposed to maximize energy savings under a given performance degradation. Machine learning techniques were used to develop performance models that accurately predict performance as the operating core and uncore frequencies change. Experiments performed on a node (28 cores) of a modern computing platform showed significant energy savings of as much as 26%, with performance degradation as low as 5%, under the proposed strategy compared with execution in the unlimited-power case.
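The reported trade-off is energy-delay bookkeeping: with average power and runtime measured at two frequency settings, the savings and slowdown follow directly. The wattages and runtimes below are made up to roughly match the abstract's percentages.

```python
def energy_savings(p_base, t_base, p_scaled, t_scaled):
    """Percent energy saved and performance lost when scaling core/uncore frequency."""
    e_base = p_base * t_base          # energy = average power x runtime
    e_scaled = p_scaled * t_scaled
    saving = 100.0 * (e_base - e_scaled) / e_base
    slowdown = 100.0 * (t_scaled - t_base) / t_base
    return saving, slowdown

# Hypothetical node-level numbers: 300 W for 100 s at full frequency
# versus 211 W for 105 s at a reduced core-uncore frequency
saving, slowdown = energy_savings(300.0, 100.0, 211.0, 105.0)
```

A runtime strategy like the paper's searches the frequency space for the setting that maximizes `saving` while keeping the predicted `slowdown` below the user's degradation budget.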
Abstract: We compare the ability of coupled global climate models from phases 5 and 6 of the Coupled Model Intercomparison Project (CMIP5 and CMIP6, respectively) to simulate the temperature and precipitation climatology and interannual variability over China for the period 1961-2005 and the climatological East Asian monsoon for the period 1979-2005. All 92 models simulate the geographical distribution of the above variables reasonably well. Compared with the earlier CMIP5 models, the current CMIP6 models have nationally weaker cold biases, a similar nationwide overestimation of precipitation, a weaker underestimation of the southeast-northwest precipitation gradient, a comparable overestimation of the spatial variability of the interannual variability, and a similar underestimation of the strength of the winter monsoon over northern Asia. Pairwise comparison indicates that the models have improved from CMIP5 to CMIP6 for climatological temperature, precipitation, and the winter monsoon, but display little improvement for interannual temperature and precipitation variability and the summer monsoon. The models' ability relates to their horizontal resolution in certain aspects. Both the multi-model arithmetic mean and the median display similar skill and outperform most individual models in all considered aspects.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51375009), the PhD Research Foundation of Liaocheng University, China (Grant No. 318051523), and the Tsinghua University Initiative Scientific Research Program, China.
Abstract: Because of a vehicle's external disturbances and model uncertainties, robust control algorithms have gained popularity in vehicle stability control. Robust control usually sacrifices performance in order to guarantee robustness, so an improved robust internal model control (IMC) algorithm blending model tracking and internal model control is put forward for the active steering system, in order to achieve high yaw-rate tracking performance with guaranteed robustness. The proposed algorithm inherits the good model tracking ability of IMC and remains robust to model uncertainties. To separate the model tracking design from the robustness design, the improved two-degree-of-freedom (2-DOF) robust internal model controller structure is derived from the standard Youla parameterization. Simulations of double-lane-change maneuvers and of crosswind disturbances were conducted to evaluate the robust control algorithm, based on a nonlinear vehicle simulation model with a Magic Formula tyre model. Results show that the established 2-DOF robust IMC method has better model tracking ability and a guaranteed level of robustness and robust performance, which enhances vehicle stability and handling regardless of variations in the vehicle model parameters and external crosswind interference. The contradiction between the performance and robustness of active steering control algorithms is thereby resolved, and higher control performance with guaranteed robustness to model uncertainties is obtained.
Funding: Supported by the Chinese-Norwegian Collaboration Projects within Climate Systems jointly funded by the National Key Research and Development Program of China (Grant No. 2022YFE0106800), the Research Council of Norway funded projects MAPARC (Grant No. 328943) and COMBINED (Grant No. 328935), the National Natural Science Foundation of China (Grant No. 42075030), and the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX23_1314).
Abstract: Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared to the decline of the entire Arctic multiyear sea ice, changes in newly formed sea ice carry more thermodynamic and dynamic information on Arctic atmosphere-ocean-ice interaction and on northern mid-to-high-latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project (CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach, which gives equal weight to each model, essentially assumes that all models are independent and equally plausible; this contradicts the fact that there are large interdependencies in the ensemble and discrepancies in the models' ability to reproduce observations. Therefore, instead of using the arithmetic mean of well-performing models or of all available models for projections, as in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble by their performance and independence to provide more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice, but we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme. Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario. Thereafter, it may decrease (or remain stable) if Arctic warming crosses a threshold (or is extensively constrained).
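Performance-plus-independence weighting schemes of this family commonly take a form where skill raises a model's weight and similarity to other models lowers it (a Knutti-style formula). The distances and shape parameters below are invented; the paper's actual scheme and metrics may differ.

```python
import numpy as np

def model_weights(d_perf, d_indep, sigma_d, sigma_s):
    """Skill-and-independence weights: w_i ∝ exp(-(D_i/σ_D)²) / Σ_j exp(-(S_ij/σ_S)²).

    d_perf[i]    : distance of model i from observations (smaller = more skillful)
    d_indep[i][j]: distance between models i and j (S[i][i] = 0 supplies the "+1")
    """
    D = np.asarray(d_perf, float)
    S = np.asarray(d_indep, float)
    perf = np.exp(-((D / sigma_d) ** 2))
    dup = np.sum(np.exp(-((S / sigma_s) ** 2)), axis=1)   # counts near-duplicates
    w = perf / dup
    return w / w.sum()

# Three models: the first two are near-duplicates; the third is independent
# but slightly less skillful (all numbers invented).
D = [0.5, 0.5, 1.0]
S = [[0.0, 0.1, 2.0],
     [0.1, 0.0, 2.0],
     [2.0, 2.0, 0.0]]
w = model_weights(D, S, sigma_d=1.0, sigma_s=0.5)
```

The two near-duplicates split the weight a single such model would get, which is exactly how the scheme counters the interdependence problem of model democracy.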
Funding: Part of the project "Development of Korea Operational Oceanographic System (KOOS), Phase 2", funded by the Ministry of Oceans and Fisheries, Korea, and part of the project "Cooperative Project on Korea-China Bilateral Committee on Ocean Science", funded by the Ministry of Oceans and Fisheries, Korea, and the China-Korea Joint Ocean Research Center.
Abstract: Cochlodinium polykrikoides is a notoriously harmful algal species that inflicts severe damage on aquaculture in the coastal seas of Korea and Japan. Information on its expected movement tracks and boundaries of influence is very useful and important for effectively establishing a mitigation plan. In general, such information is supplied by a red-tide (i.e., algal bloom) model, whose performance is highly dependent on the accuracy of its parameters, which are the coefficients of functions approximating the biological growth and loss patterns of C. polykrikoides. These parameters have been estimated using bioassay data composed of pairs of growth-limiting factors and net growth rate values. In the case of C. polykrikoides, the estimated parameters differ depending on which dataset is used, because the bioassay data are relatively abundant compared with those of other algal species. Parameters estimated from one specific dataset can be viewed as locally optimized, because they are adjusted only to that dataset; when another dataset is used, the estimation error can be considerable. In this study, the parameters are estimated from all available datasets rather than any single one, and can thus be considered globally optimized. The cost function for the optimization is defined as the integrated mean squared estimation error, i.e., the difference between the experimental and estimated rate values. Quantitative error analysis shows that the root-mean-square errors of the global parameters are approximately 25%–50% smaller than those of the local parameters, and the bias is removed completely in the case of the globally estimated parameters. The parameter sets can be used as reference default values in a red-tide model because they are optimal and representative. However, additional tuning of the parameters using in-situ monitoring data is still required, because bioassay data have limitations in representing in-situ coastal conditions.
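In the simplest linear case, the globally optimized estimation reduces to pooling all bioassay datasets into one least-squares fit. The toy example below uses two fabricated datasets to show the point of the paper's comparison: the pooled ("global") fit generalizes better to a dataset it was not tuned on than a fit locally optimized to a single dataset.

```python
import numpy as np

def fit_growth(factor, rate):
    """Least-squares fit of rate ≈ a * factor + b; returns (a, b)."""
    A = np.vstack([factor, np.ones_like(factor)]).T
    coef, *_ = np.linalg.lstsq(A, np.asarray(rate, float), rcond=None)
    return coef

def rmse(factor, rate, coef):
    """Root-mean-square estimation error of a fitted line on one dataset."""
    pred = coef[0] * np.asarray(factor) + coef[1]
    return float(np.sqrt(np.mean((np.asarray(rate) - pred) ** 2)))

# Two hypothetical bioassay datasets: (growth-limiting factor, net growth rate /day)
set1 = (np.array([1.0, 2.0, 3.0]), np.array([0.10, 0.30, 0.50]))
set2 = (np.array([1.0, 2.0, 3.0]), np.array([0.20, 0.36, 0.52]))

local = fit_growth(*set1)                       # locally optimized: dataset 1 only
pooled = fit_growth(np.concatenate([set1[0], set2[0]]),
                    np.concatenate([set1[1], set2[1]]))   # globally optimized

err_local = rmse(*set2, local)    # local parameters applied to the other dataset
err_pooled = rmse(*set2, pooled)  # pooled parameters on the same dataset
```

Real growth-limiting functions (e.g. of temperature or salinity) are nonlinear, so the paper's cost function integrates the squared error over all datasets rather than fitting a straight line, but the local-versus-global contrast is the same.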