Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulations have remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project Phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit condition, measured as the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climatological mean drought characteristics in drylands, with occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of precipitation (water deficit) anomalies associated with dryland droughts is overestimated by 28% (24%) compared with observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are also reasonably reproduced.
Still, the increases in drought characteristics and the associated precipitation and water deficit anomalies are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to apply bias correction to CMIP6 outputs in drought-related studies over drylands.
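The systematic wet bias quantified above is often handled with multiplicative (ratio) scaling against an observed climatology. A minimal sketch of such ratio-based bias correction, using hypothetical numbers rather than the paper's data or method:

```python
def ratio_bias_correction(model_series, model_clim_mean, obs_clim_mean):
    """Scale a modeled precipitation series so its climatological mean
    matches observations (simple multiplicative bias correction)."""
    if model_clim_mean <= 0:
        raise ValueError("climatological mean must be positive")
    factor = obs_clim_mean / model_clim_mean
    return [v * factor for v in model_series]

# Hypothetical example: a model that overestimates precipitation by 33%
model = [133.0, 66.5, 199.5]  # mm/month, modeled
corrected = ratio_bias_correction(model, model_clim_mean=133.0,
                                  obs_clim_mean=100.0)
# corrected mean now matches the observed climatology of 100 mm/month
```

More sophisticated approaches (e.g., quantile mapping) correct the full distribution rather than only the mean, which matters for drought tails.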
To study the anti-explosion protection effect of polyurea coating on reinforced concrete box girders, two segmental girder specimens were made at a scale of 1:3, numbered G (without polyurea coating) and PCG (with polyurea coating). The failure characteristics and dynamic responses of the specimens were compared through explosion tests. The reliability of the numerical simulation using LS-DYNA software was verified against the test results. The effects of different scaled distances, reinforcement ratios, concrete strengths, and coating thicknesses and ranges of polyurea were studied. The results show that the polyurea coating can effectively enhance the anti-explosion performance of the girder. The top plate of the middle chamber in specimen G forms an elliptical penetrating hole, while that in specimen PCG shows only a very slight local dent. The peak vertical displacement and residual displacement of PCG decrease by 74.8% and 73.7%, respectively, compared with those of specimen G. For a TNT explosion with a small equivalent, the polyurea coating has a more significant protective effect in reducing the size of the fracture. With increasing TNT equivalent, the protective effect of polyurea in reducing girder displacement becomes more significant. The optimal reinforcement ratio, concrete strength, and thickness and range of the polyurea coating were also determined.
Currently, more than ten ultrahigh arch dams have been constructed or are being constructed in China. Safety control is essential to the long-term operation of these dams. This study employed the flexibility coefficient and the plastic complementary energy norm to assess the structural safety of arch dams. A comprehensive analysis was conducted, focusing on differences among conventional methods in characterizing the structural behavior of the Xiaowan arch dam in China. Subsequently, the spatiotemporal characteristics of the measured performance of the Xiaowan dam were explored, including periodicity, convergence, and time-effect characteristics. These findings revealed the governing mechanism of the main factors. Furthermore, a heterogeneous spatial panel vector model was developed, considering both common factors and specific factors affecting the safety and performance of arch dams. This model aims to comprehensively illustrate the spatial heterogeneity between the entire structure and local regions, introducing a specific effect quantity to characterize local deformation differences. Ultimately, the proposed model was applied to the Xiaowan arch dam, accurately quantifying the spatiotemporal heterogeneity of dam performance. Additionally, the spatiotemporal distribution characteristics of environmental load effects on different parts of the dam were reasonably interpreted. Validation of the model's predictions enhances its credibility, leading to the formulation of health diagnosis criteria for the future long-term operation of the Xiaowan dam. The findings not only enhance the predictive ability and timely control of ultrahigh arch dams' performance but also provide a crucial basis for assessing the effectiveness of engineering treatment measures.
Today, in the field of computer networks, new services have been developed on the Internet and intranets, including mail servers, database management, audio, video, and the Apache web server itself. The number of solutions for this server is growing continuously; these services are becoming more and more complex and expensive, without being able to fulfill the needs of users. The absence of benchmarks for websites with dynamic content is a major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why the performance of the web server is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics such as throughput, resource utilization, and response time of a system through measurement and modeling by simulation. Finally, we present a simple queue model of an Apache web server, which reasonably represents the behavior of a saturated web server, built with Simulink in Matlab (Matrix Laboratory), and which also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward. The model has been validated through the measurements and simulations conducted during our tests.
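The server behavior described above is commonly approximated analytically before simulation. A minimal M/M/1 queue sketch (a textbook simplification, not the authors' Simulink model) relating arrival rate, service rate, utilization, and mean response time:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Utilization and mean response time of an M/M/1 queue.
    Valid only while arrival_rate < service_rate; at or beyond that
    point the queue is saturated and the mean response time diverges."""
    if arrival_rate >= service_rate:
        raise ValueError("saturated: arrival rate must be < service rate")
    rho = arrival_rate / service_rate               # server utilization
    response = 1.0 / (service_rate - arrival_rate)  # mean time in system (s)
    return rho, response

# Hypothetical web server: 80 req/s arriving, capacity of 100 req/s
rho, resp = mm1_metrics(80.0, 100.0)  # rho = 0.8, resp = 0.05 s
```

Note how response time explodes as utilization approaches 1, which is exactly the saturated regime the simulation model is built to study.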
Predicting students' academic achievement is an essential issue in education, which can benefit many stakeholders, for instance, students, teachers, and managers. Compared with online courses such as MOOCs, students' academic-related data in the face-to-face physical teaching environment are usually sparse, and the sample size is relatively small. This makes building models that accurately predict students' performance in such an environment even more challenging. This paper proposes a Two-Way Neural Network (TWNN) model based on the bidirectional recurrent neural network and the graph neural network to predict students' next-semester course performance using only their previous course achievements. Extensive experiments on a real dataset show that our model performs better than the baselines on many indicators.
In this paper, a detailed model of a photovoltaic (PV) panel is used to study the accumulation of dust on solar panels. The presence of dust diminishes the incident light intensity penetrating the panel's cover glass, as it increases the reflection of light by particles. This phenomenon, commonly known as the "soiling effect", presents a significant challenge to PV systems on a global scale. Two basic equivalent-circuit models of a solar cell can be found, namely the single-diode model and the two-diode model. The limited efficiency data in manufacturers' datasheets encouraged us to develop an equivalent electrical model that remains accurate under dust conditions, integrated with optical transmittance considerations to investigate the soiling effect. The proposed approach is based on comparing experimental current-voltage (I-V) characteristics with data simulated in MATLAB/Simulink. Our research outcomes underscore the feasibility of accurately quantifying the reduction in energy production caused by soiling by assessing the optical transmittance of dust accumulated on the surface of the PV glass.
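The coupling of transmittance and the equivalent circuit can be sketched with the ideal single-diode equation, where a soiling transmittance factor scales the photocurrent. All names and values below are illustrative assumptions, not the paper's MATLAB/Simulink model (series and shunt resistances are omitted for brevity):

```python
import math

def single_diode_current(v, i_ph, i_0, n, v_t, transmittance=1.0):
    """Ideal single-diode cell current:
        I = tau * I_ph - I_0 * (exp(V / (n * V_T)) - 1)
    where `transmittance` (tau) models the soiling-induced reduction
    of incident light reaching the cell."""
    return transmittance * i_ph - i_0 * (math.exp(v / (n * v_t)) - 1.0)

# Illustrative values: clean vs. 20% soiling loss, at short circuit (V = 0)
i_clean = single_diode_current(0.0, i_ph=8.0, i_0=1e-9, n=1.3, v_t=0.02585)
i_soiled = single_diode_current(0.0, i_ph=8.0, i_0=1e-9, n=1.3, v_t=0.02585,
                                transmittance=0.8)
# the short-circuit current drops in direct proportion to transmittance
```

This proportionality is what makes measuring the optical transmittance of the dust layer a practical proxy for the energy loss.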
The evolution of the current network faces challenges of programmability, maintainability, and manageability due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure (data) plane. The innovation, however, created the controller placement problem: how to effectively place controllers within a network topology so as to manage the data plane devices from the control plane. This study was designed to empirically evaluate and compare the functionalities of two controller placement algorithms: POCO and MOCO. The methodology adopted is the explorative and comparative investigation technique. The study evaluated the performance of the Pareto optimal combination (POCO) and multi-objective combination (MOCO) algorithms in relation to calibrated positions of the controller within a software-defined network. The network environment and measurement metrics were held constant for both the POCO and MOCO models during the evaluation, and the strengths and weaknesses of each were identified. The results show that the latencies of the two algorithms on the GoodNet network are 3100 ms and 2500 ms for POCO and MOCO, respectively. In Switch-to-Controller Average Case latency, the performance gives 2598 ms and 2769 ms for POCO and MOCO, respectively. In Worst Case Switch-to-Controller latency, the performance shows 2776 ms and 2987 ms for POCO and MOCO, respectively.
The latencies of the two algorithms evaluated on the Savvis network compare as follows: 2912 ms and 2784 ms for POCO and MOCO, respectively, in Switch-to-Controller Average Case latency; 3129 ms and 3017 ms in Worst Case Switch-to-Controller latency; 2789 ms and 2693 ms in Average Case Controller-to-Controller latency; and 2873 ms and 2756 ms in Worst Case Controller-to-Controller latency. The latencies of the two algorithms evaluated on the AARNet network compare as follows: 2473 ms and 2129 ms for POCO and MOCO, respectively, in Switch-to-Controller Average Case latency; 2198 ms and 2268 ms in Worst Case Switch-to-Controller latency; 2598 ms and 2471 ms in Average Case Controller-to-Controller latency; and 2689 ms and 2814 ms in Worst Case Controller-to-Controller latency. The Average Case and Worst Case latencies for Switch-to-Controller and Controller-to-Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, which in turn appears to be more resilient than the POCO model.
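The per-topology figures above can be condensed into a single summary number per algorithm by averaging across the four latency metrics. A minimal sketch using the AARNet values quoted in the text (the averaging scheme is our illustration, not part of the study's methodology):

```python
def mean_latency(latencies_ms):
    """Average a set of latency measurements, in milliseconds."""
    return sum(latencies_ms) / len(latencies_ms)

# AARNet figures from the text, in order: switch-to-controller average,
# switch-to-controller worst case, controller-to-controller average,
# controller-to-controller worst case (milliseconds)
poco = [2473, 2198, 2598, 2689]
moco = [2129, 2268, 2471, 2814]
poco_avg = mean_latency(poco)  # 2489.5 ms
moco_avg = mean_latency(moco)  # 2420.5 ms
```

Such per-topology aggregates make it easier to compare the algorithms across GoodNet, Savvis, and AARNet on a like-for-like basis.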
Dielectric elastomers (DEs) require balanced electric actuation performance and mechanical integrity under applied voltages. Incorporating high-dielectric particles as fillers provides an extensive design space in which concentration, morphology, and distribution can be optimized for improved actuation performance and material modulus. This study presents an integrated framework combining finite element modeling (FEM) and deep learning to optimize the microstructure of DE composites. FEM first calculates the actuation performance and effective modulus across varied filler combinations, and these data are used to train a convolutional neural network (CNN). Integrating the CNN into a multi-objective genetic algorithm generates designs with enhanced actuation performance and material modulus compared to the conventional FEM-based optimization approach within the same time. This framework harnesses artificial intelligence to navigate vast design possibilities, enabling optimized microstructures for high-performance DE composites.
High Mountain Asia (HMA), recognized as the third pole, needs regular and intensive study as it is susceptible to climate change. An accurate, high-resolution Digital Elevation Model (DEM) for this region enables us to analyze it in a 3D environment and understand its intricate role as the Water Tower of Asia. NASA science teams produced an 8-m DEM for HMA from satellite stereo imagery, termed the HMA 8-m DEM. In this research, we assessed the vertical accuracy of the HMA 8-m DEM using reference elevations from ICESat-2 geolocated photons at three test sites of varied topography and land cover. Inferences were made from statistical quantifiers and elevation profiles. For the world's highest mountain, Mount Everest, and its surroundings, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were 1.94 m and 1.66 m, respectively; however, a uniform positive bias observed in the elevation profiles indicates that seasonal snow cover change hinders accurate estimation of elevation at this sort of test site. The second test site, containing gentle slopes with forest patches, exhibited Digital Surface Model (DSM) features, with an RMSE and MAE of 0.58 m and 0.52 m, respectively. The third test site, situated in Zanda County of the Qinghai-Tibet Plateau, is a relatively flat terrain bed, mostly bare earth with sudden river cuts, and has minimal errors, with an RMSE and MAE of 0.32 m and 0.29 m, respectively, and negligible bias. Additionally, at one more test site, the feasibility of detecting glacial lakes was tested; the DEM exhibited a flat surface over the lakes, indicating the potential of the HMA 8-m DEM for deriving hydrological parameters. The results accrued in this investigation confirm that the HMA 8-m DEM has high vertical accuracy and should be of great use for analyzing natural hazards and monitoring glacier surfaces.
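The RMSE and MAE quantifiers used above are straightforward to compute. A minimal sketch with illustrative elevation residuals, not the study's ICESat-2 data:

```python
import math

def rmse(pred, ref):
    """Root mean squared error between predicted and reference elevations."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(pred))

def mae(pred, ref):
    """Mean absolute error between predicted and reference elevations."""
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(pred)

# Illustrative DEM elevations vs. reference photon elevations (metres)
dem = [8848.9, 8846.0, 8850.3]
ref = [8848.0, 8847.0, 8849.3]
```

RMSE penalizes large residuals more heavily than MAE, which is why the two are reported together: a gap between them signals outliers such as snow-covered pixels.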
Faced with increasing global soil degradation, spatially explicit data on cropland soil organic matter (SOM) provide crucial information for soil carbon pool accounting, cropland quality assessment, and the formulation of effective management policies. As a spatial information prediction technique, digital soil mapping (DSM) has been widely used to map soil information at different scales. However, the accuracy of digital SOM maps for cropland is typically lower than for other land cover types due to the inherent difficulty in precisely quantifying human disturbance. To overcome this limitation, this study systematically assessed a framework of "information extraction–feature selection–model averaging" for improving model performance in mapping cropland SOM, using 462 cropland soil samples collected in Guangzhou, China in 2021. The results showed that the framework of dynamic information extraction, feature selection, and model averaging could efficiently improve the accuracy of the final predictions (R^2: 0.48 to 0.53) without obviously negative impacts on uncertainty. Quantifying the dynamic information of the environment was an efficient way to generate covariates that are linearly and nonlinearly related to SOM, which improved the R^2 of random forest from 0.44 to 0.48 and the R^2 of extreme gradient boosting from 0.37 to 0.43. Forward recursive feature selection (FRFS) is recommended when there are relatively few environmental covariates (<200), whereas Boruta is recommended when there are many (>500). The Granger–Ramanathan model averaging approach could improve both prediction accuracy and average uncertainty. When the structures of the initial prediction models are similar, increasing the number of averaged models did not have significantly positive effects on the final predictions. Given these advantages, the selected strategies for information extraction, feature selection, and model averaging have great potential for high-accuracy soil mapping at various scales, and this approach can provide more reliable references for soil conservation policy-making.
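Granger–Ramanathan model averaging assigns each member model a weight by least-squares regression of the observations on the member predictions. A minimal two-member sketch via the 2x2 normal equations, with illustrative data rather than the study's SOM samples:

```python
def gr_average_weights(preds_a, preds_b, obs):
    """Granger-Ramanathan model averaging (unconstrained variant):
    least-squares weights for two member predictions, solved in closed
    form from the 2x2 normal equations."""
    saa = sum(a * a for a in preds_a)
    sbb = sum(b * b for b in preds_b)
    sab = sum(a * b for a, b in zip(preds_a, preds_b))
    say = sum(a * y for a, y in zip(preds_a, obs))
    sby = sum(b * y for b, y in zip(preds_b, obs))
    det = saa * sbb - sab * sab
    if det == 0:
        raise ValueError("member predictions are collinear")
    wa = (say * sbb - sby * sab) / det
    wb = (saa * sby - sab * say) / det
    return wa, wb

# Illustrative check: obs is exactly 0.6*A + 0.4*B, so the weights
# recovered by least squares should be 0.6 and 0.4
a = [1.0, 2.0, 3.0, 4.0]
b = [2.0, 1.0, 4.0, 2.0]
y = [0.6 * ai + 0.4 * bi for ai, bi in zip(a, b)]
wa, wb = gr_average_weights(a, b, y)
```

Unlike a simple mean, the weights are fitted to the data, so a more skillful member model automatically receives a larger weight.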
Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited alongside lossy compression techniques for images and videos, generally in a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on the data we constructed according to a deliberate model, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message, and on the other hand, to the cumulative time of encoding and decoding.
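How strongly symbol recurrence drives performance is easy to see with run-length encoding, the simplest of the techniques listed. A minimal RLE sketch (for illustration only; the paper's implementations are in Matlab): long runs compress well, while non-repeating text can even expand.

```python
def rle_encode(text):
    """Run-length encode a string as a list of (char, count) pairs."""
    pairs = []
    for ch in text:
        if pairs and pairs[-1][0] == ch:
            pairs[-1][1] += 1
        else:
            pairs.append([ch, 1])
    return [(c, n) for c, n in pairs]

def rle_decode(pairs):
    """Invert rle_encode: expand (char, count) pairs back to a string."""
    return "".join(c * n for c, n in pairs)

encoded = rle_encode("aaaabbbcca")  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
```

On a message with no repeated adjacent symbols, the pair list is longer than the input, which is exactly the sensitivity to symbol sequencing noted in the results.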
This study embarks on a comprehensive examination of optimization techniques within GPU-based parallel programming models, pivotal for advancing high-performance computing (HPC). Emphasizing the transition of GPUs from graphics-centric processors to versatile computing units, it delves into the nuanced optimization of memory access, thread management, algorithmic design, and data structures. These optimizations are critical for exploiting the parallel processing capabilities of GPUs, addressing both theoretical frameworks and practical implementations. By integrating advanced strategies such as memory coalescing, dynamic scheduling, and parallel algorithmic transformations, this research aims to significantly elevate computational efficiency and throughput. The findings underscore the potential of optimized GPU programming to revolutionize computational tasks across various domains, highlighting a pathway towards unparalleled processing power and efficiency in HPC environments. The paper not only contributes to the academic discourse on GPU optimization but also provides actionable insights for developers, fostering advancements in computational sciences and technology.
Production safety accidents are often caused by the interaction of multiple organizations and the coupling of multiple factors, with the causes involving several organizations. To prevent and curb multi-organization production safety accidents, a method for multi-organization accident analysis was constructed based on the Systems-Theory Accident Modeling and Process (STAMP) model and the 24Model, and the causes of the Qingdao oil pipeline explosion accident were analyzed as a case study. The results show that the STAMP-24Model can analyze the causes of accidents involving multiple organizations effectively, comprehensively, and in detail, by organization and by level, and can explore the interactions among the organizations. Dynamic evolution analysis of the accident yields the coupling relationships among the unsafe actions of each organization, the resulting accident failure chains, and the control failure paths, thereby providing ideas and references for preventing multi-organization accidents.
The optimal selection of a radar clutter model is the premise of target detection, tracking, recognition, and cognitive waveform design against a clutter background. Clutter characterization models are usually derived by mathematical simplification or empirical data fitting. However, the lack of standard model labels is a challenge in the optimal selection process. To solve this problem, a general three-level evaluation system for model selection performance is proposed, comprising a model selection accuracy index based on simulation data, goodness-of-fit indexes based on the optimally selected model, and an evaluation index based on the model's supporting performance for third-party tasks. The three-level evaluation system can describe the selection performance of radar clutter models more comprehensively and accurately from different perspectives, and can be extended to the evaluation of other, similar characterization model selection problems.
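Selecting a clutter model from empirical amplitude data typically reduces to comparing goodness of fit across candidate distributions. A minimal maximum-likelihood sketch with two illustrative candidates (the paper's evaluation system is broader and the candidates here are our assumption):

```python
import math

def loglik_exponential(data):
    """Maximized log-likelihood of an exponential fit (MLE rate = 1/mean)."""
    rate = len(data) / sum(data)
    return len(data) * math.log(rate) - rate * sum(data)

def loglik_rayleigh(data):
    """Maximized log-likelihood of a Rayleigh fit (MLE sigma^2 = sum(x^2)/(2n))."""
    s2 = sum(x * x for x in data) / (2 * len(data))
    return sum(math.log(x / s2) - x * x / (2 * s2) for x in data)

def select_clutter_model(data):
    """Pick the candidate amplitude model with the higher log-likelihood."""
    scores = {"exponential": loglik_exponential(data),
              "rayleigh": loglik_rayleigh(data)}
    return max(scores, key=scores.get)

# Amplitudes clustered away from zero favour the Rayleigh candidate
selected = select_clutter_model([0.9, 1.1, 1.0, 1.2, 0.8, 1.05, 0.95, 1.15])
```

With equal parameter counts the log-likelihoods are directly comparable; with unequal counts one would penalize complexity (e.g., via AIC) before ranking.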
Light levels determine regeneration in stands, and a key concern is how to match the light environment of different stand types to the requirements of the understory. In this study, we selected three stands typical of south China (a Cryptomeria japonica plantation, a Quercus acutissima plantation, and a mixed stand of both) and three thinning intensities to determine the best understory light environment for 3-year-old Phoebe bournei seedlings. The canopy structure, understory light environment, and photosynthesis and growth indicators were assessed following thinning. Thinning improved the canopy structure and understory light availability of each stand; species composition was the reason for differences in the understory light environment. Under the same thinning intensity, the mixed stand had the greatest light radiation and the most balanced spectral composition. P. bournei photosynthesis and growth were closely related to the light environment; all three stands required heavy thinning to create an effective and sustained understory light environment. In a suitable understory light environment, the efficiency of light interception, absorption, and use by seedlings was enhanced, resulting in higher carbon assimilation; the main limiting factor was stomatal conductance. As a shade-avoidance signal, red/far-red radiation is a critical factor driving changes in the photosynthesis and growth of P. bournei seedlings, and a reduction in it increased light absorption and use capacity and height:diameter ratios. The growth advantage shifted from diameter to height, enabling seedlings to access more light. Our findings suggest that the regeneration of shade-tolerant species such as P. bournei could be enhanced if a targeted approach to thinning based on stand type were adopted.
Significant progress has been made in computational imaging (CI), in which deep convolutional neural networks (CNNs) have demonstrated that sparse speckle patterns can be reconstructed. However, due to the limited "local" kernel size of the convolutional operator, the performance of CNNs is limited for spatially dense patterns, such as generic face images. Here, we propose a "non-local" model, termed the Speckle-Transformer (SpT) UNet, for speckle feature extraction of generic face images. Notably, the lightweight SpT UNet is highly efficient and performs strongly, with the Pearson correlation coefficient (PCC) and the structural similarity measure (SSIM) exceeding 0.989 and 0.950, respectively.
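The PCC reported above measures the linear agreement between a reconstruction and its ground truth. A minimal sketch of its computation over flattened pixel sequences (illustrative data, not the paper's images):

```python
import math

def pearson_cc(x, y):
    """Pearson correlation coefficient between two equal-length sequences,
    e.g. reconstructed vs. ground-truth image pixels, flattened to 1D."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A PCC of 1 means the reconstruction is a perfect affine match to the target; values near 0.989, as reported, indicate near-linear agreement pixel for pixel. SSIM additionally accounts for luminance, contrast, and local structure.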
Satellite records show that the extent and thickness of sea ice in the Arctic Ocean have decreased significantly since the early 1970s. Predicting sea ice is highly important, but accurately simulating sea ice variations remains challenging. To improve model performance, sensitivity experiments were conducted using the coupled ocean and sea ice model (NEMO-LIM), and the simulation results were compared against satellite observations. Moreover, the contribution ratios of dynamic and thermodynamic processes to sea ice variations were analyzed. The results show that the model's performance in reconstructing the spatial distribution of Arctic sea ice is highly sensitive to the ice strength decay constant (C^(rhg)). Reducing the C^(rhg) constant increases the sea ice compressive strength, leading to improved simulated sea ice states. The contribution of thermodynamic processes to sea ice melting was reduced because of less deformation and fracture of sea ice with increased compressive strength. Meanwhile, dynamic processes constrained more sea ice to the central Arctic Ocean and contributed to increases in ice concentration, reducing the simulation bias in the central Arctic Ocean in summer. The root mean square error (RMSE) between the modeled ice thickness and CryoSat-2/SMOS satellite observations was reduced in the compressive-strength-enhanced model solution. The thickness of multiyear thick ice in particular was also reduced and matched the satellite observations better in the freezing season. These results provide an essential foundation for exploring the response of the marine ecosystem and biogeochemical cycling to sea ice changes.
This paper investigates wireless communication with a novel antenna array architecture, termed the modular extremely large-scale array (XL-array), where array elements of an extremely large number/size are regularly mounted on a shared platform in horizontally and vertically interlaced modules. Each module consists of a moderate/flexible number of array elements with an inter-element distance typically on the order of the signal wavelength, while different modules are separated by a relatively large inter-module distance for convenience of practical deployment. By accurately modelling the signal amplitudes and phases, as well as the projected apertures across all modular elements, we analyse the near-field signal-to-noise ratio (SNR) performance of modular XL-array communications. Based on non-uniform spherical wave (NUSW) modelling, a closed-form SNR expression is derived in terms of key system parameters, such as the overall modular array size, the distances of adjacent modules along all dimensions, and the user's three-dimensional (3D) location. In addition, as the number of modules in different dimensions increases without bound, the asymptotic SNR scaling laws are revealed. Furthermore, we show that our proposed near-field modelling and performance analysis include the results for existing array architectures/models as special cases, e.g., the collocated XL-array architecture, the uniform plane wave (UPW) based far-field modelling, and the one-dimensional modular extremely large-scale uniform linear array (XL-ULA). Extensive simulation results are presented to validate our findings.
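Under near-field NUSW-style modelling, each element contributes power according to its own distance to the user rather than a single far-field distance. A minimal sketch summing per-element 1/r^2 contributions for a modular linear array (geometry, constants, and the aggregation are illustrative assumptions, not the paper's closed-form expression):

```python
def modular_array_snr(user_xy, num_modules, elems_per_module,
                      elem_spacing, module_spacing, gain=1.0):
    """Sum 1/r^2 power contributions of every element of a modular
    linear array laid out along the x-axis; returns an aggregate SNR
    in arbitrary units. Modules are spaced module_spacing apart and
    elements within a module elem_spacing apart."""
    ux, uy = user_xy
    total = 0.0
    for m in range(num_modules):
        module_x0 = m * module_spacing
        for e in range(elems_per_module):
            x = module_x0 + e * elem_spacing
            r2 = (ux - x) ** 2 + uy ** 2
            total += gain / r2
    return total

# Illustrative: 4 modules of 8 elements, half-metre-scale module spacing
snr = modular_array_snr(user_xy=(5.0, 10.0), num_modules=4,
                        elems_per_module=8, elem_spacing=0.05,
                        module_spacing=2.0)
```

Because each element keeps its own distance term, the sketch naturally captures the near-field effect that moving the user closer raises the SNR faster than a single-distance far-field model would predict.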
Funding: supported by the Ministry of Science and Technology of China (Grant No. 2018YFA0606501); the National Natural Science Foundation of China (Grant No. 42075037); the Key Laboratory Open Research Program of the Xinjiang Science and Technology Department (Grant No. 2022D04009); and the National Key Scientific and Technological Infrastructure project "Earth System Numerical Simulation Facility" (EarthLab).
Abstract: Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulations have remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climatological mean drought characteristics in drylands; in particular, occurrence and duration are largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of precipitation (water deficit) associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increases in drought characteristics and in the associated precipitation and water deficit anomalies are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias correction approaches in drought-related studies over drylands that use CMIP6 outputs.
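The overestimation percentages quoted above are relative biases of the simulated climatology against observations. A minimal sketch of that metric, with a hypothetical `percent_bias` helper and toy numbers (not the study's data or code):

```python
def percent_bias(model, obs):
    """Relative bias of simulated values against observations, in percent.

    Illustrative helper behind statements such as "precipitation is
    overestimated by ~33%"; a positive result means the model is too wet."""
    model_mean = sum(model) / len(model)
    obs_mean = sum(obs) / len(obs)
    return 100.0 * (model_mean - obs_mean) / obs_mean

# Toy annual precipitation values (mm/yr): model wetter than observed.
obs_precip = [200.0, 210.0, 190.0]
sim_precip = [270.0, 280.0, 250.0]
bias = percent_bias(sim_precip, obs_precip)   # about +33%
```

The same helper applied to PET fields would return a negative value for the underestimation case.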
Funding: the Natural Science Foundation of Jiangsu Province (Grant No. BK20200494), the China Postdoctoral Science Foundation (Grant No. 2021M701725), the Jiangsu Postdoctoral Research Funding Program (Grant No. 2021K522C), the Fundamental Research Funds for the Central Universities (Grant No. 30919011246), the National Natural Science Foundation of China (Grant No. 52278188), and the Natural Science Foundation of Jiangsu Province (Grant No. BK20211196).
Abstract: To study the anti-explosion protection effect of a polyurea coating on a reinforced concrete box girder, two segmental girder specimens were made at a scale of 1:3, numbered G (without polyurea coating) and PCG (with polyurea coating). The failure characteristics and dynamic responses of the specimens were compared through explosion tests. The reliability of numerical simulation using LS-DYNA software was verified against the test results. The effects of different scaled distances, reinforcement ratios, concrete strengths, and coating thicknesses and ranges of polyurea were studied. The results show that the polyurea coating can effectively enhance the anti-explosion performance of the girder. The top plate of the middle chamber in specimen G forms an elliptical penetrating hole, while that in specimen PCG shows only a very slight local dent. The peak vertical displacement and residual displacement of PCG decrease by 74.8% and 73.7%, respectively, compared with those of specimen G. For a TNT explosion with a small equivalent, the polyurea coating has a more significant protective effect on reducing the size of the fracture. With an increase in TNT equivalent, the protective effect of polyurea on reducing girder displacement becomes more significant. The optimal reinforcement ratio, concrete strength, and thickness and range of the polyurea coating were also determined.
Funding: supported by the National Natural Science Foundation of China (Grant No. 52079046).
Abstract: Currently, more than ten ultrahigh arch dams have been constructed or are being constructed in China. Safety control is essential to the long-term operation of these dams. This study employed the flexibility coefficient and the plastic complementary energy norm to assess the structural safety of arch dams. A comprehensive analysis was conducted, focusing on differences among conventional methods in characterizing the structural behavior of the Xiaowan arch dam in China. Subsequently, the spatiotemporal characteristics of the measured performance of the Xiaowan dam were explored, including periodicity, convergence, and time-effect characteristics. These findings revealed the governing mechanism of the main factors. Furthermore, a heterogeneous spatial panel vector model was developed, considering both common factors and specific factors affecting the safety and performance of arch dams. This model aims to comprehensively illustrate the spatial heterogeneity between the entire structure and local regions, introducing a specific effect quantity to characterize local deformation differences. Ultimately, the proposed model was applied to the Xiaowan arch dam, accurately quantifying the spatiotemporal heterogeneity of dam performance. Additionally, the spatiotemporal distribution characteristics of environmental load effects on different parts of the dam were reasonably interpreted. Validation of the model prediction enhances its credibility, leading to the formulation of health diagnosis criteria for the future long-term operation of the Xiaowan dam. The findings not only enhance the predictive ability and timely control of ultrahigh arch dams' performance but also provide a crucial basis for assessing the effectiveness of engineering treatment measures.
Abstract: Today, in the field of computer networks, new services have been developed on the Internet and on intranets, including mail servers, database management, audio, video, and the web server itself, such as Apache. The number of solutions for this server is therefore growing continuously, and these services are becoming more and more complex and expensive without fully meeting users' needs. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, response times, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics of a system, such as throughput, resource utilization, and response time, through measurement and modeling by simulation. Finally, we present a simple queueing model of an Apache web server, which reasonably represents the behavior of a saturated web server, built with Simulink in Matlab (Matrix Laboratory), and which also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations we conducted.
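Before simulating, the single-queue behavior described above is often approximated analytically. A minimal sketch assuming an M/M/1 queue, a deliberate simplification of the paper's Simulink model (the rates below are illustrative, not measured values):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Analytic steady-state metrics of an M/M/1 queue: a single server
    with Poisson arrivals and exponential service times, a common
    first-order stand-in for a web server handling requests."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be < service rate")
    rho = arrival_rate / service_rate            # server utilization
    resp = 1.0 / (service_rate - arrival_rate)   # mean response time (s)
    throughput = arrival_rate                    # completed requests/s when stable
    return rho, resp, throughput

# 80 requests/s arriving at a server that can process 100 requests/s.
rho, resp, thr = mm1_metrics(arrival_rate=80.0, service_rate=100.0)
```

As the arrival rate approaches the service rate, the response time grows without bound, which mirrors the saturated-server behavior the paper studies.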
基金the National Natural Science Foundation of China under Grant Nos.U2268204,62172061 and 61662017National Key R&D Program of China under Grant Nos.2020YFB1711800 and 2020YFB1707900+1 种基金the Science and Technology Project of Sichuan Province under Grant Nos.2022YFG0155,2022YFG0157,2021GFW019,2021YFG0152,2021YFG0025,2020YFG0322the Guangxi Natural Science Foundation Project under Grant No.2021GXNSFAA220074.
Abstract: Predicting students' academic achievement is an essential issue in education and can benefit many stakeholders, for instance, students, teachers, and managers. Compared with online courses such as MOOCs, students' academic data in a face-to-face physical teaching environment are usually sparse, and the sample size is relatively small, which makes building models that accurately predict students' performance in such an environment even more challenging. This paper proposes a Two-Way Neural Network (TWNN) model based on a bidirectional recurrent neural network and a graph neural network to predict students' next-semester course performance using only their previous course achievements. Extensive experiments on a real dataset show that our model outperforms the baselines on many indicators.
Abstract: In this paper, a detailed model of a photovoltaic (PV) panel is used to study the accumulation of dust on solar panels. The presence of dust diminishes the incident light intensity penetrating the panel's cover glass, as it increases the reflection of light by particles. This phenomenon, commonly known as the "soiling effect", presents a significant challenge to PV systems on a global scale. Two basic equivalent-circuit models of a solar cell can be found, namely the single-diode model and the two-diode model. The scarcity of efficiency data in manufacturers' datasheets encouraged us to develop an equivalent electrical model that remains valid under dust conditions, integrated with optical transmittance considerations, to investigate the soiling effect. The proposed approach is based on comparing experimental current-voltage (I-V) characteristics with data simulated in MATLAB/Simulink. Our research outcomes underscore the feasibility of accurately quantifying the reduction in energy production caused by soiling by assessing the optical transmittance of the dust accumulated on the surface of the PV glass.
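To illustrate how optical transmittance can enter an equivalent electrical model, here is a minimal sketch using the ideal single-diode equation with a transmittance factor scaling the photocurrent. Series and shunt resistances are omitted for brevity, and all parameter values are illustrative, not taken from a datasheet or from the authors' model:

```python
import math

def pv_current(v, tau=1.0, iph=8.0, i0=1e-9, n=1.2, vt=0.02585):
    """Ideal single-diode PV cell current (A) at terminal voltage v (V).

    tau : optical transmittance of the dust layer (1.0 = clean glass);
          soiling reduces the photocurrent roughly in proportion to tau.
    iph : photogenerated current under full irradiance (A).
    i0, n, vt : diode saturation current, ideality factor, thermal voltage.
    All values are illustrative placeholders."""
    return tau * iph - i0 * (math.exp(v / (n * vt)) - 1.0)

clean = pv_current(0.5, tau=1.0)
soiled = pv_current(0.5, tau=0.8)   # 20% transmittance loss from dust
```

Sweeping `v` from 0 to the open-circuit voltage traces an I-V curve whose current scale drops with `tau`, which is the soiling signature the study quantifies.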
Abstract: The evolution of the current network faces challenges of programmability, maintainability, and manageability due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure (data) plane. The innovation created the controller placement problem: how to effectively place controllers within a network topology so as to manage the data plane devices from the control plane. This study was designed to empirically evaluate and compare the functionalities of two controller placement algorithms: POCO and MOCO. The methodology adopted comprises explorative and comparative investigation techniques. The study evaluated the performance of the Pareto optimal combination (POCO) and multi-objective combination (MOCO) algorithms in relation to calibrated positions of the controller within a software-defined network. The network environment and measurement metrics were held constant for both the POCO and MOCO models during the evaluation, and the strengths and weaknesses of each model were justified. The results show that the latencies of the two algorithms in relation to the GoodNet network are 3100 ms and 2500 ms for POCO and MOCO respectively. In Switch to Controller Average Case latency, the performance gives 2598 ms and 2769 ms for POCO and MOCO respectively. In Worst Case Switch to Controller latency, the performance shows 2776 ms and 2987 ms for POCO and MOCO respectively.
The latencies of the two algorithms evaluated in relation to the Savvis network compare as follows: 2912 ms and 2784 ms for POCO and MOCO respectively in Switch to Controller Average Case latency, 3129 ms and 3017 ms for POCO and MOCO respectively in Worst Case Switch to Controller latency, 2789 ms and 2693 ms for POCO and MOCO respectively in Average Case Controller to Controller latency, and 2873 ms and 2756 ms for POCO and MOCO respectively in Worst Case Controller to Controller latency. The latencies of the two algorithms evaluated in relation to the AARNet network compare as follows: 2473 ms and 2129 ms for POCO and MOCO respectively in Switch to Controller Average Case latency, 2198 ms and 2268 ms for POCO and MOCO respectively in Worst Case Switch to Controller latency, 2598 ms and 2471 ms for POCO and MOCO respectively in Average Case Controller to Controller latency, and 2689 ms and 2814 ms for POCO and MOCO respectively in Worst Case Controller to Controller latency. The Average Case and Worst Case latencies for Switch to Controller and Controller to Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, which in turn appears to be more resilient than the POCO model.
基金supported by the National Key Research and Development Program of China(Grant No.2022YFB3707803)the National Natural Science Foundation of China(Grant Nos.12072179 and 11672168)+1 种基金the Key Research Project of Zhejiang Lab(Grant No.2021PE0AC02)Shanghai Engineering Research Center for Inte-grated Circuits and Advanced Display Materials.
Abstract: Dielectric elastomers (DEs) require balanced electric actuation performance and mechanical integrity under applied voltages. Incorporating high-dielectric particles as fillers provides extensive design space to optimize concentration, morphology, and distribution for improved actuation performance and material modulus. This study presents an integrated framework combining finite element modeling (FEM) and deep learning to optimize the microstructure of DE composites. FEM first calculates actuation performance and the effective modulus across varied filler combinations, with these data used to train a convolutional neural network (CNN). Integrating the CNN into a multi-objective genetic algorithm generates designs with enhanced actuation performance and material modulus compared to the conventional FEM-based optimization approach within the same time. This framework harnesses artificial intelligence to navigate vast design possibilities, enabling optimized microstructures for high-performance DE composites.
Acknowledgements: The authors gratefully acknowledge the science teams of the NASA High Mountain Asia 8-meter DEM and NASA ICESat-2 for providing access to the data. This work was conducted with infrastructure provided by the National Remote Sensing Centre (NRSC), for which the authors are indebted to the Director, NRSC, Hyderabad. We acknowledge the continued support and scientific insights from Mr. Rakesh Fararoda, Mr. Sagar S Salunkhe, Mr. Hansraj Meena, Mr. Ashish K. Jain, and other staff members of the Regional Remote Sensing Centre-West, NRSC/ISRO, Jodhpur. The authors also thank Dr. Kamal Pandey, Scientist, IIRS, Dehradun, for sharing field-level information about Auli-Joshimath. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Abstract: High Mountain Asia (HMA), recognized as the Earth's third pole, needs regular and intensive study, as it is susceptible to climate change. An accurate, high-resolution Digital Elevation Model (DEM) for this region enables us to analyze it in a 3D environment and understand its intricate role as the Water Tower of Asia. NASA science teams produced an 8-m DEM for HMA from satellite stereo imagery, termed the HMA 8-m DEM. In this research, we assessed the vertical accuracy of the HMA 8-m DEM using reference elevations from ICESat-2 geolocated photons at three test sites of varied topography and land cover. Inferences were made from statistical quantifiers and elevation profiles. For the world's highest mountain, Mount Everest, and its surroundings, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were 1.94 m and 1.66 m, respectively; however, a uniform positive bias observed in the elevation profiles indicates that seasonal snow cover change hampers accurate elevation estimation at this type of test site. The second test site, containing gentle slopes with forest patches, exhibited Digital Surface Model (DSM) features, with an RMSE and MAE of 0.58 m and 0.52 m, respectively. The third test site, situated in Zanda County on the Qinghai-Tibet Plateau, is a relatively flat terrain bed, mostly bare earth with sudden river cuts, and has minimal errors, with an RMSE and MAE of 0.32 m and 0.29 m, respectively, and negligible bias. Additionally, at one more test site, the feasibility of detecting glacial lakes was tested; the DEM exhibited a flat surface over the lakes, indicating the potential of the HMA 8-m DEM for deriving hydrological parameters. The results of this investigation confirm that the HMA 8-m DEM has very good vertical accuracy and should be of high value for analyzing natural hazards and monitoring glacier surfaces.
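The vertical-accuracy statistics quoted above (RMSE, MAE, and the sign of the bias) can be sketched as follows over paired DEM and reference elevations; the heights below are toy values, not the study's data:

```python
import math

def rmse_mae_bias(dem, ref):
    """Vertical-accuracy statistics of DEM elevations against reference
    elevations (e.g. ICESat-2 geolocated photons), both in metres.
    A positive bias means the DEM sits above the reference surface."""
    diffs = [d - r for d, r in zip(dem, ref)]
    n = len(diffs)
    rmse = math.sqrt(sum(e * e for e in diffs) / n)
    mae = sum(abs(e) for e in diffs) / n
    bias = sum(diffs) / n
    return rmse, mae, bias

# Toy elevations (m): DEM mostly above a flat 4000 m reference surface,
# as might happen under seasonal snow cover.
dem_h = [4001.2, 3999.5, 4002.0, 4000.3]
ref_h = [4000.0, 4000.0, 4000.0, 4000.0]
rmse, mae, bias = rmse_mae_bias(dem_h, ref_h)
```

RMSE always dominates MAE, and a consistently positive bias is the snow-cover signature the Everest site showed.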
基金the National Natural Science Foundation of China(U1901601)the National Key Research and Development Program of China(2022YFB3903503)。
Abstract: Faced with increasing global soil degradation, spatially explicit data on cropland soil organic matter (SOM) provide crucial information for soil carbon pool accounting, cropland quality assessment, and the formulation of effective management policies. As a spatial information prediction technique, digital soil mapping (DSM) has been widely used to map soil information at different scales. However, the accuracy of digital SOM maps for cropland is typically lower than for other land cover types due to the inherent difficulty in precisely quantifying human disturbance. To overcome this limitation, this study systematically assessed a framework of "information extraction-feature selection-model averaging" for improving model performance in mapping cropland SOM, using 462 cropland soil samples collected in Guangzhou, China in 2021. The results showed that the framework of dynamic information extraction, feature selection, and model averaging could efficiently improve the accuracy of the final predictions (R² from 0.48 to 0.53) without obvious negative impacts on uncertainty. Quantifying the dynamic information of the environment was an efficient way to generate covariates that are linearly and nonlinearly related to SOM, which improved the R² of random forest from 0.44 to 0.48 and the R² of extreme gradient boosting from 0.37 to 0.43. Forward recursive feature selection (FRFS) is recommended when there are relatively few environmental covariates (<200), whereas Boruta is recommended when there are many (>500). The Granger-Ramanathan model averaging approach could improve the prediction accuracy and the average uncertainty. When the structures of the initial prediction models are similar, increasing the number of averaged models did not have significantly positive effects on the final predictions. Given these advantages, the selected strategies for information extraction, feature selection, and model averaging have great potential for high-accuracy soil mapping at multiple scales, and this approach can provide more reliable references for soil conservation policy-making.
Abstract: Data compression plays a key role in optimizing the use of memory storage space and also in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques, because their performance is exploited alongside lossy compression techniques for images and videos, generally using a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with repeating patterns in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on the data we constructed according to a deliberate model, the results show that these methods, presented in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message, and on the other hand, to the cumulative time of encoding and decoding.
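As an example of the simplest of the compared schemes, a run-length encoder/decoder pair can be sketched as below. This is a generic textbook formulation in Python, not the authors' Matlab scripts, and it shows why RLE rewards the repeating patterns the test dataset was designed around:

```python
def rle_encode(text):
    """Run-length encode a string into (char, count) pairs.
    Long runs of one symbol compress well; alternating symbols do not."""
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1            # extend the current run
        else:
            runs.append([ch, 1])        # start a new run
    return [(c, n) for c, n in runs]

def rle_decode(runs):
    """Invert rle_encode: expand each (char, count) pair."""
    return "".join(c * n for c, n in runs)

msg = "aaabbbbcc"
encoded = rle_encode(msg)               # [('a', 3), ('b', 4), ('c', 2)]
assert rle_decode(encoded) == msg       # lossless round trip
```

On a message with no repeated adjacent symbols, the pair list is as long as the input, which is the sequencing dependence the abstract's last sentence points to.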
Abstract: This study embarks on a comprehensive examination of optimization techniques within GPU-based parallel programming models, pivotal for advancing high-performance computing (HPC). Emphasizing the transition of GPUs from graphic-centric processors to versatile computing units, it delves into the nuanced optimization of memory access, thread management, algorithmic design, and data structures. These optimizations are critical for exploiting the parallel processing capabilities of GPUs, addressing both the theoretical frameworks and practical implementations. By integrating advanced strategies such as memory coalescing, dynamic scheduling, and parallel algorithmic transformations, this research aims to significantly elevate computational efficiency and throughput. The findings underscore the potential of optimized GPU programming to revolutionize computational tasks across various domains, highlighting a pathway towards achieving unparalleled processing power and efficiency in HPC environments. The paper not only contributes to the academic discourse on GPU optimization but also provides actionable insights for developers, fostering advancements in computational sciences and technology.
Abstract: Production safety accidents are often caused by interactions among multiple organizations and the coupling of multiple factors, so accident causes typically involve several organizations. To prevent and curb multi-organization production safety accidents, a method for multi-organization accident analysis is constructed based on Systems-Theory Accident Modeling and Process (STAMP) and the 24Model, and the Qingdao oil pipeline explosion accident is analyzed as a case study. The results show that STAMP-24Model can analyze the causes of accidents involving multiple organizations effectively, comprehensively, and in detail, by organization and by level, and can explore the interactions among the organizations. Dynamic evolution analysis of the accident yields the coupling relationships among the organizations' unsafe acts, the resulting accident failure chains, and the control failure paths, thereby providing ideas and references for the prevention of multi-organization accidents.
基金the National Natural Science Foundation of China(6187138461921001).
Abstract: The optimal selection of a radar clutter model is the premise of target detection, tracking, recognition, and cognitive waveform design against a clutter background. Clutter characterization models are usually derived by mathematical simplification or empirical data fitting. However, the lack of standard model labels is a challenge in the optimal selection process. To solve this problem, a general three-level evaluation system for model selection performance is proposed, comprising a model selection accuracy index based on simulation data, goodness-of-fit indices based on the optimally selected model, and an evaluation index based on the supporting performance provided to third-party tasks. The three-level evaluation system can describe the selection performance of a radar clutter model more comprehensively and accurately in different respects, and it can be generalized and applied to the evaluation of other, similar characterization model selection problems.
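A goodness-of-fit index of the kind such a selection stage might use is the one-sample Kolmogorov-Smirnov distance between the empirical sample and each candidate model's CDF. The sketch below is illustrative only: synthetic Rayleigh-distributed clutter amplitudes and a two-model candidate set, not the paper's three-level evaluation system:

```python
import math
import random

def ks_statistic(sample, cdf):
    """One-sample Kolmogorov-Smirnov distance between an empirical
    sample and a candidate CDF; smaller means a better fit."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = cdf(x)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

random.seed(0)
sigma = 1.0
# Synthetic clutter amplitudes drawn from a Rayleigh envelope via inverse CDF.
sample = [sigma * math.sqrt(-2.0 * math.log(1.0 - random.random()))
          for _ in range(2000)]

def rayleigh_cdf(x):
    return 1.0 - math.exp(-x * x / (2.0 * sigma ** 2))

def exponential_cdf(x):
    return 1.0 - math.exp(-x)

d_ray = ks_statistic(sample, rayleigh_cdf)
d_exp = ks_statistic(sample, exponential_cdf)
best = "Rayleigh" if d_ray < d_exp else "Exponential"
```

With labeled simulation data like this, counting how often the true generating model wins gives a selection accuracy index, the first level of the evaluation system described above.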
Funding: This study was supported by the National Natural Science Foundation of China (Grant No. 31870613) and the Guizhou Province High-level Innovative Talents Training Plan Project ((2016)5661).
Abstract: Light levels determine regeneration in stands, and a key concern is how to adjust the light environment of different stand types to the requirements of the understory. In this study, we selected three stands typical of south China (a Cryptomeria japonica plantation, a Quercus acutissima plantation, and a mixed stand of both) and three thinning intensities to determine the best understory light environment for 3-year-old Phoebe bournei seedlings. The canopy structure, understory light environment, and photosynthesis and growth indicators were assessed following thinning. Thinning improved the canopy structure and understory light availability of each stand; species composition was the reason for differences in the understory light environment. Under the same thinning intensity, the mixed stand had the greatest light radiation and the most balanced spectral composition. P. bournei photosynthesis and growth were closely related to the light environment; all three stands required heavy thinning to create an effective and sustained understory light environment. In a suitable understory light environment, the efficiency of light interception, absorption, and use by seedlings was enhanced, resulting in higher carbon assimilation; the main limiting factor was stomatal conductance. As a shade-avoidance signal, red/far-red radiation is a critical factor driving changes in the photosynthesis and growth of P. bournei seedlings, and its reduction increased light absorption and use capacity and height:diameter ratios. The growth advantage shifted from diameter to height, enabling seedlings to access more light. Our findings suggest that the regeneration of shade-tolerant species such as P. bournei could be enhanced if a targeted approach to thinning based on stand type were adopted.
基金funding support from the Science and Technology Commission of Shanghai Municipality(Grant No.21DZ1100500)the Shanghai Frontiers Science Center Program(2021-2025 No.20)+2 种基金the Zhangjiang National Innovation Demonstration Zone(Grant No.ZJ2019ZD-005)supported by a fellowship from the China Postdoctoral Science Foundation(2020M671169)the International Postdoctoral Exchange Program from the Administrative Committee of Post-Doctoral Researchers of China([2020]33)。
Abstract: Significant progress has been made in computational imaging (CI), in which deep convolutional neural networks (CNNs) have demonstrated that sparse speckle patterns can be reconstructed. However, due to the limited "local" kernel size of the convolutional operator, the performance of CNNs is limited for spatially dense patterns, such as generic face images. Here, we propose a "non-local" model, termed the Speckle-Transformer (SpT) UNet, for speckle feature extraction of generic face images. It is worth noting that the lightweight SpT UNet achieves high efficiency and strong comparative performance, with a Pearson Correlation Coefficient (PCC) exceeding 0.989 and a structural similarity measure (SSIM) exceeding 0.950.
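The PCC quality metric quoted above is the standard Pearson correlation between the reconstructed image and the ground truth, flattened to one dimension. A minimal sketch with toy pixel values (a generic formula, not the authors' evaluation code):

```python
import math

def pearson_cc(a, b):
    """Pearson correlation coefficient between two equal-length 1-D
    sequences, e.g. a flattened reconstruction and its ground truth.
    Returns 1.0 for a perfect linear match."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    std_a = math.sqrt(sum((x - mean_a) ** 2 for x in a))
    std_b = math.sqrt(sum((y - mean_b) ** 2 for y in b))
    return cov / (std_a * std_b)

truth = [0.0, 0.5, 1.0, 0.5]     # toy ground-truth pixels
recon = [0.1, 0.6, 0.9, 0.5]     # toy reconstruction: close, not exact
pcc = pearson_cc(recon, truth)
```

A PCC above 0.989 over full face images, as reported, indicates the reconstruction tracks the ground truth almost pixel for pixel.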
基金Supported by the National Natural Science Foundation of China(Nos.41630969,41941013,41806225)the Tianjin Municipal Natural Science Foundation(No.20JCQNJC01290)。
Abstract: Satellite records show that the extent and thickness of sea ice in the Arctic Ocean have significantly decreased since the early 1970s. The prediction of sea ice is highly important, but accurate simulation of sea ice variations remains highly challenging. To improve model performance, sensitivity experiments were conducted using a coupled ocean and sea ice model (NEMO-LIM), and the simulation results were compared against satellite observations. Moreover, the contribution ratios of dynamic and thermodynamic processes to sea ice variations were analyzed. The results show that the performance of the model in reconstructing the spatial distribution of Arctic sea ice is highly sensitive to the ice strength decay constant (C^(rhg)). Reducing the C^(rhg) constant increases the sea ice compressive strength, leading to improved simulated sea ice states. The contribution of thermodynamic processes to sea ice melting was reduced due to less deformation and fracture of sea ice with increased compressive strength. Meanwhile, dynamic processes constrained more sea ice to the central Arctic Ocean and contributed to increases in ice concentration, reducing the simulation bias in the central Arctic Ocean in summer. The root mean square error (RMSE) between the modeled ice thickness and CryoSat-2/SMOS satellite observations was reduced in the compressive-strength-enhanced model solution. The thickness of multiyear thick ice in particular was also reduced and matched the satellite observations better in the freezing season. These results provide an essential foundation for exploring the response of the marine ecosystem and biogeochemical cycling to sea ice changes.
基金supported by the National Key R&D Program of China with Grant number 2019YFB1803400the National Natural Science Foundation of China under Grant number 62071114the Fundamental Research Funds for the Central Universities of China under grant numbers 3204002004A2 and 2242022k30005。
Abstract: This paper investigates wireless communication with a novel antenna array architecture, termed the modular extremely large-scale array (XL-array), where an extremely large number of array elements are regularly mounted on a shared platform in both horizontally and vertically interlaced modules. Each module consists of a moderate/flexible number of array elements with an inter-element distance typically on the order of the signal wavelength, while different modules are separated by a relatively large inter-module distance for convenience of practical deployment. By accurately modelling the signal amplitudes and phases, as well as the projected apertures across all modular elements, we analyse the near-field signal-to-noise ratio (SNR) performance of modular XL-array communications. Based on non-uniform spherical wave (NUSW) modelling, a closed-form SNR expression is derived in terms of key system parameters, such as the overall modular array size, the distances between adjacent modules along all dimensions, and the user's three-dimensional (3D) location. In addition, the asymptotic SNR scaling laws are revealed as the number of modules in different dimensions increases indefinitely. Furthermore, we show that our proposed near-field modelling and performance analysis include the results for existing array architectures/modelling as special cases, e.g., the collocated XL-array architecture, uniform plane wave (UPW) based far-field modelling, and the one-dimensional modular extremely large-scale uniform linear array (XL-ULA). Extensive simulation results are presented to validate our findings.
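To give a feel for the near-field analysis, the sketch below sums per-element received powers for a one-dimensional modular array under a spherical-wave (1/r²) amplitude model. Phases and projected-aperture terms are deliberately omitted, so this is a strong simplification of the paper's NUSW model, and all geometry values are illustrative rather than the paper's setup:

```python
def nearfield_snr_gain(num_modules, elems_per_module, d_elem, d_module, user):
    """Sum of per-element received powers (proportional to SNR) for a
    1-D modular array laid along the y-axis, under free-space 1/r^2
    amplitude decay only. `user` is the (x, y) position in metres."""
    ux, uy = user
    total = 0.0
    module_span = (elems_per_module - 1) * d_elem
    for m in range(num_modules):
        # Modules are centred on the origin, spaced d_module apart.
        module_center = (m - (num_modules - 1) / 2.0) * d_module
        for k in range(elems_per_module):
            y = module_center - module_span / 2.0 + k * d_elem
            r2 = ux ** 2 + (uy - y) ** 2   # squared element-to-user distance
            total += 1.0 / r2
    return total

# Same per-module layout, 4x the modules; user 10 m broadside.
g_small = nearfield_snr_gain(4, 8, d_elem=0.05, d_module=1.0, user=(10.0, 0.0))
g_large = nearfield_snr_gain(16, 8, d_elem=0.05, d_module=1.0, user=(10.0, 0.0))
```

Quadrupling the module count raises the collected power by less than a factor of four, because outer modules sit farther from the user; this sublinear growth is the flavour of the asymptotic SNR scaling laws the paper derives rigorously.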