Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared to the decline of the entire Arctic multiyear sea ice, changes in newly formed sea ice carry more thermodynamic and dynamic information about Arctic atmosphere–ocean–ice interaction and northern mid–high latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project (CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach, which gives equal weight to each model, essentially assumes that all models are independent and equally plausible; this contradicts the fact that there are large interdependencies within the ensemble and discrepancies in the models' ability to reproduce observations. Therefore, instead of using the arithmetic mean of well-performing models or of all available models for projections, as in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble according to their performance and independence, providing more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice. However, we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme. Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario. Thereafter, it may decrease (or remain stable) if Arctic warming crosses a threshold (or is extensively constrained).
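The abstract does not spell out the weighting formula, but a minimal sketch of a performance-and-independence scheme in the spirit of ClimWIP-style weighting (Knutti et al., 2017) might look as follows; the shape parameters `sigma_d` and `sigma_s`, the distance metrics, and the toy numbers are assumptions for illustration, not the authors' actual choices.

```python
import numpy as np

def performance_independence_weights(perf_dist, pair_dist, sigma_d, sigma_s):
    """Weight each ensemble member by its distance to observations (performance)
    and penalize members that are close to many other members (dependence).

    perf_dist : (M,) distance of each model from observations
    pair_dist : (M, M) pairwise inter-model distances
    """
    performance = np.exp(-(perf_dist / sigma_d) ** 2)
    similarity = np.exp(-(pair_dist / sigma_s) ** 2)
    np.fill_diagonal(similarity, 0.0)           # a model is not penalized for itself
    independence = 1.0 / (1.0 + similarity.sum(axis=1))
    weights = performance * independence
    return weights / weights.sum()              # normalize so the weights sum to one

# toy ensemble of 4 models; models 2 and 3 are near-duplicates of each other
w = performance_independence_weights(
    perf_dist=np.array([0.2, 0.5, 0.5, 1.0]),
    pair_dist=np.array([[0.0, 0.9, 0.8, 1.2],
                        [0.9, 0.0, 0.1, 1.1],
                        [0.8, 0.1, 0.0, 1.0],
                        [1.2, 1.1, 1.0, 0.0]]),
    sigma_d=0.6, sigma_s=0.5)
print(w)   # near-duplicate and poorly performing members receive the smallest weights
```

A model-democracy ensemble mean corresponds to uniform weights `w = np.full(M, 1/M)`; the weighted projection is simply the weight-averaged model output.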
Currently, more than ten ultrahigh arch dams have been constructed or are being constructed in China. Safety control is essential to the long-term operation of these dams. This study employed the flexibility coefficient and the plastic complementary energy norm to assess the structural safety of arch dams. A comprehensive analysis was conducted, focusing on differences among conventional methods in characterizing the structural behavior of the Xiaowan arch dam in China. Subsequently, the spatiotemporal characteristics of the measured performance of the Xiaowan dam were explored, including periodicity, convergence, and time-effect characteristics. These findings revealed the governing mechanisms of the main influencing factors. Furthermore, a heterogeneous spatial panel vector model was developed, considering both common factors and specific factors affecting the safety and performance of arch dams. This model aims to comprehensively illustrate the spatial heterogeneity between the entire structure and local regions, introducing a specific effect quantity to characterize local deformation differences. Ultimately, the proposed model was applied to the Xiaowan arch dam, accurately quantifying the spatiotemporal heterogeneity of dam performance. Additionally, the spatiotemporal distribution characteristics of environmental load effects on different parts of the dam were reasonably interpreted. Validation of the model's predictions enhances its credibility, leading to the formulation of health diagnosis criteria for the future long-term operation of the Xiaowan dam. The findings not only enhance the predictive ability and timely control of ultrahigh arch dams' performance but also provide a crucial basis for assessing the effectiveness of engineering treatment measures.
Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However, reasonable drought simulations have remained a challenge, and the related performance of the current state-of-the-art Coupled Model Intercomparison Project phase 6 (CMIP6) models remains unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and the corresponding hydrothermal conditions in drylands are assessed. While the general patterns of simulated meteorological elements in drylands resemble the observations, annual precipitation is overestimated by ~33% (with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration (PET) by ~32% (17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50% (29.1%–71.7%) weaker than observed. The CMIP6 models show weaknesses in capturing the climate-mean drought characteristics in drylands, particularly with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit, are reasonably reproduced. The simulated magnitude of the precipitation (water deficit) anomalies associated with dryland droughts is overestimated by 28% (24%) compared to observations. The observed increasing trends in drought fractional area, occurrence, and the corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increases in drought characteristics and the associated precipitation and water deficit anomalies are clearly underestimated after the late 1990s, especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias-correction approaches in drought-related studies over drylands when using CMIP6 outputs.
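As a small illustration of the kind of bias statistics quoted above (a sketch with hypothetical array values, not the authors' evaluation code), the relative biases and the water-deficit condition can be computed as:

```python
import numpy as np

def percent_bias(model, obs):
    """Relative bias of a simulated dryland-mean quantity, in percent of the observed mean."""
    return 100.0 * (np.nanmean(model) - np.nanmean(obs)) / np.nanmean(obs)

def water_deficit(precip, pet):
    """Water-deficit condition as defined in the abstract: precipitation minus PET."""
    return precip - pet

# hypothetical dryland-mean annual values (mm) for one model and the observations
p_mod, p_obs = np.array([310.0, 295.0]), np.array([230.0, 225.0])
pet_mod, pet_obs = np.array([1150.0, 1180.0]), np.array([1700.0, 1720.0])
print(percent_bias(p_mod, p_obs))                        # precipitation overestimated
print(percent_bias(water_deficit(p_mod, pet_mod),
                   water_deficit(p_obs, pet_obs)))       # deficit weaker (less negative) than observed
```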
To study the anti-explosion protection effect of polyurea coating on a reinforced concrete box girder, two segmental girder specimens were made at a scale of 1:3, numbered G (without polyurea coating) and PCG (with polyurea coating). The failure characteristics and dynamic responses of the specimens were compared through explosion tests. The reliability of the numerical simulation using LS-DYNA software was verified by the test results. The effects of different scaled distances, reinforcement ratios, concrete strengths, and coating thicknesses and ranges of polyurea were studied. The results show that the polyurea coating can effectively enhance the anti-explosion performance of the girder. The top plate of the middle chamber in specimen G forms an elliptical penetrating hole, while that in specimen PCG only shows a very slight local dent. The peak vertical displacement and residual displacement of PCG decrease by 74.8% and 73.7%, respectively, compared with those of specimen G. For TNT explosions with a small equivalent, the polyurea coating has a more significant protective effect in reducing the size of the fracture. As the TNT equivalent increases, the protective effect of polyurea in reducing girder displacement becomes more significant. The optimal reinforcement ratio, concrete strength, and thickness and range of the polyurea coating were also determined.
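For reference, the "scaled distance" varied in the parametric study is conventionally the Hopkinson-Cranz scaled distance; a one-line sketch is given below, where the standoff and charge-mass values are illustrative only and not taken from the tests.

```python
def scaled_distance(standoff_m, tnt_equivalent_kg):
    """Hopkinson-Cranz scaled distance Z = R / W**(1/3), in m/kg^(1/3)."""
    return standoff_m / tnt_equivalent_kg ** (1.0 / 3.0)

print(scaled_distance(standoff_m=1.5, tnt_equivalent_kg=10.0))  # illustrative values
```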
Today, in the field of computer networks, new services have been developed on the Internet and intranets, including mail servers, database management, audio, video, and the web server itself, such as Apache. The number of solutions for this server is therefore growing continuously; these services are becoming more and more complex and expensive, yet they still fail to fully meet users' needs. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics such as throughput, resource utilization, and response time of a system through measurement and modeling by simulation. Finally, we present a simple queue model of an Apache web server, which reasonably represents the behavior of a saturated web server, built with the Simulink model in Matlab (Matrix Laboratory) and also incorporating sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward. The model has been validated through measurements and simulations during the tests that we conducted.
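The paper's Simulink queue model is not reproduced here, but a minimal M/M/1 sketch shows how the reported metrics (average response time, throughput, utilization) follow from an arrival rate and a service rate; the numeric rates below are assumptions for illustration.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of a single-server M/M/1 queue (requires lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate                    # server utilization
    response_time = 1.0 / (service_rate - arrival_rate)  # mean time in system, W = 1/(mu - lambda)
    jobs_in_system = rho / (1.0 - rho)                   # mean number in system, L = rho/(1 - rho)
    return {"utilization": rho,
            "avg_response_time_s": response_time,
            "jobs_in_system": jobs_in_system,
            "throughput_req_per_s": arrival_rate}        # all admitted requests are served

print(mm1_metrics(arrival_rate=80.0, service_rate=100.0))  # e.g. 80 req/s offered to a 100 req/s server
```

As the arrival rate approaches the service rate, the response time grows without bound, which is the saturation behavior the simulation is meant to capture.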
Cyclic loads generated by environmental factors, such as winds, waves, and trains, will likely lead to performance degradation in pile foundations, resulting in issues like permanent displacement accumulation and bearing capacity attenuation. This paper presents a semi-analytical solution for predicting the axial cyclic behavior of piles in sands. The solution relies on two enhanced nonlinear load-transfer models that consider stress-strain hysteresis and cyclic degradation in the pile-soil interaction. Model parameters are calibrated through cyclic shear tests of the sand-steel interface and laboratory geotechnical testing of sands. A novel aspect is the meticulous formulation of the shaft load-transfer function from an interface constitutive model, which inherits the interface model's advantages, such as capturing hysteresis, hardening, degradation, and particle breakage. The semi-analytical solution is computed numerically using the matrix displacement method, and the calculated values are validated through model tests performed on non-displacement and displacement piles in sands. The results demonstrate that the predicted values show excellent agreement with the measured values for both the static and cyclic responses of piles in sands. The displacement pile response, including bearing capacity, mobilized shaft resistance, and the convergence rate of permanent settlement, exhibits improvements over non-displacement piles, attributed to the soil-squeezing effect. This methodology presents an innovative analytical framework that allows cyclic interface models to be integrated into the theoretical investigation of pile responses.
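The paper's enhanced load-transfer models are calibrated from interface shear tests and are not reproduced here; as a minimal sketch of the general idea, a classical hyperbolic shaft load-transfer (t-z) curve maps local pile-soil relative displacement to mobilized shaft resistance. The stiffness and ultimate-resistance values below are hypothetical.

```python
def hyperbolic_tz(w_mm, k_ini_kpa_per_mm, tau_ult_kpa):
    """Hyperbolic t-z curve: tau = w / (1/k_ini + w/tau_ult).
    Mobilized shaft resistance rises with displacement and saturates at tau_ult."""
    return w_mm / (1.0 / k_ini_kpa_per_mm + w_mm / tau_ult_kpa)

for w in (0.1, 0.5, 1.0, 5.0):                 # pile-soil relative displacement in mm
    print(w, round(hyperbolic_tz(w, k_ini_kpa_per_mm=50.0, tau_ult_kpa=60.0), 1))
```

The paper's cyclic formulation additionally degrades the curve parameters with the number of load cycles, which this static sketch omits.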
Predicting students’ academic achievements is an essential issue in education, which can benefit many stakeholders, for instance, students, teachers, and managers. Compared with online courses such as MOOCs, students’ academic-related data in the face-to-face physical teaching environment are usually sparse, and the sample size is relatively small. This makes building models that accurately predict students’ performance in such an environment even more challenging. This paper proposes a Two-Way Neural Network (TWNN) model based on a bidirectional recurrent neural network and a graph neural network to predict students’ course performance in the next semester using only their previous course achievements. Extensive experiments on a real dataset show that our model outperforms the baselines on many indicators.
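The TWNN architecture is only named in the abstract; a heavily simplified two-branch sketch in PyTorch (a bidirectional GRU over the grade sequence plus a one-hop propagation over a course-relation adjacency matrix standing in for the graph branch) illustrates the idea. All layer sizes, the adjacency matrix, and the fusion head are assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

class TwoWaySketch(nn.Module):
    """Toy two-branch model: sequence branch (BiGRU) + graph branch (adjacency propagation)."""
    def __init__(self, n_courses, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(input_size=1, hidden_size=hidden, bidirectional=True, batch_first=True)
        self.graph_proj = nn.Linear(n_courses, hidden)
        self.head = nn.Linear(3 * hidden, 1)

    def forward(self, grade_seq, grade_vec, adj):
        # grade_seq: (B, T, 1) grades over past semesters; grade_vec: (B, C); adj: (C, C)
        _, h = self.rnn(grade_seq)                                 # h: (2, B, hidden), one per direction
        seq_feat = torch.cat([h[0], h[1]], dim=-1)                 # (B, 2*hidden)
        graph_feat = torch.relu(self.graph_proj(grade_vec @ adj))  # propagate grades over related courses
        return self.head(torch.cat([seq_feat, graph_feat], dim=-1)).squeeze(-1)

model = TwoWaySketch(n_courses=10)
pred = model(torch.randn(4, 6, 1), torch.rand(4, 10), torch.eye(10))  # 4 students, 6 semesters
print(pred.shape)   # torch.Size([4]) -- one next-semester prediction per student
```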
After the spread of COVID-19, e-learning systems have become crucial tools in educational systems worldwide, spanning all levels of education. This widespread use of e-learning platforms has resulted in the accumulation of vast amounts of valuable data, making it an attractive resource for predicting student performance. In this study, we aimed to predict student performance based on the analysis of data collected from the OULAD and Deeds datasets. The stacking method was employed for modeling in this research. The proposed model utilized weak learners, including nearest neighbor, decision tree, random forest, gradient boosting, naive Bayes, and logistic regression algorithms. After a trial-and-error process, the logistic regression algorithm was selected as the final learner for the proposed model. The results of experiments with the above algorithms are reported separately for the pass and fail classes. The findings indicate that the accuracy of the proposed model on the OULAD dataset reached 98%. Overall, the proposed method improved accuracy by 4% on the OULAD dataset.
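A compact scikit-learn sketch of the described stacking setup (the six weak learners with logistic regression as the final meta-learner) is shown below on synthetic data; the hyperparameters and the preprocessing of the OULAD/Deeds features are assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, random_state=0)  # stand-in for OULAD features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

stack = StackingClassifier(
    estimators=[("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),   # meta-learner selected in the paper
    cv=5)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))   # pass/fail accuracy on the held-out split
```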
In this paper, a detailed model of a photovoltaic (PV) panel is used to study the accumulation of dust on solar panels. The presence of dust diminishes the incident light intensity penetrating the panel’s cover glass, as it increases the reflection of light by particles. This phenomenon, commonly known as the “soiling effect”, presents a significant challenge to PV systems on a global scale. Two basic equivalent-circuit models of a solar cell can be found, namely the single-diode model and the two-diode model. The limited efficiency data in manufacturers’ datasheets encouraged us to develop an equivalent electrical model that remains accurate under dust conditions, integrated with optical transmittance considerations to investigate the soiling effect. The proposed approach is based on comparing experimental current-voltage (I-V) characteristics with data simulated using MATLAB/Simulink. Our research outcomes underscore the feasibility of accurately quantifying the reduction in energy production resulting from soiling by assessing the optical transmittance of the dust accumulated on the surface of the PV glass.
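As a sketch of how transmittance can be folded into the single-diode model (the parameter values below are illustrative, not taken from the paper's panel), the soiling transmittance `tau` simply scales the photocurrent in the implicit diode equation:

```python
import numpy as np
from scipy.optimize import fsolve

# illustrative single-diode module parameters (not from the paper)
Iph_clean, I0, Rs, Rsh, n, Vt = 8.0, 1e-9, 0.3, 300.0, 1.3, 0.0258 * 60  # 60-cell module

def diode_residual(I, V, tau):
    """Implicit single-diode equation; the soiling transmittance tau scales the photocurrent."""
    Iph = tau * Iph_clean
    return Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1.0) - (V + I * Rs) / Rsh - I

def cell_current(V, tau):
    return fsolve(diode_residual, x0=tau * Iph_clean, args=(V, tau))[0]

for tau in (1.0, 0.85):                  # clean glass vs. ~15% transmittance loss from dust
    I_op = cell_current(30.0, tau)       # current at an illustrative operating voltage
    print(tau, round(I_op, 2), round(30.0 * I_op, 1))   # current (A) and power (W)
```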
Objective This study aimed to evaluate and compare the effectiveness of knowledge base-optimized and unoptimized large language models (LLMs) in the field of orthopedics and to explore optimization strategies for the application of LLMs in specific fields. Methods This research constructed a specialized knowledge base using clinical guidelines from the American Academy of Orthopaedic Surgeons (AAOS) and authoritative orthopedic publications. A total of 30 orthopedic-related questions covering aspects such as anatomical knowledge, disease diagnosis, fracture classification, treatment options, and surgical techniques were input into both the knowledge base-optimized and unoptimized versions of GPT-4, ChatGLM, and Spark LLM, and their generated responses were recorded. The overall quality, accuracy, and comprehensiveness of these responses were evaluated by 3 experienced orthopedic surgeons. Results Compared with their unoptimized counterparts, the optimized version of GPT-4 showed improvements of 15.3% in overall quality, 12.5% in accuracy, and 12.8% in comprehensiveness; ChatGLM showed improvements of 24.8%, 16.1%, and 19.6%, respectively; and Spark LLM showed improvements of 6.5%, 14.5%, and 24.7%, respectively. Conclusion Knowledge base optimization significantly enhances the quality, accuracy, and comprehensiveness of the responses provided by the 3 models in the orthopedic field. Therefore, knowledge base optimization is an effective method for improving the performance of LLMs in specific fields.
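The paper does not describe its retrieval pipeline, so the sketch below only illustrates the general knowledge-base-optimization pattern: retrieve relevant passages, then prepend them to the question before it is sent to GPT-4, ChatGLM, or Spark. The knowledge-base entries, the TF-IDF retriever, and the prompt template are all assumptions for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [                                 # hypothetical excerpts, not actual AAOS text
    "Excerpt A: Garden classification grades femoral neck fractures I-IV by displacement.",
    "Excerpt B: Guidance on nonoperative management of distal radius fractures.",
    "Excerpt C: Gustilo-Anderson classification applies to open fractures.",
]
question = "How are femoral neck fractures classified?"

vectorizer = TfidfVectorizer().fit(knowledge_base + [question])
scores = cosine_similarity(vectorizer.transform([question]),
                           vectorizer.transform(knowledge_base))[0]
top_passages = [knowledge_base[i] for i in scores.argsort()[::-1][:2]]

prompt = ("Answer the orthopedic question using only the excerpts below.\n\n"
          + "\n".join(top_passages)
          + f"\n\nQuestion: {question}")
print(prompt)   # this augmented prompt would then be submitted to the chosen LLM
```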
The evolution of the current network has challenges of programmability, maintainability and manageability, due to network ossification. This challenge led to the concept of software-defined networking (SDN), which decouples the control plane from the infrastructure plane. The innovation created the controller placement problem: how to effectively place controllers within a network topology so that the control plane can manage the network of data plane devices. The study was designed to empirically evaluate and compare the functionalities of two controller placement algorithms, POCO and MOCO. The methodology adopted in the study combines explorative and comparative investigation techniques. The study evaluated the performances of the Pareto optimal combination (POCO) and multi-objective combination (MOCO) algorithms in relation to calibrated positions of the controller within a software-defined network. The network environment and measurement metrics were held constant for both the POCO and MOCO models during the evaluation, and the strengths and weaknesses of each model were justified. The results showed that the latencies of the two algorithms in relation to the GoodNet network are 3100 ms and 2500 ms for POCO and MOCO respectively. In Switch-to-Controller Average Case latency, the performance gives 2598 ms and 2769 ms for POCO and MOCO respectively. In Worst Case Switch-to-Controller latency, the performance shows 2776 ms and 2987 ms for POCO and MOCO respectively. The latencies of the two algorithms evaluated in relation to the Savvis network compare as follows: 2912 ms and 2784 ms for POCO and MOCO respectively in Switch-to-Controller Average Case latency, 3129 ms and 3017 ms for POCO and MOCO respectively in Worst Case Switch-to-Controller latency, 2789 ms and 2693 ms for POCO and MOCO respectively in Average Case Controller-to-Controller latency, and 2873 ms and 2756 ms for POCO and MOCO respectively in Worst Case Controller-to-Controller latency. The latencies of the two algorithms evaluated in relation to the AARNet network compare as follows: 2473 ms and 2129 ms for POCO and MOCO respectively in Switch-to-Controller Average Case latency, 2198 ms and 2268 ms for POCO and MOCO respectively in Worst Case Switch-to-Controller latency, 2598 ms and 2471 ms for POCO and MOCO respectively in Average Case Controller-to-Controller latency, and 2689 ms and 2814 ms for POCO and MOCO respectively in Worst Case Controller-to-Controller latency. The Average Case and Worst Case latencies for Switch-to-Controller and Controller-to-Controller are minimal and favourable to the POCO model as against the MOCO model when evaluated on the GoodNet, Savvis, and AARNet networks. This indicates that the POCO model has a speed advantage over the MOCO model, while the MOCO model appears to be more resilient.
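Independently of which placement algorithm proposes the controller set, the latency metrics compared above can be computed from a weighted topology graph; the sketch below, with an invented four-node topology and link latencies, shows one way to do so with NetworkX.

```python
import networkx as nx

def placement_latencies(G, controllers, weight="latency"):
    """Average- and worst-case switch-to-controller and controller-to-controller latencies."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    to_ctrl = [min(dist[s][c] for c in controllers) for s in G.nodes]
    between = [dist[a][b] for a in controllers for b in controllers if a != b] or [0.0]
    return {"s2c_avg": sum(to_ctrl) / len(to_ctrl), "s2c_worst": max(to_ctrl),
            "c2c_avg": sum(between) / len(between), "c2c_worst": max(between)}

G = nx.Graph()   # toy topology; edge weights are link latencies in ms
G.add_weighted_edges_from([(0, 1, 5), (1, 2, 7), (2, 3, 4), (3, 0, 6), (1, 3, 3)],
                          weight="latency")
print(placement_latencies(G, controllers=[0, 2]))
```

POCO and MOCO differ in how they search over candidate controller sets against such metrics, not in how the metrics themselves are defined.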
The magnetopause is the boundary between the Earth’s magnetic field and the interplanetary magnetic field (IMF), located where the pressure of the supersonic solar wind and the magnetospheric pressure are in balance. Although empirical models and global magnetohydrodynamic simulations have been used to define the magnetopause, each of these has limitations. In this work, we use 15 years of magnetopause crossing data from the THEMIS (Time History of Events and Macroscale Interactions during Substorms) spacecraft and the corresponding solar wind parameters to investigate under which solar wind conditions these models predict more accurately. We analyze the pattern of large errors in the extensively used Shue98 magnetopause model and identify the specific solar wind parameters, such as the IMF components, density, velocity, and temperature, that produce these errors. It is shown that (1) the model error increases notably with increasing solar wind velocity, decreasing proton density, and increasing temperature; (2) when the cone angle becomes smaller or |Bx| is larger, the Shue98 model errors increase, which might be caused by magnetic reconnection on the dayside magnetopause; (3) when |By| is large, the error of the model is large, which may be caused by the east-west asymmetry of the magnetopause due to magnetic reconnection; (4) when Bz is southward, the error of the model is larger; and (5) the error is larger for positive dipole tilt than for negative dipole tilt and increases with increasing dipole tilt angle. However, the global simulation model by Liu ZQ et al. (2015) shows a substantial improvement in prediction accuracy when IMF Bx, By, or the dipole tilt cannot be ignored. This result can help us choose a more accurate model for forecasting the magnetopause under different solar wind conditions.
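For context, the Shue98 model evaluated above is commonly written as r = r0 (2/(1 + cos θ))^α, with r0 and α parameterized by the IMF Bz and the solar wind dynamic pressure; a sketch of the usual parameterization and of the model-minus-observation error follows, where the observed crossing values are hypothetical.

```python
import numpy as np

def shue98_radius(theta_rad, bz_nT, pdyn_nPa):
    """Shue et al. (1998) magnetopause radius (Earth radii) at angle theta from the Sun-Earth line."""
    r0 = (10.22 + 1.29 * np.tanh(0.184 * (bz_nT + 8.14))) * pdyn_nPa ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * bz_nT) * (1.0 + 0.024 * np.log(pdyn_nPa))
    return r0 * (2.0 / (1.0 + np.cos(theta_rad))) ** alpha

# model error against a hypothetical observed THEMIS crossing at theta = 30 deg, r = 10.8 Re
r_model = shue98_radius(np.deg2rad(30.0), bz_nT=-2.0, pdyn_nPa=2.0)
print(r_model, r_model - 10.8)   # signed error; positive would mean the model boundary is farther out
```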
Blended teaching is one of the essential teaching methods accompanying the development of information technology. Constructing a learning effect evaluation model helps to improve students’ academic performance and helps teachers better implement course teaching. However, the lack of evaluation models that fuse temporal and non-temporal behavioral data leads to unsatisfactory evaluation results. To meet the demand for predicting students’ academic performance from learning behavior data, this study proposes a learning effect evaluation method that integrates expert-perspective indicators and predicts academic performance by constructing a dual-stream network combining temporal and non-temporal behavior data from the learning process. In this paper, firstly, the Delphi method is used to analyze and process students’ course learning behavior data and to establish an effective and broadly applicable evaluation index system for learning behavior; secondly, the Mann-Whitney U-test and complex correlation analysis are used to further analyze and validate the evaluation indexes; and lastly, a dual-stream information fusion model, which combines temporal and non-temporal features, is established. With the resulting learning effect evaluation model, the mean absolute error (MAE) and root mean square error (RMSE) are 4.16 and 5.29, respectively. This study indicates that combining expert perspectives for evaluation index selection and further fusing temporal and non-temporal behavioral features for learning effect evaluation and prediction is rational, accurate, and effective, which provides powerful support for the practical application of learning effect evaluation and prediction.
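Two of the building blocks named above are easy to illustrate in isolation: the Mann-Whitney U-test used to validate a candidate behavior indicator, and the MAE/RMSE used to report the final evaluation quality. The synthetic numbers below are placeholders, not course data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def mae_rmse(y_true, y_pred):
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))

rng = np.random.default_rng(0)
# does a candidate indicator (e.g. weekly platform logins) separate high and low scorers?
high_scorers = rng.normal(12.0, 3.0, 80)
low_scorers = rng.normal(9.0, 3.0, 80)
stat, p_value = mannwhitneyu(high_scorers, low_scorers, alternative="two-sided")
print(p_value < 0.05)                       # a small p-value supports keeping the indicator

true_scores = np.array([72, 85, 64, 90, 78])
predicted = np.array([70, 88, 60, 86, 80])
print(mae_rmse(true_scores, predicted))     # (MAE, RMSE), cf. the reported 4.16 and 5.29
```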
Dielectric elastomers (DEs) require balanced electric actuation performance and mechanical integrity under applied voltages. Incorporating high-dielectric particles as fillers provides an extensive design space in which concentration, morphology, and distribution can be optimized for improved actuation performance and material modulus. This study presents an integrated framework combining finite element modeling (FEM) and deep learning to optimize the microstructure of DE composites. FEM first calculates the actuation performance and the effective modulus across varied filler combinations, and these data are used to train a convolutional neural network (CNN). Integrating the CNN into a multi-objective genetic algorithm generates designs with enhanced actuation performance and material modulus compared to the conventional FEM-based optimization approach within the same time. This framework harnesses artificial intelligence to navigate vast design possibilities, enabling optimized microstructures for high-performance DE composites.
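The essence of the framework, replacing expensive FEM evaluations with a trained surrogate inside a multi-objective search, can be sketched with a stand-in surrogate and a plain Pareto filter; the real study uses a CNN surrogate over microstructure descriptors and a genetic algorithm, neither of which is reproduced here.

```python
import numpy as np

def surrogate(x):
    """Stand-in for the trained CNN: maps an encoded microstructure x to
    (actuation_performance, effective_modulus); both treated as maximized here."""
    return np.array([np.sin(x).sum(), np.cos(x).sum()])   # placeholder response surface

def pareto_indices(objectives):
    """Indices of non-dominated points (maximization of every objective)."""
    keep = []
    for i, p in enumerate(objectives):
        dominated = any(np.all(q >= p) and np.any(q > p)
                        for j, q in enumerate(objectives) if j != i)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, np.pi, size=(300, 4))    # encoded filler layouts (illustrative)
objs = np.array([surrogate(x) for x in candidates])    # cheap surrogate replaces FEM calls
front = pareto_indices(objs)
print(len(front), objs[front][:3])                     # trade-off designs kept for refinement
```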
High Mountain Asia (HMA), recognized as a third pole, needs regular and intensive study as it is susceptible to climate change. An accurate, high-resolution Digital Elevation Model (DEM) for this region enables us to analyze it in a 3D environment and understand its intricate role as the Water Tower of Asia. NASA science teams produced an 8-m DEM for HMA from satellite stereo imagery, termed the HMA 8-m DEM. In this research, we assessed the vertical accuracy of the HMA 8-m DEM using reference elevations from ICESat-2 geolocated photons at three test sites of varied topography and land cover. Inferences were drawn from statistical quantifiers and elevation profiles. For the world’s highest mountain, Mount Everest, and its surroundings, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) are 1.94 m and 1.66 m, respectively; however, a uniform positive bias observed in the elevation profiles indicates that seasonal snow-cover change will hinder accurate elevation estimation at this type of site. The second test site, containing gentle slopes with forest patches, exhibits Digital Surface Model (DSM) characteristics, with an RMSE and MAE of 0.58 m and 0.52 m, respectively. The third test site, situated in Zanda County on the Qinghai-Tibet Plateau, is a relatively flat terrain bed, mostly bare earth with sudden river cuts, and has minimal errors, with an RMSE and MAE of 0.32 m and 0.29 m, respectively, and negligible bias. Additionally, at one more test site, the feasibility of detecting glacial lakes was tested; the DEM exhibits a flat surface over the lakes, indicating the potential of the HMA 8-m DEM for deriving hydrological parameters. The results of this investigation confirm that the HMA 8-m DEM has high vertical accuracy and should be of great use for analyzing natural hazards and monitoring glacier surfaces.
Faced with increasing global soil degradation, spatially explicit data on cropland soil organic matter (SOM) provide crucial information for soil carbon pool accounting, cropland quality assessment and the formulation of effective management policies. As a spatial information prediction technique, digital soil mapping (DSM) has been widely used to map soil information at different scales. However, the accuracy of digital SOM maps for cropland is typically lower than for other land cover types due to the inherent difficulty in precisely quantifying human disturbance. To overcome this limitation, this study systematically assessed a framework of “information extraction-feature selection-model averaging” for improving model performance in mapping cropland SOM, using 462 cropland soil samples collected in Guangzhou, China in 2021. The results showed that using the framework of dynamic information extraction, feature selection and model averaging could efficiently improve the accuracy of the final predictions (R²: 0.48 to 0.53) without obviously negative impacts on uncertainty. Quantifying the dynamic information of the environment was an efficient way to generate covariates that are linearly and nonlinearly related to SOM, which improved the R² of random forest from 0.44 to 0.48 and the R² of extreme gradient boosting from 0.37 to 0.43. Forward recursive feature selection (FRFS) is recommended when there are relatively few environmental covariates (<200), whereas Boruta is recommended when there are many environmental covariates (>500). The Granger-Ramanathan model averaging approach could improve both the prediction accuracy and the average uncertainty. When the structures of the initial prediction models are similar, increasing the number of averaged models did not have significantly positive effects on the final predictions. Given these advantages, the selected strategies for information extraction, feature selection and model averaging have great potential for high-accuracy soil mapping at any scale, so this approach can provide more reliable references for soil conservation policy-making.
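Of the averaging strategies mentioned, the Granger-Ramanathan approach is simple enough to sketch: the combination weights are obtained by regressing the observations on the member predictions over a validation set. The variant without an intercept is shown, and the data are synthetic placeholders rather than the study's SOM samples.

```python
import numpy as np

def granger_ramanathan_weights(member_preds, obs):
    """Least-squares combination weights; member_preds is (n_samples, n_models)."""
    weights, *_ = np.linalg.lstsq(member_preds, obs, rcond=None)
    return weights

rng = np.random.default_rng(0)
truth = rng.normal(30.0, 8.0, 200)                              # e.g. SOM (g/kg) at validation sites
preds = np.column_stack([truth + rng.normal(0, 3, 200),         # hypothetical random forest member
                         truth * 0.8 + rng.normal(5, 4, 200)])  # hypothetical XGBoost member
w = granger_ramanathan_weights(preds, truth)
combined = preds @ w
print(w, np.sqrt(np.mean((combined - truth) ** 2)))             # weights and RMSE of the average
```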
Municipal solid waste generation is strongly linked to rising human population and expanding urban areas, with significant implications for urban metabolism as well as for the redefinition of space and place values. Effective municipal solid waste management performance depends on interdisciplinary strategies. Such knowledge and skills are paramount for uncovering the sources of waste generation as well as the means of waste storage, collection, recycling, transportation, handling/treatment, disposal, and monitoring. This study was conducted in Dar es Salaam city. Guided by a model of solid waste minimization performance at the source, study data were collected through focus group discussions (FGDs) with ward-level local government officers and triangulated with literature and documentary review. The main themes of the FGD were situational factors (SFA) and local government by-laws (LGBY). In the FGD sessions, the SFA sub-themes sought to understand how MSW minimization is related to the presence and effect of services such as land use planning, availability of landfills, solid waste transfer stations, material recovery facilities, incinerators, solid waste collection bins, solid waste trucks, the solid waste management budget and solid waste collection agents. Similarly, the FGD on LGBY was extended by sub-themes such as the contents of the by-law, community awareness of the by-law, and by-law enforcement mechanisms. While data preparation applied an analytical hierarchy process, data analysis applied an ordinary least squares (OLS) regression model to the sub-criteria that explain SFA and LGBY, and the OLS standardized residuals were used as variables in a geographically weighted regression with a resolution of 241 × 241 meters in ArcMap v10.5. Results showed that situational factors and local government by-laws have a strong relationship with the rate of minimizing solid waste dumping in water bodies (local R² = 0.94).
Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited alongside lossy compression techniques for images and videos, generally in a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall’s algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data that we constructed according to a deliberate model, the results show that these methods, presented in order of performance, are very satisfactory: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative encoding and decoding time.
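As a flavor of the tested techniques, here is a minimal run-length encoding (RLE) pair of routines; RLE is the simplest of the seven methods, and its sensitivity to symbol recurrence is exactly the dependence on the repeating pattern noted in the conclusions. This is a generic sketch, not the Matlab scripts used in the study.

```python
def rle_encode(text):
    """Collapse runs of a repeated symbol into (symbol, run_length) pairs."""
    if not text:
        return []
    runs, current, count = [], text[0], 1
    for ch in text[1:]:
        if ch == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = ch, 1
    runs.append((current, count))
    return runs

def rle_decode(runs):
    return "".join(symbol * length for symbol, length in runs)

sample = "AAAAAABBBCCCCCCCCDAA"            # repetitive input favors RLE
encoded = rle_encode(sample)
assert rle_decode(encoded) == sample        # lossless round trip
print(encoded, f"{len(encoded) * 2}/{len(sample)} symbols stored")
```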
Neuromyelitis optica spectrum disorders are neuroinflammatory demyelinating disorders that lead to permanent visual loss and motor dysfunction. To date, no effective treatment exists, as the exact causative mechanism remains unknown. Therefore, experimental models of neuromyelitis optica spectrum disorders are essential for exploring its pathogenesis and for screening therapeutic targets. Since most patients with neuromyelitis optica spectrum disorders are seropositive for IgG autoantibodies against aquaporin-4, which is highly expressed on the membrane of astrocyte endfeet, most current experimental models are based on aquaporin-4-IgG that initially targets astrocytes. These experimental models have successfully simulated many pathological features of neuromyelitis optica spectrum disorders, such as aquaporin-4 loss, astrocytopathy, granulocyte and macrophage infiltration, complement activation, demyelination, and neuronal loss; however, they do not fully capture the pathological process of human neuromyelitis optica spectrum disorders. In this review, we summarize the currently known pathogenic mechanisms and the development of associated in vitro, ex vivo, and in vivo experimental models for neuromyelitis optica spectrum disorders, suggest potential pathogenic mechanisms for further investigation, and provide guidance on the choice of experimental models. In addition, this review summarizes the latest information on pathologies and therapies for neuromyelitis optica spectrum disorders based on experimental models of aquaporin-4-IgG-seropositive disease, offering further therapeutic targets and a theoretical basis for clinical trials.
This study embarks on a comprehensive examination of optimization techniques within GPU-based parallel programming models, pivotal for advancing high-performance computing (HPC). Emphasizing the transition of GPUs from graphics-centric processors to versatile computing units, it delves into the nuanced optimization of memory access, thread management, algorithmic design, and data structures. These optimizations are critical for exploiting the parallel processing capabilities of GPUs, and the study addresses both theoretical frameworks and practical implementations. By integrating advanced strategies such as memory coalescing, dynamic scheduling, and parallel algorithmic transformations, this research aims to significantly elevate computational efficiency and throughput. The findings underscore the potential of optimized GPU programming to revolutionize computational tasks across various domains, highlighting a pathway towards achieving unparalleled processing power and efficiency in HPC environments. The paper not only contributes to the academic discourse on GPU optimization but also provides actionable insights for developers, fostering advancements in computational sciences and technology.
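As one concrete instance of the memory-access optimizations discussed, the two Numba CUDA kernels below contrast coalesced and strided global-memory access; this is an illustrative sketch (it assumes Numba and a CUDA-capable GPU), not code from the paper.

```python
import numpy as np
from numba import cuda

@cuda.jit
def coalesced_copy(src, dst):
    i = cuda.grid(1)                  # consecutive threads touch consecutive addresses
    if i < src.size:
        dst[i] = src[i]

@cuda.jit
def strided_gather(src, dst, stride):
    i = cuda.grid(1)
    if i < src.size:
        j = (i * stride) % src.size   # neighboring threads read far-apart addresses
        dst[i] = src[j]               # writes stay coalesced; reads are not

n = 1 << 20
src = cuda.to_device(np.arange(n, dtype=np.float32))
dst = cuda.device_array(n, dtype=np.float32)
threads = 256
blocks = (n + threads - 1) // threads
coalesced_copy[blocks, threads](src, dst)     # expected to approach peak memory bandwidth
strided_gather[blocks, threads](src, dst, 32) # same amount of work, poorer effective bandwidth
```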
基金supported by the Chinese–Norwegian Collaboration Projects within Climate Systems jointly funded by the National Key Research and Development Program of China (Grant No.2022YFE0106800)the Research Council of Norway funded project,MAPARC (Grant No.328943)+2 种基金the support from the Research Council of Norway funded project,COMBINED (Grant No.328935)the National Natural Science Foundation of China (Grant No.42075030)the Postgraduate Research and Practice Innovation Program of Jiangsu Province (KYCX23_1314)。
文摘Precipitous Arctic sea-ice decline and the corresponding increase in Arctic open-water areas in summer months give more space for sea-ice growth in the subsequent cold seasons. Compared to the decline of the entire Arctic multiyear sea ice,changes in newly formed sea ice indicate more thermodynamic and dynamic information on Arctic atmosphere–ocean–ice interaction and northern mid–high latitude atmospheric teleconnections. Here, we use a large multimodel ensemble from phase 6 of the Coupled Model Intercomparison Project(CMIP6) to investigate future changes in wintertime newly formed Arctic sea ice. The commonly used model-democracy approach that gives equal weight to each model essentially assumes that all models are independent and equally plausible, which contradicts with the fact that there are large interdependencies in the ensemble and discrepancies in models' performances in reproducing observations. Therefore, instead of using the arithmetic mean of well-performing models or all available models for projections like in previous studies, we employ a newly developed model weighting scheme that weights all models in the ensemble with consideration of their performance and independence to provide more reliable projections. Model democracy leads to evident bias and large intermodel spread in CMIP6 projections of newly formed Arctic sea ice. However, we show that both the bias and the intermodel spread can be effectively reduced by the weighting scheme. Projections from the weighted models indicate that wintertime newly formed Arctic sea ice is likely to increase dramatically until the middle of this century regardless of the emissions scenario.Thereafter, it may decrease(or remain stable) if the Arctic warming crosses a threshold(or is extensively constrained).
基金supported by the National Natural Science Foundation of China(Grant No.52079046).
文摘Currently,more than ten ultrahigh arch dams have been constructed or are being constructed in China.Safety control is essential to long-term operation of these dams.This study employed the flexibility coefficient and plastic complementary energy norm to assess the structural safety of arch dams.A comprehensive analysis was conducted,focusing on differences among conventional methods in characterizing the structural behavior of the Xiaowan arch dam in China.Subsequently,the spatiotemporal characteristics of the measured performance of the Xiaowan dam were explored,including periodicity,convergence,and time-effect characteristics.These findings revealed the governing mechanism of main factors.Furthermore,a heterogeneous spatial panel vector model was developed,considering both common factors and specific factors affecting the safety and performance of arch dams.This model aims to comprehensively illustrate spatial heterogeneity between the entire structure and local regions,introducing a specific effect quantity to characterize local deformation differences.Ultimately,the proposed model was applied to the Xiaowan arch dam,accurately quantifying the spatiotemporal heterogeneity of dam performance.Additionally,the spatiotemporal distri-bution characteristics of environmental load effects on different parts of the dam were reasonably interpreted.Validation of the model prediction enhances its credibility,leading to the formulation of health diagnosis criteria for future long-term operation of the Xiaowan dam.The findings not only enhance the predictive ability and timely control of ultrahigh arch dams'performance but also provide a crucial basis for assessing the effectiveness of engineering treatment measures.
基金supported by Ministry of Science and Technology of China (Grant No. 2018YFA0606501)National Natural Science Foundation of China (Grant No. 42075037)+1 种基金Key Laboratory Open Research Program of Xinjiang Science and Technology Department (Grant No. 2022D04009)the National Key Scientific and Technological Infrastructure project “Earth System Numerical Simulation Facility” (EarthLab)。
文摘Both the attribution of historical change and future projections of droughts rely heavily on climate modeling. However,reasonable drought simulations have remained a challenge, and the related performances of the current state-of-the-art Coupled Model Intercomparison Project phase 6(CMIP6) models remain unknown. Here, both the strengths and weaknesses of CMIP6 models in simulating droughts and corresponding hydrothermal conditions in drylands are assessed.While the general patterns of simulated meteorological elements in drylands resemble the observations, the annual precipitation is overestimated by ~33%(with a model spread of 2.3%–77.2%), along with an underestimation of potential evapotranspiration(PET) by ~32%(17.5%–47.2%). The water deficit condition, measured by the difference between precipitation and PET, is 50%(29.1%–71.7%) weaker than observations. The CMIP6 models show weaknesses in capturing the climate mean drought characteristics in drylands, particularly with the occurrence and duration largely underestimated in the hyperarid Afro-Asian areas. Nonetheless, the drought-associated meteorological anomalies, including reduced precipitation, warmer temperatures, higher evaporative demand, and increased water deficit conditions, are reasonably reproduced. The simulated magnitude of precipitation(water deficit) associated with dryland droughts is overestimated by 28%(24%) compared to observations. The observed increasing trends in drought fractional area,occurrence, and corresponding meteorological anomalies during 1980–2014 are reasonably reproduced. Still, the increase in drought characteristics, associated precipitation and water deficit are obviously underestimated after the late 1990s,especially for mild and moderate droughts, indicative of a weaker response of dryland drought changes to global warming in CMIP6 models. Our results suggest that it is imperative to employ bias correction approaches in drought-related studies over drylands by using CMIP6 outputs.
基金the Natural Science Foundation of Jiangsu Province(Grant No.BK20200494)China Postdoctoral Science Foundation(Grant No.2021M701725)+3 种基金Jiangsu Postdoctoral Research Funding Program(Grant No.2021K522C)Fundamental Research Funds for the Central Universities(Grant No.30919011246)National Natural Science Foundation of China(Grant No.52278188)Natural Science Foundation of Jiangsu Province(Grant No.BK20211196)。
文摘To study the anti-explosion protection effect of polyurea coating on reinforced concrete box girder,two segmental girder specimens were made at a scale of 1:3,numbered as G(without polyurea coating)and PCG(with polyurea coating).The failure characteristics and dynamic responses of the specimens were compared through conducting explosion tests.The reliability of the numerical simulation using LS-DYNA software was verified by the test results.The effects of different scaled distances,reinforcement ratios,concrete strengths,coating thicknesses and ranges of polyurea were studied.The results show that the polyurea coating can effectively enhance the anti-explosion performance of the girder.The top plate of middle chamber in specimen G forms an elliptical penetrating hole,while that in specimen PCG only shows a very slight local dent.The peak vertical displacement and residual displacement of PCG decrease by 74.8% and 73.7%,respectively,compared with those of specimen G.For the TNT explosion with small equivalent,the polyurea coating has a more significant protective effect on reducing the size of fracture.With the increase of TNT equivalent,the protective effect of polyurea on reducing girder displacement becomes more significant.The optimal reinforcement ratio,concrete strength,thickness and range of polyurea coating were also drawn.
文摘Today, in the field of computer networks, new services have been developed on the Internet or intranets, including the mail server, database management, sounds, videos and the web server itself Apache. The number of solutions for this server is therefore growing continuously, these services are becoming more and more complex and expensive, without being able to fulfill the needs of the users. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. These users place high demands on the speed of access to information on the Internet. This is why the performance of the web server is critically important. Several factors influence performance, such as server execution speed, network saturation on the internet or intranet, increased response time, and throughputs. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics such as throughput, resource utilization, and response time of a system through measurement and modeling by simulation. Finally, we present a simple queue model of an Apache web server, which reasonably represents the behavior of a saturated web server using the Simulink model in Matlab (Matrix Laboratory) and also incorporates sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward. The model has been validated through measurements and simulations during the tests that we conducted.
基金the financial support provided by the National Natural Science Foundation of China(Grant No.42272310).
文摘Cyclic loads generated by environmental factors,such as winds,waves,and trains,will likely lead to performance degradation in pile foundations,resulting in issues like permanent displacement accumulation and bearing capacity attenuation.This paper presents a semi-analytical solution for predicting the axial cyclic behavior of piles in sands.The solution relies on two enhanced nonlinear load-transfer models considering stress-strain hysteresis and cyclic degradation in the pile-soil interaction.Model parameters are calibrated through cyclic shear tests of the sand-steel interface and laboratory geotechnical testing of sands.A novel aspect involves the meticulous formulation of the shaft loadtransfer function using an interface constitutive model,which inherently inherits the interface model’s advantages,such as capturing hysteresis,hardening,degradation,and particle breakage.The semi-analytical solution is computed numerically using the matrix displacement method,and the calculated values are validated through model tests performed on non-displacement and displacement piles in sands.The results demonstrate that the predicted values show excellent agreement with the measured values for both the static and cyclic responses of piles in sands.The displacement pile response,including factors such as bearing capacity,mobilized shaft resistance,and convergence rate of permanent settlement,exhibit improvements compared to non-displacement piles attributed to the soil squeezing effect.This methodology presents an innovative analytical framework,allowing for integrating cyclic interface models into the theoretical investigation of pile responses.
基金the National Natural Science Foundation of China under Grant Nos.U2268204,62172061 and 61662017National Key R&D Program of China under Grant Nos.2020YFB1711800 and 2020YFB1707900+1 种基金the Science and Technology Project of Sichuan Province under Grant Nos.2022YFG0155,2022YFG0157,2021GFW019,2021YFG0152,2021YFG0025,2020YFG0322the Guangxi Natural Science Foundation Project under Grant No.2021GXNSFAA220074.
文摘Predicting students’academic achievements is an essential issue in education,which can benefit many stakeholders,for instance,students,teachers,managers,etc.Compared with online courses such asMOOCs,students’academicrelateddata in the face-to-face physical teaching environment is usually sparsity,and the sample size is relativelysmall.It makes building models to predict students’performance accurately in such an environment even morechallenging.This paper proposes a Two-WayNeuralNetwork(TWNN)model based on the bidirectional recurrentneural network and graph neural network to predict students’next semester’s course performance using only theirprevious course achievements.Extensive experiments on a real dataset show that our model performs better thanthe baselines in many indicators.
文摘After the spread of COVID-19,e-learning systems have become crucial tools in educational systems worldwide,spanning all levels of education.This widespread use of e-learning platforms has resulted in the accumulation of vast amounts of valuable data,making it an attractive resource for predicting student performance.In this study,we aimed to predict student performance based on the analysis of data collected from the OULAD and Deeds datasets.The stacking method was employed for modeling in this research.The proposed model utilized weak learners,including nearest neighbor,decision tree,random forest,enhanced gradient,simple Bayes,and logistic regression algorithms.After a trial-and-error process,the logistic regression algorithm was selected as the final learner for the proposed model.The results of experiments with the above algorithms are reported separately for the pass and fail classes.The findings indicate that the accuracy of the proposed model on the OULAD dataset reached 98%.Overall,the proposed method improved accuracy by 4%on the OULAD dataset.
文摘In this paper,a detailed model of a photovoltaic(PV)panel is used to study the accumulation of dust on solar panels.The presence of dust diminishes the incident light intensity penetrating the panel’s cover glass,as it increases the reflection of light by particles.This phenomenon,commonly known as the“soiling effect”,presents a significant challenge to PV systems on a global scale.Two basic models of the equivalent circuits of a solar cell can be found,namely the single-diode model and the two-diode models.The limitation of efficiency data in manufacturers’datasheets has encouraged us to develop an equivalent electrical model that is efficient under dust conditions,integrated with optical transmittance considerations to investigate the soiling effect.The proposed approach is based on the use of experimental current-voltage(I-V)characteristics with simulated data using MATLAB/Simulink.Our research outcomes underscores the feasibility of accurately quantifying the reduction in energy production resulting from soiling by assessing the optical transmittance of accumulated dust on the surface of PV glass.
基金supported by the National Natural Science Foundation of China(Grant No.81974355 and No.82172524).
文摘Objective This study aimed to evaluate and compare the effectiveness of knowledge base-optimized and unoptimized large language models(LLMs)in the field of orthopedics to explore optimization strategies for the application of LLMs in specific fields.Methods This research constructed a specialized knowledge base using clinical guidelines from the American Academy of Orthopaedic Surgeons(AAOS)and authoritative orthopedic publications.A total of 30 orthopedic-related questions covering aspects such as anatomical knowledge,disease diagnosis,fracture classification,treatment options,and surgical techniques were input into both the knowledge base-optimized and unoptimized versions of the GPT-4,ChatGLM,and Spark LLM,with their generated responses recorded.The overall quality,accuracy,and comprehensiveness of these responses were evaluated by 3 experienced orthopedic surgeons.Results Compared with their unoptimized LLMs,the optimized version of GPT-4 showed improvements of 15.3%in overall quality,12.5%in accuracy,and 12.8%in comprehensiveness;ChatGLM showed improvements of 24.8%,16.1%,and 19.6%,respectively;and Spark LLM showed improvements of 6.5%,14.5%,and 24.7%,respectively.Conclusion The optimization of knowledge bases significantly enhances the quality,accuracy,and comprehensiveness of the responses provided by the 3 models in the orthopedic field.Therefore,knowledge base optimization is an effective method for improving the performance of LLMs in specific fields.
文摘The evolution of the current network has challenges of programmability, maintainability and manageability, due to network ossification. This challenge led to the concept of software-defined networking (SDN), to decouple the control system from the infrastructure plane caused by ossification. The innovation created a problem with controller placement. That is how to effectively place controllers within a network topology to manage the network of data plane devices from the control plane. The study was designed to empirically evaluate and compare the functionalities of two controller placement algorithms: the POCO and MOCO. The methodology adopted in the study is the explorative and comparative investigation techniques. The study evaluated the performances of the Pareto optimal combination (POCO) and multi-objective combination (MOCO) algorithms in relation to calibrated positions of the controller within a software-defined network. The network environment and measurement metrics were held constant for both the POCO and MOCO models during the evaluation. The strengths and weaknesses of the POCO and MOCO models were justified. The results showed that the latencies of the two algorithms in relation to the GoodNet network are 3100 ms and 2500 ms for POCO and MOCO respectively. In Switch to Controller Average Case latency, the performance gives 2598 ms and 2769 ms for POCO and MOCO respectively. In Worst Case Switch to Controller latency, the performance shows 2776 ms and 2987 ms for POCO and MOCO respectively. The latencies of the two algorithms evaluated in relation to the Savvis network, compared as follows: 2912 ms and 2784 ms for POCO and MOCO respectively in Switch to Controller Average Case latency, 3129 ms and 3017 ms for POCO and MOCO respectively in Worst Case Switch to Controller latency, 2789 ms and 2693 ms for POCO and MOCO respectively in Average Case Controller to Controller latency, and 2873 ms and 2756 ms for POCO and MOCO in Worst Case Switch to Controller latency respectively. The latencies of the two algorithms evaluated in relation to the AARNet, network compared as follows: 2473 ms and 2129 ms for POCO and MOCO respectively, in Switch to Controller Average Case latency, 2198 ms and 2268 ms for POCO and MOCO respectively, in Worst Case Switch to Controller latency, 2598 ms and 2471 ms for POCO and MOCO respectively, in Average Case Controller to Controller latency, 2689 ms and 2814 ms for POCO and MOCO respectively Worst Case Controller to Controller latency. The Average Case and Worst-Case latencies for Switch to Controller and Controller to Controller are minimal, and favourable to the POCO model as against the MOCO model when evaluated in the Goodnet, Savvis, and the Aanet networks. This simply indicates that the POCO model has a speed advantage as against the MOCO model, which appears to be more resilient than the POCO model.
基金supported by the National Natural Science Foundation of China(Grant Nos.42030203,42004132,42074195,and 42074183).
文摘The magnetopause is the boundary between the Earth’s magnetic field and the interplanetary magnetic field(IMF),located where the supersonic solar wind and magnetospheric pressure are in balance.Although empirical models and global magnetohydrodynamic simulations have been used to define the magnetopause,each of these has limitations.In this work,we use 15 years of magnetopause crossing data from the THEMIS(Time History of Events and Macroscale Interactions during Substorms)spacecraft and their corresponding solar wind parameters to investigate under which solar wind conditions these models predict more accurately.We analyze the pattern of large errors in the extensively used magnetopause model and show the specific solar wind parameters,such as components of the IMF,density,velocity,temperature,and others that produce these errors.It is shown that(1)the model error increases notably with increasing solar wind velocity,decreasing proton density,and increasing temperature;(2)when the cone angle becomes smaller or|Bx|is larger,the Shue98 model errors increase,which might be caused by the magnetic reconnection on the dayside magnetopause;(3)when|By|is large,the error of the model is large,which may be caused by the east-west asymmetry of the magnetopause due to magnetic reconnection;(4)when Bz is southward,the error of the model is larger;and(5)the error is larger for positive dipole tilt than for negative dipole tilt and increases with an increasing dipole tilt angle.However,the global simulation model by Liu ZQ et al.(2015)shows a substantial improvement in prediction accuracy when IMF Bx,By,or the dipole tilt cannot be ignored.This result can help us choose a more accurate model for forecasting the magnetopause under different solar wind conditions.
Funding: Supported by the National Key R&D Program of China (2022YFB3203800), the National Natural Science Foundation of China (62007026), the National Young Talent Program, the Shaanxi Young Top-notch Talent Program, the Key Research and Development Program of Shaanxi (2022GY-313), the Xi'an Science and Technology Project (23ZDCYJSGG0026-2023), and the Fundamental Research Funds for Central Universities (ZYTS23192).
Abstract: With the development of information technology, blended teaching has become an essential teaching method. Constructing a learning effect evaluation model helps improve students' academic performance and helps teachers better implement course teaching. However, the lack of evaluation models that fuse temporal and non-temporal behavioral data leads to unsatisfactory evaluation results. To meet the demand for predicting students' academic performance from learning behavior data, this study proposes a learning effect evaluation method that integrates expert-perspective indicators and predicts academic performance by constructing a dual-stream network combining temporal and non-temporal behavior data from the learning process. Firstly, the Delphi method is used to analyze and process students' course learning behavior data and to establish an effective and generalizable evaluation index system of learning behavior; secondly, the Mann-Whitney U-test and complex correlation analysis are used to further analyze and validate the evaluation indexes; and lastly, a dual-stream information fusion model combining temporal and non-temporal features is established. With the resulting learning effect evaluation model, the mean absolute error (MAE) and root mean square error (RMSE) are 4.16 and 5.29, respectively. This study indicates that selecting evaluation indexes from expert perspectives and fusing temporal and non-temporal behavioral features yields a rational, accurate, and effective approach to learning effect evaluation and prediction, providing practical support for its application.
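The abstract does not describe the architecture in detail; as an illustration of the general idea only, the sketch below fuses a temporal stream (an LSTM over behavior sequences) with a non-temporal stream (an MLP over aggregate indicators) to regress a performance score. All layer sizes, feature counts, and names are assumptions, not the paper's design.

```python
# Illustrative dual-stream fusion model (sizes are assumptions, not the
# paper's architecture): an LSTM encodes temporal behaviour sequences,
# an MLP encodes non-temporal indicators, and the concatenated
# representations regress an academic-performance score.
import torch
import torch.nn as nn

class DualStreamRegressor(nn.Module):
    def __init__(self, seq_features=8, static_features=12, hidden=32):
        super().__init__()
        self.temporal = nn.LSTM(seq_features, hidden, batch_first=True)
        self.static = nn.Sequential(nn.Linear(static_features, hidden), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                  nn.Linear(hidden, 1))

    def forward(self, seq, static):
        _, (h_n, _) = self.temporal(seq)        # final hidden state of the LSTM
        fused = torch.cat([h_n[-1], self.static(static)], dim=1)
        return self.head(fused).squeeze(-1)     # predicted score per student

model = DualStreamRegressor()
scores = model(torch.randn(4, 20, 8), torch.randn(4, 12))   # batch of 4 students
loss = nn.L1Loss()(scores, torch.rand(4) * 100)             # MAE-style training loss
```

Evaluating such a model with MAE and RMSE on held-out students corresponds to the error figures reported above.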
Funding: Supported by the National Key Research and Development Program of China (Grant No. 2022YFB3707803), the National Natural Science Foundation of China (Grant Nos. 12072179 and 11672168), the Key Research Project of Zhejiang Lab (Grant No. 2021PE0AC02), and the Shanghai Engineering Research Center for Integrated Circuits and Advanced Display Materials.
Abstract: Dielectric elastomers (DEs) require balanced electric actuation performance and mechanical integrity under applied voltages. Incorporating high-dielectric particles as fillers provides an extensive design space in which concentration, morphology, and distribution can be optimized for improved actuation performance and material modulus. This study presents an integrated framework combining finite element modeling (FEM) and deep learning to optimize the microstructure of DE composites. FEM first calculates the actuation performance and effective modulus across varied filler combinations, and these data are used to train a convolutional neural network (CNN). Integrating the CNN into a multi-objective genetic algorithm generates designs with enhanced actuation performance and material modulus compared to the conventional FEM-based optimization approach within the same time. This framework harnesses artificial intelligence to navigate vast design possibilities, enabling optimized microstructures for high-performance DE composites.
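As an illustration of how a trained surrogate can stand in for FEM calls inside a multi-objective search, the sketch below keeps Pareto-nondominated candidates over a simple evolutionary loop. The microstructure encoding, the surrogate function, and the selection scheme are all hypothetical stand-ins; the paper's CNN and algorithm details are not given in the abstract.

```python
# Hypothetical sketch: a surrogate (stand-in for the CNN trained on FEM
# data) scores candidate microstructures on two objectives (actuation,
# modulus) inside a simple loop that keeps Pareto-nondominated designs.
import numpy as np
rng = np.random.default_rng(0)

def surrogate(x):                           # stand-in for the trained CNN
    actuation = x.mean() - 0.5 * x.var()
    modulus = 1.0 - abs(x.mean() - 0.4)
    return np.array([actuation, modulus])   # both objectives are maximized

def nondominated(points):
    keep = []
    for i, p in enumerate(points):
        if not any(np.all(q >= p) and np.any(q > p)
                   for j, q in enumerate(points) if j != i):
            keep.append(i)
    return keep

pop = rng.random((40, 16))                  # toy 16-value microstructure encodings
for _ in range(50):                         # mutate, score with surrogate, select
    children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)
    both = np.vstack([pop, children])
    scores = np.array([surrogate(x) for x in both])
    pop = both[nondominated(scores)[:40]]
print(len(pop), "Pareto-candidate designs retained")
```

The speed advantage reported above comes from each surrogate call costing microseconds where a full FEM evaluation would cost minutes.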
Funding: The authors gratefully acknowledge the science teams of the NASA High Mountain Asia 8-meter DEM and NASA ICESat-2 for providing access to the data. This work was conducted with the infrastructure provided by the National Remote Sensing Centre (NRSC), for which the authors are indebted to the Director, NRSC, Hyderabad. We acknowledge the continued support and scientific insights from Mr. Rakesh Fararoda, Mr. Sagar S Salunkhe, Mr. Hansraj Meena, Mr. Ashish K. Jain, and other staff members of the Regional Remote Sensing Centre-West, NRSC/ISRO, Jodhpur. The authors also acknowledge Dr. Kamal Pandey, Scientist, IIRS, Dehradun, for sharing field-level information about Auli-Joshimath. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
Abstract: High Mountain Asia (HMA), recognized as the third pole, requires regular and intensive study as it is susceptible to climate change. An accurate and high-resolution Digital Elevation Model (DEM) for this region enables us to analyze it in a 3D environment and understand its intricate role as the Water Tower of Asia. NASA science teams produced an 8-m DEM of HMA from satellite stereo imagery, termed the HMA 8-m DEM. In this research, we assessed the vertical accuracy of the HMA 8-m DEM using reference elevations from ICESat-2 geolocated photons at three test sites of varied topography and land cover. Inferences were made from statistical quantifiers and elevation profiles. For the world's highest mountain, Mount Everest, and its surroundings, the Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) are 1.94 m and 1.66 m, respectively; however, a uniform positive bias observed in the elevation profiles indicates that seasonal snow-cover change hampers accurate elevation estimation at this kind of test site. The second test site, containing gentle slopes with forest patches, exhibits Digital Surface Model (DSM) characteristics, with an RMSE and MAE of 0.58 m and 0.52 m, respectively. The third test site, situated in Zanda County on the Qinghai-Tibet Plateau, is a relatively flat terrain bed, mostly bare earth with abrupt river cuts, and has minimal errors, with an RMSE and MAE of 0.32 m and 0.29 m, respectively, and negligible bias. Additionally, at one more test site, the feasibility of detecting glacial lakes was tested; the DEM exhibited flat surfaces over the lakes, indicating the potential of the HMA 8-m DEM for deriving hydrological parameters. The results of this investigation confirm that the HMA 8-m DEM has excellent vertical accuracy and should be highly useful for analyzing natural hazards and monitoring glacier surfaces.
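The statistics quoted above (RMSE, MAE, and bias) follow directly from the DEM-minus-reference elevation differences. A minimal sketch, assuming the DEM has already been sampled at the ICESat-2 photon locations (the numbers shown are made up), is:

```python
# Minimal sketch: vertical-accuracy statistics of a DEM against reference
# ICESat-2 photon elevations, assuming both arrays are already co-located.
import numpy as np

def vertical_accuracy(dem_elev, icesat2_elev):
    diff = np.asarray(dem_elev) - np.asarray(icesat2_elev)
    return {"RMSE": float(np.sqrt(np.mean(diff ** 2))),
            "MAE": float(np.mean(np.abs(diff))),
            "bias": float(np.mean(diff))}      # positive bias: DEM above reference

print(vertical_accuracy([8846.1, 8800.5, 8750.2], [8844.0, 8799.0, 8749.5]))
```

A systematically positive bias over snow-covered terrain, as reported for the Everest site, shows up in the "bias" term even when RMSE and MAE remain modest.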
Funding: Supported by the National Natural Science Foundation of China (U1901601) and the National Key Research and Development Program of China (2022YFB3903503).
Abstract: Faced with increasing global soil degradation, spatially explicit data on cropland soil organic matter (SOM) provide crucial information for soil carbon pool accounting, cropland quality assessment, and the formulation of effective management policies. As a spatial information prediction technique, digital soil mapping (DSM) has been widely used to map soil information at different scales. However, the accuracy of digital SOM maps for cropland is typically lower than for other land cover types due to the inherent difficulty in precisely quantifying human disturbance. To overcome this limitation, this study systematically assessed an "information extraction-feature selection-model averaging" framework for improving model performance in mapping cropland SOM, using 462 cropland soil samples collected in Guangzhou, China in 2021. The results showed that the framework of dynamic information extraction, feature selection, and model averaging efficiently improved the accuracy of the final predictions (R^(2) from 0.48 to 0.53) without obvious negative impacts on uncertainty. Quantifying the dynamic information of the environment was an efficient way to generate covariates that are linearly and nonlinearly related to SOM, which improved the R^(2) of random forest from 0.44 to 0.48 and the R^(2) of extreme gradient boosting from 0.37 to 0.43. Forward recursive feature selection (FRFS) is recommended when there are relatively few environmental covariates (<200), whereas Boruta is recommended when there are many environmental covariates (>500). The Granger-Ramanathan model averaging approach could improve both the prediction accuracy and the average uncertainty. When the structures of the initial prediction models are similar, increasing the number of averaged models did not have a significantly positive effect on the final predictions. Given their advantages, the selected strategies for information extraction, feature selection, and model averaging have great potential for high-accuracy soil mapping at various scales, and this approach can provide more reliable references for soil conservation policy-making.
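The model averaging step admits a compact illustration. In the Granger-Ramanathan approach, member predictions are combined with weights fitted against the observations; a minimal sketch of one common variant, using non-negative least squares on synthetic numbers (not the study's data), is:

```python
# Sketch of Granger-Ramanathan-style model averaging: weights for the
# member predictions are obtained by non-negative least squares against
# the observations (one common variant; the numbers here are synthetic).
import numpy as np
from scipy.optimize import nnls

obs = np.array([2.1, 3.0, 2.7, 4.2, 3.6])        # observed SOM at validation sites
preds = np.column_stack([                         # predictions from two base models
    [2.0, 3.2, 2.5, 4.0, 3.9],                    # e.g. random forest
    [2.4, 2.8, 2.9, 4.5, 3.3],                    # e.g. extreme gradient boosting
])
weights, _ = nnls(preds, obs)                     # non-negative combination weights
averaged = preds @ weights
print("weights:", weights, "averaged predictions:", averaged)
```

When the member models are structurally similar their predictions are highly correlated, which is consistent with the finding above that adding more averaged models brings little further gain.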
Abstract: Municipal solid waste generation is strongly linked to the rising human population and expanding urban areas, with significant implications for urban metabolism and the redefinition of space and place values. Effective performance in municipal solid waste management calls for interdisciplinary strategies; such knowledge and skills are paramount to uncovering the sources of waste generation as well as the means of waste storage, collection, recycling, transportation, handling/treatment, disposal, and monitoring. This study was conducted in Dar es Salaam city. Driven by curiosity about solid waste minimization performance at the source, study data were collected through focus group discussions (FGDs) with ward-level local government officers and triangulated with literature and documentary review. The main themes of the FGDs were situational factors (SFA) and local government by-laws (LGBY). In the FGD sessions, the SFA sub-themes probed how MSW minimization relates to the presence and effect of services such as land use planning, availability of landfills, solid waste transfer stations, material recovery facilities, incinerators, solid waste collection bins, solid waste trucks, the solid waste management budget, and solid waste collection agents. Similarly, the FGDs on LGBY were extended with sub-themes such as the contents of the by-law, community awareness of the by-law, and by-law enforcement mechanisms. Data preparation applied an analytical hierarchy process, while data analysis applied an ordinary least squares (OLS) regression model to the sub-criteria that explain SFA and LGBY; the OLS standardized residuals were then used as variables in a geographically weighted regression (GWR) with a resolution of 241 × 241 meters in ArcMap v10.5. Results showed that situational factors and local government by-laws have a strong relationship with the rate of minimizing solid waste dumping in water bodies (local R square = 0.94).
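For the regression step described above, a minimal Python sketch of the OLS stage is given below; the ward-level scores are synthetic placeholders, and the GWR stage that consumes the standardized residuals (run in ArcMap in the study, or available in packages such as mgwr) is omitted.

```python
# Illustrative sketch of the OLS step: regress a waste-minimization rate
# on SFA and LGBY composite scores, then extract standardized residuals
# for a subsequent geographically weighted regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
sfa = rng.random(30)                 # synthetic situational-factor score per ward
lgby = rng.random(30)                # synthetic by-law score per ward
rate = 0.6 * sfa + 0.3 * lgby + rng.normal(0, 0.05, 30)   # synthetic outcome

X = sm.add_constant(np.column_stack([sfa, lgby]))
ols = sm.OLS(rate, X).fit()
std_resid = ols.resid / ols.resid.std()   # inputs to the GWR stage
print(ols.rsquared, std_resid[:5])
```

Feeding the residuals into a GWR then shows where the global OLS relationship fits poorly, which is what the local R-square map in the study captures.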
Abstract: Data compression plays a key role in optimizing the use of memory storage space and reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is also exploited alongside lossy compression techniques for images and videos, which generally use a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding, and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we implemented the compression algorithms as Matlab programs (scripts) in order to test their performance. Finally, following the tests conducted on the data we constructed according to a deliberate model, the results show that the following methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, the Tunstall algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of the symbols that make up the message, and, on the other hand, to the cumulative encoding and decoding time.
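As a pointer to how the simplest of the listed techniques exploits symbol recurrence (the paper's own scripts are in Matlab; this Python sketch is only illustrative), a run-length encoder/decoder can be written as:

```python
# Illustration: a minimal run-length encoder/decoder. RLE pays off exactly
# when the message contains long runs of repeated symbols, as in the
# study's repeating-pattern dataset.
from itertools import groupby

def rle_encode(text):
    return [(ch, len(list(run))) for ch, run in groupby(text)]

def rle_decode(pairs):
    return "".join(ch * count for ch, count in pairs)

msg = "AAAABBBCCDAAAA"
encoded = rle_encode(msg)
assert rle_decode(encoded) == msg
print(encoded)   # [('A', 4), ('B', 3), ('C', 2), ('D', 1), ('A', 4)]
```

On data without long runs, RLE alone gains little, which is why it is typically paired with a reordering transform such as BWT, as in the BWT + RLE combination evaluated above.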
Abstract: Neuromyelitis optica spectrum disorders are neuroinflammatory demyelinating disorders that lead to permanent visual loss and motor dysfunction. To date, no effective treatment exists, as the exact causative mechanism remains unknown. Therefore, experimental models of neuromyelitis optica spectrum disorders are essential for exploring their pathogenesis and screening for therapeutic targets. Since most patients with neuromyelitis optica spectrum disorders are seropositive for IgG autoantibodies against aquaporin-4, which is highly expressed on the membrane of astrocyte endfeet, most current experimental models are based on aquaporin-4-IgG, which initially targets astrocytes. These experimental models have successfully simulated many pathological features of neuromyelitis optica spectrum disorders, such as aquaporin-4 loss, astrocytopathy, granulocyte and macrophage infiltration, complement activation, demyelination, and neuronal loss; however, they do not fully capture the pathological process of human neuromyelitis optica spectrum disorders. In this review, we summarize the currently known pathogenic mechanisms and the development of associated experimental models in vitro, ex vivo, and in vivo for neuromyelitis optica spectrum disorders, suggest potential pathogenic mechanisms for further investigation, and provide guidance on experimental model choices. In addition, this review summarizes the latest information on pathologies and therapies for neuromyelitis optica spectrum disorders based on experimental models of aquaporin-4-IgG-seropositive disease, offering further therapeutic targets and a theoretical basis for clinical trials.
Abstract: This study embarks on a comprehensive examination of optimization techniques within GPU-based parallel programming models, which are pivotal for advancing high-performance computing (HPC). Emphasizing the transition of GPUs from graphics-centric processors to versatile computing units, it delves into the nuanced optimization of memory access, thread management, algorithmic design, and data structures. These optimizations are critical for exploiting the parallel processing capabilities of GPUs, addressing both theoretical frameworks and practical implementations. By integrating advanced strategies such as memory coalescing, dynamic scheduling, and parallel algorithmic transformations, this research aims to significantly elevate computational efficiency and throughput. The findings underscore the potential of optimized GPU programming to revolutionize computational tasks across various domains, highlighting a pathway towards achieving unparalleled processing power and efficiency in HPC environments. The paper not only contributes to the academic discourse on GPU optimization but also provides actionable insights for developers, fostering advancements in computational sciences and technology.