BACKGROUND As one of the fatal diseases with high incidence, lung cancer seriously endangers public health and safety. Elderly patients usually have poor self-care ability and are more likely to develop a series of psychological problems. AIM To investigate the effectiveness of the initial check, information exchange, final accuracy check, reaction (IIFAR) information care model on the mental health status of elderly patients with lung cancer. METHODS This was a single-centre study. We recruited 60 elderly patients with lung cancer who attended our hospital from January 2021 to January 2022. These patients were randomly divided into two groups, with the control group receiving conventional health education and the observation group receiving the IIFAR information care model on top of the conventional care protocol. The differences in psychological distress, anxiety and depression, quality of life, fatigue, and psychological locus of control were compared between the two groups, and the causes of psychological distress were analyzed. RESULTS After the intervention, Distress Thermometer, Hospital Anxiety and Depression Scale (HADS) anxiety, HADS depression, Revised Piper's Fatigue Scale, and Chance Health Locus of Control scores were lower in the observation group than in the pre-intervention period in the same group, and were significantly lower than those of the control group (P < 0.05). After the intervention, Quality of Life Questionnaire Core 30 (QLQ-C30), Internal Health Locus of Control, and Powerful Others Health Locus of Control scores were significantly higher in both the observation and control groups than in the pre-intervention period in the same group, and QLQ-C30 scores were significantly higher in the observation group than in the control group (P < 0.05). CONCLUSION The IIFAR information care model can help elderly patients with lung cancer by reducing their anxiety, depression, psychological distress, and fatigue, improving their psychological locus of control, and enhancing their quality of life.
Prediction of tunneling-induced ground settlements is an essential task, particularly for tunneling in urban settings. Ground settlements should be limited within a tolerable threshold to avoid damage to aboveground structures. Machine learning (ML) methods are becoming popular in many fields, including tunneling and underground excavation, as a powerful learning and prediction technique. However, the datasets collected from a tunneling project are usually small from the perspective of applying ML methods. Can ML algorithms effectively predict tunneling-induced ground settlements when the available datasets are small? In this study, seven ML methods are used to predict tunneling-induced ground settlement from 14 contributing factors measured before or during tunnel excavation: multiple linear regression (MLR), decision tree (DT), random forest (RF), gradient boosting (GB), support vector regression (SVR), back-propagation neural network (BPNN), and permutation importance-based BPNN (PI-BPNN). All methods except BPNN and PI-BPNN are shallow-structure ML methods. The effectiveness of these seven ML approaches on small datasets is evaluated using model accuracy and stability. Model accuracy is measured by the coefficient of determination (R^2) on the training and testing datasets, and the stability of a learning algorithm indicates how robust its predictive performance is. A quantile error (QE) criterion is also introduced to assess predictive performance in terms of underprediction and overprediction. Our study reveals that the RF algorithm outperforms all the other models, with the highest prediction accuracy (0.9) and stability (3.02 × 10^(-27)). Deep-structure ML models do not perform well on small datasets, with relatively low model accuracy (0.59) and stability (5.76). The PI-BPNN architecture is proposed and designed for small datasets, and shows better performance than the typical BPNN. Six important contributing factors of ground settlements are identified: tunnel depth, the distance between tunnel face and surface monitoring points (DTM), weighted average soil compressibility modulus (ACM), grouting pressure, penetration rate, and thrust force. Funding: University Transportation Center for Underground Transportation Infrastructure (UTC-UTI) at the Colorado School of Mines, Grant No. 69A3551747118, US Department of Transportation (DOT).
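As a rough illustration of the accuracy-and-stability comparison described above, the following sketch trains a random forest and a small neural network on a synthetic small dataset and reports the mean test R^2 together with its variance across random resamplings. The dataset, its size, and the hyperparameters are assumptions for illustration, not the study's actual data or settings.

```python
# Minimal sketch: comparing model accuracy (R^2) and stability (variance of R^2
# across resamplings) for a shallow and a deep model on a small dataset.
# Synthetic data stands in for the 14 measured contributing factors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 14))                  # 60 samples, 14 factors (assumed)
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=60)

def evaluate(make_model, n_runs=20):
    """Return mean and variance of test R^2 over random train/test splits."""
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.2, random_state=seed)
        model = make_model().fit(X_tr, y_tr)
        scores.append(r2_score(y_te, model.predict(X_te)))
    return np.mean(scores), np.var(scores)

for name, make in [("RF", lambda: RandomForestRegressor(200, random_state=0)),
                   ("BPNN", lambda: MLPRegressor((32, 32), max_iter=2000,
                                                 random_state=0))]:
    acc, var = evaluate(make)
    print(f"{name}: mean test R^2 = {acc:.2f}, stability (variance) = {var:.2e}")
```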
This article deals with the problem of calculating the comparative uncertainty of the main variable in the model of a studied physical phenomenon, which depends on a qualitative and quantitative set of variables. The choice of variables is determined by the preliminary information available to the observer and depends on his knowledge, experience, and intuition. The finite amount of information available to the researcher leads to an inevitable aberration of the observed object. This gives rise to a comparative (or, equivalently, relative) uncertainty of the model that cannot be removed and cannot be reduced by any statistical processing. The goal is to present a theoretical justification for the existence of this uncertainty and to propose a procedure for its calculation. The practical application of the informational method is demonstrated for choosing the preferred model of the Einstein formula and for calculating the speed of sound.
We postulate and analyze a nonlinear subsampling accuracy loss (SSAL) model based on the root mean square error (RMSE) and two SSAL models based on the mean square error (MSE), suggested by extensive preliminary simulations. The SSAL models predict accuracy loss in terms of subsampling parameters such as the fraction of users dropped (FUD) and the fraction of items dropped (FID). We investigate whether the models depend on the characteristics of the dataset in a constant way across datasets when using the SVD collaborative filtering (CF) algorithm. The dataset characteristics considered include various densities of the rating matrix and the numbers of users and items. Extensive simulations and rigorous regression analysis led to empirical symmetrical SSAL models in terms of FID and FUD whose coefficients depend only on the data characteristics. The SSAL models turned out to be multi-linear in the odds ratios of dropping a user (or an item) vs. not dropping it. Moreover, one MSE deterioration model turned out to be linear in the FID and FUD odds, with a zero coefficient on their interaction term. Most importantly, the models are constant in the sense that they are written in closed form using the considered data characteristics (densities and numbers of users and items). The models are validated through extensive simulations based on 850 synthetically generated primary (pre-subsampling) matrices derived from the 25M MovieLens dataset. Nearly 460,000 subsampled rating matrices were then simulated and subjected to the singular value decomposition (SVD) CF algorithm. Further validation was conducted using the 1M MovieLens and the Yahoo! Music Rating datasets. The models were constant and significant across all three datasets.
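To make the "linear in the odds" form concrete, here is a minimal sketch that converts FUD and FID into odds, fits a deterioration model of the assumed form ΔMSE ≈ β0 + β1·odds(FUD) + β2·odds(FID) + β3·odds(FUD)·odds(FID), and checks whether the interaction coefficient is near zero. The simulated response is an assumption for illustration, not the paper's fitted model.

```python
# Minimal sketch: fitting a subsampling-accuracy-loss model that is linear in
# the odds of dropping a user (FUD) or an item (FID), with an interaction term.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
fud = rng.uniform(0.05, 0.6, size=500)        # fraction of users dropped
fid = rng.uniform(0.05, 0.6, size=500)        # fraction of items dropped

odds = lambda p: p / (1.0 - p)                # odds of dropping vs. not dropping
X = np.column_stack([odds(fud), odds(fid), odds(fud) * odds(fid)])

# Assumed ground truth with a zero interaction coefficient, plus noise.
delta_mse = 0.8 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.01, 500)

fit = LinearRegression().fit(X, delta_mse)
print("coef on odds(FUD), odds(FID), interaction:", np.round(fit.coef_, 3))
```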
High accuracy surface modeling (HASM) is a method that can be applied to soil property interpolation. In this paper, we present a method that combines HASM with geographic information for soil property interpolation (HASM-SP) to improve accuracy. Based on soil types, land use types, and parent rocks, HASM-SP was applied to interpolate soil available P, Li, pH, alkali-hydrolyzable N, total K, and Cr in a typical red soil hilly region. To evaluate the performance of HASM-SP, we compared it with ordinary kriging (OK), ordinary kriging combined with geographic information (OK-Geo), and stratified kriging (SK). The results showed that the methods combined with geographic information, HASM-SP and OK-Geo, obtained lower estimation bias. HASM-SP also showed lower MAEs and RMSEs than the other three methods (OK-Geo, OK and SK). Much more detail is presented in the HASM-SP maps of soil properties because the combination of different types of geographic information gives abrupt boundaries for the spatial variation of soil properties. Therefore, HASM-SP can not only reduce prediction errors but also accord with the distribution of geographic information, which makes the spatial simulation of soil properties more reasonable. HASM-SP has thus not only enriched the theory of high accuracy surface modeling of soil properties, but also provides a scientific method for application in resource management and environmental planning. Funding: National Natural Science Foundation of China, No. 41001057; China National Science Fund for Distinguished Young Scholars, No. 40825003; State Key Laboratory of Earth Surface Processes and Resource Ecology, No. 2011-KF-06.
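The comparison above rests on standard point-validation statistics. A small sketch of how the estimation bias (mean error), MAE, and RMSE in such a comparison are typically computed is given below; the validation values are made-up stand-ins rather than the study's measurements.

```python
# Minimal sketch: validation statistics used to compare interpolation methods
# (e.g., HASM-SP vs. kriging variants) at held-out sample points.
import numpy as np

def validation_stats(observed, predicted):
    """Mean error (bias), mean absolute error, and root mean square error."""
    e = np.asarray(predicted) - np.asarray(observed)
    return {"ME": e.mean(),
            "MAE": np.abs(e).mean(),
            "RMSE": np.sqrt((e ** 2).mean())}

observed = np.array([5.1, 6.3, 4.8, 7.0, 5.9])   # assumed soil pH at test sites
for method, predicted in {
    "HASM-SP": [5.0, 6.4, 4.9, 6.8, 6.0],
    "OK":      [5.5, 6.0, 5.3, 6.5, 6.4],
}.items():
    print(method, validation_stats(observed, predicted))
```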
Various types of flexure hinges have been introduced and implemented in a variety of fields due to their superior performance. Castigliano's second theorem, the direct integration method based on Euler-Bernoulli beam theory, and the unit-load method have been employed to analytically describe the elastic behavior of flexure hinges. However, all these methods require prior knowledge of beam theory and laborious integration for each term of the compliance matrix, which greatly decreases modeling efficiency and blocks practical application of the modeling methods. In this paper, a novel finite beam based matrix modeling (FBMM) method is proposed to numerically obtain compliance matrices of flexure hinges with various shapes. The main concept of the method is to treat a flexure hinge as a serial connection of finite micro-beams; the shearing and torsion effects of the hinge are explicitly considered to enhance modeling accuracy. By means of matrix calculations, the complete compliance matrix of a flexure hinge can be derived in a single calculation process. A large number of numerical calculations are conducted for various types of flexure hinges with different shapes, and the results are compared with those obtained by conventional modeling methods. This demonstrates that the proposed modeling method is not only efficient but also accurate, and that it is a more universal and more robust tool for describing the elastic behavior of flexure hinges. Funding: National Natural Science Foundation of China (Grant Nos. 50775099, 51075041, 51175221 and 51305162).
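The serial micro-beam idea can be sketched in a few lines: each segment contributes its local compliance, transformed to the hinge tip by the wrench-transmission matrix, and the contributions add. The planar Euler-Bernoulli segment compliance and the uniform-beam test case below are illustrative assumptions; the paper's full 6×6 formulation with shear and torsion is not reproduced here.

```python
# Minimal sketch of finite-beam matrix modeling in the plane (3 DOF: x, y, theta).
# Total tip compliance of a serial chain: C = sum_i A_i^T C_i A_i, where A_i maps
# the tip wrench [Fx, Fy, M] to segment i's local wrench (virtual-work duality).
import numpy as np

def segment_compliance(L, E, A, I):
    """Local compliance of one Euler-Bernoulli micro-beam segment (no shear)."""
    return np.array([[L / (E * A), 0.0,                0.0],
                     [0.0,         L**3 / (3 * E * I), L**2 / (2 * E * I)],
                     [0.0,         L**2 / (2 * E * I), L / (E * I)]])

def tip_compliance(lengths, E, A, I):
    """Accumulate segment compliances along a straight chain to the free tip."""
    C = np.zeros((3, 3))
    tip = sum(lengths)
    pos = 0.0
    for L in lengths:
        pos += L                       # coordinate of this segment's outer end
        r = tip - pos                  # moment arm from segment end to the tip
        Aw = np.array([[1.0, 0.0, 0.0],
                       [0.0, 1.0, 0.0],
                       [0.0, r,   1.0]])   # M_local = M_tip + r * Fy_tip
        C += Aw.T @ segment_compliance(L, E, A, I) @ Aw
    return C

# Uniform-beam check: 100 micro-beams should reproduce the L^3/(3EI) tip bending
# compliance of a cantilever, validating the accumulation scheme.
E, A, I, Ltot = 200e9, 1e-6, 1e-12, 0.01
C = tip_compliance([Ltot / 100] * 100, E, A, I)
print(C[1, 1], "vs analytic", Ltot**3 / (3 * E * I))
```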
In order to accurately model compliant mechanisms utilizing plate flexures, qualitative planar stress (Young's modulus) and planar strain (plate modulus) assumptions are not feasible. This paper investigates a quantitative equivalent modulus using nonlinear finite element analysis (FEA) to reflect the coupled factors affecting the modeling accuracy of two typical distributed-compliance mechanisms. It is shown that all parameters influence the equivalent modulus to different degrees; that the presence of a large load-stiffening effect makes the equivalent modulus deviate significantly from the planar assumptions in two ideal scenarios; and that a plate modulus assumption is more reasonable for a very large out-of-plane thickness if the beam length is large.
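For reference, the two limiting assumptions the paper interpolates between are the standard plane-stress and plate (plane-strain) moduli of elasticity, and a quantitative equivalent modulus is expected to fall between these bounds. In the usual notation, with ν denoting Poisson's ratio:

```latex
% Limiting assumptions for the equivalent modulus of a plate flexure:
% plane stress (thin, wide beam) vs. plate modulus (very thick plate).
E_{\mathrm{eq}}^{\text{plane stress}} = E, \qquad
E_{\mathrm{eq}}^{\text{plate}} = \frac{E}{1 - \nu^{2}},
\qquad\text{e.g. for } \nu = 0.3:\ \frac{E}{1-\nu^{2}} \approx 1.099\,E .
```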
In this work, we examine the impact of crude distillation unit (CDU) model errors on the results of refinery-wide optimization for production planning or feedstock selection. We compare the swing cut + bias CDU model with a recently developed hybrid CDU model (Fu et al., 2016). The hybrid CDU model computes material and energy balances, as well as product true boiling point (TBP) curves and bulk properties (e.g., sulfur content, cetane index, and other properties). Product TBP curves are predicted with an average error of 0.5% against rigorous simulation curves. Case studies of optimal operation computed using a planning model based on the swing cut + bias CDU model and using a planning model that incorporates the hybrid CDU model are presented. Our results show that significant economic benefits can be obtained by using accurate CDU models in refinery production planning. Funding: Ontario Research Foundation, McMaster Advanced Control Consortium, and Imperial Oil.
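As background for the comparison, the swing-cut idea can be written compactly: the material in a swing cut between two adjacent products is split by an optimization variable, and a bias term corrects the resulting yields against plant or rigorous-simulation data. A schematic form (the notation here is assumed, not taken from the paper) is:

```latex
% Schematic swing-cut + bias yields for adjacent light/heavy products:
% x in [0,1] is the optimizer's split of the swing-cut material w_swing,
% and b_L, b_H are bias corrections fitted against plant/simulation data.
y_L = y_L^{\mathrm{base}} + x\, w_{\mathrm{swing}} + b_L, \qquad
y_H = y_H^{\mathrm{base}} + (1 - x)\, w_{\mathrm{swing}} + b_H, \qquad
0 \le x \le 1 .
```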
Fused deposition modeling (FDM) is an additive manufacturing technique used to fabricate intricate 3D parts in the shortest possible time without using tools, dies, fixtures, or human intervention. This article empirically reports the effects of the process parameters, i.e., the layer thickness, raster angle, raster width, air gap, part orientation, and their interactions, on the accuracy of the length, width, and thickness of acrylonitrile-butadiene-styrene (ABSP 400) parts fabricated using the FDM technique. It was found that contraction prevailed along the length and width directions, whereas the thickness increased from the desired value of the fabricated part. Optimum parameter settings to minimize the responses, such as the change in length, width, and thickness of the test specimen, were determined using Taguchi's parameter design. Because Taguchi's philosophy fails to obtain uniform optimal factor settings for each response, in this study a fuzzy inference system combined with the Taguchi philosophy was adopted to aggregate the three responses into a single response, so as to reach the specific target values with overall optimum factor level settings. Further, Taguchi and artificial neural network predictive models are also presented for evaluating the dimensional accuracy of the FDM-fabricated parts under various operating conditions. The values predicted by both models are in good agreement with the experimental data, with mean absolute percentage errors of 3.16% and 0.15%, respectively. Finally, the confirmatory test results showed an improvement of 0.454 in the multi-response performance index when using the optimal FDM parameters over the initial values.
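Two of the quantities used above have compact definitions: Taguchi's smaller-the-better signal-to-noise ratio, which drives the parameter-design step, and the mean absolute percentage error used to score the predictive models. The sketch below computes both on made-up replicate measurements; the data are assumptions, and the article's actual factor levels are not reproduced.

```python
# Minimal sketch: Taguchi smaller-the-better S/N ratio and MAPE.
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10 log10(mean(y^2)); larger S/N means smaller, steadier error."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Assumed replicate dimensional deviations (mm) at one factor-level setting.
deviations = [0.12, 0.09, 0.11]
print("S/N (smaller-the-better):", round(sn_smaller_the_better(deviations), 2))

actual = [20.00, 20.05, 19.98]        # assumed measured lengths (mm)
predicted = [19.95, 20.10, 20.00]     # assumed model predictions (mm)
print("MAPE (%):", round(mape(actual, predicted), 2))
```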
High-resolution global non-hydrostatic gridded dynamic models have drawn significant attention in recent years in conjunction with the rising demand for improved weather forecasting and climate prediction. So far, it is still challenging to build a high-resolution gridded global model that meets the requirements on numerical accuracy, dispersion relations, conservation, and computation. Among these requirements, this review focuses on one significant topic: numerical accuracy over the entire non-uniform spherical grid. The paper discusses all the topic-related challenges by comparing the schemes adopted in well-known finite-volume-based operational or research dynamical cores, and provides an overview of how these challenges are met in a summary table. The analysis and validation in this review are based on the shallow-water equation system; the conclusions carry over to more complicated models. These challenges should be critical research topics in the future development of finite-volume global models. Funding: National Key Research and Development Program of China (2017YFC1502201); Basic Scientific Research and Operation Fund of the Chinese Academy of Meteorological Sciences (2017Z017).
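For reference, the shallow-water system that underpins the review's analysis is, in its common advective form on a rotating sphere, written as below, where h is the fluid depth, b the bottom topography, and f the Coriolis parameter:

```latex
% Rotating shallow-water equations (advective form):
\frac{\partial h}{\partial t} + \nabla \cdot (h\,\mathbf{v}) = 0, \qquad
\frac{\partial \mathbf{v}}{\partial t}
  + (\mathbf{v}\cdot\nabla)\,\mathbf{v}
  + f\,\hat{\mathbf{k}}\times\mathbf{v}
  = -\,g\,\nabla (h + b).
```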
Over the past few years, the Utah Department of Transportation has developed the signal performance metrics (SPMs) system to evaluate the performance of signalized intersections dynamically. This system currently provides data summaries for several performance measures, one of them being turning movement counts collected by microwave sensors. As this system became public, there was a need to evaluate the accuracy of the data placed on the SPMs, and a large-scale data collection was carried out to meet this need. Vehicles in the high-resolution data from microwave sensors were matched with vehicles in manually collected ground-truth volume count data. Matching vehicles from the two sources required significant effort, so a spreadsheet-based data analysis procedure was developed to carry out the task. A mixed-model analysis of variance was used to analyze the effects of the considered factors on turning volume count accuracy. The analysis found that the approach volume level and the number of approach lanes have significant effects on the accuracy of turning volume counts, but the location of the sensors does not. In addition, the location of lanes in relation to the sensor did not significantly affect the accuracy of lane-by-lane volume counts. This indicated that accuracy analysis could be performed using total approach volumes, without comparing specific turning counts, that is, left-turn, through, and right-turn movements. In general, the accuracy of approach volume counts collected by microwave sensors was within the margin of error that traffic engineers can accept. The procedure taken to perform the analysis and a summary of the accuracy of volume counts for the factor combinations considered are presented in this paper.
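A mixed-model analysis of this kind can be set up along the following lines with statsmodels, treating the intersection as a random grouping factor and volume level and lane count as fixed effects. The column names and the synthetic data frame are assumptions for illustration, not the study's dataset.

```python
# Minimal sketch: mixed-model analysis of count-accuracy effects, with
# intersection as a random effect and volume level / lane count as fixed effects.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 200
df = pd.DataFrame({
    "intersection": rng.integers(0, 10, n),          # random grouping factor
    "volume_level": rng.choice(["low", "med", "high"], n),
    "lanes": rng.choice([1, 2, 3], n),
})
# Assumed response: percent error of sensor counts vs. ground truth.
df["pct_error"] = (5.0 + 2.0 * (df["volume_level"] == "high")
                   - 0.5 * df["lanes"] + rng.normal(0, 1.5, n))

model = smf.mixedlm("pct_error ~ volume_level * lanes",
                    data=df, groups=df["intersection"])
print(model.fit().summary())
```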
The miniaturization of transistors led to advances in computers, mainly to speed up computation, but such miniaturization has approached its fundamental limits. However, many applications require better computational resources than existing computers can offer. Fortunately, the development of quantum computing offers a path to solving this problem. We briefly review the history of quantum computing and highlight some of its advanced achievements. Based on current studies, the Quantum Computing Advantage (QCA) seems indisputable; the challenge is how to actualize the practical quantum advantage (PQA). It is clear that machine learning can help with this task. The method used for high accuracy surface modelling (HASM) incorporates reinforced machine learning. It can be transformed into a large sparse linear system and combined with the Harrow-Hassidim-Lloyd (HHL) quantum algorithm to support quantum machine learning. HASM has been successfully used with classical computers to conduct spatial interpolation, upscaling, downscaling, data fusion, and model-data assimilation of eco-environmental surfaces. Furthermore, a training experiment on a supercomputer indicates that our HASM-HHL quantum computing approach has an accuracy similar to that of classical HASM and can realize exponential acceleration over the classical algorithms. A universal platform for hybrid classical-quantum computing would be an obvious next step, along with further work to improve the approach given the many known limitations of the HHL algorithm. In addition, HASM quantum machine learning might be improved by: (1) considerably reducing the number of gates required for operating HASM-HHL; (2) evaluating cost and benchmark problems of quantum machine learning; (3) comparing the performance of the quantum and classical algorithms to clarify their advantages and disadvantages in terms of accuracy and computational speed; and (4) adding the algorithms to a cloud platform to support applications and gather active feedback from users. Funding: Open Research Program of the International Research Center of Big Data for Sustainable Development Goals (Grant No. CBAS2022ORP02); National Natural Science Foundation of China (Grant Nos. 41930647, 72221002); Key Project of Innovation LREIS (Grant No. KPI005).
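The pivotal step above is that HASM reduces to a large sparse linear system Ax = b, which is exactly the problem HHL addresses (for sparse, well-conditioned A, with the solution delivered as a quantum state). A classical stand-in for that step, solving an assumed sparse symmetric positive-definite system with conjugate gradients, is sketched below; the Laplacian-style matrix is illustrative, not the actual HASM system.

```python
# Minimal sketch: the sparse linear solve at the core of HASM, here done
# classically with conjugate gradients; HHL targets this same Ax = b problem.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# Assumed SPD, sparse, Laplacian-style system standing in for HASM's matrix.
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                     # info == 0 means the solver converged
print("converged:", info == 0, "| residual:", np.linalg.norm(A @ x - b))
```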
Soil particle-size fractions (PSFs), comprising the three components of sand, silt, and clay, are very important for the simulation of land-surface processes and the evaluation of ecosystem services. Accurate spatial prediction of soil PSFs can help us better understand the simulation processes of these models. Because soil PSFs are compositional data, there are special demands in the interpolation process, such as the constant sum (1 or 100%). In addition, the performance of the spatial prediction method largely determines the accuracy of the resulting spatial distributions. Here, we propose a framework for the spatial prediction of soil PSFs. It includes log-ratio transformation methods for soil PSFs (additive log-ratio, centered log-ratio, symmetric log-ratio, and isometric log-ratio methods); interpolation methods (geostatistical methods, regression models, and machine learning models); validation methods (probability sampling, data splitting, and cross-validation); indices for accuracy assessment in soil PSF interpolation and soil texture classification (rank correlation coefficient, mean error, root mean square error, mean absolute error, coefficient of determination, Aitchison distance, standardized residual sum of squares, overall accuracy, Kappa coefficient, and Precision-Recall curve); and uncertainty analysis indices (prediction and confidence intervals, standard deviation, and confusion index). Moreover, we summarize several paths for improving the accuracy of soil PSF interpolation, such as improving the data distribution through effective data transformation, choosing appropriate prediction methods according to the data distribution, combining auxiliary variables to improve mapping accuracy and distribution rationality, improving interpolation accuracy using hybrid models, and developing multi-component joint models. In the future, more attention should be paid to the principles and mechanisms of data transformation, joint simulation models and high accuracy surface modeling methods for multiple components, and the combination of soil particle-size curves with stochastic simulations. We propose a clear framework for improving the performance of prediction methods for soil PSFs, which can be referenced by other researchers in digital soil science. Funding: National Natural Science Foundation of China, No. 41930647; Strategic Priority Research Program of the Chinese Academy of Sciences, Nos. XDA23100202 and XDA20040301; State Key Laboratory of Resources and Environmental Information System.
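The log-ratio transforms named above have compact closed forms. The sketch below applies the additive (alr), centered (clr), and a standard isometric (ilr) transform to an example sand/silt/clay composition; clay as the alr denominator and the particular two-coordinate ilr basis are assumptions for illustration.

```python
# Minimal sketch: log-ratio transforms for a 3-part composition (sand, silt, clay),
# which remove the constant-sum constraint before interpolation.
import numpy as np

x = np.array([0.45, 0.35, 0.20])      # assumed sand, silt, clay fractions (sum = 1)

def alr(x):
    """Additive log-ratio: log of each part over the last part (clay here)."""
    return np.log(x[:-1] / x[-1])

def clr(x):
    """Centered log-ratio: log of each part over the geometric mean."""
    g = np.exp(np.mean(np.log(x)))
    return np.log(x / g)

def ilr(x):
    """One standard isometric log-ratio basis for a 3-part composition."""
    z1 = np.sqrt(1.0 / 2.0) * np.log(x[0] / x[1])
    z2 = np.sqrt(2.0 / 3.0) * np.log(np.sqrt(x[0] * x[1]) / x[2])
    return np.array([z1, z2])

print("alr:", alr(x))                          # 2 coordinates
print("clr:", clr(x), "sum:", clr(x).sum())    # 3 coordinates, sum ~ 0
print("ilr:", ilr(x))                          # 2 coordinates
```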