Continuum robots with high flexibility and compliance have the capability to operate in confined and cluttered environments. To enhance the load capacity while maintaining robot dexterity, we propose a novel non-constant subsegment stiffness structure for tendon-driven quasi continuum robots (TDQCRs) comprising rigid-flexible coupling subsegments. Aiming at real-time control applications, we present a novel static-to-kinematic modeling approach to gain a comprehensive understanding of the TDQCR model. The analytical subsegment-based kinematics for the multisection manipulator is derived based on screw theory and the product of exponentials formula, and a static model accounting for gravity loading, actuation loading, and the robot constitutive laws is established. Additionally, the effect of tension attenuation caused by routing-channel friction is considered in the robot statics, resulting in improved model accuracy. The root-mean-square error between the outputs of the static model and the experimental system is less than 1.63% of the arm length (0.5 m). By employing the proposed static model, a mapping of bending angles between the configuration space and the subsegment space is established. Furthermore, motion control experiments are conducted on our TDQCR system, and the results demonstrate the effectiveness of the static-to-kinematic model.
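The product-of-exponentials kinematics mentioned above can be illustrated with a minimal sketch (not the paper's implementation; the screw axes, joint angles, and home configuration in the example are hypothetical placeholders):

```python
import numpy as np

def twist_exp(S, theta):
    """Matrix exponential of a unit twist S = (w, v) in se(3) for angle theta."""
    S = np.asarray(S, float)
    w, v = S[:3], S[3:]
    wx = np.array([[0, -w[2], w[1]],
                   [w[2], 0, -w[0]],
                   [-w[1], w[0], 0]])
    # Rodrigues formula for the rotation part
    R = np.eye(3) + np.sin(theta) * wx + (1 - np.cos(theta)) * wx @ wx
    # Translation part of the se(3) exponential
    G = np.eye(3) * theta + (1 - np.cos(theta)) * wx + (theta - np.sin(theta)) * wx @ wx
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

def poe_forward(screws, thetas, M):
    """Product-of-exponentials forward kinematics: T = e^[S1]t1 ... e^[Sn]tn M."""
    T = np.eye(4)
    for S, t in zip(screws, thetas):
        T = T @ twist_exp(S, t)
    return T @ M
```

For a multisection manipulator, each subsegment contributes one or more such exponentials; composing them per subsegment gives the subsegment-based kinematics described in the abstract.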
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence is a key objective for any modern manufacturing industry. Compensation of thermally induced positioning errors remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent the heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on the intensity of MT structure deformation. The MT thermal behavior is first modeled using the finite element method and validated against temperature fields measured experimentally with temperature sensors and thermal imaging. The thermal behavior validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations under varied working conditions to explore possible relationships between the temperature distribution and the thermal deformation characteristics, and to select the most appropriate temperature-sensitive points for building an empirical model that predicts thermal errors as a function of the MT thermal state.
Validation tests carried out with a simplified model based on an artificial neural network confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
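The paper selects temperature-sensitive points from a structured thermomechanical analysis; as a simplified stand-in (an assumption, not the authors' procedure), a common heuristic ranks candidate points by correlation with the thermal error and skips candidates that are nearly redundant with points already chosen:

```python
import numpy as np

def select_sensitive_points(T, err, k=2, redundancy=0.95):
    """Greedy selection of temperature-sensitive points.

    T   : (samples, candidates) matrix of candidate-point temperatures
    err : (samples,) thermally induced positioning error
    Ranks candidates by |correlation| with err, skipping any candidate whose
    |correlation| with an already-chosen point exceeds the redundancy threshold.
    """
    n_pts = T.shape[1]
    corr_err = np.array([abs(np.corrcoef(T[:, j], err)[0, 1]) for j in range(n_pts)])
    order = np.argsort(-corr_err)  # most error-correlated first
    chosen = []
    for j in order:
        if all(abs(np.corrcoef(T[:, j], T[:, c])[0, 1]) < redundancy for c in chosen):
            chosen.append(j)
        if len(chosen) == k:
            break
    return chosen
```

This captures the intent of the selection stage (informative yet non-redundant points); the actual paper derives the candidates from finite element simulations rather than raw correlations.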
The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing reliability-weight and actual-virtual-weight functions, and developing a double-weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
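The double-weighted partial least squares model itself is not specified in the abstract; the underlying idea of weighting actual versus virtual (generated) samples can be sketched with sample-weighted least squares (a simplified stand-in, not the authors' DWPLS; the data below are hypothetical):

```python
import numpy as np

def weighted_fit(X, y, w):
    """Sample-weighted least squares: minimize sum_i w_i * (y_i - [x_i, 1] . beta)^2.

    Returns beta = (slope coefficients..., intercept). Each row of X is scaled
    by sqrt(w_i), so virtual samples with small weights barely influence the fit.
    """
    W = np.sqrt(np.asarray(w, float))[:, None]
    Xw = np.hstack([X, np.ones((len(X), 1))]) * W  # append intercept column, then weight
    yw = np.asarray(y, float) * W.ravel()
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    return beta
```

In the augmentation setting, real samples would get reliability weights near 1 and generated samples smaller actual-virtual weights, so unreliable virtual data cannot dominate the model.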
The features of a quasi-two-dimensional (quasi-2D) model for simulating two-phase water hammer flows with a vaporous cavity in a pipe are investigated. The quasi-2D model with a discrete vaporous cavity in the pipe is proposed in this paper. This model uses the quasi-2D formulation for the pure liquid zone and a one-dimensional (1D) discrete vapor cavity model for the vaporous cavity zone. The quasi-2D model solves two-dimensional equations for both axial and radial velocities and 1D equations for both pressure head and discharge by the method of characteristics. The 1D discrete vapor cavity model is used to simulate the vaporous cavity that occurs when the local pipe pressure drops below the vapor pressure of the liquid. The proposed model is used to simulate two-phase water flows caused by rapid downstream valve closure in a reservoir-pipe-valve system. The results obtained by the proposed model are compared with those of the corresponding 1D model and with experimental data from the literature. The comparison shows that the maximum pressure heads simulated by the proposed model are more accurate than those of the corresponding 1D model.
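As a quick plausibility check for any water hammer simulation of rapid valve closure, the classical Joukowsky relation bounds the instantaneous head rise, and 4L/a gives the characteristic wave period; a minimal sketch (generic values, not the paper's experimental setup):

```python
def joukowsky_head_rise(wave_speed, dv, g=9.81):
    """Joukowsky surge: head rise (m) for an instantaneous velocity change dv (m/s),
    given the pressure wave speed a (m/s): dh = a * dv / g."""
    return wave_speed * dv / g

def pipe_period(length, wave_speed):
    """Water hammer period 4L/a (s): round-trip time of the pressure wave
    between the valve and the reservoir and back, twice."""
    return 4.0 * length / wave_speed
```

A maximum simulated head well above the Joukowsky value, or pressure oscillations far off the 4L/a period, would flag a problem in either the MOC discretization or the cavity model.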
It is common for datasets to contain both categorical and continuous variables. However, many feature screening methods designed for high-dimensional classification assume that the variables are continuous, which limits the applicability of existing methods to this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. The proposed method uses the maximal information coefficient to assess the predictive power of the variables. Under certain regularity conditions, we prove that the screening procedure possesses the sure screening and ranking consistency properties. To validate the effectiveness of the approach, we conduct simulation studies and provide real-data examples demonstrating its finite-sample performance. In summary, the proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
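Computing the maximal information coefficient proper requires the MINE grid search; as a rough stand-in for the screening idea (an assumption, not the paper's statistic), one can rank covariates by histogram mutual information with the class label, which treats quantile-binned continuous covariates and categorical covariates uniformly:

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram mutual information between a covariate x (quantile-binned)
    and integer class labels y."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])  # internal bin edges
    xb = np.digitize(x, edges)
    joint = np.zeros((xb.max() + 1, int(y.max()) + 1))
    for a, b in zip(xb, y):
        joint[a, int(b)] += 1
    p = joint / joint.sum()
    px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def screen(X, y, top=2):
    """Keep the `top` covariates with the largest dependence on the label."""
    scores = [mutual_info(X[:, j], y) for j in range(X.shape[1])]
    return list(np.argsort(scores)[::-1][:top])
```

For already-categorical covariates the binning step is a no-op in spirit: category codes land in distinct bins, so the same score applies.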
In ultra-high-dimensional data, it is common for the response variable to be multi-classified. This paper therefore proposes a model-free screening method for such data that introduces the Jensen-Shannon divergence to measure the importance of covariates. The idea is to calculate the Jensen-Shannon divergence between the conditional probability distribution of a covariate given the response variable and its unconditional probability distribution, and then to use the probabilities of the response categories as weights to form a weighted Jensen-Shannon divergence; a larger weighted Jensen-Shannon divergence means that the covariate is more important. We also investigate an adapted version of the method for the case where the number of categories varies across covariates, in which the weighted Jensen-Shannon divergence is adjusted by a logarithmic factor of the number of categories. Through both theoretical analysis and simulation experiments, the proposed methods are shown to have the sure screening and ranking consistency properties. Finally, results from simulation and real-dataset experiments show that, in feature screening, the proposed methods are robust in performance and faster in computation than an existing method.
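The screening statistic described above is straightforward to sketch for discrete covariates (a minimal illustration following the stated definition, not the paper's code):

```python
import numpy as np

def js_div(p, q):
    """Jensen-Shannon divergence between two discrete distributions p and q."""
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0
        return float((a[nz] * np.log(a[nz] / b[nz])).sum())
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def weighted_js_score(x, y):
    """Screening score: sum_r P(Y=r) * JS( P(X|Y=r), P(X) ) for a discrete covariate x."""
    xv, yv = np.unique(x), np.unique(y)
    px = np.array([(x == v).mean() for v in xv])  # unconditional distribution of x
    score = 0.0
    for r in yv:
        mask = y == r
        pxr = np.array([(x[mask] == v).mean() for v in xv])  # conditional on Y = r
        score += mask.mean() * js_div(pxr, px)
    return score
```

A covariate independent of the response scores zero (its conditional and unconditional distributions coincide), while a covariate that determines the class scores highest, which is exactly the ranking behavior the method relies on.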
In this study, an in-house quasi-dimensional code has been developed that simulates the intake, compression, combustion, expansion, and exhaust strokes of a homogeneous charge compression ignition (HCCI) engine. Compressed natural gas (CNG) is used as the fuel. A detailed chemical kinetic scheme consisting of 310 species and 1701 elementary reactions, developed by Bakhshan et al., has been applied for combustion modeling and heat release calculations. The zero-dimensional k-ε turbulence model has been used for the calculation of heat transfer. The outputs are the performance, pollutant emissions, and combustion characteristics of HCCI engines. Parametric studies have been conducted to discuss the effects of various parameters on the performance and pollutant emissions of these engines.
Based on an analysis of the physical mechanism of the Stationary Plasma Thruster (SPT), an integral equation describing the ion density of the steady SPT and the ion velocity distribution function at an arbitrary axial position of the steady SPT channel are derived. The integral equation is equivalent to the Vlasov equation, but is simpler. A one-dimensional steady quasineutral hybrid model is established, in which ions are described by the above integral equation, while neutrals and electrons are described by hydrodynamic equations. The integral equation is transformed into an equivalent differential equation, which, together with the other equations, is solved by an ordinary differential equation (ODE) solver in Matlab. The numerical simulation results show that under various circumstances the ion average velocity differs and needs to be deduced separately.
Multistation machining processes are widely applied in the contemporary manufacturing environment. Modeling of variation propagation in a multistation machining process is one of the most important research topics. Due to the existence of multiple variation streams, it is challenging to model and analyze variation propagation in a multistation system. Current approaches to error modeling for multistation machining processes are not explicit enough for error control and for ensuring final product quality. In this paper, a mathematical model depicting the part dimensional variation of a complex multistation manufacturing process is formulated. A linear state-space dimensional error propagation equation is established through kinematic analysis of the influence of locating parameter variations and locating datum variations on dimensional errors, so that the dimensional error accumulation and transformation within the multistation process are quantitatively described. A systematic procedure to build the model is presented, which improves the way variation sources are determined in complex machining systems. A simple two-dimensional example is used to illustrate the proposed procedures. Finally, an industrial case of a multistation-machined part in a manufacturing shop is given to verify the validity and practicability of the method. The proposed analytical model is essential to quality control and improvement for multistation systems in machining quality forecasting and design optimization.
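A linear state-space error propagation of the kind described, x_k = A_k x_{k-1} + B_k u_k with x_k the part error state after station k and u_k the station-level error inputs, can be sketched directly (the matrices in the example are hypothetical placeholders, not a real fixture model):

```python
import numpy as np

def propagate_variation(A_list, B_list, u_list, x0):
    """Propagate part dimensional error across stations:
    x_k = A_k @ x_{k-1} + B_k @ u_k.
    Returns the list [x_0, x_1, ..., x_n] of error states."""
    x = np.asarray(x0, float)
    history = [x]
    for A, B, u in zip(A_list, B_list, u_list):
        x = np.asarray(A) @ x + np.asarray(B) @ np.asarray(u)
        history.append(x)
    return history
```

Here A_k encodes how errors inherited from upstream stations (e.g. datum errors) transform at station k, and B_k injects station-local contributions such as locating parameter variations.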
The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of full three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring-triangle location method that makes full use of surface normal information. Experimental results prove that this algorithm can efficiently obtain the most reasonable reconstructed mesh surface with arbitrary topology, wherein the automatically reconstructed surface has only small topological differences from the true surface. The algorithm has potential applications in virtual environments, computer vision, and so on.
The use of three-dimensional in vitro systems in cancer research is a promising path for developing effective anticancer therapies. The aim of this study was to engineer a functional 3-D in vitro model of normal and cancerous cervical tissue. Normal epithelial and immortalized cervical epithelial carcinoma cell lines were used to construct 3-D artificial normal cervical and cervical cancerous tissues. De-epidermised dermis (DED) was used as a scaffold for both models. Morphological analyses were conducted using haematoxylin and eosin staining, and the characteristics of the models were studied by analyzing the expression of different structural cytokeratins and the differential protein marker MAX dimerisation protein 1 (Mad1) using an immunohistochemical technique. Haematoxylin and eosin staining showed that the normal cervical tissue had multiple epithelial layers while the cancerous cervical tissue showed dysplastic changes. Immunohistochemistry staining revealed that in the normal cervix model cytokeratin 10 was expressed in the upper stratified layer of the epithelium while cytokeratin 5 was expressed mainly in the middle and basal layers. Cytokeratin 19 was weakly expressed in a few basal cells. The cervical cancer model showed cytokeratin 19 expression in different epithelial layers and weak or no expression of cytokeratin 5 and cytokeratin 10. Mad1 expression was detected in some suprabasal cells. The 3-D in vitro models showed stratified epithelial layers and expressed the same types and patterns of differentiation marker proteins as seen in the corresponding in vivo tissue, in either normal cervical or cervical cancerous tissue. These findings imply that they can serve as functional normal and cervical cancer models.
High-dimensional hyperspectral image classification is a challenging task due to the spectral feature vectors. The high correlation between these features and the noise greatly affects classification performance. To overcome this, dimensionality reduction techniques are widely used. Numerous deep learning models have recently been proposed for traditional image processing applications; however, in hyperspectral image classification the features of deep learning models are less explored. Thus, for efficient hyperspectral image classification, a depth-wise convolutional neural network is presented in this research work. To handle the dimensionality issue in the classification process, an optimized self-organizing map (SOM) model is employed using a water strider optimization algorithm. The network parameters of the self-organizing map are optimized by the water strider optimization, which reduces the dimensionality issues and enhances classification performance. Standard datasets such as Indian Pines and the University of Pavia (UP) are considered for experimental analysis. Existing dimensionality reduction methods, such as Enhanced Hybrid-Graph Discriminant Learning (EHGDL), local geometric structure Fisher analysis (LGSFA), Discriminant Hyper-Laplacian Projection (DHLP), the group-based tensor model (GBTM), and lower-rank tensor approximation (LRTA), are compared with the proposed optimized SOM model. The results confirm the superior performance of the proposed model, with 98.22% accuracy on the Indian Pines dataset and 98.21% accuracy on the University of Pavia dataset, over the existing maximum likelihood classifier and support vector machine (SVM).
In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems (SDSs). First, dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is then proposed based on online-corrected low-dimensional models, where the state of the system at a previous time is used to correct the output of the low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
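The PCA dimension reduction step can be sketched with an SVD of the snapshot matrix (a generic illustration on synthetic low-rank data, not the paper's SDS measurements):

```python
import numpy as np

def pca_reduce(Y, r):
    """Project a snapshot matrix Y (time x space) onto its first r principal
    components: Y ≈ a @ phi + mean, with a the low-dimensional temporal
    coefficients and phi the spatial modes."""
    mean = Y.mean(axis=0)
    U, s, Vt = np.linalg.svd(Y - mean, full_matrices=False)
    phi = Vt[:r]              # (r, space) spatial modes
    a = (Y - mean) @ phi.T    # (time, r) temporal coefficients
    return a, phi, mean

def pca_reconstruct(a, phi, mean):
    """Lift the low-dimensional trajectory back to the full spatial field."""
    return a @ phi + mean
```

In the MPC configuration, the controller then operates on the r-dimensional coefficients a(t) rather than on the full spatial field, which is what makes the online optimization tractable.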
Psychometric theory requires unidimensionality (i.e., scale items should represent a common latent variable). One advocated approach to testing unidimensionality within the Rasch model is to identify two item sets from a principal component analysis (PCA) of residuals, estimate separate person measures based on the two item sets, compare the two estimates on a person-by-person basis using t-tests, and determine the number of cases that differ significantly at the 0.05 level; if ≤5% of tests are significant, or the lower bound of a binomial 95% confidence interval (CI) of the observed proportion overlaps 5%, then it is suggested that strict unidimensionality can be inferred; otherwise the scale is multidimensional. Given its proposed significance and potential implications, this procedure needs detailed scrutiny. This paper explores the impact of sample size and of the method of estimating the 95% binomial CI upon conclusions according to recommended conventions. Normal approximation, "exact", Wilson, Agresti-Coull, and Jeffreys binomial CIs were calculated for observed proportions of 0.06, 0.08, and 0.10 and sample sizes from n = 100 to n = 2500. Lower 95% CI boundaries were inspected regarding coverage of the 5% threshold. Results showed that all binomial 95% CIs included as well as excluded 5% as an effect of sample size for all three investigated proportions, except for the Wilson, Agresti-Coull, and Jeffreys CIs, which did not include 5% for any sample size with a 10% observed proportion. The normal approximation CI was most sensitive to sample size. These data illustrate that the PCA/t-test protocol should be used and interpreted as any hypothesis testing procedure, and is dependent on sample size as well as on the binomial CI estimation procedure.
The PCA/t-test protocol should not be viewed as a "definite" test of unidimensionality and does not replace an integrated quantitative/qualitative interpretation based on an explicit variable definition in view of the perspective, context, and purpose of measurement.
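The sample-size dependence of the lower CI bound is easy to reproduce for the closed-form intervals; a sketch of the normal-approximation, Wilson, and Agresti-Coull lower bounds (the Jeffreys bound needs a Beta quantile, e.g. from scipy.stats, and is omitted here):

```python
import math

Z = 1.959963984540054  # two-sided 95% normal quantile

def normal_lower(p, n, z=Z):
    """Lower bound of the normal-approximation (Wald) binomial 95% CI."""
    return p - z * math.sqrt(p * (1 - p) / n)

def wilson_lower(p, n, z=Z):
    """Lower bound of the Wilson score binomial 95% CI."""
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - half) / denom

def agresti_coull_lower(p, n, z=Z):
    """Lower bound of the Agresti-Coull binomial 95% CI."""
    n_t = n + z * z
    p_t = (p * n + z * z / 2) / n_t
    return p_t - z * math.sqrt(p_t * (1 - p_t) / n_t)
```

Evaluating these bounds over n = 100 to 2500 shows the pattern reported above: for a 10% observed proportion the Wilson lower bound stays above 5% at every sample size, while for a 6% proportion the conclusion flips with n.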
Seismic data reconstruction is an essential and fundamental step in the seismic data processing workflow, of profound significance for improving migration imaging quality, multiple suppression, and seismic inversion accuracy. Regularization methods play a central role in solving the underdetermined inverse problem of seismic data reconstruction. In this paper, a novel regularization approach, the low-dimensional manifold model (LDMM), is proposed for reconstructing missing seismic data. Our work relies on the fact that seismic patches always occupy a low-dimensional manifold. Specifically, we exploit the dimension of the seismic patch manifold as a regularization term in the reconstruction problem, and reconstruct the missing seismic data by enforcing low dimensionality on this manifold. The crucial step of the proposed method is to compute the dimension of the patch manifold. To this end, we adopt an efficient dimensionality calculation method based on low-rank approximation, which provides a reliable safeguard for enforcing the constraints in the reconstruction process. Numerical experiments on synthetic and field seismic data demonstrate that, compared with the curvelet-based sparsity-promoting L1-norm minimization method and the multichannel singular spectrum analysis method, the proposed method obtains state-of-the-art reconstruction results.
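The low-rank dimensionality estimate for a patch manifold can be sketched by thresholding the singular values of the patch matrix (a simplified proxy for the paper's LDMM dimension term; the patch size and threshold below are assumptions):

```python
import numpy as np

def patch_matrix(data, patch=8):
    """Stack non-overlapping patch x patch blocks of a 2-D section as rows."""
    H, W = data.shape
    rows = [data[i:i + patch, j:j + patch].ravel()
            for i in range(0, H - patch + 1, patch)
            for j in range(0, W - patch + 1, patch)]
    return np.array(rows)

def effective_dimension(P, tol=1e-3):
    """Low-rank proxy for the patch-manifold dimension: the number of
    singular values above tol * (largest singular value)."""
    s = np.linalg.svd(P, compute_uv=False)
    return int((s > tol * s[0]).sum())
```

Smooth, coherent seismic events yield a patch matrix whose singular values decay fast (small effective dimension), which is precisely the property the LDMM regularizer penalizes during reconstruction.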
The dynamic updating of the model includes the change of the space border, the addition and reduction of spatial components (disappearing, dividing, and merging), the change of topological relationships, and the synchronous dynamic updating of the database. Firstly, aiming at the deficiency of the OO-Solid model in dynamic updating, the modeling primitives of the OO-Solid model were modified. Then the algorithms for dynamically updating a 3D geological model under node, line, or surface data changes were discussed. The core algorithm works by establishing a spatial index and following an object-oriented, bottom-up order, namely dynamic updating from the node to the arc, then to the polygon, then to the face of the component, and finally to the geological object. The research has important theoretical and practical value in the field of three-dimensional geological modeling and is significant for mineral resources.
Using the complex variable function method and the technique of conformal mapping, the anti-plane shear problem of an elliptic hole with asymmetric collinear cracks in a one-dimensional hexagonal quasi-crystal is solved, and exact analytic solutions of the stress intensity factors (SIFs) for the mode III problem are obtained. Under limiting conditions, the present results reduce to the Griffith crack, and many new results are obtained as well, such as the circular hole with asymmetric collinear cracks, the elliptic hole with a straight crack, the mode T crack, the cross crack, and so on. As far as the phonon field is concerned, these results, which play an important role in many practical and theoretical applications, are shown to be in good agreement with the classical results.
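In the Griffith limiting case mentioned above, the mode III SIF has the classical closed form K_III = τ√(πa) for a crack of half-length a under remote anti-plane shear τ; a minimal numeric check of this limit (not the paper's full elliptic-hole solution):

```python
import math

def k3_griffith(tau, a):
    """Mode III stress intensity factor of a Griffith crack of half-length a
    under remote anti-plane shear stress tau: K_III = tau * sqrt(pi * a)."""
    return tau * math.sqrt(math.pi * a)
```

Any exact SIF expression for the elliptic hole with edge cracks should recover this value as the hole degenerates and the configuration tends to a single straight crack, which is the standard sanity check for such solutions.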
Little research has been done on the expression of cloverleaf junctions in a 3-dimensional city model (3DCM). The main reason is that a cloverleaf junction is a complex and enormous construction: its main body straddles in the air and has aerial intersections between its parts. These complex features make cloverleaf junctions quite different from buildings and terrain, and it is therefore difficult to express this kind of spatial object in the same way as buildings and terrain. In this paper, the authors analyze the spatial characteristics of cloverleaf junctions, propose an all-constraint-points TIN algorithm to partition the cloverleaf junction road surface, and develop a method to visualize the road surface using the TIN. In order to manage cloverleaf junction data efficiently, the authors also analyzed the mechanism of 3DCM data management, extended the BLOB type in a relational database, and combined an R-tree index to manage 3D spatial data. Based on this extension, an appropriate data …
In gravity-anomaly-based prospecting, the computational and memory requirements for practical numerical modeling are potentially enormous, and achieving an efficient and precise inversion for gravity anomaly imaging over large-scale and complex terrain requires additional methods. To this end, we have proposed a new topography-capable modeling scheme. By performing a two-dimensional Fourier transform in the horizontal directions, the three-dimensional partial differential equations in the spatial domain are transformed into a group of independent one-dimensional differential equations associated with different wave numbers. These independent equations are highly parallelizable across wave numbers, and the efficiency of solving the resulting fixed-bandwidth linear equations is further improved by a chasing method. In a synthetic test, a prism model was used to verify the accuracy and reliability of the proposed algorithm by comparing the numerical solution with the analytical solution. We studied the computational precision and efficiency with and without topography using different Fourier transform methods. The results showed that the Gauss-FFT method has higher numerical precision, while the standard FFT method is superior in terms of computation time for inversion and quantitative interpretation under complicated terrain.
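The chasing method used to solve the fixed-bandwidth systems is, in its narrowest form, the standard tridiagonal (Thomas) elimination; a minimal sketch (generic coefficients, not the paper's discretized operator):

```python
import numpy as np

def chase_solve(a, b, c, d):
    """Chasing (Thomas) algorithm for a tridiagonal system A x = d.

    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n).
    O(n) forward elimination followed by back substitution."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):           # forward sweep
        m = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / m
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Because each wave number yields its own small banded system, one such solve per wave number can run independently, which is the parallelism the transformed formulation exposes.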
Water quality models are important tools to support the optimization of aquatic ecosystem rehabilitation programs and to assess their efficiency. Based on the flow conditions of the Daqinghe River Mouth of Dianchi Lake, China, a two-dimensional water quality model was developed in this research. The hydrodynamics module was numerically solved by the alternating direction implicit (ADI) method. The parameters of the water quality module were obtained through in situ experiments and laboratory analyses conducted from 2006 to 2007, and the model was calibrated and verified against observation data from 2007. Among the four modelled key variables, i.e., water level, COD (as CODcr), NH₄⁺-N, and PO₄³⁻-P, the minimum value of the coefficient of determination was 0.69, indicating that the model performed reasonably well. The developed model was then applied to simulate the water quality changes at a downstream cross-section, assuming the designed restoration programs were implemented. According to the simulated results, the restoration programs could cut the loads of COD and PO₄³⁻-P by about 15%. Such a load reduction, unfortunately, would have very little effect on NH₄⁺-N removal. Moreover, the water quality at the outlet cross-section would still be in class V (3838-02), indicating that more measures should be taken to further reduce the loads. The study demonstrated the capability of water quality models to support aquatic ecosystem restoration.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 61973167) and the Jiangsu Funding Program for Excellent Postdoctoral Talent.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC) (92167106, 61833014) and the Key Research and Development Program of Zhejiang Province (2022C01206).
Abstract: The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing reliability-weight and actual-virtual-weight functions, and developing a double-weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
Funding: Sponsored by the National Natural Science Foundation of China (Grant No. 51208160) and the Natural Science Foundation of Heilongjiang Province (Grant No. QC2012C056).
Abstract: The features of a quasi-two-dimensional (quasi-2D) model for simulating two-phase water hammer flows with a vaporous cavity in a pipe are investigated. The quasi-2D model with a discrete vaporous cavity in the pipe is proposed in this paper. This model uses a quasi-2D formulation for the pure-liquid zone and a one-dimensional (1D) discrete vapor cavity model for the vaporous cavity zone. The quasi-2D model solves two-dimensional equations for both axial and radial velocities and 1D equations for both pressure head and discharge by the method of characteristics. The 1D discrete vapor cavity model is used to simulate the vaporous cavity that occurs when the local pressure in the pipe falls below the vapor pressure of the liquid. The proposed model is used to simulate two-phase water flows caused by rapid downstream valve closure in a reservoir-pipe-valve system. The results obtained by the proposed model are compared with those of the corresponding 1D model and with the experimental data provided in the literature. The comparison shows that the maximum pressure heads simulated by the proposed model are more accurate than those of the corresponding 1D model.
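The method-of-characteristics (MOC) update used above can be sketched for a single frictionless interior node. This is the generic textbook 1D form, not the paper's quasi-2D implementation; the wave speed `a` and the variable names are assumptions.

```python
def moc_interior_point(h_a, v_a, h_b, v_b, a=1000.0, g=9.81):
    """One frictionless MOC update for an interior pipe node.

    The C+ characteristic arrives from the upstream point A and the C-
    characteristic from the downstream point B:
        C+:  h_p = h_a - (a/g) * (v_p - v_a)
        C-:  h_p = h_b + (a/g) * (v_p - v_b)
    Solving the pair simultaneously gives the new head and velocity."""
    h_p = 0.5 * (h_a + h_b) + (a / (2.0 * g)) * (v_a - v_b)
    v_p = 0.5 * (v_a + v_b) + (g / (2.0 * a)) * (h_a - h_b)
    return h_p, v_p
```

In steady flow (equal heads and velocities at A and B) the update leaves the state unchanged, which is a quick sanity check on the characteristic equations.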
Abstract: It is common for datasets to contain both categorical and continuous variables. However, many feature screening methods designed for high-dimensional classification assume that the variables are continuous, which limits the applicability of existing methods in this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. Our proposed feature screening method utilizes the Maximal Information Coefficient to assess the predictive power of the variables. Under certain regularity conditions, we prove that our screening procedure possesses the sure screening and ranking consistency properties. To validate the effectiveness of our approach, we conduct simulation studies and provide real-data examples demonstrating its finite-sample performance. In summary, our proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
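The Maximal Information Coefficient itself requires an optimized grid search over partitions. As a simplified stand-in for the idea of ranking covariates by a model-free dependence score, a plain mutual-information estimate on discretized data can be sketched; this is a hypothetical helper for illustration, not the paper's procedure.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (nats) between two discrete sequences:
    I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) )."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px, py = Counter(xs), Counter(ys)  # marginal counts
    mi = 0.0
    for (x, y), c in pxy.items():
        p = c / n
        # p*n*n / (count_x * count_y) == p(x,y) / (p(x) p(y))
        mi += p * math.log(p * n * n / (px[x] * py[y]))
    return mi
```

Ranking covariates by this score (larger = more predictive of the class label) mirrors the screening idea; perfectly dependent sequences score log 2 with two balanced classes, independent ones score 0.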
Abstract: In ultra-high-dimensional data, it is common for the response variable to be multi-classified. This paper therefore proposes a model-free screening method for multi-class response variables based on the Jensen-Shannon divergence as a measure of covariate importance. The idea is to calculate the Jensen-Shannon divergence between the conditional probability distribution of a covariate given the response variable and the unconditional probability distribution of that covariate, and then use the probabilities of the response classes as weights to form a weighted Jensen-Shannon divergence, where a larger weighted divergence means that the covariate is more important. We also investigate an adapted version of the method for the case where the number of categories varies across covariates, which adjusts the weighted Jensen-Shannon divergence by a logarithmic factor of the number of categories. Through both theoretical analysis and simulation experiments, we demonstrate that the proposed methods possess the sure screening and ranking consistency properties. Finally, results on simulated and real datasets show that, for feature screening, the proposed methods are robust in performance and faster in computation than an existing method.
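The weighted Jensen-Shannon screening score described above can be sketched directly for discrete distributions. This is a minimal illustration; the function names and the natural-log convention are assumptions, not the paper's code.

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence for discrete distributions (nats)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    """Jensen-Shannon divergence: average KL of p and q to their midpoint."""
    m = [0.5 * (pi + qi) for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def weighted_js_score(cond_dists, class_probs, marginal):
    """Screening score: class-probability-weighted JS divergence between each
    conditional covariate distribution and the marginal distribution.
    A larger score suggests a more important covariate."""
    return sum(w * js_divergence(cond, marginal)
               for w, cond in zip(class_probs, cond_dists))
```

A covariate whose conditional distributions coincide with its marginal scores exactly 0, while disjoint distributions reach the maximum of log 2, matching the intended "larger means more important" ordering.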
Abstract: In this study, an in-house quasi-dimensional code has been developed that simulates the intake, compression, combustion, expansion, and exhaust strokes of a homogeneous charge compression ignition (HCCI) engine. Compressed natural gas (CNG) has been used as the fuel. A detailed chemical kinetic scheme of 310 and 1701 elementary equations, developed by [Bakhshan et al.], has been applied for combustion modeling and heat release calculations. The zero-dimensional k-ε turbulence model has been used for the calculation of heat transfer. The outputs are the performance, pollutant emissions, and combustion characteristics of HCCI engines. Parametric studies have been conducted to discuss the effects of various parameters on the performance and pollutant emissions of these engines.
Funding: The project was supported by the National Fundamental Science Research Foundation of China (No. K1403060719).
Abstract: Based on an analysis of the physical mechanism of the Stationary Plasma Thruster (SPT), an integral equation describing the ion density of the steady SPT and the ion velocity distribution function at an arbitrary axial position of the steady SPT channel are derived. The integral equation is equivalent to the Vlasov equation, but is simpler. A one-dimensional steady quasineutral hybrid model is established, in which ions are described by the above integral equation, while neutrals and electrons are described by hydrodynamic equations. The integral equation, converted into an equivalent differential equation, is solved together with the other equations by an ordinary differential equation (ODE) solver in Matlab. The numerical simulation results show that under various circumstances the ion average velocity differs and needs to be computed separately.
Funding: Supported by the National Department Fundamental Research Foundation of China (Grant No. B222090014) and the National Department Technology Fundamental Foundation of China (Grant No. C172009C001).
Abstract: Multistation machining processes are widely applied in the contemporary manufacturing environment, and modeling of variation propagation in such processes is one of the most important research scenarios. Due to the existence of multiple variation streams, it is challenging to model and analyze variation propagation in a multistation system. Current approaches to error modeling for multistation machining processes are not explicit enough for error control and for ensuring final product quality. In this paper, a mathematical model depicting the part dimensional variation of a complex multistation manufacturing process is formulated. A linear state-space dimensional error propagation equation is established through kinematic analysis of the influence of locating parameter variations and locating datum variations on dimensional errors, so that dimensional error accumulation and transformation within the multistation process are quantitatively described. A systematic procedure to build the model is presented, which improves the way variation sources are determined in complex machining systems. A simple two-dimensional example is used to illustrate the proposed procedure. Finally, an industrial case of a multistation-machined part in a manufacturing shop is given to verify the validity and practicability of the method. The proposed analytical model is essential to quality control and improvement for multistation systems in machining quality forecasting and design optimization.
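A linear state-space error propagation equation of the kind described above has the generic form x_k = A_k x_{k-1} + B_k u_k, where x_k is the accumulated dimensional error after station k and u_k the station's locating/datum error input. The matrices and dimensions below are hypothetical, not the paper's actual locating-parameter model.

```python
def propagate_errors(a_mats, b_mats, u_list, x0):
    """Propagate a dimensional error state through successive stations:
        x_k = A_k @ x_{k-1} + B_k @ u_k
    a_mats, b_mats: per-station matrices; u_list: per-station error inputs."""
    x = list(x0)
    for a, b, u in zip(a_mats, b_mats, u_list):
        ax = [sum(a[i][j] * x[j] for j in range(len(x))) for i in range(len(a))]
        bu = [sum(b[i][j] * u[j] for j in range(len(u))) for i in range(len(b))]
        x = [ai + bi for ai, bi in zip(ax, bu)]
    return x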
Funding: Supported by the National Natural Science Foundation of China (No. 40671158), the National 863 Program of China (No. 2006AA12Z224), and the Program for New Century Excellent Talents in University (No. NCET-05-0626).
Abstract: The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of full three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring-triangle location method that makes full use of the surface normal information. Experimental results prove that this algorithm can efficiently obtain the most reasonable reconstructed mesh surface with arbitrary topology, wherein the automatically reconstructed surface has only small topological differences from the true surface. The algorithm has potential applications in virtual environments, computer vision, and so on.
Funding: Supported by Middlesex University, particularly through the award of a Postgraduate Research Studentship that provided the necessary financial support for this research.
Abstract: The use of three-dimensional in vitro systems in cancer research is a promising path for developing effective anticancer therapies. The aim of this study was to engineer a functional 3-D in vitro model of normal and cancerous cervical tissue. Normal epithelial and immortalized cervical epithelial carcinoma cell lines were used to construct 3-D artificial normal cervical and cervical cancerous tissues. De-epidermised dermis (DED) was used as a scaffold for both models. Morphological analyses were conducted using haematoxylin and eosin staining, and the characteristics of the models were studied by analyzing the expression of different structural cytokeratins and the differential protein marker MAX dimerisation protein 1 (Mad1) using immunohistochemistry. Haematoxylin and eosin staining showed that the normal cervical tissue had multiple epithelial layers while the cancerous cervical tissue showed dysplastic changes. Immunohistochemical staining revealed that in the normal cervix model cytokeratin 10 was expressed in the upper stratified layer of the epithelium, while cytokeratin 5 was expressed mainly in the middle and basal layers. Cytokeratin 19 was weakly expressed in a few basal cells. The cervical cancer model showed cytokeratin 19 expression in different epithelial layers and weak or no expression of cytokeratin 5 and cytokeratin 10. Mad1 expression was detected in some suprabasal cells. The 3-D in vitro models showed stratified epithelial layers and expressed the same types and patterns of differentiation marker proteins as seen in the corresponding in vivo tissue, in either normal cervical or cervical cancerous tissue. These findings imply that they can serve as functional normal and cervical cancer models.
Abstract: High-dimensional hyperspectral image classification is a challenging task due to the spectral feature vectors. The high correlation between these features and the noise greatly affects classification performance. To overcome this, dimensionality reduction techniques are widely used. Traditional image processing applications have recently proposed numerous deep learning models. However, in hyperspectral image classification, the features of deep learning models are less explored. Thus, for efficient hyperspectral image classification, a depth-wise convolutional neural network is presented in this research work. To handle the dimensionality issue in the classification process, an optimized self-organized map (SOM) model is employed using a water strider optimization algorithm. The network parameters of the self-organized map are optimized by the water strider optimization, which reduces the dimensionality issues and enhances classification performance. Standard datasets such as Indian Pines and the University of Pavia (UP) are considered for experimental analysis. Existing dimensionality reduction methods such as Enhanced Hybrid-Graph Discriminant Learning (EHGDL), local geometric structure Fisher analysis (LGSFA), Discriminant Hyper-Laplacian Projection (DHLP), the group-based tensor model (GBTM), and lower-rank tensor approximation (LRTA) are compared with the proposed optimized SOM model. The results confirm the superior performance of the proposed model, with 98.22% accuracy on the Indian Pines dataset and 98.21% accuracy on the University of Pavia dataset, over the existing maximum likelihood classifier and support vector machine (SVM).
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (No. 2009AA04Z162), the National Natural Science Foundation of China (No. 60825302, No. 60934007, No. 61074061), the Program of Shanghai Subject Chief Scientist, the "Shu Guang" project supported by the Shanghai Municipal Education Commission and the Shanghai Education Development Foundation, and the Key Project of the Shanghai Science and Technology Commission, China (No. 10JC1403400).
Abstract: In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE)-unknown spatially distributed systems (SDSs). First, dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is then proposed based on online-corrected low-dimensional models, where the state of the system at a previous time is used to correct the output of the low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
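The PCA dimension-reduction step can be illustrated by extracting the leading principal component of mean-centred snapshot data via power iteration. This is a generic sketch of the projection idea, not the paper's implementation; the names are hypothetical.

```python
def leading_pc(snapshots, iters=200):
    """Power iteration for the leading principal component of mean-centred
    snapshot data (a stand-in for the PCA step of a reduced-order model)."""
    n = len(snapshots)
    d = len(snapshots[0])
    mean = [sum(row[j] for row in snapshots) / n for j in range(d)]
    xc = [[row[j] - mean[j] for j in range(d)] for row in snapshots]
    v = [1.0] * d
    for _ in range(iters):
        # w = C v with C = X^T X (unnormalised covariance of centred data)
        xv = [sum(xc[i][j] * v[j] for j in range(d)) for i in range(n)]
        w = [sum(xc[i][j] * xv[i] for i in range(n)) for j in range(d)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v
```

Projecting each high-dimensional snapshot onto the leading components yields the low-dimensional time coefficients on which an MPC law like the one described can operate.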
Abstract: Psychometric theory requires unidimensionality (i.e., scale items should represent a common latent variable). One advocated approach to testing unidimensionality within the Rasch model is to identify two item sets from a Principal Component Analysis (PCA) of residuals, estimate separate person measures based on the two item sets, compare the two estimates on a person-by-person basis using t-tests, and determine the number of cases that differ significantly at the 0.05 level; if ≤5% of tests are significant, or the lower bound of a binomial 95% confidence interval (CI) of the observed proportion overlaps 5%, then it is suggested that strict unidimensionality can be inferred; otherwise the scale is multidimensional. Given its proposed significance and potential implications, this procedure needs detailed scrutiny. This paper explores the impact of sample size and of the method of estimating the 95% binomial CI upon conclusions drawn according to the recommended conventions. Normal-approximation, "exact", Wilson, Agresti-Coull, and Jeffreys binomial CIs were calculated for observed proportions of 0.06, 0.08, and 0.10 and sample sizes from n = 100 to n = 2500. Lower 95% CI boundaries were inspected regarding coverage of the 5% threshold. Results showed that all binomial 95% CIs both included and excluded 5% as an effect of sample size for all three investigated proportions, except for the Wilson, Agresti-Coull, and Jeffreys CIs, which did not include 5% for any sample size at a 10% observed proportion. The normal-approximation CI was most sensitive to sample size. These data illustrate that the PCA/t-test protocol should be used and interpreted as any hypothesis testing procedure and is dependent on sample size as well as on the binomial CI estimation procedure.
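The normal-approximation and Wilson lower CI bounds compared above follow standard formulas and can be computed directly. The sketch below checks them against the 5% threshold for a 10% observed proportion at n = 100, illustrating how the choice of CI method can flip the protocol's conclusion.

```python
import math

Z = 1.959963984540054  # two-sided 95% critical value of the standard normal

def normal_ci_lower(p_hat, n):
    """Lower bound of the normal-approximation (Wald) 95% binomial CI."""
    return p_hat - Z * math.sqrt(p_hat * (1 - p_hat) / n)

def wilson_ci_lower(p_hat, n):
    """Lower bound of the Wilson score 95% binomial CI."""
    denom = 1 + Z * Z / n
    centre = p_hat + Z * Z / (2 * n)
    half = Z * math.sqrt(p_hat * (1 - p_hat) / n + Z * Z / (4 * n * n))
    return (centre - half) / denom
```

At p̂ = 0.10 and n = 100 the Wald lower bound falls below 0.05 while the Wilson lower bound stays above it, so the same data would be read as "unidimensional" under one CI method and "multidimensional" under the other.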
The PCA/t-test protocol should not be viewed as a “definite” test of unidimensionality and does not replace an integrated quantitative/qualitative interpretation based on an explicit variable definition in view of the perspective, context and purpose of measurement.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 41874146 and 42030103) and the Postgraduate Innovation Project of China University of Petroleum (East China) (No. YCX2021012).
Abstract: Seismic data reconstruction is an essential and fundamental step in the seismic data processing workflow, of profound significance for improving migration imaging quality, multiple-suppression effectiveness, and seismic inversion accuracy. Regularization methods play a central role in solving the underdetermined inverse problem of seismic data reconstruction. In this paper, a novel regularization approach, the low-dimensional manifold model (LDMM), is proposed for reconstructing missing seismic data. Our work relies on the fact that seismic patches always occupy a low-dimensional manifold. Specifically, we exploit the dimension of the seismic patch manifold as a regularization term in the reconstruction problem, and reconstruct the missing seismic data by enforcing low dimensionality on this manifold. The crucial step of the proposed method is computing the dimension of the patch manifold. To this end, we adopt an efficient dimensionality calculation method based on low-rank approximation, which provides a reliable safeguard for enforcing the constraints in the reconstruction process. Numerical experiments performed on synthetic and field seismic data demonstrate that, compared with the curvelet-based sparsity-promoting L1-norm minimization method and the multichannel singular spectrum analysis method, the proposed method obtains state-of-the-art reconstruction results.
Funding: Supported by the National Natural Science Foundation of China (40572165).
Abstract: Dynamic updating of the model includes changes of the spatial border, addition and reduction of spatial components (disappearing, dividing, and merging), changes of topological relationships, and synchronous dynamic updating of the database. First, aiming at the deficiency of the OO-Solid model in dynamic updating, the modeling primitives of the OO-Solid model were modified. Then, algorithms for dynamically updating a 3D geological model when node, line, or surface data change were discussed. The core algorithm establishes a spatial index and proceeds object-oriented from the bottom up, updating from node to arc, then to polygon, then to component face, and finally to the geological object. The research has important theoretical and practical value in the field of three-dimensional geological modeling and is significant for mineral resources.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 10761005) and the Inner Mongolia Natural Science Foundation of China (Grant No. 200607010104).
Abstract: Using the complex variable function method and the technique of conformal mapping, the anti-plane shear problem of an elliptic hole with asymmetric collinear cracks in a one-dimensional hexagonal quasi-crystal is solved, and exact analytic solutions of the stress intensity factors (SIFs) for the mode Ⅲ problem are obtained. Under limiting conditions, the present results reduce to the Griffith crack, and many new results are obtained as well, such as the circular hole with asymmetric collinear cracks, the elliptic hole with a straight crack, the mode T crack, the cross crack, and so on. As far as the phonon field is concerned, these results, which play an important role in many practical and theoretical applications, are shown to be in good agreement with the classical results.
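In the Griffith limit mentioned above, the classical mode Ⅲ stress intensity factor under remote antiplane shear τ for a crack of half-length a is K_Ⅲ = τ√(πa). A one-line sketch of this limiting formula (a textbook result, used here only as the benchmark the quasi-crystal solution reduces to):

```python
import math

def k3_griffith(tau, a):
    """Mode III stress intensity factor for a Griffith crack of half-length a
    under remote antiplane shear tau: K_III = tau * sqrt(pi * a)."""
    return tau * math.sqrt(math.pi * a)
```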
Abstract: Little research has been done on representing cloverleaf junctions in a 3-dimensional city model (3DCM). The main reason is that a cloverleaf junction is often a complex and enormous construction: its main body straddles in the air, with aerial intersections between its parts. These complex features make cloverleaf junctions quite different from buildings and terrain; therefore, it is difficult to express this kind of spatial object in the same way as buildings and terrain. In this paper, the authors analyze the spatial characteristics of cloverleaf junctions, propose an all-constraint-points TIN algorithm to partition the cloverleaf junction road surface, and develop a method to visualize the cloverleaf junction road surface using the TIN. In order to manage cloverleaf junction data efficiently, the authors also analyzed the data management mechanism of the 3DCM, extended the BLOB type in a relational database, and combined an R-tree index to manage 3D spatial data. Based on this extension, an appropriate data
Funding: Supported by the Natural Science Foundation of China (No. 41574127), the China Postdoctoral Science Foundation (No. 2017M622608), and the project for the independent exploration of graduate students at Central South University (No. 2017zzts008).
Abstract: In gravity-anomaly-based prospecting, the computational and memory requirements for practical numerical modeling are potentially enormous. Achieving an efficient and precise inversion for gravity anomaly imaging over large-scale and complex terrain requires additional methods. To this end, we have proposed a new topography-capable numerical modeling algorithm. By performing a two-dimensional Fourier transform in the horizontal directions, the three-dimensional partial differential equations in the spatial domain were transformed into a group of independent one-dimensional differential equations associated with different wave numbers. These independent differential equations are highly parallel across different wave numbers, and the efficiency of solving the resulting fixed-bandwidth linear equations was further improved by a chasing method. In a synthetic test, a prism model was used to verify the accuracy and reliability of the proposed algorithm by comparing the numerical solution with the analytical solution. We studied the computational precision and efficiency, with and without topography, using different Fourier transform methods. The results showed that the Gauss-FFT method has higher numerical precision, while the standard FFT method is superior in terms of computation time for inversion and quantitative interpretation under complicated terrain.
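The "chasing method" for fixed-bandwidth systems is, in the tridiagonal case, commonly identified with the Thomas algorithm. A minimal sketch (generic, not the authors' solver):

```python
def thomas_solve(a, b, c, d):
    """Chasing (Thomas) algorithm for a tridiagonal system A x = d.
    a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0] if n > 1 else 0.0
    dp[0] = d[0] / b[0]
    # Forward elimination ("chasing" sweep)
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    # Back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The O(n) cost per wave number, combined with the independence of the one-dimensional equations across wave numbers, is what makes the overall scheme both fast and highly parallel.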
Funding: Supported by the National Hi-Tech Research and Development Program (863) of China (Nos. 2007AA06A405 and 2005AA6010100401).
Abstract: Water quality models are important tools to support the optimization of aquatic ecosystem rehabilitation programs and to assess their efficiency. Based on the flow conditions of the Daqinghe River Mouth of Dianchi Lake, China, a two-dimensional water quality model was developed in this research. The hydrodynamics module was numerically solved by the alternating direction iteration (ADI) method. The parameters of the water quality module were obtained through in situ experiments and laboratory analyses conducted from 2006 to 2007, and the model was calibrated and verified against observation data from 2007. Among the four modelled key variables, i.e., water level, COD (as CODCr), NH4+-N, and PO43--P, the minimum coefficient of determination was 0.69, indicating that the model performed reasonably well. The developed model was then applied to simulate the water quality changes at a downstream cross-section assuming that the designed restoration programs were implemented. According to the simulated results, the restoration programs could cut the loads of COD and PO43--P by about 15%. Such a load reduction, unfortunately, would have very little effect on NH4+-N removal. Moreover, the water quality at the outlet cross-section would still be in class V (3838-02), indicating that more measures should be taken to further reduce the loads. The study demonstrated the capability of water quality models to support aquatic ecosystem restoration.
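The coefficient of determination used above to judge calibration is R² = 1 − SS_res/SS_tot. A minimal sketch of the standard formula (generic, not tied to the paper's data):

```python
def coefficient_of_determination(observed, simulated):
    """R^2 between observed and simulated series:
    R^2 = 1 - SS_res / SS_tot, where SS_res is the residual sum of squares
    and SS_tot the total sum of squares about the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot
```

A perfect simulation gives R² = 1; the reported minimum of 0.69 across the four variables indicates the model explains most of the observed variance.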