The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. The data boosting augmentation involves designing the reliability weight and actual-virtual weight functions, and developing a double weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
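The double weighted PLS model is specific to the paper, but the general mechanics of folding per-sample weights (for example, a combined reliability × actual-virtual weight) into a PLS fit can be illustrated by row scaling. A minimal sketch assuming scikit-learn; the helper name and the sqrt-weight heuristic are my own illustration, not the authors' algorithm:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def weighted_pls(X, y, w, n_components=2):
    """Fit PLS with per-sample weights w by scaling rows with sqrt(w),
    so sample i contributes proportionally to w[i] in the least-squares
    objective (a common weighting heuristic, not the paper's method)."""
    w = np.asarray(w, dtype=float)
    sw = np.sqrt(w / w.mean())            # normalized square-root weights
    pls = PLSRegression(n_components=n_components)
    pls.fit(X * sw[:, None], np.asarray(y, dtype=float) * sw)
    return pls
```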
It is common for datasets to contain both categorical and continuous variables. However, many feature screening methods designed for high-dimensional classification assume that the variables are continuous, which limits the applicability of existing methods in this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. Our proposed feature screening method utilizes the Maximal Information Coefficient to assess the predictive power of the variables. Under certain regularity conditions, we prove that our screening procedure possesses the sure screening and ranking consistency properties. To validate the effectiveness of our approach, we conduct simulation studies and provide real data analysis examples to demonstrate its performance in finite samples. In summary, our proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
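As a rough sketch of this kind of marginal screening (not the authors' exact procedure), each covariate can be scored by its Maximal Information Coefficient with the class label and the top-ranked covariates retained. The snippet below assumes the third-party minepy package for MIC estimation:

```python
import numpy as np
from minepy import MINE  # assumed third-party MIC estimator

def mic_screen(X, y, top_k):
    """Score each column of X by its MIC with the label y and
    return the indices of the top_k highest-scoring covariates."""
    mine = MINE(alpha=0.6, c=15)          # minepy's default MIC settings
    scores = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        mine.compute_score(X[:, j].astype(float), y.astype(float))
        scores[j] = mine.mic()
    return np.argsort(scores)[::-1][:top_k], scores
```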
In ultra-high-dimensional data, it is common for the response variable to be multi-classified. Therefore, this paper proposes a model-free screening method for variables whose response variable is multi-classified, introducing the Jensen-Shannon divergence to measure the importance of covariates. The idea is to calculate the Jensen-Shannon divergence between the conditional probability distribution of the covariates given the response variable and the unconditional probability distribution of the covariates, and then use the probabilities of the response categories as weights to compute a weighted Jensen-Shannon divergence, where a larger weighted Jensen-Shannon divergence means that the covariate is more important. We also investigate an adapted version of the method, which measures the relationship between the covariates and the response variable using the weighted Jensen-Shannon divergence adjusted by a logarithmic factor of the number of categories, for the case where the number of categories varies across covariates. Through both theoretical analysis and simulation experiments, the proposed methods are shown to have the sure screening and ranking consistency properties. Finally, results from simulation and real-dataset experiments show that, in feature screening, the proposed methods are robust in performance and faster in computation compared with an existing method.
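For a categorical covariate X and response Y, the screening utility described above can be written down directly. A minimal numpy sketch (my own illustration; the natural logarithm is assumed):

```python
import numpy as np

def _kl(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def _js(p, q):
    """Jensen-Shannon divergence: symmetrized KL against the mixture."""
    m = 0.5 * (p + q)
    return 0.5 * _kl(p, m) + 0.5 * _kl(q, m)

def weighted_js_score(x, y):
    """Screening utility: sum_r P(Y=r) * JS( P(X|Y=r) || P(X) )."""
    levels = np.unique(x)
    px = np.array([(x == l).mean() for l in levels])   # unconditional P(X)
    score = 0.0
    for r in np.unique(y):
        mask = (y == r)
        pxr = np.array([(x[mask] == l).mean() for l in levels])  # P(X|Y=r)
        score += mask.mean() * _js(pxr, px)            # weight by P(Y=r)
    return score
```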
The Indo-Gangetic Plain (IGP) is one of the most seismically vulnerable areas due to its proximity to the Himalayas. Geographic information system (GIS)-based seismic characterization of the IGP was performed based on the degree of deformation and fractal dimension. The zone between the Main Boundary Thrust (MBT) and the Main Central Thrust (MCT) in the Himalayan Mountain Range (HMR) experienced large variations in earthquake magnitude, which were identified by Number-Size (NS) fractal modeling. The central IGP zone experienced only moderate to low mainshock levels. Fractal analysis of earthquake epicenters reveals a large scattering of earthquake epicenters in the HMR and central IGP zones. Similarly, the fault fractal analysis identifies the HMR, central IGP, and south-western IGP zones as having more faults. Overall, the seismicity of the study region is strong in the central IGP, south-western IGP, and HMR zones, moderate in the western and southern IGP, and low in the northern, eastern, and south-eastern IGP zones.
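For reference, the Number-Size fractal model mentioned above is conventionally written as a power law between the number of events and their size; a standard form (my paraphrase, not quoted from the paper) is

$$ N(\geq s) = C\,s^{-D} \quad\Longleftrightarrow\quad \log N(\geq s) = \log C - D\,\log s, $$

where $N(\geq s)$ is the number of events of size at least $s$, $C$ is a constant, and the fractal dimension $D$ is estimated from the slope of the log-log plot.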
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on the deformation intensity of the MT structure. The MT thermal behavior is first modeled using the finite element method and validated against various experimentally measured temperature fields obtained with temperature sensors and thermal imaging. The validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations under varied working conditions to explore possible relationships between temperature distribution and thermal deformation characteristics, and to select the most appropriate temperature sensitive points for building an empirical model that predicts thermal errors as a function of the MT thermal state. Validation tests using an artificial neural network based simplified model confirm the efficiency of the proposed temperature sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
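A minimal sketch of the final stage: fitting a small neural network that maps temperature readings at the selected sensitive points to a positioning error. The data below are synthetic stand-ins and the architecture is an assumption, not the paper's network:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
T = rng.normal(25.0, 5.0, size=(200, 4))   # temperatures at 4 sensitive points (synthetic)
e = 0.8 * T[:, 0] - 0.3 * T[:, 2] + rng.normal(0.0, 0.5, 200)  # error (synthetic)

T_tr, T_te, e_tr, e_te = train_test_split(T, e, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
)
model.fit(T_tr, e_tr)
print("held-out R^2:", model.score(T_te, e_te))
```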
High-dimensional hyperspectral image classification is a challenging task due to the high-dimensional spectral feature vectors. The high correlation between these features and the noise greatly affects classification performance. To overcome this, dimensionality reduction techniques are widely used. Numerous deep learning models have recently been proposed for traditional image processing applications; however, in hyperspectral image classification, the features of deep learning models are less explored. Thus, for efficient hyperspectral image classification, a depth-wise convolutional neural network is presented in this research work. To handle the dimensionality issue in the classification process, an optimized self-organized map (SOM) model is employed using a water strider optimization algorithm. The network parameters of the self-organized map are optimized by the water strider optimization, which reduces the dimensionality issues and enhances classification performance. Standard datasets such as Indian Pines and the University of Pavia (UP) are considered for the experimental analysis. Existing dimensionality reduction methods, namely Enhanced Hybrid-Graph Discriminant Learning (EHGDL), local geometric structure Fisher analysis (LGSFA), Discriminant Hyper-Laplacian projection (DHLP), the Group-based tensor model (GBTM), and Lower rank tensor approximation (LRTA), are compared with the proposed optimized SOM model. The results confirm the superior performance of the proposed model, with 98.22% accuracy on the Indian Pines dataset and 98.21% accuracy on the University of Pavia dataset, over the existing maximum likelihood classifier and support vector machine (SVM).
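To make the SOM component concrete, here is a minimal pure-numpy self-organizing map trained with a Gaussian neighborhood (my own sketch; in the paper the SOM's parameters are tuned by water strider optimization, which is omitted here):

```python
import numpy as np

def train_som(data, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal self-organizing map: online updates pulling a shrinking
    Gaussian neighborhood of the best-matching unit toward each sample."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.normal(size=(h, w, data.shape[1]))      # codebook vectors
    yy, xx = np.mgrid[0:h, 0:w]                     # unit grid coordinates
    for t in range(iters):
        x = data[rng.integers(len(data))]           # random training sample
        d = np.linalg.norm(W - x, axis=2)           # distance to every unit
        bi, bj = np.unravel_index(np.argmin(d), d.shape)  # best-matching unit
        lr = lr0 * np.exp(-t / iters)               # decaying learning rate
        sig = sigma0 * np.exp(-t / iters)           # decaying neighborhood width
        g = np.exp(-((yy - bi) ** 2 + (xx - bj) ** 2) / (2 * sig ** 2))
        W += lr * g[..., None] * (x - W)            # neighborhood update
    return W
```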
The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of full three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring-triangle location method that makes full use of the surface normal information. Experimental results prove that this algorithm can efficiently obtain the most reasonable reconstructed mesh surface with arbitrary topology, wherein the automatically reconstructed surface has only small topological differences from the true surface. The algorithm has potential applications to virtual environments, computer vision, and so on.
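As a starting-point illustration only (plain, unconstrained Delaunay; the paper's constrained triangulation and region-growing steps go well beyond this), SciPy can tetrahedralize a 3D point cloud directly:

```python
import numpy as np
from scipy.spatial import Delaunay

pts = np.random.default_rng(1).random((200, 3))  # stand-in unorganized 3D point cloud
tri = Delaunay(pts)                              # 3D Delaunay tetrahedralization (Qhull)
print(tri.simplices.shape)                       # (n_tetrahedra, 4) vertex indices
# A surface mesh would then be carved out of these tetrahedra, e.g. by
# region growing subject to a local topological consistency test.
```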
In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems (SDSs). First, dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is proposed based on online-corrected low-dimensional models, where the state of the system at a previous time is used to correct the output of the low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
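The PCA step can be sketched with a plain SVD of the snapshot matrix: spatial modes come from the left singular vectors, and the low-dimensional temporal coefficients are the projections onto those modes. A minimal sketch under my own notation:

```python
import numpy as np

def pca_reduce(snapshots, n_modes):
    """Project spatio-temporal snapshots (n_space x n_time) onto the
    leading n_modes spatial PCA modes, yielding low-dimensional
    temporal coefficients."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
    modes = U[:, :n_modes]                  # dominant spatial patterns
    coeffs = modes.T @ (snapshots - mean)   # (n_modes x n_time) time series
    return modes, coeffs, mean              # reconstruct: mean + modes @ coeffs
```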
Psychometric theory requires unidimensionality (i.e., scale items should represent a common latent variable). One advocated approach to test unidimensionality within the Rasch model is to identify two item sets from a Principal Component Analysis (PCA) of residuals, estimate separate person measures based on the two item sets, compare the two estimates on a person-by-person basis using t-tests, and determine the number of cases that differ significantly at the 0.05 level; if ≤5% of tests are significant, or the lower bound of a binomial 95% confidence interval (CI) of the observed proportion overlaps 5%, then it is suggested that strict unidimensionality can be inferred; otherwise the scale is multidimensional. Given its proposed significance and potential implications, this procedure needs detailed scrutiny. This paper explores the impact of sample size and of the method of estimating the 95% binomial CI upon conclusions according to recommended conventions. Normal approximation, “exact”, Wilson, Agresti-Coull, and Jeffreys binomial CIs were calculated for observed proportions of 0.06, 0.08 and 0.10 and sample sizes from n = 100 to n = 2500. Lower 95% CI boundaries were inspected regarding coverage of the 5% threshold. Results showed that all binomial 95% CIs included as well as excluded 5% as an effect of sample size for all three investigated proportions, except for the Wilson, Agresti-Coull, and Jeffreys CIs, which did not include 5% for any sample size with a 10% observed proportion. The normal approximation CI was most sensitive to sample size. These data illustrate that the PCA/t-test protocol should be used and interpreted as any hypothesis testing procedure and is dependent on sample size as well as the binomial CI estimation procedure. The PCA/t-test protocol should not be viewed as a “definite” test of unidimensionality and does not replace an integrated quantitative/qualitative interpretation based on an explicit variable definition in view of the perspective, context and purpose of measurement.
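The CI comparison is easy to reproduce: statsmodels implements all five interval types discussed above (its "beta" method is the Clopper-Pearson "exact" interval). A small sketch; the sample size and observed proportion are assumed for illustration:

```python
from statsmodels.stats.proportion import proportion_confint

n, p_obs = 500, 0.08                # assumed sample size and observed proportion
count = round(n * p_obs)            # number of significant person-level t-tests
for method in ["normal", "beta", "wilson", "agresti_coull", "jeffreys"]:
    lo, hi = proportion_confint(count, n, alpha=0.05, method=method)
    verdict = "overlaps 5%" if lo <= 0.05 else "excludes 5%"
    print(f"{method:>13}: [{lo:.4f}, {hi:.4f}] -> {verdict}")
```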
The dynamic updating of the model includes: the change of the space border, the addition and reduction of spatial components (disappearing, dividing, and merging), the change of topological relationships, and the synchronous dynamic updating of the database. Firstly, aiming at the deficiency of the OO-Solid model in the aspect of dynamic updating, the modeling primitives of the OO-Solid model were modified. Then, algorithms for the dynamic updating of a 3D geological model under changes of node, line, or surface data were discussed. The core algorithm works by establishing a spatial index and proceeding object by object from the bottom up, namely dynamic updating from node to arc, then to polygon, then to the component face, and finally to the geological object. The research has important theoretical and practical value in the field of three-dimensional geological modeling and is significant for mineral resources.
Different phenomenological equations based on plasticity, primary creep (as a viscoplastic mechanism), secondary creep (as another viscoplastic mechanism), and different combinations of these equations are presented and used to describe the material's inelastic deformation in a uniaxial test. Agreement of the models with experimental results and with theoretical concepts and physical realities is the criterion for choosing the most appropriate formulation for the uniaxial test. A model is thus proposed in which plastic deformation, primary creep, and secondary creep all contribute to the inelastic deformation. However, it is believed that the hardening parameter is composed of plastic and primary creep parts. Accordingly, the axial plastic strain in a uniaxial test may no longer be considered as the hardening parameter. Therefore, a proportionality concept is proposed to calculate the plastic contribution to the deformation.
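In symbols, the decomposition described above can be summarized as follows (my paraphrase of the standard form, with a Norton power law assumed for secondary creep; the paper's exact equations may differ):

$$ \varepsilon^{\mathrm{in}} = \varepsilon^{\mathrm{p}} + \varepsilon^{\mathrm{pc}} + \varepsilon^{\mathrm{sc}}, \qquad \kappa = \varepsilon^{\mathrm{p}} + \varepsilon^{\mathrm{pc}}, \qquad \dot{\varepsilon}^{\mathrm{sc}} = A\,\sigma^{n}, $$

where $\varepsilon^{\mathrm{p}}$, $\varepsilon^{\mathrm{pc}}$, and $\varepsilon^{\mathrm{sc}}$ are the plastic, primary creep, and secondary creep strains, $\kappa$ is the hardening parameter (plastic plus primary creep parts, as argued above), and $A$, $n$ are material constants.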
In this article, a high-order scheme, formulated by combining the quadratic finite element method in space with a second-order time discrete scheme, is developed for finding the numerical solution of a two-dimensional nonlinear time-fractional thermal diffusion model. The time Caputo fractional derivative is approximated using the L2-1σ formula, the first-order derivative and the nonlinear term are discretized by second-order approximation formulas, and the quadratic finite element is used to approximate the spatial direction. An error accuracy of O(h^3 + τ^2) is obtained, which is verified by the numerical results.
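For context, the operator being discretized is the Caputo derivative of order α ∈ (0, 1), whose standard definition is shown below (the specific L2-1σ weights are not reproduced here):

$$ {}^{C}_{0}D^{\alpha}_{t}\,u(t) = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{u'(s)}{(t-s)^{\alpha}}\, ds, \qquad 0<\alpha<1, $$

with the scheme attaining the stated accuracy $O(h^{3}+\tau^{2})$, where $h$ is the spatial mesh size and $\tau$ the time step.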
Little research work has been done on cloverleaf junction expression in a 3-dimensional city model (3DCM). The main reason is that a cloverleaf junction is often a complex and enormous construction: its main body is suspended in the air and has aerial intersections between its parts. This complex feature makes cloverleaf junctions quite different from buildings and terrain; therefore, it is difficult to express this kind of spatial object in the same way as buildings and terrain. In this paper, the authors analyze the spatial characteristics of cloverleaf junctions, propose an all-constraint-points TIN algorithm to partition the cloverleaf junction road surface, and develop a method to visualize the road surface using the TIN. In order to manage cloverleaf junction data efficiently, the authors also analyze the mechanism of 3DCM data management, extend the BLOB type in a relational database, and combine an R-tree index to manage the 3D spatial data. Based on this extension, an appropriate data …
In gravity-anomaly-based prospecting, the computational and memory requirements for practical numerical modeling are potentially enormous. Achieving an efficient and precise inversion for gravity anomaly imaging over large-scale and complex terrain requires additional methods. To this end, we have proposed a new topography-capable method. By performing a two-dimensional Fourier transform in the horizontal directions, three-dimensional partial differential equations in the spatial domain were transformed into a group of independent, one-dimensional differential equations associated with different wave numbers. These independent differential equations are highly parallel across different wave numbers, and the efficiency of solving the resulting fixed-bandwidth linear equations was further improved by a chasing method. In a synthetic test, a prism model was used to verify the accuracy and reliability of the proposed algorithm by comparing the numerical solution with the analytical solution. We studied the computational precision and efficiency with and without topography using different Fourier transform methods. The results showed that the Gauss-FFT method has higher numerical precision, while the standard FFT method is superior, in terms of computation time, for inversion and quantitative interpretation under complicated terrain.
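The chasing method mentioned above is the classical Thomas algorithm for tridiagonal (fixed-bandwidth) systems; a minimal sketch:

```python
import numpy as np

def thomas(a, b, c, d):
    """Chasing (Thomas) algorithm for a tridiagonal system.
    a: sub-diagonal (n-1), b: diagonal (n), c: super-diagonal (n-1), d: rhs (n)."""
    n = len(b)
    cp, dp = np.empty(n - 1), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination sweep
        denom = b[i] - a[i - 1] * cp[i - 1]
        if i < n - 1:
            cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# quick self-check against a dense solve
n = 5
a, b, c = np.full(n - 1, -1.0), np.full(n, 2.0), np.full(n - 1, -1.0)
d = np.arange(1.0, n + 1)
A = np.diag(b) + np.diag(a, -1) + np.diag(c, 1)
print(np.allclose(A @ thomas(a, b, c, d), d))   # True
```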
Water quality models are important tools to support the optimization of aquatic ecosystem rehabilitation programs and to assess their efficiency. Based on the flow conditions of the Daqinghe River Mouth of Dianchi Lake, China, a two-dimensional water quality model was developed in this research. The hydrodynamics module was numerically solved by the alternating direction iteration (ADI) method. The parameters of the water quality module were obtained through in situ experiments and laboratory analyses conducted from 2006 to 2007. The model was calibrated and verified against observation data from 2007. Among the four modelled key variables, i.e., water level, COD (as CODCr), NH₄⁺-N, and PO₄³⁻-P, the minimum coefficient of determination was 0.69, indicating that the model performed reasonably well. The developed model was then applied to simulate the water quality changes at a downstream cross-section assuming that the designed restoration programs were implemented. According to the simulated results, the restoration programs could cut down the loads of COD and PO₄³⁻-P by about 15%. Such a load reduction, unfortunately, would have very little effect on NH₄⁺-N removal. Moreover, the water quality at the outlet cross-section would still be in class V (3838-02), indicating that more measures should be taken to further reduce the loads. The study demonstrated the capability of water quality models to support aquatic ecosystem restorations.
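For reference, one classical ADI splitting for a 2-D diffusion operator (the generic Peaceman-Rachford form, shown for pure diffusion rather than the paper's full hydrodynamic equations) advances each time step in two half-steps, each implicit in one direction only, so that every solve is tridiagonal:

$$ \begin{aligned} \Big(I - \tfrac{\Delta t}{2} D\,\delta_x^2\Big)\, u^{n+1/2} &= \Big(I + \tfrac{\Delta t}{2} D\,\delta_y^2\Big)\, u^{n},\\ \Big(I - \tfrac{\Delta t}{2} D\,\delta_y^2\Big)\, u^{n+1} &= \Big(I + \tfrac{\Delta t}{2} D\,\delta_x^2\Big)\, u^{n+1/2}, \end{aligned} $$

where $\delta_x^2$ and $\delta_y^2$ are the second-difference operators in the two directions and $D$ is the diffusion coefficient.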
A new three-dimensional semi-implicit finite-volume ocean model has been developed for simulating coastal ocean circulation. It is based on a staggered C-unstructured non-orthogonal grid in the horizontal direction and a z-level grid in the vertical direction. The three-dimensional model is discretized by the semi-implicit finite-volume method, in which the free surface and the vertical diffusion are treated semi-implicitly, thereby removing the stability limitations associated with the surface gravity wave and vertical diffusion terms. The remaining terms in the momentum equations are discretized explicitly by an integral method. The partial cell method is used for resolving topography, which enables the model to better represent irregular topography. The model has been tested against analytical cases for wind and tidal oscillation circulation, and is applied to simulating the tidal flow in the Bohai Sea. The results are in good agreement both with the analytical solutions and with the measurement results.
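The semi-implicit free-surface treatment referred to above is commonly written with a θ-weighted, depth-integrated continuity equation (a generic Casulli-type form, given as an illustration rather than this model's exact discretization):

$$ \frac{\eta^{n+1}-\eta^{n}}{\Delta t} + \nabla\cdot\Big[\theta\, H\,\bar{\mathbf{u}}^{\,n+1} + (1-\theta)\, H\,\bar{\mathbf{u}}^{\,n}\Big] = 0, \qquad \tfrac{1}{2}\le\theta\le 1, $$

where $\eta$ is the free-surface elevation, $H$ the water depth, and $\bar{\mathbf{u}}$ the depth-averaged velocity; treating $\eta^{n+1}$ implicitly removes the surface-gravity-wave stability restriction.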
A four-dimensional variational (4D-Var) data assimilation method is implemented in an improved intermediate coupled model (ICM) of the tropical Pacific. A twin experiment is designed to evaluate the impact of the 4D-Var data assimilation algorithm on ENSO analysis and prediction based on the ICM. The model error is assumed to arise only from the parameter uncertainty. The "observation" of the SST anomaly, which is sampled from a "truth" model simulation that takes default parameter values and has Gaussian noise added, is directly assimilated into the assimilation model with its parameters set erroneously. Results show that 4D-Var effectively reduces the error of ENSO analysis and therefore improves the prediction skill of ENSO events compared with the non-assimilation case. These results provide a promising way for the ICM to achieve better real-time ENSO prediction.
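For reference, 4D-Var estimates the initial state (and/or parameters) by minimizing the standard cost function below; this is the generic strong-constraint form, not necessarily the exact formulation used in the ICM study:

$$ J(\mathbf{x}_0) = \tfrac{1}{2}\,(\mathbf{x}_0-\mathbf{x}_b)^{\mathsf{T}}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}_b) + \tfrac{1}{2}\sum_{k=0}^{K}\big[H_k(\mathbf{x}_k)-\mathbf{y}_k\big]^{\mathsf{T}}\mathbf{R}_k^{-1}\big[H_k(\mathbf{x}_k)-\mathbf{y}_k\big], \qquad \mathbf{x}_k = M_{0\to k}(\mathbf{x}_0), $$

where $\mathbf{x}_b$ is the background state, $\mathbf{B}$ and $\mathbf{R}_k$ are the background and observation error covariances, $H_k$ is the observation operator (here sampling the SST anomaly), and $M_{0\to k}$ is the forecast model.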
Instead of a capillary plasma generator (CPG), a discharge rod plasma generator (DRPG) is used in a 30 mm electrothermal-chemical (ETC) gun to improve the ignition uniformity of the solid propellant. An axisymmetric two-dimensional interior ballistics model of the solid propellant ETC gun (2D-IB-SPETCG) is presented to describe the process of the ETC launch. Both the calculated pressure and the projectile muzzle velocity accord well with the experimental results, proving the feasibility of the 2D-IB-SPETCG model. Given the experimental data and initial parameters, the detailed distribution of the ballistic parameters can be simulated. With the distribution of pressure and temperature in the gas phase and the propellant, the influence of plasma during the ignition process can be analyzed. Because of the radially flowing plasma, the propellant in the area of the DRPG is ignited within 0.01 ms, while all propellant in the chamber is ignited within 0.09 ms; the radial ignition delay time is much less than the axial delay time. During the ignition process, the radial pressure difference is less than 5 MPa at a location 0.025 m from the breech, demonstrating the radial ignition uniformity. The temperature of the gas increases from several thousand K (conventional ignition) to several tens of thousands of K (plasma ignition). Comparing the distributions of gas density and temperature shows that low-density, high-temperature gas appears near the exits of the DRPG, while high-density, low-temperature gas appears at the wall near the breech. Simulation with the 2D-IB-SPETCG model is an effective way to investigate the interior ballistics process of an ETC launch, and the model can be used for the prediction and improvement of experiments.