The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool to solve the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. Then, the process of data augmentation modeling is analyzed, and the concept of data boosting augmentation is proposed. The data boosting augmentation involves designing the reliability weight and actual-virtual weight functions, and developing a double-weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified using practical examples of fault diagnosis systems and virtual measurement systems in industry. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications.
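The abstract above does not give the paper's weight functions or its double-weighted partial least squares formulation, so the following minimal Python sketch only illustrates the fusion step under assumptions: actual and virtual samples are pooled, each sample receives a combined weight (reliability weight times actual-virtual weight), and weighted least squares stands in for the weighted PLS regressor. All variable names and weight values are hypothetical.

```python
# Illustrative sketch only: weighted least squares as a stand-in for the paper's
# double-weighted PLS; weight functions and values are assumptions, not the authors'.
import numpy as np

def fit_weighted_linear(X, y, reliability_w, actual_virtual_w):
    """Fit y ~ X with per-sample weights (hypothetical weight names)."""
    w = reliability_w * actual_virtual_w          # combined sample weight
    sw = np.sqrt(w)                               # row scaling for weighted least squares
    coef, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return coef

rng = np.random.default_rng(0)
X_real = rng.normal(size=(50, 8))
y_real = X_real @ rng.normal(size=8) + 0.1 * rng.normal(size=50)
X_virt = X_real + 0.05 * rng.normal(size=X_real.shape)      # virtual (augmented) samples
y_virt = y_real + 0.05 * rng.normal(size=50)

X = np.vstack([X_real, X_virt])
y = np.concatenate([y_real, y_virt])
rel_w = np.concatenate([np.ones(50), 0.8 * np.ones(50)])    # assumed reliability weights
av_w = np.concatenate([np.ones(50), 0.5 * np.ones(50)])     # assumed actual-vs-virtual weights
beta = fit_weighted_linear(X, y, rel_w, av_w)
```

Down-weighting the virtual samples is the design choice the abstract motivates: generated data should inform the fit without being trusted as much as measured data.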
It is common for datasets to contain both categorical and continuous variables. However, many feature screening methods designed for high-dimensional classification assume that the variables are continuous. This limits the applicability of existing methods in handling this complex scenario. To address this issue, we propose a model-free feature screening approach for ultra-high-dimensional multi-classification that can handle both categorical and continuous variables. Our proposed feature screening method utilizes the Maximal Information Coefficient to assess the predictive power of the variables. By satisfying certain regularity conditions, we have proven that our screening procedure possesses the sure screening property and ranking consistency properties. To validate the effectiveness of our approach, we conduct simulation studies and provide real data analysis examples to demonstrate its performance in finite samples. In summary, our proposed method offers a solution for effectively screening features in ultra-high-dimensional datasets with a mixture of categorical and continuous covariates.
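A minimal sketch of the generic screening loop described above, under assumptions: each covariate is scored by its dependence with the multi-class response, covariates are ranked, and the top d = [n / log n] are retained (a common sure-screening model size, not necessarily the paper's choice). Mutual information from scikit-learn is used here only as a stand-in for the Maximal Information Coefficient.

```python
# Sketch of marginal feature screening; mutual information stands in for MIC.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def screen_features(X, y, d=None):
    n, p = X.shape
    if d is None:
        d = int(np.floor(n / np.log(n)))        # assumed screening model size
    scores = mutual_info_classif(X, y, random_state=0)
    keep = np.argsort(scores)[::-1][:d]         # indices of the top-ranked covariates
    return keep, scores

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 1000))                                 # ultra-high-dimensional toy data
y = (X[:, 0] + 0.8 * X[:, 1] > 0).astype(int) + (X[:, 2] > 1)    # 3-class response
keep, scores = screen_features(X, y)
```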
In ultra-high-dimensional data, it is common for the response variable to be multi-classified. Therefore, this paper proposes a model-free screening method for variables whose response variable is multi-classified, from the point of view of introducing Jensen-Shannon divergence to measure the importance of covariates. The idea of the method is to calculate the Jensen-Shannon divergence between the conditional probability distribution of the covariates given the response variable and the unconditional probability distribution of the covariates, and then use the probabilities of the response variables as weights to calculate the weighted Jensen-Shannon divergence, where a larger weighted Jensen-Shannon divergence means that the covariates are more important. Additionally, we investigate an adapted version of the method, which measures the relationship between the covariates and the response variable using the weighted Jensen-Shannon divergence adjusted by the logarithmic factor of the number of categories when the number of categories in each covariate varies. Then, through both theoretical and simulation experiments, it is demonstrated that the proposed methods have the sure screening and ranking consistency properties. Finally, the results from simulation and real-dataset experiments show that, in feature screening, the proposed methods are robust in performance and faster in computational speed compared with an existing method.
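A small Python sketch of the screening statistic described above for a categorical covariate X and multi-class response Y (notation assumed, not the authors' code): the score is the sum over classes r of P(Y=r) times the Jensen-Shannon divergence between P(X|Y=r) and the marginal P(X).

```python
# Weighted Jensen-Shannon screening statistic; larger score => more important covariate.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def weighted_js(x, y):
    classes, class_counts = np.unique(y, return_counts=True)
    levels = np.unique(x)
    marginal = np.array([(x == l).mean() for l in levels])                    # P(X)
    score = 0.0
    for r, nr in zip(classes, class_counts):
        cond = np.array([((x == l) & (y == r)).sum() / nr for l in levels])   # P(X | Y=r)
        score += (nr / len(y)) * js_divergence(cond, marginal)
    return score

rng = np.random.default_rng(2)
y = rng.integers(0, 3, size=500)                  # 3-class response
x_rel = (y + rng.integers(0, 2, size=500)) % 4    # covariate related to y
x_noise = rng.integers(0, 4, size=500)            # irrelevant covariate
print(weighted_js(x_rel, y), weighted_js(x_noise, y))
```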
In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems (SDSs). First, dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is proposed based on online-corrected low-dimensional models, where the state of the system at a previous time is used to correct the output of the low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
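A sketch of the dimension-reduction step only (the MPC design and the online model correction are not shown): PCA maps high-dimensional spatial snapshots to a few temporal coefficients on which a low-order model or controller can act. The snapshot field and dimensions below are assumed for illustration.

```python
# PCA reduction of spatio-temporal snapshot data (rows = time, columns = spatial points).
import numpy as np
from sklearn.decomposition import PCA

n_time, n_space = 400, 2000                       # assumed snapshot dimensions
t = np.linspace(0, 1, n_time)[:, None]
z = np.linspace(0, 1, n_space)[None, :]
snapshots = np.sin(2 * np.pi * z) * np.cos(3 * t) + 0.3 * np.sin(5 * np.pi * z) * t

pca = PCA(n_components=3)
coeffs = pca.fit_transform(snapshots)             # low-dimensional time-domain states
reconstructed = pca.inverse_transform(coeffs)     # map back to the spatial field
print(pca.explained_variance_ratio_.sum())        # fraction of energy retained
```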
Different phenomenological equations based on plasticity, primary creep (as a viscoplastic mechanism), secondary creep (as another viscoplastic mechanism), and different combinations of these equations are presented and used to describe the material's inelastic deformation in a uniaxial test. Agreement of the models with experimental results and with theoretical concepts and physical realities is the criterion for choosing the most appropriate formulation for the uniaxial test. A model is thus proposed in which plastic deformation, primary creep, and secondary creep all contribute to the inelastic deformation. However, it is believed that the hardening parameter is composed of plastic and primary creep parts. Accordingly, the axial plastic strain in a uniaxial test may no longer be considered as the hardening parameter. Therefore, a proportionality concept is proposed to calculate the plastic contribution of deformation.
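The decomposition described above can be summarized compactly; the symbols below are illustrative notation assumed for this listing, not the authors' own.

```latex
% Inelastic strain split into plastic, primary-creep and secondary-creep parts;
% the hardening parameter collects only the first two, so it is not the plastic strain alone.
\begin{align}
  \varepsilon^{\mathrm{in}} &= \varepsilon^{\mathrm{p}} + \varepsilon^{\mathrm{pc}} + \varepsilon^{\mathrm{sc}}, \\
  \kappa &= \varepsilon^{\mathrm{p}} + \varepsilon^{\mathrm{pc}} \neq \varepsilon^{\mathrm{p}}.
\end{align}
```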
Hydraulic models for the generation of flood inundation maps are not commonly applied in mountain river basins because of the difficulty in modeling the hydraulic behavior and the complex topography. This paper presents a comparative analysis of the performance of four two-dimensional hydraulic models (HEC-RAS 2D, Iber 2D, Flood Modeller 2D, and PCSWMM 2D) with respect to the generation of flood inundation maps. The study area covers a 5-km reach of the Santa Bárbara River located in the Ecuadorian Andes, at 2330 masl, in Gualaceo. The models' performance was evaluated based on the water surface elevation and flood extent, in terms of the mean absolute difference and measure of fit. The analysis revealed that, for the given case, Iber 2D has the best performance in simulating the water level and inundation for flood events with 20- and 50-year return periods, respectively, followed by Flood Modeller 2D, HEC-RAS 2D, and PCSWMM 2D. Grid resolution, the way in which hydraulic structures are mimicked, the model code, and the default values of the parameters are considered the main sources of prediction uncertainty.
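A sketch of the two comparison scores named above, assuming the definitions commonly used in the flood-mapping literature (not necessarily the paper's exact formulas): the mean absolute difference of water-surface elevation at wet cells, and a measure of fit for extent computed as the flooded intersection over the flooded union.

```python
# Toy raster comparison of modelled vs. observed flood maps.
import numpy as np

def mean_abs_diff(wse_obs, wse_mod, wet_mask):
    """Mean absolute water-surface-elevation difference over wet cells."""
    return np.mean(np.abs(wse_obs[wet_mask] - wse_mod[wet_mask]))

def measure_of_fit(extent_obs, extent_mod):
    """F = |obs AND mod| / |obs OR mod| for boolean flood-extent rasters."""
    inter = np.logical_and(extent_obs, extent_mod).sum()
    union = np.logical_or(extent_obs, extent_mod).sum()
    return inter / union

rng = np.random.default_rng(3)
obs = rng.random((100, 100)) > 0.6                               # observed flooded cells (toy)
mod = np.logical_xor(obs, rng.random((100, 100)) > 0.95)         # modelled extent, slightly off
wse_obs = 2330 + rng.random((100, 100))                          # toy elevations (masl)
wse_mod = wse_obs + rng.normal(0, 0.1, (100, 100))
print(measure_of_fit(obs, mod), mean_abs_diff(wse_obs, wse_mod, obs))
```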
The Indo-Gangetic Plain (IGP) is one of the most seismically vulnerable areas due to its proximity to the Himalayas. Geographic information system (GIS)-based seismic characterization of the IGP was performed based on the degree of deformation and fractal dimension. The zone between the Main Boundary Thrust (MBT) and the Main Central Thrust (MCT) in the Himalayan Mountain Range (HMR) experienced large variations in earthquake magnitude, which were identified by Number-Size (NS) fractal modeling. The central IGP zone experienced only moderate to low mainshock levels. Fractal analysis of earthquake epicenters reveals a large scattering of earthquake epicenters in the HMR and central IGP zones. Similarly, the fault fractal analysis identifies the HMR, central IGP, and south-western IGP zones as having more faults. Overall, the seismicity of the study region is strong in the central IGP, south-western IGP, and HMR zones, moderate in the western and southern IGP, and low in the northern, eastern, and south-eastern IGP zones.
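A sketch of one standard form of Number-Size fractal modeling, assumed here as N(>=s) = C * s^(-D) for event sizes s; the paper's exact calibration of magnitudes is not reproduced. The fractal dimension D is the slope of log N(>=s) versus log s.

```python
# Fit a Number-Size power law to synthetic event sizes.
import numpy as np

def ns_fractal_dimension(sizes):
    s = np.sort(np.asarray(sizes, float))
    n_ge = len(s) - np.arange(len(s))                 # N(>= s) for each observed size
    slope, _ = np.polyfit(np.log(s), np.log(n_ge), 1)
    return -slope                                      # estimate of D

rng = np.random.default_rng(4)
sizes = (1 - rng.random(1000)) ** (-1 / 1.7)           # synthetic Pareto-like sizes, D ~ 1.7
print(ns_fractal_dimension(sizes))
```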
Three-dimensional models, consisting of the flame kernel formation model, flame kernel development model, and natural gas single-step reaction model, are used to analyze the contribution of cyclic equivalence ratio variations to cyclic variations in a compressed natural gas (CNG) lean-burn spark ignition engine. Computational results, including the contributions of equivalence ratio cyclic variations to each combustion stage and the effects of engine speed on the extent of combustion variations, are discussed. It is concluded that the equivalence ratio variations mostly affect the main stage of combustion and hardly influence the initial kernel development stage.
This paper studies the re-adjusted cross-validation method and a semiparametric regression model called the varying index coefficient model. We use the profile spline modal estimator method to estimate the coefficients of the parametric part of the Varying Index Coefficient Model (VICM), while the unknown function part is expanded using B-splines. Moreover, we combine the above two estimation methods under the assumption of high-dimensional data. The results of data simulation and empirical analysis show that, for the varying index coefficient model, the re-adjusted cross-validation method is better in terms of accuracy and stability than traditional methods based on ordinary least squares.
Aim: Maxillary dental arch widths were evaluated in individuals having unilateral (UCLP) and bilateral (BCLP) cleft lip and palate (CLP) using three-dimensional (3D) digital models. Material and Method: The study was conducted on 80 individuals aged between 14 and 17 years having UCLP and BCLP; 40 of the individuals had UCLP, whereas 40 had BCLP. The maxillary dental models taken from patients before treatment were scanned using the Orthomodel Programme (v.1.01, Orthomodel Inc., Istanbul, Turkey) to obtain 3D imagery. Student's t-test was used to assess the data, using SPSS software version 22.0. Results: In BCLP, the average inter-canine distance was 17.44 ± 1.31 mm and the average inter-molar distance was 36.57 ± 1.12 mm, while the inter-canine/inter-molar ratio was 0.47; in UCLP, these values were 25.10 ± 0.63 mm, 42.20 ± 0.53 mm, and 0.59. The inter-canine distance in UCLP was found to be larger, and the difference was statistically significant (p < 0.05), even though there were differences in inter-molar widths. Conclusion: For stable orthodontic treatment results, one of the most important points is for the arch form and widths to be coherent with each other. In our study, the increase of inter-canine distance seen in UCLP indicates that, in the cleft region, the maxillary arch is inclined over to the back, while the same situation in BCLP suggests that the maxillary segments are collapsed inside. The difference in the arch is highly affected by the primary surgical treatment.
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on the deformation intensity of the MT structure. The MT thermal behavior is first modeled using the finite element method and validated against experimentally measured temperature fields obtained with temperature sensors and thermal imaging. The thermal behavior validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations carried out under varied working conditions to explore possible relationships between the temperature distribution and the thermal deformation characteristics, and to select the most appropriate temperature-sensitive points to be considered for building an empirical model that predicts the thermal errors as a function of the MT thermal state. Validation tests performed with an artificial neural network based simplified model confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
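An illustrative sketch of the final prediction step only, with assumed data, sensor count, and network size: a small neural network (scikit-learn's MLPRegressor as a stand-in for the paper's simplified ANN model) maps readings at the selected temperature-sensitive points to the thermally induced positioning error.

```python
# Map temperature-sensitive-point readings to thermal positioning error (synthetic data).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
temps = 20 + 15 * rng.random((500, 6))              # 6 hypothetical temperature-sensitive points (deg C)
error_um = 0.8 * (temps[:, 0] - 20) + 0.3 * (temps[:, 3] - 20) + rng.normal(0, 0.5, 500)

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(temps[:400], error_um[:400])
print(model.score(temps[400:], error_um[400:]))     # R^2 on held-out samples
```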
We modified a three-dimensional cerebral aneurysm model for surgical simulation and educational demonstration. Novel models are made showing perforating arteries arising around the aneurysm. Information about perforating arteries is difficult to obtain from individual radiological data sets. Perforators are therefore reproduced based on previous anatomical knowledge instead of personal data. Due to their fragility, perforating arteries are attached to the model using hard materials. At the same time, hollow models are useful for practicing clip application. We made a model for practicing the application of fenestrated clips for paraclinoid internal carotid aneurysms. Situating aneurysm models in the fissure of a brain model simulates the real surgical field and is helpful for educational demonstrations.
The simulation of salinity at different locations of a tidal river using physically-based hydrodynamic models is quite cumbersome because it requires many types of data, such as hydrological and hydraulic time series at boundaries, river geometry, and adjusted coefficients. Therefore, an artificial neural network (ANN) technique using a back-propagation neural network (BPNN) and a radial basis function neural network (RBFNN) is adopted as an effective alternative in salinity simulation studies. The present study focuses on comparing the performance of BPNN, RBFNN, and three-dimensional hydrodynamic models as applied to a tidal estuarine system. The observed salinity data sets collected from 18 to 22 May, 16 to 22 October, and 26 to 30 October 2002 (totaling 4320 data points) were used for BPNN and RBFNN model training and for hydrodynamic model calibration. The data sets collected from 30 May to 2 June and 11 to 15 November 2002 (totaling 2592 data points) were adopted for BPNN and RBFNN model verification and for hydrodynamic model verification. The results revealed that the ANN (BPNN and RBFNN) models were capable of predicting the nonlinear time series behavior of salinity to the multiple forcing signals of water stages at different stations and freshwater input at upstream boundaries. The salinity predicted by the ANN models was better than that predicted by the physically based hydrodynamic model. This study suggests that BPNN and RBFNN models are easy-to-use modeling tools for simulating the salinity variation in a tidal estuarine system.
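A minimal radial-basis-function network sketch (not the authors' configuration; all data, inputs, and sizes are assumed): k-means centers, Gaussian hidden units, and a linear readout fitted by least squares, mapping water stages and freshwater inflow to salinity.

```python
# RBF network stand-in for the RBFNN salinity model, on synthetic data.
import numpy as np
from sklearn.cluster import KMeans

def rbf_features(X, centers, gamma):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(6)
X = rng.random((1000, 4))                            # e.g. 3 stage gauges + freshwater inflow
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 3] + 0.05 * rng.normal(size=1000)   # synthetic salinity

centers = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[:800]).cluster_centers_
Phi = rbf_features(X[:800], centers, gamma=10.0)
w, *_ = np.linalg.lstsq(Phi, y[:800], rcond=None)    # linear readout weights
pred = rbf_features(X[800:], centers, gamma=10.0) @ w
print(np.corrcoef(pred, y[800:])[0, 1])              # correlation on held-out samples
```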
High-dimensional hyperspectral image classification is a challenging task due to the high dimensionality of the spectral feature vectors. The high correlation between these features and the noise greatly affects classification performance. To overcome this, dimensionality reduction techniques are widely used. Traditional image processing applications have recently proposed numerous deep learning models. However, in hyperspectral image classification, the features of deep learning models are less explored. Thus, for efficient hyperspectral image classification, a depth-wise convolutional neural network is presented in this research work. To handle the dimensionality issue in the classification process, an optimized self-organized map model is employed using a water strider optimization algorithm. The network parameters of the self-organized map are optimized by the water strider optimization, which reduces the dimensionality issues and enhances the classification performance. Standard datasets such as Indian Pines and the University of Pavia (UP) are considered for experimental analysis. Existing dimensionality reduction methods, such as Enhanced Hybrid-Graph Discriminant Learning (EHGDL), local geometric structure Fisher analysis (LGSFA), Discriminant Hyper-Laplacian projection (DHLP), the Group-based tensor model (GBTM), and Lower rank tensor approximation (LRTA), are compared with the proposed optimized SOM model. Results confirm the superior performance of the proposed model, with 98.22% accuracy for the Indian Pines dataset and 98.21% accuracy for the University of Pavia dataset, over the existing maximum likelihood classifier and support vector machine (SVM).
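A bare-bones self-organizing map sketch of the dimensionality-reduction idea (no water strider optimization; grid size, learning rate, and decay schedule are assumptions): each spectral vector is mapped to the 2-D grid position of its best-matching unit, giving a low-dimensional code per pixel.

```python
# Minimal numpy SOM: train on spectral vectors, then encode each vector by its BMU coordinates.
import numpy as np

def train_som(X, grid=(10, 10), iters=2000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    H, W = grid
    weights = rng.random((H * W, X.shape[1]))
    gy, gx = np.divmod(np.arange(H * W), W)
    coords = np.stack([gy, gx], axis=1).astype(float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((weights - x) ** 2).sum(1))          # best-matching unit
        lr = lr0 * np.exp(-t / iters)
        sigma = sigma0 * np.exp(-t / iters)
        dist2 = ((coords - coords[bmu]) ** 2).sum(1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]        # neighborhood function
        weights += lr * h * (x - weights)
    return weights, coords

def som_encode(X, weights, coords):
    bmu = np.argmin(((X[:, None, :] - weights[None, :, :]) ** 2).sum(-1), axis=1)
    return coords[bmu]                                        # 2-D code per sample

rng = np.random.default_rng(7)
spectra = rng.random((500, 200))                              # 500 pixels, 200 spectral bands
weights, coords = train_som(spectra)
codes = som_encode(spectra, weights, coords)                  # shape (500, 2)
```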
The problem of taking an unorganized point cloud in 3D space and fitting a polyhedral surface to those points is both important and difficult. Aiming at the increasing applications of full three-dimensional digital terrain surface modeling, a new algorithm for the automatic generation of a three-dimensional triangulated irregular network from a point cloud is proposed. Based on a local topological consistency test, a combined algorithm of constrained 3D Delaunay triangulation and region-growing is extended to ensure topologically correct reconstruction. This paper also introduces an efficient neighboring-triangle location method that makes full use of the surface normal information. Experimental results prove that this algorithm can efficiently obtain the most reasonable reconstructed mesh surface with arbitrary topology, wherein the automatically reconstructed surface has only small topological differences from the true surface. This algorithm has potential applications to virtual environments, computer vision, and so on.
Psychometric theory requires unidimensionality (i.e., scale items should represent a common latent variable). One advocated approach to test unidimensionality within the Rasch model is to identify two item sets from a Principal Component Analysis (PCA) of residuals, estimate separate person measures based on the two item sets, compare the two estimates on a person-by-person basis using t-tests, and determine the number of cases that differ significantly at the 0.05 level; if ≤5% of tests are significant, or the lower bound of a binomial 95% confidence interval (CI) of the observed proportion overlaps 5%, then it is suggested that strict unidimensionality can be inferred; otherwise the scale is multidimensional. Given its proposed significance and potential implications, this procedure needs detailed scrutiny. This paper explores the impact of sample size and method of estimating the 95% binomial CI upon conclusions according to recommended conventions. Normal approximation, “exact”, Wilson, Agresti-Coull, and Jeffreys binomial CIs were calculated for observed proportions of 0.06, 0.08 and 0.10 and sample sizes from n = 100 to n = 2500. Lower 95% CI boundaries were inspected regarding coverage of the 5% threshold. Results showed that all binomial 95% CIs included as well as excluded 5% as an effect of sample size for all three investigated proportions, except for the Wilson, Agresti-Coull, and Jeffreys CIs, which did not include 5% for any sample size with a 10% observed proportion. The normal approximation CI was most sensitive to sample size. These data illustrate that the PCA/t-test protocol should be used and interpreted as any hypothesis testing procedure and is dependent on sample size as well as the binomial CI estimation procedure. The PCA/t-test protocol should not be viewed as a “definite” test of unidimensionality and does not replace an integrated quantitative/qualitative interpretation based on an explicit variable definition in view of the perspective, context and purpose of measurement.
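A sketch of the reported experiment (layout assumed): for each observed proportion of significant t-tests and each sample size, compute the lower bound of the five binomial 95% CIs with statsmodels and check whether it falls at or below the 5% threshold; the "beta" method is the Clopper-Pearson "exact" interval.

```python
# Lower 95% CI bounds vs. the 5% threshold for several binomial CI methods.
import numpy as np
from statsmodels.stats.proportion import proportion_confint

methods = ["normal", "beta", "wilson", "agresti_coull", "jeffreys"]   # "beta" = exact CI
for prop in (0.06, 0.08, 0.10):
    for n in (100, 500, 1000, 2500):
        count = int(round(prop * n))
        lower = {m: proportion_confint(count, n, alpha=0.05, method=m)[0] for m in methods}
        covers = {m: lo <= 0.05 for m, lo in lower.items()}           # True => 5% covered
        print(prop, n, covers)
```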
The dynamic updating of the model includes: changes of the spatial border, addition and removal of spatial components (disappearing, dividing, and merging), changes of topological relationships, and synchronous dynamic updating of the database. Firstly, aiming at the deficiency of the OO-Solid model with respect to dynamic updating, the modeling primitives of the OO-Solid model were modified. Then, the algorithms for dynamically updating a 3D geological model when node data, line data, or surface data change were discussed. The core algorithm works by establishing a spatial index and following an object-oriented, bottom-up path: dynamic updating proceeds from the node to the arc, then to the polygon, then to the face of the component, and finally to the geological object. The research has important theoretical and practical value in the field of three-dimensional geological modeling and is significant for the field of mineral resources.
In this article, a high-order scheme, which is formulated by combining the quadratic finite element method in space with a second-order time discretization, is developed to find the numerical solution of a two-dimensional nonlinear time-fractional thermal diffusion model. The time Caputo fractional derivative is approximated using the L2-1σ formula, the first-order derivative and the nonlinear term are discretized by second-order approximation formulas, and quadratic finite elements are used to approximate the spatial direction. An error accuracy of O(h^3 + τ^2) is obtained, where h is the spatial mesh size and τ is the time step, which is verified by the numerical results.
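For context, a standard form of the ingredients named above, written in illustrative notation (the scheme itself is not reproduced): the Caputo derivative of order α in (0,1) that the L2-1σ formula approximates, and the reported convergence order with mesh size h and time step τ.

```latex
% Caputo fractional derivative and the stated error accuracy (notation assumed).
\begin{align}
  {}^{C}_{0}D^{\alpha}_{t}\, u(t) &= \frac{1}{\Gamma(1-\alpha)}
      \int_{0}^{t} (t-s)^{-\alpha}\, \frac{\partial u(s)}{\partial s}\, ds,
      \qquad 0 < \alpha < 1, \\
  \| u - u_{h} \| &= O\!\left(h^{3} + \tau^{2}\right).
\end{align}
```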