We propose a novel framework for learning a low-dimensional representation of data based on nonlinear dynamical systems, which we call dynamical dimension reduction (DDR). In the DDR model, each point is evolved via a nonlinear flow towards a lower-dimensional subspace; the projection onto the subspace gives the low-dimensional embedding. Training the model involves identifying the nonlinear flow and the subspace. Following the equation discovery method, we represent the vector field that defines the flow using a linear combination of dictionary elements, where each element is a pre-specified linear/nonlinear candidate function. A regularization term for the average total kinetic energy is also introduced and motivated by optimal transport theory. We prove that the resulting optimization problem is well-posed and establish several properties of the DDR method. We also show how the DDR method can be trained using a gradient-based optimization method, where the gradients are computed using the adjoint method from optimal control theory. The DDR method is implemented and compared on synthetic and example data sets to other dimension reduction methods, including PCA, t-SNE, and UMAP.
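As a hedged illustration of the flow-then-project idea described above: points are pushed forward by a vector field built from a small dictionary of candidate functions, then projected onto a subspace. The dictionary, the weights `W`, and the projection `P` below are invented for this sketch, not the quantities the DDR method would actually learn.

```python
import numpy as np

def dictionary(x):
    # Pre-specified candidate functions: two linear terms and one nonlinearity.
    return np.stack([x[..., 0], x[..., 1], x[..., 0] * x[..., 1]], axis=-1)

def flow_step(X, W, dt=0.01):
    # One explicit Euler step of dx/dt = dictionary(x) @ W.
    return X + dt * dictionary(X) @ W

def embed(X, W, P, steps=100):
    # Evolve every point along the flow, then project onto the subspace.
    for _ in range(steps):
        X = flow_step(X, W)
    return X @ P

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))          # five 2-D data points
W = 0.1 * rng.normal(size=(3, 2))    # illustrative dictionary weights
P = np.array([[1.0], [0.0]])         # project onto the first coordinate
Z = embed(X, W, P)
print(Z.shape)
```

In the actual method, `W` and the subspace would be found by gradient-based optimization with adjoint-computed gradients; here they are fixed to keep the sketch minimal.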
This paper presents a new dimension reduction strategy for medium and large-scale linear programming problems. The proposed method uses a subset of the original constraints and combines two algorithms: the weighted average and the cosine simplex algorithm. The first approach identifies binding constraints by using the weighted average of each constraint, whereas the second algorithm is based on the cosine similarity between the vector of the objective function and the constraints. These two approaches are complementary, and when used together, they locate the essential subset of initial constraints required for solving medium and large-scale linear programming problems. After reducing the dimension of the linear programming problem using the subset of the essential constraints, the solution method can be chosen from any suitable method for linear programming. The proposed approach was applied to a set of well-known benchmarks as well as more than 2000 random medium and large-scale linear programming problems. The results are promising, indicating that the new approach contributes to the reduction of both the size of the problems and the total number of iterations required. A tree-based classification model also confirmed the need for combining the two approaches. A detailed numerical example, the general numerical results, and the statistical analysis for the decision tree procedure are presented.
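One ingredient of the cosine-similarity idea above can be sketched as follows: rank the rows of the constraint matrix `A` by the cosine of the angle between each row and the objective vector `c`, so that the most aligned (likely binding) constraints come first. The tiny LP data below is invented purely for illustration.

```python
import numpy as np

def cosine_rank(A, c):
    # Cosine similarity between each constraint row a_i and the objective c.
    sims = A @ c / (np.linalg.norm(A, axis=1) * np.linalg.norm(c))
    # Indices of constraints, most aligned with the objective first.
    return np.argsort(-sims)

A = np.array([[1.0, 1.0],    # parallel to c -> similarity 1.0
              [1.0, 0.0],    # 45 degrees off
              [-1.0, 2.0]])  # nearly orthogonal
c = np.array([1.0, 1.0])
order = cosine_rank(A, c)
print(order)  # constraint 0 is most aligned, constraint 2 least
```

A reduced LP would then be solved over a prefix of this ranking (combined with the weighted-average criterion), adding back violated constraints as needed.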
The equipment used in various fields contains an increasing number of parts with curved surfaces of increasing size. Five-axis computer numerical control (CNC) milling is the main machining method for such parts, while dynamics analysis has always been a research hotspot. The cutting conditions determined by the cutter axis, tool path, and workpiece geometry are complex and changeable, which has made dynamics research a major challenge. For this reason, this paper introduces the innovative idea of applying dimension reduction and mapping to the five-axis machining of curved surfaces, and proposes an efficient dynamics analysis model. To simplify the research object, the cutter position points along the tool path were discretized into inclined-plane five-axis machining. The cutter dip angle and feed deflection angle were used to define the spatial position relationship in five-axis machining. These were then taken as the new base variables to construct an abstract two-dimensional space and establish the mapping relationship between the cutter position point and space point sets, further simplifying the dimensions of the research object. Based on the in-cut cutting edge solved by the space limitation method, the dynamics of the inclined-plane five-axis machining unit were studied, and the results were uniformly stored in the abstract space to produce a database. Finally, the prediction of the milling force and vibration state along the tool path became a data extraction process, which significantly improved efficiency. Two experiments were also conducted, which proved the accuracy and efficiency of the proposed dynamics analysis model. This study has great potential for the online synchronization of intelligent machining of large surfaces.
This study highlighted the physical transformation that agri-food products undergo during drying. This transformation strongly affects the customer's choice and the profit margin of the dried-product promoter. The experimental study of the potato reveals that the product continually changes its dimensions during drying: the more water the product loses, the more its dimensions decrease. The results initially showed that water parameters such as mass and water content decrease according to the drying principle. The dimensions (length L, width l, and thickness e) decrease following a linear trend, and the mathematical equations that describe them were determined using the office tool Excel. This trend has repercussions on the surface and volume parameters, which in turn decrease almost linearly with the product's water content. Note that the coefficient R2 is not always acceptable, confirming the complex behavior of organic products.
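The linear fit and R2 computation described above can be reproduced outside Excel. The data points below are hypothetical (invented for illustration, not the paper's measurements): a dimension such as length L (mm) against water content X.

```python
import numpy as np

# Hypothetical shrinkage data: water content (dry basis) vs. length in mm.
X = np.array([4.0, 3.0, 2.0, 1.0, 0.5])
L = np.array([50.0, 46.1, 41.9, 38.2, 36.0])

# Least-squares linear trend L ~ a*X + b (np.polyfit returns slope first).
a, b = np.polyfit(X, L, 1)

# Coefficient of determination R^2 for the fitted line.
L_hat = a * X + b
ss_res = np.sum((L - L_hat) ** 2)
ss_tot = np.sum((L - np.mean(L)) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(a, b, round(r2, 4))
```

For nearly linear shrinkage data like this, the slope is positive (dimensions shrink as water is lost) and R2 is close to 1; the study notes that for real organic products R2 is not always this good.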
Data-driven surrogate models that assist efficient evolutionary algorithms in finding the optimal development scheme have been widely used to solve reservoir production optimization problems. However, existing research suggests that the effectiveness of a surrogate model can vary depending on the complexity of the design problem. A surrogate model that has demonstrated success in one scenario may not perform as well in others. In the absence of prior knowledge, finding a promising surrogate model that performs well for an unknown reservoir is challenging. Moreover, the optimization process often relies on a single evolutionary algorithm, which can yield varying results across different cases. To address these limitations, this paper introduces a novel approach called the multi-surrogate framework with an adaptive selection mechanism (MSFASM) to tackle production optimization problems. MSFASM consists of two stages. In the first stage, a reduced-dimensional broad learning system (BLS) is used to adaptively select the evolutionary algorithm with the best performance during the current optimization period. In the second stage, the multi-objective non-dominated sorting genetic algorithm II (NSGA-II) is used as an optimizer to find a set of Pareto solutions with good performance on multiple surrogate models. A novel optimal-point criterion is utilized in this stage to select the Pareto solutions, thereby obtaining the desired development schemes without increasing the computational load of the numerical simulator. The two stages are combined using sequential transfer learning. From the two most important perspectives of an evolutionary algorithm and a surrogate model, the proposed method improves adaptability to optimization problems of various reservoir types. To verify the effectiveness of the proposed method, four 100-dimensional benchmark functions and two reservoir models are tested, and the results are compared with those obtained by six other surrogate-model-based methods. The results demonstrate that our approach can obtain the maximum net present value (NPV) of the target production optimization problems.
Ecosystems generally have the self-adapting ability to resist various external pressures or disturbances, which is often called resilience. However, once the external disturbances exceed the tipping points of the system's resilience, the consequences can be catastrophic and eventually lead the ecosystem to complete collapse. We capture the collapse process of ecosystems represented by plant-pollinator networks with the k-core nested structural method, and find that a sufficiently weak interaction strength or a sufficiently large competition weight can cause the structure of the ecosystem to collapse from its smallest k-core towards its largest k-core. We then derive the tipping points of the structural and dynamic collapse of the entire system from the one-dimensional dynamic function of the ecosystem. Our work provides an intuitive and precise description of the dynamic process of ecosystem collapse under multiple interactions, and provides theoretical insights for avoiding ecosystem collapse.
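The k-core structure mentioned above is computed by iteratively peeling off nodes of degree below k. A minimal peeling sketch on a toy undirected graph (an invented five-node network, not a real plant-pollinator web) follows.

```python
def k_core(adj, k):
    """Return the node set of the k-core of an undirected graph.

    adj: dict mapping each node to the set of its neighbors.
    Repeatedly removes nodes whose degree drops below k.
    """
    adj = {u: set(vs) for u, vs in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for u in list(adj):
            if len(adj[u]) < k:
                for v in adj.pop(u):
                    adj[v].discard(u)  # update surviving neighbors
                changed = True
    return set(adj)

# Toy network: nodes 0-3 form a clique; node 4 hangs off node 0.
g = {0: {1, 2, 3, 4}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {0}}
print(sorted(k_core(g, 3)))  # the pendant node 4 is peeled away
```

In the collapse scenario described by the abstract, weakening interactions effectively deletes edges, so successive k-cores empty out from the smallest k upward.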
With the popularisation of intelligent power, power devices have different shapes, numbers, and specifications. This means that power data has distributional variability, so the model learning process cannot sufficiently extract data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To address the distributional variability of power data, we developed a sliding window-based data adjustment method for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy, but also reduces the amount of parameter computation in the feature matching process and improves detection speed.
We explore Hamiltonian reduction in pulse-controlled finite-dimensional quantum systems with near-degenerate eigenstates. A quantum system with a non-degenerate ground state and several near-degenerate excited states is controlled by a short pulse, and the objective is to maximize the collective population on all excited states, treating them as a single level. Two classes of such systems are shown to be equivalent to effective two-level systems. When the pulse is weak, simple relations between the original systems and the reduced systems are obtained. When the pulse is strong, these relations still hold for single-frequency pulses under the first-order approximation.
In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for spatially-distributed systems (SDSs) with unknown partial differential equation (PDE) models. First, dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is then proposed based on online-corrected low-dimensional models, where the state of the system at a previous time is used to correct the output of the low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
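The PCA compression step described above can be sketched with a synthetic spatio-temporal snapshot matrix (rows = time instants, columns = spatial locations); the field below is invented and chosen to have exactly two spatial modes so that two principal components reconstruct it.

```python
import numpy as np

# Synthetic snapshots of a spatially-distributed field: 200 times x 50 points.
t = np.linspace(0, 1, 200)[:, None]
x = np.linspace(0, 1, 50)[None, :]
Y = np.sin(2 * np.pi * t) * np.sin(np.pi * x) + 0.3 * t * x

# PCA via SVD of the mean-centered snapshot matrix.
mean = Y.mean(axis=0)
Y0 = Y - mean
U, S, Vt = np.linalg.svd(Y0, full_matrices=False)

k = 2                       # keep two spatial modes
Z = Y0 @ Vt[:k].T           # low-dimensional temporal coefficients (200 x 2)
Y_rec = Z @ Vt[:k] + mean   # lift back to the full spatial field

err = np.linalg.norm(Y - Y_rec) / np.linalg.norm(Y)
print(Z.shape, err)
```

The MPC controller would then act on the low-dimensional coefficients `Z` (with the online correction described in the abstract) rather than on the full high-dimensional field.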
An automated method to optimize the definition of the progress variables in flamelet-based dimension reduction is proposed, and the performance of these optimized progress variables in coupling the flamelets and the flow solver is presented. In the proposed method, the progress variables are defined according to the first two principal components (PCs) from principal component analysis (PCA) or kernel-density-weighted PCA (KEDPCA) of a set of flamelets. These flamelets can then be mapped to the new progress variables instead of the mixture fraction/conventional progress variables, and a new chemistry look-up table is constructed. A priori validation of the optimized progress variables and the new chemistry table is carried out on a CH4/N2/air lift-off flame. The reconstruction of the lift-off flame shows that the optimized progress variables perform better than the conventional ones, especially in the high-temperature area. The coefficients of determination (R2 statistics) show that KEDPCA performs slightly better than PCA except for some minor species. The main advantage of KEDPCA is that it is less sensitive to the database. Meanwhile, criteria for the optimization are proposed and discussed. The constraint that the progress variables should evolve monotonically from fresh gas to burnt gas is analyzed in detail.
In the underwater waveguide, the conventional adaptive subspace detector (ASD), derived using the generalized likelihood ratio test (GLRT) theory, suffers from a significant degradation in detection performance when the training data samples are deficient. This paper proposes a dimension-reduced approach to alleviate this problem. The dimension reduction includes two steps: first, the full array is divided into several subarrays; second, the test data and the training data at each subarray are transformed from the hydrophone domain into the modal domain. The modal-domain test and training data at each subarray are then processed to formulate the subarray statistic using GLRT theory. The final test statistic of the dimension-reduced ASD (DR-ASD) is obtained by summing all the subarray statistics. After the dimension reduction, the unknown parameters can be estimated more accurately, so the DR-ASD achieves better detection performance than the ASD. In order to achieve the optimal detection performance, the processing gain of the DR-ASD is derived to choose a proper number of subarrays. Simulation experiments verify the improved detection performance of the DR-ASD compared with the ASD.
Sustainable Development Capacity (SDC) is a comprehensive concept. In order to obtain a relatively objective evaluation of it, many indices covering various aspects are often used in assessment index systems. However, overlapping information among indices frequently biases the result away from the truth. In this paper, 48 indices are selected as original variables in assessing the SDC of China's coastal areas. A mathematical dimension reduction treatment is used to eliminate the overlapping information in the 48 variables, and five new comprehensive indices are extracted that carry the essential information of the original indices. On the basis of the new index values, the 12 coastal areas are ranked by SDC, and five patterns of sustainable development regions are identified. The leading factors of SDC in these patterns, and their relations, are then analyzed. The findings are discussed at the end.
The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighting likelihood method is proposed for noise-robust VAD. The contribution of dynamic features to the likelihood score can be increased via this method, which consequently improves the noise robustness of VAD. A divergence-based dimension reduction method is also proposed to save computation; it removes the feature dimensions with smaller divergence values at the cost of a slight degradation in performance. Experimental results on the Aurora II database show that detection performance in noisy environments can be remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using the weighted likelihood on the dimension-reduced features obtains comparable, or even better, performance compared with the original full-dimensional features.
The precision of the kernel independent component analysis (KICA) algorithm depends on the type and parameter values of the kernel function. Therefore, it is of great significance to study how to choose KICA's kernel parameters so as to improve its feature dimension reduction results. In this paper, a fitness function was first established using the idea of the Fisher discriminant function. Then the global optimal solution of the fitness function was searched by the particle swarm optimization (PSO) algorithm, and a multi-state information dimension reduction algorithm based on PSO-KICA was established. Finally, the ability of this algorithm to enhance the precision of feature dimension reduction was demonstrated.
In our previous work, we gave an algorithm for segmenting a simplex in n-dimensional space into n+1 polyhedrons and provided a map F which maps the n-dimensional unit cube to these polyhedrons. In this paper, we prove that the map F is a one-to-one correspondence, at least in lower-dimensional spaces (n ≤ 3). Moreover, we propose approximating and interpolatory subdivision schemes, together with an estimate of computational complexity, for triangular Bézier patches in two-dimensional space. Finally, we compare our schemes with Goldman's in computational complexity and speed.
Purpose: This study sought to review the characteristics, strengths, weaknesses, variants, application areas, and data types applied to the various Dimension Reduction (DR) techniques. Methodology: The databases most commonly employed to search for the papers were ScienceDirect, Scopus, Google Scholar, IEEE Xplore, and Mendeley. An integrative review was used for the study, in which 341 papers were reviewed. Results: The linear techniques considered were Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Singular Value Decomposition (SVD), Latent Semantic Analysis (LSA), Locality Preserving Projections (LPP), Independent Component Analysis (ICA), and Projection Pursuit (PP). The non-linear techniques, developed for applications with complex non-linear structures, were Kernel Principal Component Analysis (KPCA), Multi-dimensional Scaling (MDS), Isomap, Locally Linear Embedding (LLE), Self-Organizing Map (SOM), Learning Vector Quantization (LVQ), t-distributed Stochastic Neighbor Embedding (t-SNE), and Uniform Manifold Approximation and Projection (UMAP). DR techniques can further be categorized into supervised, unsupervised, and, more recently, semi-supervised learning methods. The supervised versions are LDA and LVQ; all the other techniques are unsupervised. Supervised variants of PCA, LPP, KPCA, and MDS have been developed. Supervised and semi-supervised variants of PP and t-SNE have also been developed, and a semi-supervised version of LDA has been developed. Conclusion: The various application areas, strengths, weaknesses, and variants of the DR techniques were explored, along with the different data types to which the various DR techniques have been applied.
Finding a suitable space is one of the most critical problems for dimensionality reduction. Each space corresponds to a distance metric defined on the sample attributes, and thus finding a suitable space can be converted into developing an effective distance metric. Most existing dimensionality reduction methods use a fixed, pre-specified distance metric. However, this simple treatment has limitations in practice, because the pre-specified metric does not guarantee that the closest samples are the truly similar ones. In this paper, we present an adaptive metric learning method for dimensionality reduction, called AML. The adaptive metric learning model is developed by maximizing the difference between the distances of the data pairs in cannot-links and those in must-links. Unlike many existing works that use the traditional Euclidean distance, we use the more general l2,p-norm distance to reduce sensitivity to noise and outliers, which adds flexibility and adaptability through the selection of appropriate p-values for different data sets. Moreover, traditional metric learning methods usually project samples into a linear subspace, which is overly restrictive; we extend the basic linear method to a more powerful nonlinear kernel case so as to capture complex nonlinear relationships in the data. To solve our objective, we derive an efficient iterative algorithm. Extensive dimensionality reduction experiments demonstrate the superiority of our method over state-of-the-art approaches.
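The l2,p-norm mentioned above can be computed directly: take the l2 norm of each row of a matrix, raise it to the power p, and sum. (One common convention, used here, omits the outer 1/p root; small p down-weights rows with large norms, which is why it is less sensitive to outliers.) The matrix below is arbitrary example data.

```python
import numpy as np

def l2p_norm(M, p):
    # l_{2,p}: sum over rows of (l2 norm of the row) ** p.
    row_norms = np.linalg.norm(M, axis=1)
    return float(np.sum(row_norms ** p))

M = np.array([[3.0, 4.0],   # row norm 5
              [0.0, 1.0]])  # row norm 1
print(l2p_norm(M, 2.0))  # 5**2 + 1**2 = 26.0 (the squared Frobenius norm)
print(l2p_norm(M, 1.0))  # 5 + 1 = 6.0
```

In a metric learning objective, choosing p < 2 shrinks the influence of outlier pairs relative to the usual squared Euclidean distance.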
Stochastic fractional differential systems are important and useful in the mathematics, physics, and engineering fields. However, the determination of their probabilistic responses is difficult due to their non-Markovian property. The recently developed globally-evolving-based generalized density evolution equation (GE-GDEE), which is a unified partial differential equation (PDE) governing the transient probability density function (PDF) of a generic path-continuous process, including non-Markovian ones, provides a feasible tool to solve this problem. In this paper, the GE-GDEE for multi-dimensional linear fractional differential systems subject to Gaussian white noise is established. In particular, it is proved that in the GE-GDEE corresponding to the state quantities of interest, the intrinsic drift coefficient is a time-varying linear function and can be analytically determined. In this sense, an alternative low-dimensional equivalent linear integer-order differential system with exact closed-form coefficients can be constructed for the original high-dimensional linear fractional differential system such that their transient PDFs are identical. Specifically, for a multi-dimensional linear fractional differential system, if only one or two quantities are of interest, the GE-GDEE is only one- or two-dimensional, and the surrogate system is a one- or two-dimensional linear integer-order system. Several examples are studied to assess the merit of the proposed method. Though at present the closed-form intrinsic drift coefficient is only available for linear stochastic fractional differential systems, the findings in the present paper provide a remarkable demonstration of the existence and eligibility of the GE-GDEE for the case in which the original high-dimensional system itself is non-Markovian, and provide insights for the physical-mechanism-informed determination of the intrinsic drift and diffusion coefficients of the GE-GDEE for more generic complex nonlinear systems.
This work proposes a Tensor Train Random Projection (TTRP) method for dimension reduction, where pairwise distances can be approximately preserved. Our TTRP is systematically constructed through a Tensor Train (TT) representation with TT-ranks equal to one. Based on the tensor train format, this random projection method can speed up the dimension reduction procedure for high-dimensional datasets and requires less storage, with little loss in accuracy, compared with existing methods. We provide a theoretical analysis of the bias and the variance of TTRP, which shows that this approach is an expected isometric projection with bounded variance, and we show that the scaling Rademacher variable is an optimal choice for generating the corresponding TT-cores. Detailed numerical experiments with synthetic datasets and the MNIST dataset are conducted to demonstrate the efficiency of TTRP.
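A rank-one TT projection factorizes the big projection matrix as a Kronecker product of small cores, which is what makes the storage cheap. The sketch below is a hedged stand-in, not the paper's implementation: it forms the Kronecker product explicitly (a real TTRP would never materialize it), uses invented dimensions, and checks the expected-isometry property empirically with Rademacher (±1) cores.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, k1, k2 = 16, 16, 4, 4          # input dim 256 -> output dim 16

# Small Rademacher cores; their Kronecker product is the full projection.
G1 = rng.choice([-1.0, 1.0], size=(k1, d1))
G2 = rng.choice([-1.0, 1.0], size=(k2, d2))
P = np.kron(G1, G2) / np.sqrt(k1 * k2)  # scaled so E||Px||^2 = ||x||^2

# Empirical check of expected isometry over many random vectors.
Z = rng.normal(size=(d1 * d2, 500))
ratios = np.linalg.norm(P @ Z, axis=0) / np.linalg.norm(Z, axis=0)
print(P.shape, round(float(ratios.mean()), 3))
```

Storing `G1` and `G2` takes k1*d1 + k2*d2 = 128 numbers versus 16*256 = 4096 for the dense matrix, which is the storage saving the abstract refers to.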
The concise and informative representation of hyperspectral imagery is achieved via diffusion geometric coordinates derived from a nonlinear dimension reduction technique: diffusion maps. The huge-volume, high-dimensional spectral measurements are organized by an affinity graph in which each node connects only to its local neighbors and each edge represents local similarity information. By normalizing the affinity graph appropriately, the diffusion operator of the underlying hyperspectral imagery is well-defined, which means that a Markov random walk can be simulated on the hyperspectral imagery. Therefore, the diffusion geometric coordinates, derived from the eigenfunctions and associated eigenvalues of the diffusion operator, capture the intrinsic geometric information of the hyperspectral imagery well, giving more enhanced representation results than traditional linear methods such as those based on principal component analysis. For large-scale full-scene hyperspectral imagery, the computational complexity and memory requirements are acceptable when the backbone approach is exploited. Experiments also show that selecting suitable symmetrization normalization techniques when forming the diffusion operator is important for hyperspectral imagery representation.
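The affinity-graph-to-diffusion-coordinates pipeline described above can be sketched in a few lines: build a Gaussian affinity matrix, normalize it into a diffusion operator, and take the leading non-trivial eigenvectors (scaled by their eigenvalues) as coordinates. The data here is a toy random point set, not hyperspectral imagery, and the dense eigendecomposition ignores the sparsity and backbone tricks a full-scene implementation would need.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                      # 60 toy "pixels" in R^5

# Gaussian affinity graph from pairwise squared distances.
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-D2 / np.median(D2))

# Symmetric normalization: A = D^{-1/2} W D^{-1/2}, similar to D^{-1} W,
# the Markov transition matrix of the random walk on the graph.
d = W.sum(axis=1)
A = W / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]

vals, vecs = np.linalg.eigh(A)
order = np.argsort(-vals)
vals, vecs = vals[order], vecs[:, order]

# Right eigenvectors of D^{-1} W; skip the trivial top one (eigenvalue 1).
phi = vecs / np.sqrt(d)[:, None]
coords = phi[:, 1:3] * vals[1:3]                  # 2-D diffusion coordinates
print(coords.shape, vals[0])
```

The top eigenvalue of the diffusion operator is exactly 1 (the stationary mode), which is why the coordinates start from the second eigenvector.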
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52005078, U1908231, 52075076).
Abstract: The equipment used in various fields contains an increasing number of parts with curved surfaces of increasing size. Five-axis computer numerical control (CNC) milling is the main machining method for such parts, while dynamics analysis has always been a research hotspot. The cutting conditions determined by the cutter axis, tool path, and workpiece geometry are complex and changeable, which has made dynamics research a major challenge. For this reason, this paper introduces the innovative idea of applying dimension reduction and mapping to the five-axis machining of curved surfaces, and proposes an efficient dynamics analysis model. To simplify the research object, the cutter position points along the tool path were discretized into inclined-plane five-axis machining. The cutter dip angle and feed deflection angle were used to define the spatial position relationship in five-axis machining. These were then taken as the new base variables to construct an abstract two-dimensional space and establish the mapping relationship between the cutter position point and space point sets, further simplifying the dimensions of the research object. Based on the in-cut cutting edge solved by the space limitation method, the dynamics of the inclined-plane five-axis machining unit were studied, and the results were uniformly stored in the abstract space to produce a database. Finally, the prediction of the milling force and vibration state along the tool path became a data extraction process that significantly improved efficiency. Two experiments were also conducted, which proved the accuracy and efficiency of the proposed dynamics analysis model. This study has great potential for the online synchronization of intelligent machining of large surfaces.
Abstract: This study highlighted the physical transformation that agri-food products undergo during drying. This transformation strongly affects the customer's choice and the profit margin of the dried-product promoter. The experimental study of the potato reveals that the product continually changes its dimensions during drying: the more water the product loses, the more its dimensions decrease. The results initially showed that water parameters such as mass and water content decrease according to the drying principle. The dimensions (length L, width l, and thickness e) decrease following a linear trend, and the mathematical equations that describe them are determined using the office tool Excel. This trend has repercussions on the surface and volume parameters, which in turn decrease almost linearly with the product's water content. Note that the coefficient R² is not always acceptable, confirming the complex behavior of organic products.
Funding: This work is supported by the National Natural Science Foundation of China under Grants 52274057, 52074340 and 51874335; the Major Scientific and Technological Projects of CNPC under Grant ZD2019-183-008; the Major Scientific and Technological Projects of CNOOC under Grant CCL2022RCPS0397RSN; the Science and Technology Support Plan for Youth Innovation of University in Shandong Province under Grant 2019KJH002; and the 111 Project under Grant B08028.
Abstract: Data-driven surrogate models that assist efficient evolutionary algorithms in finding the optimal development scheme have been widely used to solve reservoir production optimization problems. However, existing research suggests that the effectiveness of a surrogate model can vary depending on the complexity of the design problem. A surrogate model that has demonstrated success in one scenario may not perform as well in others. In the absence of prior knowledge, finding a promising surrogate model that performs well for an unknown reservoir is challenging. Moreover, the optimization process often relies on a single evolutionary algorithm, which can yield varying results across different cases. To address these limitations, this paper introduces a novel approach called the multi-surrogate framework with an adaptive selection mechanism (MSFASM) to tackle production optimization problems. MSFASM consists of two stages. In the first stage, a reduced-dimensional broad learning system (BLS) is used to adaptively select the evolutionary algorithm with the best performance during the current optimization period. In the second stage, the multi-objective algorithm non-dominated sorting genetic algorithm II (NSGA-II) is used as an optimizer to find a set of Pareto solutions that perform well on multiple surrogate models. A novel optimal-point criterion is utilized in this stage to select the Pareto solutions, thereby obtaining the desired development schemes without increasing the computational load of the numerical simulator. The two stages are combined using sequential transfer learning. From the two most important perspectives of an evolutionary algorithm and a surrogate model, the proposed method improves adaptability to optimization problems of various reservoir types. To verify the effectiveness of the proposed method, four 100-dimensional benchmark functions and two reservoir models are tested, and the results are compared with those obtained by six other surrogate-model-based methods. The results demonstrate that our approach can obtain the maximum net present value (NPV) of the target production optimization problems.
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 72071153 and 72231008), the Natural Science Foundation of Shaanxi Province (Grant No. 2020JM-486), and the Fund of the Key Laboratory of Equipment Integrated Support Technology (Grant No. 6142003190102).
Abstract: Ecosystems generally have a self-adapting ability to resist various external pressures or disturbances, which is called resilience. However, once external disturbances exceed the tipping points of the system's resilience, the consequences can be catastrophic and eventually lead the ecosystem to complete collapse. We capture the collapse process of ecosystems represented by plant-pollinator networks with the k-core nested structural method, and find that a sufficiently weak interaction strength or a sufficiently large competition weight can cause the structure of the ecosystem to collapse from its smallest k-core towards its largest k-core. We then give the tipping points of structural and dynamic collapse of the entire system from the one-dimensional dynamic function of the ecosystem. Our work provides an intuitive and precise description of the dynamic process of ecosystem collapse under multiple interactions, and provides theoretical insights for avoiding the occurrence of ecosystem collapse.
Abstract: With the popularisation of intelligent power, power devices have different shapes, numbers, and specifications. This means that power data exhibits distributional variability, and the model learning process cannot sufficiently extract data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. Aiming at the distributional variability of power data, a sliding-window-based data adjustment method was developed for this model, which solves the problems of high-dimensional feature noise and low-dimensional missing data. To address insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy, but also reduces the amount of parameter computation during feature matching and improves detection speed.
Funding: This work was supported by the National Natural Science Foundation of China (Nos. 61074052 and 61072032). Herschel Rabitz acknowledges support from the Army Research Office (ARO).
Abstract: We explore Hamiltonian reduction in pulse-controlled finite-dimensional quantum systems with near-degenerate eigenstates. A quantum system with a non-degenerate ground state and several near-degenerate excited states is controlled by a short pulse, and the objective is to maximize the collective population on all excited states when we treat them as one level. Two cases of such systems are shown to be equivalent to effective two-level systems. When the pulse is weak, simple relations between the original systems and the reduced systems are obtained. When the pulse is strong, these relations remain available for single-frequency pulses under the first-order approximation.
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (No. 2009AA04Z162); the National Natural Science Foundation of China (Nos. 60825302, 60934007, 61074061); the Program of Shanghai Subject Chief Scientist; the "Shu Guang" project supported by the Shanghai Municipal Education Commission and the Shanghai Education Development Foundation; and the Key Project of the Shanghai Science and Technology Commission, China (No. 10JC1403400).
Abstract: In this paper, a low-dimensional multiple-input and multiple-output (MIMO) model predictive control (MPC) configuration is presented for partial differential equation (PDE) unknown spatially-distributed systems (SDSs). First, dimension reduction with principal component analysis (PCA) is used to transform the high-dimensional spatio-temporal data into a low-dimensional time domain. The MPC strategy is proposed based on online-corrected low-dimensional models, where the state of the system at a previous time is used to correct the output of the low-dimensional models. Sufficient conditions for closed-loop stability are presented and proven. Simulations demonstrate the accuracy and efficiency of the proposed methodologies.
Funding: Project supported by the National Natural Science Foundation of China (Nos. 50936005, 51576182, and 11172296).
Abstract: An automated method to optimize the definition of the progress variables in flamelet-based dimension reduction is proposed, and the performance of these optimized progress variables in coupling the flamelets and the flow solver is presented. In the proposed method, the progress variables are defined according to the first two principal components (PCs) from the principal component analysis (PCA) or kernel-density-weighted PCA (KEDPCA) of a set of flamelets. These flamelets can then be mapped to the new progress variables instead of the mixture fraction/conventional progress variables, and a new chemistry look-up table is constructed. A priori validation of the optimized progress variables and the new chemistry table is implemented in a CH4/N2/air lift-off flame. The reconstruction of the lift-off flame shows that the optimized progress variables perform better than the conventional ones, especially in the high-temperature area. The coefficients of determination (R² statistics) show that the KEDPCA performs slightly better than the PCA except for some minor species. The main advantage of the KEDPCA is that it is less sensitive to the database. Meanwhile, the criteria for the optimization are proposed and discussed, and the constraint that the progress variables should evolve monotonically from fresh gas to burnt gas is analyzed in detail.
Funding: The National Natural Science Foundation of China (Grant Nos. 11534009 and 11974285) provided funding for this research.
Abstract: In the underwater waveguide, the conventional adaptive subspace detector (ASD), derived using the generalized likelihood ratio test (GLRT) theory, suffers a significant degradation in detection performance when the training data samples are deficient. This paper proposes a dimension-reduced approach to alleviate this problem. The dimension reduction includes two steps: first, the full array is divided into several subarrays; second, the test data and the training data at each subarray are transformed from the hydrophone domain into the modal domain. The modal-domain test data and training data at each subarray are then processed to formulate the subarray statistic using GLRT theory. The final test statistic of the dimension-reduced ASD (DR-ASD) is obtained by summing all the subarray statistics. After the dimension reduction, the unknown parameters can be estimated more accurately, so the DR-ASD achieves better detection performance than the ASD. In order to achieve optimal detection performance, the processing gain of the DR-ASD is derived so as to choose a proper number of subarrays. Simulation experiments verify the improved detection performance of the DR-ASD compared with the ASD.
Funding: Knowledge Innovation Project of the Chinese Academy of Sciences (KZCX2-307-05); Knowledge Innovation Project of the Institute of Geograp
Abstract: Sustainable Development Capacity (SDC) is a comprehensive concept. In order to obtain a relatively objective evaluation of it, indices covering many aspects are often used in assessment index systems. However, overlapping information among indices is a frequent source of deviation of the result from the truth. In this paper, 48 indices are selected as original variables in assessing the SDC of China's coastal areas. A mathematical dimension reduction treatment is used to eliminate the overlapping information in the 48 variables, and five new comprehensive indices are extracted that carry the essential information of the original indices. On the basis of the new index values, the ranking of the SDC of the 12 coastal areas is obtained, and five patterns of sustainable development regions are sorted. The leading factors of SDC in these patterns, and their relations, are then analyzed. The findings of the research are discussed at the end.
Funding: Supported by the National Basic Research Program of China (973 Program) (No. 2007CB311104).
Abstract: The performance of traditional Voice Activity Detection (VAD) algorithms declines sharply in low Signal-to-Noise Ratio (SNR) environments. In this paper, a feature-weighting likelihood method is proposed for noise-robust VAD. With this method, the contribution of dynamic features to the likelihood score can be increased, which consequently improves the noise robustness of VAD. A divergence-based dimension reduction method is also proposed to save computation: it removes the feature dimensions with smaller divergence values at the cost of slightly degraded performance. Experimental results on the Aurora II database show that detection performance in noisy environments can be remarkably improved by the proposed method when a model trained on clean data is used to detect speech endpoints. Using the weighted likelihood on the dimension-reduced features obtains comparable, or even better, performance than the original full-dimensional features.
Abstract: The precision of the kernel independent component analysis (KICA) algorithm depends on the type and parameter values of the kernel function. It is therefore of great significance to study how to choose KICA's kernel parameters so as to improve its feature dimension reduction results. In this paper, a fitness function is first established using the idea of the Fisher discriminant function. The global optimum of the fitness function is then searched by the particle swarm optimization (PSO) algorithm, and a multi-state information dimension reduction algorithm based on PSO-KICA is established. Finally, the ability of this algorithm to enhance the precision of feature dimension reduction is demonstrated.
Abstract: In our previous work, we gave an algorithm for segmenting a simplex in n-dimensional space into n+1 polyhedrons and provided a map F which maps the n-dimensional unit cube to these polyhedrons. In this paper, we prove that the map F is a one-to-one correspondence, at least in lower-dimensional spaces (n ≤ 3). Moreover, we propose approximating and interpolatory subdivision schemes, together with an estimate of the computational complexity, for triangular Bézier patches in two-dimensional space. Finally, we compare our schemes with Goldman's in computational complexity and speed.
Abstract: Purpose: This study sought to review the characteristics, strengths, weaknesses, variants, application areas and data types applied on the various Dimension Reduction (DR) techniques. Methodology: The most commonly used databases employed to search for the papers were ScienceDirect, Scopus, Google Scholar, IEEE Xplore and Mendeley. An integrative review was used for the study, in which 341 papers were reviewed. Results: The linear techniques considered were Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Singular Value Decomposition (SVD), Latent Semantic Analysis (LSA), Locality Preserving Projections (LPP), Independent Component Analysis (ICA) and Projection Pursuit (PP). The non-linear techniques, developed to work with applications that have complex non-linear structures, were Kernel Principal Component Analysis (KPCA), Multi-dimensional Scaling (MDS), Isomap, Locally Linear Embedding (LLE), Self-Organizing Map (SOM), Learning Vector Quantization (LVQ), t-Stochastic Neighbor Embedding (t-SNE) and Uniform Manifold Approximation and Projection (UMAP). DR techniques can further be categorized into supervised, unsupervised and, more recently, semi-supervised learning methods. The supervised versions are LDA and LVQ; all the other techniques are unsupervised. Supervised variants of PCA, LPP, KPCA and MDS have been developed, supervised and semi-supervised variants of PP and t-SNE have also been developed, and a semi-supervised version of LDA has been developed. Conclusion: The various application areas, strengths, weaknesses and variants of the DR techniques were explored, as were the different data types that have been applied on the various DR techniques.
Abstract: Finding a suitable space is one of the most critical problems for dimensionality reduction. Each space corresponds to a distance metric defined on the sample attributes, so finding a suitable space can be converted into developing an effective distance metric. Most existing dimensionality reduction methods use a fixed, pre-specified distance metric. However, this simple treatment has limitations in practice, because a pre-specified metric does not guarantee that the closest samples are the truly similar ones. In this paper, we present an adaptive metric learning method for dimensionality reduction, called AML. The adaptive metric learning model is developed by maximizing the difference between the distances of the data pairs in cannot-links and those in must-links. Different from many existing papers that use the traditional Euclidean distance, we use the more general l_{2,p}-norm distance to reduce sensitivity to noise and outliers; this incorporates additional flexibility and adaptability through the selection of appropriate p-values for different data sets. Moreover, since traditional metric learning methods usually project samples into a linear subspace, which is overly restrictive, we extend the basic linear method to a more powerful nonlinear kernel case that better captures the complex nonlinear relationships among data. To solve our objective, we derive an efficient iterative algorithm. Extensive dimensionality reduction experiments are provided to demonstrate the superiority of our method over state-of-the-art approaches.
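A toy version of the must-link/cannot-link objective makes the idea concrete. This sketch is an assumption-based illustration rather than the paper's exact formulation: the projection matrix W, the pair lists, and the plain difference-of-sums objective are all simplifications.

```python
import numpy as np

def aml_objective(W, X, must_links, cannot_links, p=1.0):
    """Toy sketch of the metric-learning idea described above: after
    projecting with W, reward large distances for cannot-link pairs and
    small distances for must-link pairs, using an l2 pair distance raised
    to the power p (the l_{2,p} flavour)."""
    def pair_dist(i, j):
        d = W.T @ (X[i] - X[j])             # projected difference vector
        return np.sqrt(np.sum(d * d)) ** p  # l2 distance to the power p
    cl = sum(pair_dist(i, j) for i, j in cannot_links)
    ml = sum(pair_dist(i, j) for i, j in must_links)
    return cl - ml  # larger is better: separate cannot-links, pull must-links

# two clusters along the first axis; links defined accordingly
X = np.array([[0.0, 0.3], [0.1, -0.2], [5.0, 0.1], [5.1, -0.3]])
must_links = [(0, 1), (2, 3)]
cannot_links = [(0, 2), (1, 3)]
W_good = np.array([[1.0], [0.0]])  # projects onto the separating axis
W_bad = np.array([[0.0], [1.0]])   # projects onto a noise axis
obj_good = aml_objective(W_good, X, must_links, cannot_links)
obj_bad = aml_objective(W_bad, X, must_links, cannot_links)
print(obj_good > obj_bad)  # True
```

The actual method maximizes such an objective over W (and a kernelized variant) with an iterative algorithm; here the two candidate projections are fixed only to show which one the objective prefers.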
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 51725804 and U1711264), the Research Fund for State Key Laboratories of the Ministry of Science and Technology of China (SLDRCE19-B-23), and the Shanghai Post-Doctoral Excellence Program (2022558).
Abstract: Stochastic fractional differential systems are important and useful in the mathematics, physics, and engineering fields. However, the determination of their probabilistic responses is difficult due to their non-Markovian property. The recently developed globally-evolving-based generalized density evolution equation (GE-GDEE), which is a unified partial differential equation (PDE) governing the transient probability density function (PDF) of a generic path-continuous process, including non-Markovian ones, provides a feasible tool to solve this problem. In this paper, the GE-GDEE for multi-dimensional linear fractional differential systems subject to Gaussian white noise is established. In particular, it is proved that in the GE-GDEE corresponding to the state quantities of interest, the intrinsic drift coefficient is a time-varying linear function and can be analytically determined. In this sense, an alternative low-dimensional equivalent linear integer-order differential system with exact closed-form coefficients can be constructed for the original high-dimensional linear fractional differential system such that their transient PDFs are identical. Specifically, for a multi-dimensional linear fractional differential system, if only one or two quantities are of interest, the GE-GDEE is only one- or two-dimensional, and the surrogate system is a one- or two-dimensional linear integer-order system. Several examples are studied to assess the merit of the proposed method. Though the closed-form intrinsic drift coefficient is presently available only for linear stochastic fractional differential systems, the findings in this paper provide a remarkable demonstration of the existence and eligibility of the GE-GDEE when the original high-dimensional system itself is non-Markovian, and provide insights for the physical-mechanism-informed determination of the intrinsic drift and diffusion coefficients of the GE-GDEE for more generic complex nonlinear systems.
Funding: Supported by the National Natural Science Foundation of China (No. 12071291), the Science and Technology Commission of Shanghai Municipality (No. 20JC1414300), and the Natural Science Foundation of Shanghai (No. 20ZR1436200).
Abstract: This work proposes a Tensor Train Random Projection (TTRP) method for dimension reduction, in which pairwise distances can be approximately preserved. Our TTRP is systematically constructed through a Tensor Train (TT) representation with TT-ranks equal to one. Based on the tensor train format, this random projection method can speed up the dimension reduction procedure for high-dimensional datasets and requires less storage, with little loss in accuracy, compared with existing methods. We provide a theoretical analysis of the bias and the variance of TTRP, which shows that this approach is an expected isometric projection with bounded variance, and we show that the scaled Rademacher variable is an optimal choice for generating the corresponding TT-cores. Detailed numerical experiments with synthetic datasets and the MNIST dataset are conducted to demonstrate the efficiency of TTRP.
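With all TT-ranks equal to one, the projection factorises mode by mode, which is easy to sketch: each tensor mode of the input is contracted with its own small Rademacher core, so the full projection matrix is never formed. This is an illustrative reading of the construction, not the paper's exact cores; the function names, the per-core scaling, and the toy dimensions are assumptions.

```python
import numpy as np

def ttrp_cores(dims_in, dims_out, rng):
    """One small core per tensor mode, with scaled Rademacher (+/-1) entries.
    With TT-ranks equal to one, the overall map is the Kronecker-style
    product of these cores (up to mode ordering)."""
    return [rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
            for m, n in zip(dims_out, dims_in)]

def ttrp_apply(cores, x, dims_in):
    """Apply the projection by contracting each tensor mode with its core."""
    T = x.reshape(dims_in)
    for k, G in enumerate(cores):  # contract mode k, then restore axis order
        T = np.moveaxis(np.tensordot(G, T, axes=([1], [k])), 0, k)
    return T.reshape(-1)

rng = np.random.default_rng(2)
dims_in, dims_out = (4, 4, 4), (2, 2, 2)   # 64-dim input -> 8-dim output
x = rng.normal(size=int(np.prod(dims_in)))
ratios = []
for _ in range(500):
    cores = ttrp_cores(dims_in, dims_out, rng)
    y = ttrp_apply(cores, x, dims_in)
    ratios.append(np.sum(y**2) / np.sum(x**2))
print(round(float(np.mean(ratios)), 2))  # average squared-norm ratio, close to 1
```

Averaging the squared-norm ratio over many random draws illustrates the expected-isometry property the abstract refers to.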
Funding: The National Key Technologies R&D Program during the 11th Five-Year Plan Period (No. 2006BAB15B01).
Abstract: A concise and informative representation of hyperspectral imagery is achieved via diffusion geometric coordinates derived from nonlinear dimension reduction maps known as diffusion maps. The huge-volume, high-dimensional spectral measurements are organized by an affinity graph in which each node connects only to its local neighbors and each edge represents local similarity information. By normalizing the affinity graph appropriately, the diffusion operator of the underlying hyperspectral imagery is well-defined, which means that a Markov random walk can be simulated on the imagery. Therefore, the diffusion geometric coordinates, derived from the eigenfunctions and associated eigenvalues of the diffusion operator, capture the intrinsic geometric information of the hyperspectral imagery well, giving more enhanced representation results than traditional linear methods such as those based on principal component analysis. For large-scale full-scene hyperspectral imagery, exploiting the backbone approach keeps the computational complexity and memory requirements acceptable. Experiments also show that selecting suitable symmetrization normalization techniques when forming the diffusion operator is important for hyperspectral imagery representation.
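The affinity-graph-to-coordinates pipeline described above can be sketched compactly. The bandwidth heuristic, the plain row normalisation (only one of the several normalisation choices the abstract alludes to), and all names here are illustrative assumptions, applied to random vectors rather than real hyperspectral pixels.

```python
import numpy as np

def diffusion_coords(X, n_coords=2, eps=None):
    """Minimal diffusion-maps sketch: build Gaussian affinities between
    samples, row-normalise them into a Markov transition matrix, and use
    the leading non-trivial eigenvectors (scaled by their eigenvalues)
    as the new coordinates."""
    D2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # pairwise sq. distances
    if eps is None:
        eps = np.median(D2)                  # simple bandwidth heuristic
    W = np.exp(-D2 / eps)                    # Gaussian affinity graph
    P = W / W.sum(axis=1, keepdims=True)     # random-walk transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    idx = order[1:1 + n_coords]              # skip the trivial eigenvalue 1
    return vecs.real[:, idx] * vals.real[idx]

rng = np.random.default_rng(3)
X = rng.normal(size=(80, 10))                # stand-in for spectral vectors
coords = diffusion_coords(X, n_coords=2)
print(coords.shape)  # (80, 2)
```

For real imagery one would use a sparse k-nearest-neighbor graph and a symmetrized normalisation rather than the dense matrices used in this toy version.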