Funding: Supported by the National Basic Research Program of China.
Abstract: With the extensive application of large-scale array antennas, the growing number of array elements increases the dimension of the received signals, making it difficult to meet the real-time requirements of direction-of-arrival (DOA) estimation because of the computational complexity of the algorithms. Traditional subspace algorithms require estimation of the covariance matrix, which is computationally expensive and prone to producing spurious peaks. To reduce the computational complexity of DOA estimation and improve its accuracy for large arrays, this paper proposes a DOA estimation method based on the Krylov subspace and the weighted l1-norm. The method uses multistage Wiener filter (MSWF) iterations to compute a basis of the Krylov subspace as an estimate of the signal subspace, applies a measurement matrix to reduce the dimensionality of the signal-subspace observations, constructs a weighting matrix, and combines sparse reconstruction to formulate a convex optimization problem based on the residual sum of squares and the weighted l1-norm, from which the target DOAs are solved. Simulation results show that the proposed method offers high resolution for large arrays, effectively suppresses spurious peaks, reduces computational complexity, and is robust in low signal-to-noise ratio (SNR) environments.
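As a rough illustration of the final optimization step described above, the following Python sketch recovers DOAs over an angular grid by minimizing a residual-sum-of-squares term plus a weighted l1-norm. It is a minimal single-snapshot example, not the paper's implementation: the array size, grid, regularization weight lam, and the uniform weights w are placeholder assumptions, whereas the paper derives the weights from the MSWF-estimated signal subspace and works with reduced-dimension subspace observations.

```python
import numpy as np
import cvxpy as cp

# Minimal sketch (not the paper's implementation): single-snapshot DOA recovery
# on a uniform linear array by minimizing residual sum of squares + weighted l1-norm.
M = 32                                                   # number of array elements (assumption)
grid = np.deg2rad(np.arange(-90.0, 90.5, 0.5))           # angular grid (assumption)
A = np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(grid)))   # half-wavelength ULA dictionary

rng = np.random.default_rng(0)
true_doas = np.deg2rad([-20.0, 15.0])
y = sum(np.exp(1j * np.pi * np.arange(M) * np.sin(d)) for d in true_doas)
y = y + 0.05 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))   # noisy snapshot

w = np.ones(grid.size)        # placeholder weights; the paper builds them from the signal subspace
lam = 0.5                     # regularization weight (assumption)
s = cp.Variable(grid.size, complex=True)
prob = cp.Problem(cp.Minimize(cp.sum_squares(A @ s - y) + lam * cp.norm1(cp.multiply(w, s))))
prob.solve()

peaks = np.rad2deg(grid[np.argsort(np.abs(s.value))[-2:]])
print(sorted(peaks))          # should lie near the true DOAs of -20 and 15 degrees
```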
Abstract: We present a new algorithm for manifold learning and nonlinear dimensionality reduction. Based on a set of unorganized data points sampled with noise from a parameterized manifold, the local geometry of the manifold is learned by constructing an approximation for the tangent space at each point, and those tangent spaces are then aligned to give the global coordinates of the data points with respect to the underlying manifold. We also present an error analysis of our algorithm showing that reconstruction errors can be quite small in some cases. We illustrate our algorithm using curves and surfaces both in 2D/3D Euclidean spaces and higher dimensional Euclidean spaces. We also address several theoretical and algorithmic issues for further research and improvements.
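The two steps described here (fitting a local tangent space at each point, then aligning the tangent spaces into global coordinates) correspond to what is commonly called local tangent space alignment (LTSA). The sketch below uses scikit-learn's LTSA variant on a noisy swiss-roll sample purely as an illustration of the idea; it is not the authors' own code, and the neighborhood size is an arbitrary choice.

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Noisy samples from a 2-D manifold embedded in 3-D space.
X, t = make_swiss_roll(n_samples=1500, noise=0.05, random_state=0)

# LTSA: a local tangent space is estimated around each point's neighborhood and
# the tangent spaces are then aligned to produce global 2-D coordinates.
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="ltsa", random_state=0)
Y = ltsa.fit_transform(X)
print(Y.shape)   # (1500, 2): global coordinates with respect to the underlying manifold
```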
Funding: Supported by the National Natural Science Foundation of China (No. 61502475) and the Importation and Development of High-Caliber Talents Project of the Beijing Municipal Institutions (No. CIT&TCD201504039).
Abstract: The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the differences along sparse and noisy dimensions account for a large proportion of the similarity, distorting the dissimilarities between results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two to three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which is suitable for similarity analysis after dimensionality reduction.
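A small Python sketch of the interval-mapping idea may help: each dimension's range is split into equal intervals and only components falling in the same or an adjacent interval contribute to the similarity, which stays in [0, 1]. The per-component score, the number of intervals, and the normalization below are illustrative assumptions rather than the paper's exact formulas.

```python
import numpy as np

def lattice_similarity(x, y, lo, hi, bins=10):
    """Similarity in [0, 1]; only components in the same or adjacent intervals contribute."""
    ix = np.clip(((x - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)   # interval index of x
    iy = np.clip(((y - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)   # interval index of y
    close = np.abs(ix - iy) <= 1                                           # same or adjacent interval
    width = (hi - lo) / bins
    per_dim = np.clip(1.0 - np.abs(x - y) / (2.0 * width), 0.0, 1.0)       # 1 when identical
    return float(np.mean(np.where(close, per_dim, 0.0)))

rng = np.random.default_rng(0)
lo, hi = np.zeros(50), np.ones(50)           # per-dimension data ranges
a, b = rng.random(50), rng.random(50)
print(lattice_similarity(a, b, lo, hi))      # somewhere in (0, 1)
print(lattice_similarity(a, a, lo, hi))      # 1.0 for identical vectors
```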
Abstract: Clustering high-dimensional data is challenging, as data dimensionality increases the distance between data points, resulting in sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams. Data streams are not only high-dimensional, but also unbounded and evolving. This necessitates the development of subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed to the literature review on data stream clustering, there is currently no specific review on subspace clustering algorithms in high-dimensional data streams. Therefore, this article aims to systematically review the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodological approach and includes 18 articles for the final analysis. The analysis focused on two research questions related to the general clustering process and dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it is able to distinguish true clusters and outliers using projected microclusters. Most algorithms implicitly adapt to the evolving nature of the data stream by using a time-fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 10926082), the Natural Science Foundation of Anhui Province of China (Grant No. KJ2010A128), and the Fund for Youth of Anhui Normal University, China (Grant No. 2009xqn55).
Abstract: The invariant subspace method is used to construct the explicit solution of a nonlinear evolution equation. The second-order nonlinear differential operators that possess invariant subspaces of submaximal dimension are described. There are second-order nonlinear differential operators, including cubic operators and quadratic operators, which preserve an invariant subspace of submaximal dimension. A full description of the second-order cubic operators with constant coefficients admitting a four-dimensional invariant subspace is given. It is shown that the maximal dimension of invariant subspaces preserved by a second-order cubic operator is four. Several examples are given for the construction of exact solutions to nonlinear evolution equations with cubic nonlinearities. These solutions blow up in finite time.
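For readers unfamiliar with the invariant subspace method, here is a standard worked example with a quadratic operator (not one of the cubic operators classified in the paper): the operator F[u] = (u u_x)_x maps the subspace W_2 = span{1, x^2} into itself, so substituting u(x, t) = C_1(t) + C_2(t) x^2 into u_t = F[u] reduces the PDE to a pair of ODEs.

```latex
% F[u] = (u u_x)_x = u u_{xx} + u_x^2 applied to u = C_1 + C_2 x^2 gives
% 2 C_1 C_2 + 6 C_2^2 x^2, which again lies in W_2 = span{1, x^2}.
\[
  u(x,t) = C_1(t) + C_2(t)\,x^2, \qquad u_t = (u\,u_x)_x
  \;\Longrightarrow\;
  \dot C_1 = 2\,C_1 C_2, \qquad \dot C_2 = 6\,C_2^{2}.
\]
% The second ODE gives C_2(t) = C_2(0) / (1 - 6 C_2(0) t), so the solution
% blows up in finite time whenever C_2(0) > 0, mirroring the blow-up behaviour
% mentioned in the abstract.
```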
Funding: Supported by the Key Research Base for Humanities and Social Sciences of Zhejiang Provincial High Education Talents (Statistics of Zhejiang Gongshang University) and the Natural Science Foundation of Shaanxi Province (2005A08, 2006A14).
Abstract: Let B^H = {B^H(t), t ∈ R_+^N} be a real-valued (N, d) fractional Brownian sheet with Hurst index H = (H1, …, HN). The characteristics of the polar functions for B^H are discussed. The relationship between the class of continuous functions satisfying a Lipschitz condition and the class of polar functions of B^H is obtained. The Hausdorff dimension of the fixed points and an inequality for the Kolmogorov entropy index of B^H are presented. Furthermore, it is proved that any two independent fractional Brownian sheets are nonintersecting under some conditions. A problem proposed by Le Gall about the existence of non-polar continuous functions satisfying the Hölder condition is also solved.
Abstract: We report our recent work on a second-order Krylov subspace and the corresponding second-order Arnoldi procedure for generating its orthonormal basis. The second-order Krylov subspace is spanned by a sequence of vectors defined via a second-order linear homogeneous recurrence relation with coefficient matrices A and B and an initial vector u. It generalizes the well-known Krylov subspace K_n(A; v), which is spanned by a sequence of vectors defined via a first-order linear homogeneous recurrence relation with a single coefficient matrix A and an initial vector v. The applications are shown for the solution of quadratic eigenvalue problems and dimension reduction of second-order dynamical systems. The new approaches preserve essential structures and properties of the quadratic eigenvalue problem and second-order system, and demonstrate superior numerical results over the common approaches based on linearization of these second-order problems.
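The defining recurrence is easy to reproduce directly. The Python sketch below builds the second-order Krylov sequence r_0 = u, r_1 = A u, r_j = A r_{j-1} + B r_{j-2} and orthonormalizes it with a plain QR factorization; this only illustrates the subspace itself, whereas the second-order Arnoldi procedure in the paper generates the orthonormal basis more carefully and more cheaply. The matrix sizes and random test data are arbitrary.

```python
import numpy as np

def second_order_krylov_basis(A, B, u, n):
    """Orthonormal basis of span{r_0, ..., r_{n-1}} with r_0 = u, r_1 = A u,
    r_j = A r_{j-1} + B r_{j-2}  (plain QR, not the SOAR procedure)."""
    vecs = [u, A @ u]
    for _ in range(2, n):
        vecs.append(A @ vecs[-1] + B @ vecs[-2])
    Q, _ = np.linalg.qr(np.column_stack(vecs[:n]))
    return Q

rng = np.random.default_rng(0)
N = 50
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, N))
u = rng.standard_normal(N)
Q = second_order_krylov_basis(A, B, u, n=8)
print(Q.shape, np.allclose(Q.T @ Q, np.eye(Q.shape[1])))   # (50, 8) True
```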
Funding: Supported by the National Natural Science Foundation of China (11971432), the Natural Science Foundation of Zhejiang Province (LY21G010003), the First Class Discipline of Zhejiang-A (Zhejiang Gongshang University - Statistics), and the Natural Science Foundation of Chuzhou University (zrjz2019012).
Abstract: Let X^(H) = {X^(H)(s), s ∈ R^(N_1)} and X^(K) = {X^(K)(t), t ∈ R^(N_2)} be two independent time-space anisotropic random fields with indices H ∈ (0, 1)^(N_1) and K ∈ (0, 1)^(N_2), which may not possess Gaussianity, and which take values in R^d with a space metric τ. Under certain general conditions, with density functions defined on a bounded interval, we study problems regarding the hitting probabilities of time-space anisotropic random fields and the existence of intersections of the sample paths of the random fields X^(H) and X^(K). More generally, for any Borel set F ⊂ R^d, the conditions required for F to contain intersection points of X^(H) and X^(K) are established. As an application, we give an example of an anisotropic non-Gaussian random field to show that these results are applicable to the solutions of non-linear systems of stochastic fractional heat equations.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 61303074, 61309013), the National Key Basic Research and Development Program ("973") of China (No. 2012CB315900), and the Programs for Science and Technology Development of Henan Province (Nos. 12210231003, 13210231002).
Abstract: To address the issue that traditional clustering methods are not appropriate for high-dimensional data, a cuckoo search fuzzy-weighting algorithm for subspace clustering is presented on the basis of an existing soft subspace clustering algorithm. In the proposed algorithm, a novel objective function is first designed by considering the fuzzy-weighted within-cluster compactness and the between-cluster separation, and by loosening the constraints on the dimension weight matrix. Then gradual membership and an improved cuckoo search, a global search strategy, are introduced to optimize the objective function and search for subspace clusters, giving novel learning rules for clustering. Finally, the performance of the proposed algorithm in clustering analysis of various low- and high-dimensional datasets is experimentally compared with that of several competitive subspace clustering algorithms. The experimental studies demonstrate that the proposed algorithm obtains better performance than most of the existing soft subspace clustering algorithms.
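To make the two competing terms concrete, one representative soft subspace clustering objective of this type is shown below; it is an illustration only, not necessarily the exact function designed in the paper. U = [u_ik] are fuzzy memberships, W = [w_kj] are per-cluster dimension weights, V = [v_kj] are cluster centers, o_j is the global centroid, and m, τ, η are user parameters.

```latex
% Fuzzy-weighted within-cluster compactness minus between-cluster separation
% (a representative form; the paper loosens the constraint on the weight matrix W).
\[
  J(U,W,V)=\sum_{k=1}^{K}\sum_{i=1}^{n} u_{ik}^{\,m}\sum_{j=1}^{d} w_{kj}^{\,\tau}\,(x_{ij}-v_{kj})^{2}
  \;-\;\eta\sum_{k=1}^{K}\sum_{j=1}^{d} w_{kj}^{\,\tau}\,(v_{kj}-o_{j})^{2},
\]
\[
  \text{subject to}\quad \sum_{k=1}^{K} u_{ik}=1,\qquad \sum_{j=1}^{d} w_{kj}=1,\qquad
  u_{ik},\,w_{kj}\in[0,1].
\]
```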
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11534009, 11974285).
Abstract: In the underwater waveguide, the conventional adaptive subspace detector (ASD), derived by using the generalized likelihood ratio test (GLRT) theory, suffers from a significant degradation in detection performance when training data samples are insufficient. This paper proposes a dimension-reduced approach to alleviate this problem. The dimension reduction includes two steps: first, the full array is divided into several subarrays; second, the test data and the training data at each subarray are transformed from the hydrophone domain into the modal domain. The modal-domain test data and training data at each subarray are then processed to formulate the subarray statistic by using the GLRT theory. The final test statistic of the dimension-reduced ASD (DR-ASD) is obtained by summing all the subarray statistics. After the dimension reduction, the unknown parameters can be estimated more accurately, so the DR-ASD achieves better detection performance than the ASD. In order to achieve the optimal detection performance, the processing gain of the DR-ASD is derived so that a proper number of subarrays can be chosen. Simulation experiments verify the improved detection performance of the DR-ASD compared with the ASD.
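The overall flow can be sketched structurally in a few lines of Python: split the array into subarrays, project each subarray's test and training data into a lower-dimensional (e.g. modal) domain, form a per-subarray adaptive subspace detection statistic, and sum the statistics. The projection T and the ACE-style statistic used here are illustrative assumptions, not the paper's exact derivation.

```python
import numpy as np

def subarray_statistic(x, X_train, H):
    """One commonly used adaptive subspace detection statistic (an assumption here)."""
    R = X_train @ X_train.conj().T / X_train.shape[1]            # sample covariance
    Ri = np.linalg.inv(R)
    P = Ri @ H @ np.linalg.inv(H.conj().T @ Ri @ H) @ H.conj().T @ Ri
    return np.real(x.conj().T @ P @ x) / np.real(x.conj().T @ Ri @ x)

def dr_statistic(x_sub, train_sub, H_sub, T_sub):
    """Sum the per-subarray statistics computed on dimension-reduced (modal-domain) data."""
    return sum(subarray_statistic(T @ x, T @ X, T @ H)
               for x, X, H, T in zip(x_sub, train_sub, H_sub, T_sub))

rng = np.random.default_rng(0)
m, L, n_sub = 8, 64, 4                        # subarray size, training snapshots, subarrays
T = np.eye(m)[:4]                             # placeholder "modal" projection keeping 4 modes
H = rng.standard_normal((m, 2)) + 1j * rng.standard_normal((m, 2))   # signal subspace basis
x_sub = [rng.standard_normal(m) + 1j * rng.standard_normal(m) for _ in range(n_sub)]
train_sub = [rng.standard_normal((m, L)) + 1j * rng.standard_normal((m, L)) for _ in range(n_sub)]
print(dr_statistic(x_sub, train_sub, [H] * n_sub, [T] * n_sub))
```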
Abstract: We introduce and develop a novel approach to outlier detection based on an adaptation of random subspace learning. Our proposed method handles both high-dimension low-sample-size and traditional low-dimensional high-sample-size datasets. Essentially, we avoid the computational bottleneck of techniques like the Minimum Covariance Determinant (MCD) by computing the needed determinants and associated measures in much lower-dimensional subspaces. Both the theoretical and computational development of our approach reveal that it is computationally more efficient than the regularized methods in the high-dimensional low-sample-size setting, and it often competes favorably with existing methods as far as the percentage of correctly detected outliers is concerned.
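The key computational trick (evaluating robust covariance determinants and distances only in small random feature subspaces) can be sketched as follows; the subspace size, the number of draws, and the score aggregation are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np
from sklearn.covariance import MinCovDet

def random_subspace_outlier_scores(X, n_subspaces=50, subspace_dim=3, seed=0):
    """Average robust Mahalanobis distances computed by MCD in random feature subspaces."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    scores = np.zeros(n)
    for _ in range(n_subspaces):
        feats = rng.choice(d, size=min(subspace_dim, d), replace=False)   # random subspace
        mcd = MinCovDet(random_state=0).fit(X[:, feats])                  # cheap in low dimension
        scores += mcd.mahalanobis(X[:, feats])                            # robust distances
    return scores / n_subspaces

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 40))
X[:5] += 6.0                                          # five planted outliers
scores = random_subspace_outlier_scores(X)
print(np.argsort(scores)[-5:])                        # indices of the top-scoring points
```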
Abstract: The main aim of data stream subspace clustering is to find clusters in subspaces accurately and within a reasonable time. The existing data stream subspace clustering algorithms are greatly influenced by parameters. Due to the flaws of traditional data stream subspace clustering algorithms, we propose SCRP, a new data stream subspace clustering algorithm. SCRP has the advantages of fast clustering and being insensitive to outliers. When the data stream changes, the changes are recorded by a data structure named Region-tree, and the corresponding statistics are updated. Furthermore, SCRP can adjust the clustering results in time when the data stream changes. According to the experiments on real datasets and synthetic datasets, SCRP is superior to the existing data stream subspace clustering algorithms in both clustering precision and clustering speed, and it scales well with the number of clusters and dimensions.
Abstract: In this paper, we introduce the definition of an L-fuzzy vector subspace and define its dimension by an L-fuzzy natural number. For a finite-dimensional L-fuzzy vector subspace, we prove that the equality holds without any restrictive conditions. At the same time, we deduce that the formula holds.
Funding: This work was supported in part by the Special Funds for Major State Basic Research Projects, the National Natural Science Foundation of China (Grants Nos. 60372033 and 9901936), and NSF grants CCR-9901986 and DMS-0311800.
Abstract: We present our recent work on both linear and nonlinear data reduction methods and algorithms: for the linear case we discuss results on the structure analysis of the SVD of column-partitioned matrices and sparse low-rank approximation; for the nonlinear case we investigate methods for nonlinear dimensionality reduction and manifold learning. The problems we address have attracted a great deal of interest in data mining and machine learning.
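For the linear case, the workhorse is the truncated SVD: a rank-k truncated SVD yields both the best rank-k approximation of a data matrix and a k-dimensional representation of its columns. The short sketch below is a generic illustration of that reduction step (sizes and rank are arbitrary), not the column-partitioned structure analysis studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 300))   # roughly rank-20 data
A += 0.01 * rng.standard_normal(A.shape)                              # small noise

k = 20
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = U[:, :k] * s[:k] @ Vt[:k]              # best rank-k approximation (Eckart-Young)
Z = np.diag(s[:k]) @ Vt[:k]                  # k-dimensional coordinates of the columns
print(np.linalg.norm(A - A_k) / np.linalg.norm(A))   # small relative error
```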
Funding: Supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. Y6100663) and the National Science Foundation of the US (Grant No. DMS-1006903).
Abstract: Let X^H = {X^H(s), s ∈ R^(N1)} and X^K = {X^K(t), t ∈ R^(N2)} be two independent anisotropic Gaussian random fields with values in R^d, with indices H = (H1, …, HN1) ∈ (0, 1)^(N1) and K = (K1, …, KN2) ∈ (0, 1)^(N2), respectively. The existence of intersections of the sample paths of X^H and X^K is studied. More generally, let E1 ⊂ R^(N1), E2 ⊂ R^(N2) and F ⊂ R^d be Borel sets. A necessary condition and a sufficient condition for P{(X^H(E1) ∩ X^K(E2)) ∩ F ≠ ∅} > 0, in terms of the Bessel-Riesz type capacity and the Hausdorff measure of E1 × E2 × F in the metric space (R^(N1+N2+d), ρ), are proved, where ρ is a metric defined in terms of H and K. These results are applicable to solutions of stochastic heat equations driven by space-time Gaussian noise and fractional Brownian sheets.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11171366 and 61170257).
Abstract: The t-wise intersections of constant-weight codes are computed. Based on this result, the t-wise intersections of relative two-weight codes are determined by using the finite geometric structure of relative two-weight codes.
Funding: Supported by the National Basic Research Program of China (Grant No. 2006CB303103), the National Natural Science Foundation of China (Grant Nos. 60873011, 60802026, 60773219, 60773021), and the High Technology Program (Grant No. 2007AA01Z192).
Abstract: The query space of a similarity query is usually narrowed down by pruning inactive query subspaces, which contain no query results, and keeping active query subspaces, which may contain objects corresponding to the request. However, some active query subspaces may contain no query results at all; these are called false active query subspaces. It is obvious that the performance of query processing degrades in the presence of false active query subspaces. Our experiments show that this problem becomes serious when the data are high-dimensional, and the number of accesses to false active subspaces increases as the dimensionality increases. In order to solve this problem, this paper proposes a space mapping approach to reducing such unnecessary accesses. A given query space can be refined by filtering within its mapped space. To do so, a mapping strategy called maxgap is proposed to improve the efficiency of the refinement processing. Based on the mapping strategy, an index structure called MS-tree and algorithms for query processing are presented in this paper. Finally, the performance of the MS-tree is compared with that of other competitors in terms of range queries on a real data set.
Funding: The present work is supported by the National Key R&D Program (No. 2020YFB2007700), the National Natural Science Foundation of China (Nos. 11790282, 11802184, 11902205, 12002221, 12032017), the S&T Program of Hebei (No. 20310803D), and the Natural Science Foundation of Hebei Province (No. A2020210028).
Abstract: Sparse subspace clustering (SSC) is a spectral clustering methodology. Since high-dimensional data are often dispersed over the union of many low-dimensional subspaces, their representation in a suitable dictionary is sparse. Therefore, SSC is an effective technology for diagnosing mechanical system faults. Its main purpose is to create a representation model that can reveal the real subspace structure of high-dimensional data, construct a similarity matrix by using the sparse representation coefficients of the high-dimensional data, and then cluster the obtained representation coefficients and similarity matrix in subspace. However, the design of the SSC algorithm is based on a global expression in which each data point is represented by all possible cluster data points. This leads to nonzero terms in the nondiagonal blocks of the similarity matrix, which reduces its recognition performance. To improve the clustering ability of SSC for rolling bearings and the robustness of the algorithm in the presence of a large amount of background noise, a simultaneous dimensionality-reduction subspace clustering technology is provided in this work. Through feature extraction of the envelope signal, the dimension of the feature matrix is reduced by singular value decomposition, and the Euclidean distance between samples is replaced by the correlation distance. A dimension-reduction graph-based SSC technology is established. Simulation and bearing data from Western Reserve University show that the proposed algorithm can improve the accuracy and compactness of clustering.
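For reference, the basic SSC pipeline this abstract builds on (sparse self-representation of each sample, a symmetric similarity matrix from the coefficients, then spectral clustering) can be sketched as follows. The lasso penalty and the synthetic two-subspace data are assumptions; the paper additionally reduces the feature matrix by SVD of envelope features and replaces the Euclidean distance with the correlation distance.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """Plain SSC: sparse self-representation -> similarity matrix -> spectral clustering."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        mask = np.arange(n) != i
        lasso = Lasso(alpha=alpha, max_iter=5000).fit(X[mask].T, X[i])   # x_i ~ sum_j c_j x_j
        C[i, mask] = lasso.coef_
    W = np.abs(C) + np.abs(C).T                                          # symmetric similarity
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(W)

rng = np.random.default_rng(0)
# Two 2-dimensional subspaces of R^20 with 30 samples each.
B1, B2 = rng.standard_normal((20, 2)), rng.standard_normal((20, 2))
X = np.vstack([(B1 @ rng.standard_normal((2, 30))).T, (B2 @ rng.standard_normal((2, 30))).T])
print(ssc(X, n_clusters=2))
```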
Funding: This work was supported by the National Basic Research Program of China (No. 2007CB307100) and the National Natural Science Foundation of China (Grant No. 60432010).
Abstract: Many recently proposed subspace clustering methods suffer from two severe problems. First, the algorithms typically scale exponentially with the data dimensionality or the subspace dimensionality of clusters. Second, the clustering results are often sensitive to input parameters. In this paper, a fast algorithm for subspace clustering using attribute clustering is proposed to overcome these limitations. This algorithm first filters out redundant attributes by computing the Gini coefficient. To evaluate the correlation of every two non-redundant attributes, the relation matrix of non-redundant attributes is constructed based on the relation function of two-dimensional united Gini coefficients. After applying an overlapping clustering algorithm on the relation matrix, the candidate set of all interesting subspaces is obtained. Finally, all subspace clusters can be derived by clustering on the interesting subspaces. Experiments on both synthetic and real datasets show that the new algorithm not only achieves a significant gain in runtime and quality when finding subspace clusters, but is also insensitive to input parameters.
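As a small illustration of the attribute-filtering step, the sketch below scores each attribute with a Gini coefficient of its discretized value distribution and ranks the attributes; the binning and the use of the Gini impurity here are illustrative assumptions, not the paper's exact definition of the (united) Gini coefficient.

```python
import numpy as np

def gini_impurity(column, bins=10):
    """Gini impurity of an attribute's binned value distribution (0 = all mass in one bin)."""
    counts, _ = np.histogram(column, bins=bins)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

rng = np.random.default_rng(0)
# Three concentrated (more informative) attributes and five near-uniform ones.
X = np.hstack([rng.normal(0, 0.2, (500, 3)), rng.uniform(-3, 3, (500, 5))])
ginis = np.array([gini_impurity(X[:, j]) for j in range(X.shape[1])])
print(np.round(ginis, 2))
print(np.argsort(ginis))    # attributes ranked by how concentrated their values are
```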
Abstract: Analysis of a four-dimensional displacement vector on the fabric of space-time, in the special or general case, into two four-dimensional vectors, according to specific conditions, leads to the splitting of the total fabric of space-time into a positive subspace-time that represents the space of causality and a negative subspace-time that represents a space without causality. Thus, in the special case, we have new transformations for the coordinates of space and time, modified from the Lorentz transformations and specific to each subspace, where the contraction of length disappears and the speed of light is no longer a universal constant. In the general case, we have new types of metric tensor, one for positive subspace-time and the other for negative subspace-time. We also find that the speed of the photon decreases in positive subspace-time until it reaches zero and increases in negative subspace-time until it reaches the speed of light when the photon reaches the Schwarzschild radius.