The estimation of covariance matrices is very important in many fields, such as statistics. In real applications, data are frequently influenced by high dimensions and noise. However, most relevant studies are based on complete data. This paper studies the optimal estimation of high-dimensional covariance matrices based on missing and noisy samples under the norm. First, the model with sub-Gaussian additive noise is presented. The generalized sample covariance is then modified to define a hard thresholding estimator, and the minimax upper bound is derived. After that, the minimax lower bound is derived, and it is concluded that the estimator presented in this article is rate-optimal. Finally, numerical simulation analysis is performed. The results show that for missing samples with sub-Gaussian noise, if the true covariance matrix is sparse, the hard thresholding estimator outperforms the traditional estimation method.
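As a rough illustration of the idea, the sketch below combines an inverse-probability-weighted ("generalized") sample covariance for data missing completely at random with entrywise hard thresholding. The function name, the zero-mean assumption, and the known observation probability `obs_prob` are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def hard_threshold_cov(X, obs_prob, lam):
    """Hard-thresholding covariance estimator for zero-mean data whose
    entries are missing completely at random with known observation
    probability obs_prob. Missing entries are marked with np.nan."""
    n, p = X.shape
    mask = ~np.isnan(X)
    Z = np.where(mask, X, 0.0)
    # generalized (inverse-probability-weighted) sample covariance:
    # an off-diagonal entry requires two independent observation events
    S = (Z.T @ Z) / (n * obs_prob ** 2)
    np.fill_diagonal(S, np.diag(Z.T @ Z) / (n * obs_prob))
    # zero out small off-diagonal entries; the diagonal is not thresholded
    T = np.where(np.abs(S) >= lam, S, 0.0)
    np.fill_diagonal(T, np.diag(S))
    return T
```

Under sparsity of the true covariance matrix, thresholding removes the many near-zero noisy entries that the raw weighted covariance would keep.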
Clustering high-dimensional data is challenging, as data dimensionality increases the distance between data points, resulting in sparse regions that degrade clustering performance. Subspace clustering is a common approach for processing high-dimensional data by finding relevant features for each cluster in the data space. Subspace clustering methods extend traditional clustering to account for the constraints imposed by data streams. Data streams are not only high-dimensional, but also unbounded and evolving. This necessitates the development of subspace clustering algorithms that can handle high dimensionality and adapt to the unique characteristics of data streams. Although many articles have contributed to the literature review on data stream clustering, there is currently no specific review on subspace clustering algorithms in high-dimensional data streams. Therefore, this article aims to systematically review the existing literature on subspace clustering of data streams in high-dimensional streaming environments. The review follows a systematic methodological approach and includes 18 articles for the final analysis. The analysis focused on two research questions related to the general clustering process and dealing with the unbounded and evolving characteristics of data streams. The main findings relate to six elements: clustering process, cluster search, subspace search, synopsis structure, cluster maintenance, and evaluation measures. Most algorithms use a two-phase clustering approach consisting of an initialization stage, a refinement stage, a cluster maintenance stage, and a final clustering stage. The density-based top-down subspace clustering approach is more widely used than the others because it is able to distinguish true clusters and outliers using projected microclusters. Most algorithms implicitly adapt to the evolving nature of the data stream by using a time fading function that is sensitive to outliers. Future work can focus on the clustering framework, parameter optimization, subspace search techniques, memory-efficient synopsis structures, explicit cluster change detection, and intrinsic performance metrics. This article can serve as a guide for researchers interested in high-dimensional subspace clustering methods for data streams.
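A time fading function of the kind most stream clustering algorithms use can be sketched as follows. The half-life form 2^(−λ·age) and the parameter name `decay_lambda` are common conventions assumed here, not taken from a specific algorithm in the review.

```python
def fading_weight(current_time, creation_time, decay_lambda=0.01):
    """Exponential time fading: weight = 2^(-lambda * age), so recent
    points dominate and stale micro-clusters gradually vanish."""
    age = current_time - creation_time
    return 2.0 ** (-decay_lambda * age)

# a micro-cluster last updated 100 time units ago, with lambda = 0.01,
# contributes half the weight of a fresh one
w = fading_weight(200, 100)
```

Larger λ makes the algorithm forget faster, which helps it track concept drift but makes it more sensitive to outliers, the trade-off noted above.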
In ultra-high-dimensional data, it is common for the response variable to be multi-classified. Therefore, this paper proposes a model-free screening method for variables whose response variable is multi-classified, introducing the Jensen-Shannon divergence to measure the importance of covariates. The idea of the method is to calculate the Jensen-Shannon divergence between the conditional probability distribution of the covariates given the response variable and the unconditional probability distribution of the covariates, and then use the probabilities of the response variables as weights to calculate the weighted Jensen-Shannon divergence, where a larger weighted Jensen-Shannon divergence means that the covariates are more important. Additionally, we also investigated an adapted version of the method, which measures the relationship between the covariates and the response variable using the weighted Jensen-Shannon divergence adjusted by a logarithmic factor of the number of categories when the number of categories in each covariate varies. Then, through both theoretical and simulation experiments, it was demonstrated that the proposed methods have sure screening and ranking consistency properties. Finally, the results from simulation and real-dataset experiments show that in feature screening, the proposed methods are robust in performance and faster in computational speed compared with an existing method.
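The weighted Jensen-Shannon screening score described above can be sketched for a discrete covariate as follows; the function names are illustrative and natural logarithms are assumed.

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions
    given as probability vectors over the same support."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)

    def kl(a, b):
        nz = a > 0
        return float(np.sum(a[nz] * np.log(a[nz] / b[nz])))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def weighted_js_score(x, y):
    """Screening score of one discrete covariate x against a multi-class
    response y: the P(Y=r)-weighted JS divergence between the conditional
    distribution P(X | Y=r) and the marginal P(X)."""
    x, y = np.asarray(x), np.asarray(y)
    x_vals = np.unique(x)
    marginal = np.array([(x == v).mean() for v in x_vals])
    score = 0.0
    for r in np.unique(y):
        weight = (y == r).mean()
        cond = np.array([(x[y == r] == v).mean() for v in x_vals])
        score += weight * js_divergence(cond, marginal)
    return score
```

A covariate independent of the response has conditional distributions equal to the marginal and scores near zero; informative covariates score higher and are retained.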
An algorithm, Clustering Algorithm Based On Sparse Feature Vector (CABOSFV), was proposed for the high-dimensional clustering of binary sparse data. This algorithm compresses the data effectively by using a tool, the 'Sparse Feature Vector', thus reducing the data scale enormously, and can get the clustering result with only one data scan. Both theoretical analysis and empirical tests showed that CABOSFV is of low computational complexity. The algorithm finds clusters in high-dimensional large datasets efficiently and handles noise effectively.
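A minimal sketch of one-scan clustering in the spirit of CABOSFV is given below. The particular set-based dissimilarity, (|union| − |intersection|)/(n·|intersection|), is an assumption for illustration only and is not necessarily the paper's exact Sparse Feature Vector definition.

```python
def sfd(members):
    """Dissimilarity of a set of binary objects, each given as the set
    of its 1-bit positions; assumed form:
    (|union| - |intersection|) / (n * |intersection|)."""
    n = len(members)
    union = set().union(*members)
    inter = set.intersection(*members)
    if not inter:
        return float('inf')
    return (len(union) - len(inter)) / (n * len(inter))

def cabosfv_like(objects, threshold):
    """One-scan clustering: each object joins the first cluster whose
    dissimilarity stays within the threshold after adding it,
    otherwise it starts a new cluster."""
    clusters = []
    for obj in objects:
        for cluster in clusters:
            if sfd(cluster + [obj]) <= threshold:
                cluster.append(obj)
                break
        else:
            clusters.append([obj])
    return clusters
```

Because each object is examined once and clusters are summarized by small sets, the scan is cheap even for large sparse binary datasets.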
Information analysis of high-dimensional data was carried out through similarity measure application. High-dimensional data were considered as the atypical structure. Additionally, overlapped and non-overlapped data were introduced, and similarity measure analysis was illustrated and compared with the conventional similarity measure. As a result, overlapped data comparison was able to present similarity with the conventional similarity measure, while non-overlapped data similarity analysis provided the clue to solving the similarity of high-dimensional data. The high-dimensional data analysis was designed with consideration of neighborhood information. Conservative and strict solutions were proposed. The proposed similarity measure was applied to express financial fraud among multi-dimensional datasets. In an illustrative example, financial fraud similarity with respect to age, gender, qualification, and job was presented, and with the proposed similarity measure, high-dimensional personal data were evaluated for how similar they are to the financial fraud. Calculation results show that the actual fraud has a rather high similarity measure compared to the average, ranging from a minimum of 0.0609 to a maximum of 0.1667.
Three high-dimensional spatial standardization algorithms are used for diffusion tensor image (DTI) registration, and seven kinds of methods are used to evaluate their performance. Firstly, the template used in this paper was obtained by spatial transformation of 16 subjects by means of tensor-based standardization. Then, high-dimensional standardization algorithms for diffusion tensor images, including the fractional anisotropy (FA) based diffeomorphic registration algorithm, the FA based elastic registration algorithm, and the tensor-based registration algorithm, were performed. Finally, seven kinds of evaluation methods, including normalized standard deviation, dyadic coherence, diffusion cross-correlation, overlap of eigenvalue-eigenvector pairs, Euclidean distance of the diffusion tensor, Euclidean distance of the deviatoric tensor, and deviatoric of tensors, were used to qualitatively compare and summarize the above standardization algorithms. Experimental results revealed that the high-dimensional tensor-based standardization algorithms perform well and can maintain the consistency of anatomical structures.
The performance of conventional similarity measurement methods is affected seriously by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensionalities occupies a large proportion of the similarity, leading to dissimilarity among the results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only the components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional method. In addition, the similarity range of this method in different dimensions is [0, 1], which is fit for similarity analysis after dimensionality reduction.
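The interval-mapping scheme described above can be sketched as follows, assuming data already normalized to a common range. The per-dimension similarity 1 − |x−y|/range on the contributing dimensions is an illustrative choice that keeps the result in [0, 1]; the paper's exact formula may differ.

```python
import numpy as np

def lattice_similarity(x, y, n_intervals=10, lo=0.0, hi=1.0):
    """Similarity of two vectors whose components lie in [lo, hi]:
    each dimension is split into n_intervals equal bins, and only
    dimensions whose two components fall in the same or an adjacent
    bin contribute to the similarity."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    width = (hi - lo) / n_intervals
    bx = np.minimum(((x - lo) // width).astype(int), n_intervals - 1)
    by = np.minimum(((y - lo) // width).astype(int), n_intervals - 1)
    close = np.abs(bx - by) <= 1          # same or adjacent interval
    if not close.any():
        return 0.0
    # per-dimension similarity on contributing dimensions, averaged
    # over all dimensions so the result stays in [0, 1]
    per_dim = 1.0 - np.abs(x[close] - y[close]) / (hi - lo)
    return float(per_dim.sum() / len(x))
```

Dimensions where the two points land in far-apart bins, typically the sparse or noisy ones, contribute nothing, so they cannot swamp the similarity as in conventional distance-based measures.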
Based on the particle-in-cell technology and the secondary electron emission theory, a three-dimensional simulation method for multipactor is presented in this paper. By combining the finite difference time domain method and the particle tracing method, such an algorithm is self-consistent and accurate since the interaction between electromagnetic fields and particles is properly modeled. In the time domain aspect, the generation of multipactor can be easily visualized, which makes it possible to gain a deeper insight into the physical mechanism of this effect. In addition to the classic secondary electron emission model, the measured practical secondary electron yield is used, which increases the accuracy of the algorithm. In order to validate the method, an impedance transformer and a ridge waveguide filter are studied. By analyzing the evolution of the secondaries obtained by our method, multipactor thresholds of these components are estimated, which show good agreement with the experimental results. Furthermore, the most sensitive positions where multipactor occurs are determined from the phase focusing phenomenon, which is very meaningful for multipactor analysis and design.
Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and control inputs of all the agents. By the state-space decomposition approach and the linear matrix inequality (LMI) approach, sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria are dependent on the change rate of time delays and the maximum time delay, while the guaranteed cost upper bound is dependent only on the maximum time delay and independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
Although the recent advances in stem cell engineering have gained a great deal of attention due to their high potential in clinical research, the applicability of stem cells for preclinical screening in the drug discovery process is still challenging due to difficulties in controlling the stem cell microenvironment and the limited availability of high-throughput systems. Recently, researchers have been actively developing and evaluating three-dimensional (3D) cell culture-based platforms using microfluidic technologies, such as organ-on-a-chip and organoid-on-a-chip platforms, and they have achieved promising breakthroughs in stem cell engineering. In this review, we start with a comprehensive discussion on the importance of microfluidic 3D cell culture techniques in stem cell research and their technical strategies in the field of drug discovery. In a subsequent section, we discuss microfluidic 3D cell culture techniques for high-throughput analysis for use in stem cell research. In addition, some potential and practical applications of organ-on-a-chip or organoid-on-a-chip platforms using stem cells as drug screening and disease models are highlighted.
In this paper, the global controllability for a class of high-dimensional polynomial systems has been investigated and a constructive algebraic criterion algorithm for their global controllability has been obtained. By the criterion algorithm, the global controllability can be determined in finitely many arithmetic operations. The algorithm is imposed on the coefficients of the polynomials only, and the analysis technique is based on Sturm's Theorem in real algebraic geometry and its modern progress. Finally, the authors give some examples to show the application of the results.
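Sturm's theorem, the core tool cited above, counts the distinct real roots of a polynomial in an interval from the sign changes of its Sturm sequence at the two endpoints. A small sketch using SymPy (this illustrates the theorem only, not the paper's controllability criterion):

```python
import sympy as sp

x = sp.symbols('x')

def count_real_roots(f, a, b):
    """Number of distinct real roots of the polynomial f in (a, b],
    computed as the difference in sign changes of the Sturm sequence
    evaluated at the two endpoints (Sturm's theorem)."""
    seq = sp.sturm(f)

    def sign_changes(t):
        vals = [p.subs(x, t) for p in seq]
        signs = [1 if v > 0 else -1 for v in vals if v != 0]
        return sum(1 for u, w in zip(signs, signs[1:]) if u != w)

    return sign_changes(a) - sign_changes(b)
```

Because the count is obtained from exact arithmetic on the coefficients, this kind of test can be embedded in a finite-step decision procedure of the sort the paper constructs.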
This paper mainly concerns oblique derivative problems for nonlinear nondivergent elliptic equations of second order with measurable coefficients in a multiply connected domain. Under certain conditions, we derive a priori estimates of solutions. By using these estimates and the fixed-point theorem, we prove the existence of solutions.
The samples of In(x)Ga(1-x)As/In(0.52)Al(0.48)As two-dimensional electron gas (2DEG) are grown by molecular beam epitaxy (MBE). In the sample preparation process, the In content and spacer layer thickness are changed, and two kinds of doping methods, i.e., body doping and δ-doping, are used for contrast. The samples are analyzed by Hall measurements at 300 K and 77 K. The In(x)Ga(1-x)As/In(0.52)Al(0.48)As 2DEG channel structures with mobilities as high as 10289 cm^2/V·s (300 K) and 42040 cm^2/V·s (77 K) are obtained, and the values of carrier concentration (Nc) are 3.465×10^12/cm^2 and 2.502×10^12/cm^2, respectively. The THz response rates of InP-based high electron mobility transistor (HEMT) structures with different gate lengths at 300 K and 77 K are calculated based on the shallow water wave instability theory. The results provide a reference for the research and preparation of InP-based HEMT THz detectors.
Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. Addressing the shortcomings of existing image matching navigation technologies, the concept of the high-dimensional combined feature is presented based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, adding the regional characteristics of the features to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the searching process. Simulation results show that the matching can be realized at rotation angles of −8° to 8° and scale factors of 0.9 to 1.1, and when the image size is 160 pixels × 160 pixels, the matching time is less than 0.5 s. Therefore, the proposed algorithm can substantially reduce computational complexity, improve the matching speed, and exhibit robustness to rotation and scale changes.
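The KNN-based search step can be illustrated with a brute-force nearest-neighbor matcher over combined feature vectors. The Lowe-style ratio test used here to reject ambiguous matches is an added assumption for the sketch, not taken from the paper.

```python
import numpy as np

def knn_match(query_feats, ref_feats, ratio=0.8):
    """Brute-force nearest-neighbor matching of feature vectors;
    a match is accepted only when the nearest reference is clearly
    closer than the second nearest (ratio test)."""
    matches = []
    for i, q in enumerate(query_feats):
        d = np.linalg.norm(ref_feats - q, axis=1)
        order = np.argsort(d)
        if len(order) > 1 and d[order[0]] < ratio * d[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

In practice a k-d tree or similar index would replace the exhaustive distance computation, which is exactly the speed-up KNN search structures provide.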
This paper deals with the representation of the solutions of a polynomial system, and concentrates on the high-dimensional case. Based on the rational univariate representation of zero-dimensional polynomial systems, we give a new description called rational representation for the solutions of a high-dimensional polynomial system and propose an algorithm for computing it. In this way, all the solutions of any high-dimensional polynomial system can be represented by a set of so-called rational-representation sets.
Aimed at the issue that traditional clustering methods are not appropriate for high-dimensional data, a cuckoo search fuzzy-weighting algorithm for subspace clustering is presented on the basis of an existing soft subspace clustering algorithm. In the proposed algorithm, a novel objective function is first designed by considering the fuzzy-weighting within-cluster compactness and the between-cluster separation, and loosening the constraints of the dimension weight matrix. Then gradual membership and improved cuckoo search, a global search strategy, are introduced to optimize the objective function and search subspace clusters, giving novel learning rules for clustering. Finally, the performance of the proposed algorithm on the clustering analysis of various low- and high-dimensional datasets is experimentally compared with that of several competitive subspace clustering algorithms. Experimental studies demonstrate that the proposed algorithm can obtain better performance than most of the existing soft subspace clustering algorithms.
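The cuckoo search component can be sketched as a generic global minimizer with Lévy-flight steps (Mantegna's algorithm); the objective passed in below is a placeholder, not the paper's fuzzy-weighting subspace objective, and the search bounds and step scale are illustrative.

```python
import math
import numpy as np

def levy_step(dim, beta=1.5, rng=None):
    """Levy-stable step via Mantegna's algorithm, the usual random-walk
    ingredient of Cuckoo Search."""
    rng = rng if rng is not None else np.random.default_rng()
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, seed=0):
    """Minimize f over [-5, 5]^dim with a bare-bones Cuckoo Search:
    Levy-flight moves toward the best nest plus abandonment of the
    worst fraction pa of nests each iteration."""
    rng = np.random.default_rng(seed)
    nests = rng.uniform(-5, 5, (n_nests, dim))
    fit = np.array([f(z) for z in nests])
    n_worst = max(1, int(pa * n_nests))
    for _ in range(iters):
        best = nests[np.argmin(fit)].copy()
        for i in range(n_nests):
            cand = nests[i] + 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
            cand = np.clip(cand, -5, 5)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        # abandon the worst nests and resample them uniformly
        worst = np.argsort(fit)[-n_worst:]
        nests[worst] = rng.uniform(-5, 5, (n_worst, dim))
        fit[worst] = [f(z) for z in nests[worst]]
    k = int(np.argmin(fit))
    return nests[k], float(fit[k])
```

The heavy-tailed Lévy steps occasionally make long jumps, which is what gives cuckoo search its global exploration ability over the multimodal clustering objective.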
A fixed-geometry two-dimensional mixed-compression supersonic inlet with sweep-forward high-light and bleed slot in an inverted "X"-form layout was tested in a wind tunnel. Results indicate: (1) with increases of the free stream Mach number, the total pressure recovery decreases, while the mass flow ratio increases to the maximum at the design point and then decreases; (2) when the angle of attack, α, is less than 6°, the total pressure recovery of both side inlets tends to decrease, but on the lee side inlet its values are higher than those on the windward side inlet, and the mass flow ratio on the lee side inlet increases first and then falls, while on the windward side it keeps declining slowly, with the sum of the mass flow on both sides remaining almost constant; (3) with the angle of attack, α, rising from 6° to 9°, both the total pressure recovery and the mass flow ratio on the lee side inlet fall quickly, but on the windward side inlet decreases in the total pressure recovery and increases in the mass flow ratio can be observed; (4) by comparing the velocity and back pressure characteristics of the inlet with a bleed slot to those of the inlet without one, it stands to reason that the existence of a bleed slot has not only widened the steady working range of the inlet, but also made an enormous improvement in its performance at high Mach numbers. Besides, this paper also presents an example to show how this type of inlet is designed.
With the abundance of exceptionally high-dimensional data, feature selection has become an essential element in the data mining process. In this paper, we investigate the problem of efficient feature selection for classification on high-dimensional datasets. We present a novel filter-based approach for feature selection that sorts the features based on a score, and we then measure the performance of four different data mining classification algorithms on the resulting data. In the proposed approach, we partition the sorted features and search for important features in a forward manner as well as in a reverse manner, starting from the first and last features in the sorted list simultaneously. The proposed approach is highly scalable and effective, as it parallelizes over both attributes and tuples simultaneously, allowing us to evaluate many potential features for high-dimensional datasets. The newly proposed framework for feature selection is experimentally shown to be very valuable with real and synthetic high-dimensional datasets, improving the precision of the selected features. We have also tested it by measuring classification accuracy against various feature selection processes.
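A minimal filter-based scoring and ranking sketch is given below. The Fisher-type score is an illustrative stand-in, since the abstract does not specify the paper's scoring function, and the bidirectional partition search is omitted.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature filter score: between-class scatter of the class
    means over the pooled within-class variance (a common stand-in;
    not necessarily the paper's exact score)."""
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def select_top(X, y, k):
    """Rank features by score and keep the indices of the top k."""
    order = np.argsort(fisher_score(X, y))[::-1]
    return order[:k]
```

Because the score is computed independently per feature, the scoring pass parallelizes trivially over attributes, the property the approach above exploits.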
We present two protocols for the controlled remote implementation of quantum operations between three-party high-dimensional systems. Firstly, the controlled teleportation of an arbitrary unitary operation by bidirectional quantum state teleportation (BQST) with high-dimensional systems is considered. Then, instead of using the BQST method, a protocol for the controlled remote implementation of partially unknown operations belonging to some restricted sets in high-dimensional systems is proposed. It is shown that, in these protocols, if and only if the controller would like to help the sender with the remote operations, the controlled remote implementation of quantum operations for high-dimensional systems can be completed.
The J-V characteristics of Al(t)Ga(1-t)N/GaN high electron mobility transistors (HEMTs) are investigated and simulated using the self-consistent solution of the Schrödinger and Poisson equations for a two-dimensional electron gas (2DEG) in a triangular potential well, with the Al mole fraction t = 0.3 as an example. Using a simple analytical model, the electron drift velocity in a 2DEG channel is obtained. It is found that the current density through the 2DEG channel is on the order of 10^13 A/m^2 within a very narrow region (about 5 nm). For a current density of 7×10^13 A/m^2 passing through the 2DEG channel with a 2DEG density above 1.2×10^17 m^-2 under a drain voltage Vds = 1.5 V at room temperature, the barrier thickness Lb should be more than 10 nm and the gate bias must be higher than 2 V.
Funding: Project (RDF 11-02-03) supported by the Research Development Fund of XJTLU, China.
Funding: Supported by the National Key Research and Development Program of China (2016YFC0100300), the National Natural Science Foundation of China (61402371, 61771369), the Natural Science Basic Research Plan in Shaanxi Province of China (2017JM6008), and the Fundamental Research Funds for the Central Universities of China (3102017zy032, 3102018zy020).
Funding: Supported by the National Natural Science Foundation of China (No. 61502475) and the Importation and Development of High-Caliber Talents Project of the Beijing Municipal Institutions (No. CIT&TCD201504039)
Abstract: The performance of conventional similarity measurement methods is seriously affected by the curse of dimensionality of high-dimensional data. The reason is that the data difference between sparse and noisy dimensions occupies a large proportion of the similarity, leading to dissimilar results. A similarity measurement method for high-dimensional data based on a normalized net lattice subspace is proposed. The data range of each dimension is divided into several intervals, and the components in different dimensions are mapped onto the corresponding intervals. Only components in the same or adjacent intervals are used to calculate the similarity. To validate this method, three data types are used, and seven common similarity measurement methods are compared. The experimental results indicate that the relative difference of the method increases with the dimensionality and is approximately two or three orders of magnitude higher than that of the conventional methods. In addition, the similarity range of this method in different dimensions is [0, 1], which makes it suitable for similarity analysis after dimensionality reduction.
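The interval idea can be illustrated with a rough sketch (function and parameter names are our own, and the paper's exact normalization is not reproduced): only components falling in the same or an adjacent interval contribute.

```python
import numpy as np

def lattice_similarity(x, y, lo, hi, n_bins=10):
    """Grid-based similarity sketch: divide each dimension's range [lo, hi]
    into n_bins intervals and keep only dimensions where the two components
    fall in the same or an adjacent interval."""
    bx = np.clip(((x - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    by = np.clip(((y - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)
    keep = np.abs(bx - by) <= 1          # same or adjacent interval only
    if not keep.any():
        return 0.0
    # Per-dimension similarity in [0, 1] on the kept dimensions.
    d = np.abs(x[keep] - y[keep]) / (hi[keep] - lo[keep])
    return float(np.mean(1.0 - d))
```

Identical vectors score 1, and dimensions dominated by sparse or noisy differences are simply dropped, which is what keeps the measure inside [0, 1].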
Funding: Project supported by the National Key Laboratory Foundation, China (Grant No. 9140C530103110C5301)
Abstract: Based on particle-in-cell technology and secondary electron emission theory, a three-dimensional simulation method for multipactor is presented in this paper. By combining the finite-difference time-domain method and the particle tracing method, the algorithm is self-consistent and accurate, since the interaction between electromagnetic fields and particles is properly modeled. In the time domain, the generation of multipactor can be easily visualized, which makes it possible to gain a deeper insight into the physical mechanism of this effect. In addition to the classic secondary electron emission model, the measured practical secondary electron yield is used, which increases the accuracy of the algorithm. In order to validate the method, an impedance transformer and a ridge waveguide filter are studied. By analyzing the evolution of the secondaries obtained by our method, multipactor thresholds of these components are estimated, which show good agreement with the experimental results. Furthermore, the most sensitive positions where multipactor occurs are determined from the phase focusing phenomenon, which is very meaningful for multipactor analysis and design.
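The paper feeds measured secondary electron yield data into the simulation; as a stand-in, a commonly used empirical SEY curve (Vaughan's model, with illustrative parameters rather than the paper's measurements) can be sketched:

```python
import numpy as np

def vaughan_sey(E, delta_max=2.2, E_max=165.0, E_0=12.5):
    """Vaughan's empirical secondary electron yield: delta(E) =
    delta_max * (v * exp(1 - v))**k with v = (E - E_0)/(E_max - E_0),
    k = 0.62 below the peak and 0.25 above it. Parameters are illustrative."""
    v = np.maximum((np.asarray(E, dtype=float) - E_0) / (E_max - E_0), 0.0)
    k = np.where(v < 1.0, 0.62, 0.25)
    return delta_max * (v * np.exp(1.0 - v)) ** k

# The yield peaks at E_max; multipactor growth requires delta > 1 somewhere
# in the impact-energy range, which is what the threshold search probes.
```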
Funding: Supported by the Shaanxi Province Natural Science Foundation of Research Projects (2016JM6014), the Innovation Foundation of High-Tech Institute of Xi'an (2015ZZDJJ03), and the Youth Foundation of High-Tech Institute of Xi'an (2016QNJJ004)
Abstract: Guaranteed cost consensus analysis and design problems for high-dimensional multi-agent systems with time-varying delays are investigated. The idea of guaranteed cost control is introduced into consensus problems for high-dimensional multi-agent systems with time-varying delays, where a cost function is defined based on state errors among neighboring agents and control inputs of all the agents. By the state space decomposition approach and the linear matrix inequality (LMI), sufficient conditions for guaranteed cost consensus and consensualization are given. Moreover, a guaranteed cost upper bound of the cost function is determined. It should be mentioned that these LMI criteria are dependent on the change rate of time delays and the maximum time delay, while the guaranteed cost upper bound is only dependent on the maximum time delay but independent of the Laplacian matrix. Finally, numerical simulations are given to demonstrate the theoretical results.
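The paper's criteria are delay-dependent LMIs that need a dedicated SDP solver; their undelayed building block, a Lyapunov-type matrix inequality certificate, can be sketched as follows (a simplified stand-in, not the paper's conditions):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def lyapunov_certificate(A, Q=None):
    """Solve A^T P + P A = -Q for P; a positive definite P certifies
    stability of dx/dt = A x, the skeleton of the LMI conditions above."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    # scipy solves a X + X a^T = q, so pass a = A^T and q = -Q.
    P = solve_continuous_lyapunov(A.T, -Q)
    P = (P + P.T) / 2                      # symmetrize against round-off
    return P, bool(np.all(np.linalg.eigvalsh(P) > 0))
```

For a Hurwitz matrix A the second return value is True; the full consensus criteria replace this single inequality with delay-dependent LMIs over the error dynamics.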
Funding: Supported by the National Research Foundation of Korea (NRF) (NRF-2017R1C1B2002377, NRF-2016R1A5A1010148, and NRF-2019R1A2C1003111) funded by the Ministry of Science and ICT (MSIT), and partly supported by the Technology Innovation Program (No. 10067787) funded by the Ministry of Trade, Industry & Energy (MOTIE, Korea)
Abstract: Although recent advances in stem cell engineering have gained a great deal of attention due to their high potential in clinical research, the applicability of stem cells for preclinical screening in the drug discovery process is still challenging due to difficulties in controlling the stem cell microenvironment and the limited availability of high-throughput systems. Recently, researchers have been actively developing and evaluating three-dimensional (3D) cell culture-based platforms using microfluidic technologies, such as organ-on-a-chip and organoid-on-a-chip platforms, and they have achieved promising breakthroughs in stem cell engineering. In this review, we start with a comprehensive discussion of the importance of microfluidic 3D cell culture techniques in stem cell research and their technical strategies in the field of drug discovery. In a subsequent section, we discuss microfluidic 3D cell culture techniques for high-throughput analysis in stem cell research. In addition, some potential and practical applications of organ-on-a-chip or organoid-on-a-chip platforms using stem cells as drug screening and disease models are highlighted.
Funding: Supported by the Natural Science Foundation of China under Grant Nos. 60804008, 61174048, and 11071263, the Fundamental Research Funds for the Central Universities, and the Guangdong Province Key Laboratory of Computational Science at Sun Yat-Sen University
Abstract: In this paper, the global controllability of a class of high-dimensional polynomial systems is investigated, and a constructive algebraic criterion algorithm for their global controllability is obtained. By the criterion algorithm, the global controllability can be determined in finitely many arithmetic operations. The algorithm is imposed on the coefficients of the polynomials only, and the analysis technique is based on the Sturm theorem in real algebraic geometry and its modern progress. Finally, the authors give some examples to show the application of the results.
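The criterion itself is beyond the scope of an abstract, but the Sturm-theorem machinery it rests on, counting distinct real roots of a polynomial in an interval from sign changes along the Sturm sequence, can be sketched with sympy (illustrative polynomial, not from the paper):

```python
from sympy import Symbol, sturm

x = Symbol('x')

def count_real_roots(p, a, b):
    """Distinct real roots of p in (a, b] by Sturm's theorem: the number of
    sign changes in the Sturm sequence at a minus the number at b."""
    seq = sturm(p)
    def sign_changes(t):
        vals = [v for v in (q.subs(x, t) for q in seq) if v != 0]
        return sum(1 for u, w in zip(vals, vals[1:]) if u * w < 0)
    return sign_changes(a) - sign_changes(b)

p = x**3 - 2*x          # real roots: -sqrt(2), 0, sqrt(2)
print(count_real_roots(p, -3, 3))   # 3
```

Because every step is exact arithmetic on the coefficients, the root count is decided in finitely many operations, which is the flavor of the paper's criterion.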
Abstract: This paper mainly concerns oblique derivative problems for nonlinear nondivergent elliptic equations of second order with measurable coefficients in a multiply connected domain. Under certain conditions, we derive a priori estimates of solutions. By using these estimates and the fixed-point theorem, we prove the existence of solutions.
Funding: Project supported by the Foundation for Scientific Instrument and Equipment Development, Chinese Academy of Sciences (Grant No. YJKYYQ20170032) and the National Natural Science Foundation of China (Grant No. 61435012)
Abstract: Samples of In_xGa_{1-x}As/In_{0.52}Al_{0.48}As two-dimensional electron gas (2DEG) are grown by molecular beam epitaxy (MBE). In the sample preparation process, the In content and spacer layer thickness are varied, and two doping methods, i.e., contrast body doping and δ-doping, are used. The samples are analyzed by Hall measurements at 300 K and 77 K. In_xGa_{1-x}As/In_{0.52}Al_{0.48}As 2DEG channel structures with mobilities as high as 10289 cm^2/V·s (300 K) and 42040 cm^2/V·s (77 K) are obtained, and the values of carrier concentration (Nc) are 3.465×10^12/cm^2 and 2.502×10^12/cm^2, respectively. The THz response rates of InP-based high electron mobility transistor (HEMT) structures with different gate lengths at 300 K and 77 K are calculated based on shallow water wave instability theory. The results provide a reference for the research and preparation of InP-based HEMT THz detectors.
基金supported by the National Natural Science Foundations of China(Nos.51205193,51475221)
Abstract: Image matching technology is theoretically significant and practically promising in the field of autonomous navigation. Addressing the shortcomings of existing image matching navigation technologies, the concept of the high-dimensional combined feature is presented based on sequence image matching navigation. To balance the distribution of high-dimensional combined features against the shortcomings of using geometric relations alone, we propose a method based on Delaunay triangulation to improve the feature, adding the regional characteristics of the features to their geometric characteristics. Finally, the k-nearest neighbor (KNN) algorithm is adopted to optimize the searching process. Simulation results show that matching can be realized at rotation angles of -8° to 8° and scale factors of 0.9 to 1.1, and when the image size is 160 pixel × 160 pixel, the matching time is less than 0.5 s. Therefore, the proposed algorithm can substantially reduce computational complexity, improve matching speed, and exhibit robustness to rotation and scale changes.
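A toy sketch of the two ingredients, Delaunay-derived geometric structure combined with a regional feature, followed by KNN search (function names and the particular descriptors are illustrative, not the paper's):

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def combined_features(points, regional):
    """Per-point combined feature: mean length of incident Delaunay edges
    (geometric part) stacked with a per-point regional characteristic."""
    tri = Delaunay(points)
    edges = [[] for _ in range(len(points))]
    for s in tri.simplices:
        for i in range(3):
            a, b = s[i], s[(i + 1) % 3]
            d = float(np.linalg.norm(points[a] - points[b]))
            edges[a].append(d)
            edges[b].append(d)
    geo = np.array([np.mean(e) for e in edges])
    return np.column_stack([geo, regional])

def knn_match(feat_a, feat_b):
    """Match each feature in feat_a to its nearest neighbour in feat_b."""
    _, idx = cKDTree(feat_b).query(feat_a, k=1)
    return idx
```

The KD-tree replaces brute-force comparison, which is where the speed-up in the search step comes from; matching a feature set against itself returns the identity permutation.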
Funding: The National Grand Fundamental Research 973 Program (2004CB318000) of China
Abstract: This paper deals with the representation of the solutions of a polynomial system, and concentrates on the high-dimensional case. Based on the rational univariate representation of zero-dimensional polynomial systems, we give a new description, called the rational representation, for the solutions of a high-dimensional polynomial system and propose an algorithm for computing it. In this way, all the solutions of any high-dimensional polynomial system can be represented by a set of so-called rational-representation sets.
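Rational univariate representation is not exposed in mainstream CAS front-ends, but its core step, reducing a zero-dimensional system to a univariate polynomial, can be illustrated with a resultant (a toy system, not the paper's algorithm):

```python
from sympy import symbols, resultant, solve

x, y = symbols('x y')
f = x**2 + y**2 - 5
g = x*y - 2

# Eliminating y gives a univariate polynomial whose roots are exactly the
# x-coordinates of the system's solutions; the remaining coordinates are
# then recovered as rational functions of those roots.
r = resultant(f, g, y)
xs = sorted(solve(r, x))
print(xs)   # [-2, -1, 1, 2]
```

Here the solutions of the system are (±1, ±2) and (±2, ±1) with matching signs, so the eliminated polynomial r = x⁴ − 5x² + 4 carries all four x-coordinates.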
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 61303074, 61309013), the National Key Basic Research and Development Program ("973") of China (No. 2012CB315900), and the Programs for Science and Technology Development of Henan Province (Nos. 12210231003, 13210231002)
Abstract: Aimed at the issue that traditional clustering methods are not appropriate for high-dimensional data, a cuckoo search fuzzy-weighting algorithm for subspace clustering is presented on the basis of an existing soft subspace clustering algorithm. In the proposed algorithm, a novel objective function is first designed by considering the fuzzy-weighted within-cluster compactness and the between-cluster separation, and by loosening the constraints of the dimension weight matrix. Then gradual membership and an improved cuckoo search, a global search strategy, are introduced to optimize the objective function and search for subspace clusters, giving novel learning rules for clustering. Finally, the performance of the proposed algorithm on the clustering analysis of various low- and high-dimensional datasets is experimentally compared with that of several competitive subspace clustering algorithms. Experimental studies demonstrate that the proposed algorithm obtains better performance than most of the existing soft subspace clustering algorithms.
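A minimal sketch of such an objective (hypothetical symbols: `tau` is the fuzzy weighting exponent, `eta` trades compactness against separation; this is the general soft-subspace form, not the paper's exact function):

```python
import numpy as np

def soft_subspace_objective(X, centers, labels, W, tau=2.0, eta=0.1):
    """Fuzzy-weighted within-cluster compactness minus eta times
    between-cluster separation; W[k] holds cluster k's feature weights."""
    z = X.mean(axis=0)                       # global data center
    compact, separate = 0.0, 0.0
    for k, c in enumerate(centers):
        Xk = X[labels == k]
        wk = W[k] ** tau                     # fuzzy weighting exponent
        compact += np.sum(wk * (Xk - c) ** 2)
        separate += len(Xk) * np.sum(wk * (c - z) ** 2)
    return compact - eta * separate
```

Lower is better: a correct partition of well-separated clusters scores below a scrambled one, which is what the cuckoo search exploits when exploring candidate weightings and partitions.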
Abstract: A fixed-geometry two-dimensional mixed-compression supersonic inlet with sweep-forward high-light and bleed slot in an inverted "X"-form layout was tested in a wind tunnel. Results indicate: (1) with increases of the free stream Mach number, the total pressure recovery decreases, while the mass flow ratio increases to the maximum at the design point and then decreases; (2) when the angle of attack, α, is less than 6°, the total pressure recovery of both side inlets tends to decrease, but on the lee side inlet its values are higher than those on the windward side inlet; the mass flow ratio on the lee side inlet increases first and then falls, while on the windward side it keeps declining slowly, with the sum of mass flow on both sides remaining almost constant; (3) with the angle of attack, α, rising from 6° to 9°, both the total pressure recovery and the mass flow ratio on the lee side inlet fall quickly, but on the windward side inlet decreases in the total pressure recovery and increases in the mass flow ratio can be observed; (4) by comparing the velocity and back pressure characteristics of the inlet with a bleed slot to those of the inlet without one, it stands to reason that the bleed slot has not only widened the steady working range of the inlet, but also brought an enormous improvement in its performance at high Mach numbers. Besides, this paper also presents an example to show how this type of inlet is designed.
Abstract: With the abundance of exceptionally high-dimensional data, feature selection has become an essential element of the data mining process. In this paper, we investigate the problem of efficient feature selection for classification on high-dimensional datasets. We present a novel filter-based approach for feature selection that sorts the features based on a score, and then we measure the performance of four different data mining classification algorithms on the resulting data. In the proposed approach, we partition the sorted features and search for important features in both forward and reverse order, starting from the first and last features of the sorted list simultaneously. The proposed approach is highly scalable and effective, as it parallelizes over both attributes and tuples simultaneously, allowing us to evaluate many potential features for high-dimensional datasets. The newly proposed framework for feature selection is experimentally shown to be very valuable on real and synthetic high-dimensional datasets, improving the precision of selected features. We have also tested it by measuring classification accuracy against various feature selection processes.
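A sketch of the filter score plus the two-ended scan of the sorted feature list (the correlation score and the threshold parameter are our illustrative choices, and the scan is a simplification of the paper's partitioned forward/reverse search):

```python
import numpy as np

def filter_scores(X, y):
    """Filter score per feature: absolute Pearson correlation with the label."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    den = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12
    return np.abs(Xc.T @ yc) / den

def bidirectional_select(scores, threshold):
    """Walk the score-sorted feature list from both ends simultaneously
    (mirroring the forward/reverse search), keeping features above threshold."""
    order = np.argsort(scores)[::-1]
    keep, lo, hi = [], 0, len(order) - 1
    while lo <= hi:
        if scores[order[lo]] >= threshold:
            keep.append(int(order[lo]))
        if hi != lo and scores[order[hi]] >= threshold:
            keep.append(int(order[hi]))
        lo += 1
        hi -= 1
    return sorted(keep)
```

The two pointers halve the scan length per worker, which is the sense in which the search parallelizes over attributes; parallelizing over tuples would chunk the rows of X when computing the scores.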
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 11074088)
Abstract: We present two protocols for the controlled remote implementation of quantum operations between three-party high-dimensional systems. Firstly, the controlled teleportation of an arbitrary unitary operation by bidirectional quantum state teleportation (BQST) with high-dimensional systems is considered. Then, instead of using the BQST method, a protocol for the controlled remote implementation of partially unknown operations belonging to some restricted sets in high-dimensional systems is proposed. It is shown that, in these protocols, the controlled remote implementation of quantum operations for high-dimensional systems can be completed if and only if the controller would like to help the sender with the remote operations.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60976070) and the Excellent Science and Technology Innovation Program of Beijing Jiaotong University, China
Abstract: The J-V characteristics of Al_tGa_{1-t}N/GaN high electron mobility transistors (HEMTs) are investigated and simulated using the self-consistent solution of the Schrödinger and Poisson equations for a two-dimensional electron gas (2DEG) in a triangular potential well, with the Al mole fraction t = 0.3 as an example. Using a simple analytical model, the electron drift velocity in the 2DEG channel is obtained. It is found that the current density through the 2DEG channel is on the order of 10^13 A/m^2 within a very narrow region (about 5 nm). For a current density of 7 × 10^13 A/m^2 passing through the 2DEG channel with a 2DEG density above 1.2 × 10^17 m^-2 under a drain voltage Vds = 1.5 V at room temperature, the barrier thickness Lb should be more than 10 nm and the gate bias must be higher than 2 V.
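The subband energies of the triangular well referenced above have a closed form via Airy-function zeros; a sketch (the GaN-like effective mass and field below are chosen for illustration and are not the paper's parameters):

```python
import numpy as np
from scipy.special import ai_zeros

def triangular_well_levels(F, m_eff, n=3):
    """First n subband energies (J) of an electron in a triangular well with
    confining field F (V/m): E_n = (hbar^2 / 2 m)^(1/3) (e F)^(2/3) |a_n|,
    where a_n are the zeros of the Airy function Ai."""
    hbar = 1.054571817e-34
    e = 1.602176634e-19
    a = ai_zeros(n)[0]                      # first n Airy zeros (negative)
    return (hbar**2 / (2 * m_eff))**(1 / 3) * (e * F)**(2 / 3) * np.abs(a)

# Illustrative numbers only: m* ~ 0.2 m_e, F ~ 5e8 V/m.
levels = triangular_well_levels(5e8, 0.2 * 9.1093837015e-31)
```

The self-consistent Schrödinger–Poisson loop in the paper iterates between such level calculations and the Poisson equation until the well shape and the 2DEG density agree.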