Latent factor (LF) models are highly effective in extracting useful knowledge from high-dimensional and sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to reach a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process of LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis, so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF achieves significantly higher computational efficiency with comparable prediction accuracy for missing data. It provides an important alternative approach to the LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
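As a rough illustration of the randomized-learning idea described above (not the authors' exact RLF algorithm), one latent factor matrix can be drawn randomly and kept fixed while the other is obtained in closed form over the observed entries only; the function and parameter names below are hypothetical.

```python
import numpy as np

def randomized_lf(R_obs, rank=20, reg=0.1, seed=0):
    """Illustrative randomized latent-factor fit for a sparse matrix.

    R_obs: dict mapping (row, col) -> observed rating.
    One factor matrix is drawn randomly and kept fixed (the randomized-learning
    idea); only the other factor is learned, row by row, via ridge regression
    restricted to the observed entries.
    """
    rng = np.random.default_rng(seed)
    n_rows = max(i for i, _ in R_obs) + 1
    n_cols = max(j for _, j in R_obs) + 1
    Q = rng.standard_normal((n_cols, rank))      # fixed random item factors
    P = np.zeros((n_rows, rank))                 # user factors to be learned

    by_row = {}
    for (i, j), r in R_obs.items():
        by_row.setdefault(i, []).append((j, r))

    for i, entries in by_row.items():
        cols = np.array([j for j, _ in entries])
        vals = np.array([r for _, r in entries])
        A = Q[cols]                              # design matrix from observed columns
        P[i] = np.linalg.solve(A.T @ A + reg * np.eye(rank), A.T @ vals)
    return P, Q

# toy usage: predict a missing entry of a tiny sparse matrix
obs = {(0, 0): 5.0, (0, 2): 3.0, (1, 0): 4.0, (1, 1): 1.0, (2, 2): 2.0}
P, Q = randomized_lf(obs, rank=2)
print(P[0] @ Q[1])   # estimate for the unobserved entry (0, 1)
```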
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate a univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples of a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables are analyzed to illustrate the performance and practicability of the proposed method.
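The Lagrange interpolation step for a univariate component function can be sketched as follows; the node placement and the test function here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lagrange_interp(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolating polynomial through (x_nodes, y_nodes) at x."""
    x_nodes = np.asarray(x_nodes, dtype=float)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(x_nodes, y_nodes)):
        others = np.delete(x_nodes, i)
        weight = np.prod((x - others) / (xi - others))   # i-th Lagrange basis at x
        total += yi * weight
    return total

# toy usage: approximate a univariate component g(x) = x * exp(-x) from 5 samples
g = lambda x: x * np.exp(-x)
nodes = np.linspace(0.0, 2.0, 5)
print(lagrange_interp(nodes, g(nodes), 0.7), g(0.7))
```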
High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, how to accurately represent them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an L2-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones with the L2-norm. Yet the L2-norm is sensitive to outlier data, and outlier data usually exist in such matrices. For example, an HiDS matrix from an RS commonly contains many outlier ratings due to heedless or malicious users. To address this issue, this work proposes a smooth L1-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt a smooth L1-norm rather than the L2-norm to form its loss, so that it achieves both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to outlier data but also has significantly higher prediction accuracy than state-of-the-art models when they are used to predict the missing data of HiDS matrices.
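The robustness argument can be made concrete with a smooth L1-type loss. The pseudo-Huber function below is one standard smooth surrogate for |e| and is shown only for illustration; the paper's exact smooth L1 form may differ.

```python
import numpy as np

def l2_loss(err):
    return 0.5 * err ** 2                      # quadratic: grows fast for outliers

def smooth_l1_loss(err, delta=1.0):
    # pseudo-Huber: behaves like 0.5*err^2/delta near 0 and like |err| far away,
    # so large outlier residuals contribute roughly linearly instead of quadratically
    return delta ** 2 * (np.sqrt(1.0 + (err / delta) ** 2) - 1.0)

residuals = np.array([0.1, 0.5, 1.0, 5.0, 20.0])   # the last two mimic outlier ratings
print(l2_loss(residuals))
print(smooth_l1_loss(residuals))
```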
Most research on anomaly detection has focused on events that differ from their spatio-temporal neighboring events. It remains a significant challenge to detect anomalies that involve multiple normal events interacting in an unusual pattern. In this work, a novel unsupervised method based on a sparse topic model was proposed to capture motion patterns and detect anomalies in traffic surveillance. Scale-invariant feature transform (SIFT) flow was used to improve the dense trajectories in order to extract interest points and the corresponding descriptors with less interference. To strengthen the relationship of interest points on the same trajectory, the Fisher kernel method was applied to obtain a representation of each trajectory, which was then quantized into a visual word. The sparse topic model was then proposed to explore the latent motion patterns and achieve a sparse representation of the video scene. Finally, two anomaly detection algorithms, based on video clip detection and visual word analysis respectively, were compared. Experiments were conducted on the QMUL Junction and AVSS datasets. The results demonstrated the superior efficiency of the proposed method.
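The quantization of trajectory representations into visual words is typically done with a learned codebook; the plain k-means sketch below is a generic stand-in for that step, not the paper's exact procedure.

```python
import numpy as np

def build_codebook(descriptors, n_words=50, n_iter=20, seed=0):
    """Plain k-means codebook: returns cluster centres used as visual words."""
    rng = np.random.default_rng(seed)
    centres = descriptors[rng.choice(len(descriptors), n_words, replace=False)]
    for _ in range(n_iter):
        # assign each descriptor to its nearest centre
        d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for k in range(n_words):
            members = descriptors[labels == k]
            if len(members):
                centres[k] = members.mean(axis=0)
    return centres

def quantize(descriptors, centres):
    """Map each trajectory descriptor to the index of its visual word."""
    d2 = ((descriptors[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# toy usage with random 64-D trajectory descriptors
desc = np.random.default_rng(1).standard_normal((500, 64))
words = quantize(desc, build_codebook(desc, n_words=20))
```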
To reduce the high computational cost of existing direction-of-arrival (DOA) estimation techniques within a sparse representation framework, a novel method with low computational complexity is proposed. First, a sparse linear model constructed from the eigenvectors of the covariance matrix of the array received signals is built. Then, based on the FOCal Underdetermined System Solver (FOCUSS) algorithm, a sparse solution finding algorithm to solve the model is developed. Like other state-of-the-art methods using a sparse representation, our approach can resolve closely spaced and highly correlated sources without a priori knowledge of the number of sources; however, it has lower computational complexity and performs better at low signal-to-noise ratio (SNR). Lastly, the performance of the proposed method is illustrated by computer simulations.
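A minimal FOCUSS-style reweighting loop, sketched for a generic underdetermined system; the regularization, starting point, and stopping rule are illustrative choices rather than the settings used in the paper.

```python
import numpy as np

def focuss(A, b, p=1.0, reg=1e-6, n_iter=30):
    """FOCUSS-style iterative reweighting for an underdetermined system A x = b.

    Each pass re-weights the columns by the current solution magnitude
    (W = diag(|x|^(1 - p/2))) and solves a regularized minimum-norm problem,
    which progressively concentrates energy on a few entries.
    """
    m, n = A.shape
    x = np.ones(n)                           # neutral starting point
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)
        AW = A * w                           # A @ diag(w)
        G = AW @ AW.T + reg * np.eye(m)
        q = AW.T @ np.linalg.solve(G, b)
        x = w * q
    return x

# toy usage: recover a 2-sparse vector from 8 random measurements of a 20-D signal
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20); x_true[[3, 11]] = [1.5, -2.0]
print(np.round(focuss(A, A @ x_true), 2))
```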
To address the difficulties posed by scattered and sparse observational data in ocean science, a new interpolation technique based on information diffusion is proposed in this paper. Based on a fuzzy mapping idea, sparse data samples are diffused and mapped into corresponding fuzzy sets, in the form of probability, within an interpolation ellipse model. To avoid the shortcoming of the normal diffusion function on asymmetric structures, a kind of asymmetric information diffusion function is developed and a corresponding algorithm, an ellipse model for the diffusion of asymmetric information, is established. Through interpolation experiments and comparative analysis of sea surface temperature data against ARGO data, the rationality and validity of the ellipse model are assessed.
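For intuition only, the sketch below diffuses sparse one-dimensional samples onto a grid with a symmetric normal kernel; it is a simplified stand-in and does not implement the paper's asymmetric ellipse diffusion model.

```python
import numpy as np

def normal_diffusion_estimate(sample_x, sample_y, grid_x, h):
    """Diffuse sparse (x, y) observations onto grid_x with a normal (Gaussian) kernel.

    Each sample spreads its value over the grid with weight exp(-(x-u)^2 / (2h^2));
    the estimate at a grid point is the weight-averaged value of all samples.
    """
    sample_x = np.asarray(sample_x)[:, None]          # (n_samples, 1)
    grid_x = np.asarray(grid_x)[None, :]              # (1, n_grid)
    w = np.exp(-(grid_x - sample_x) ** 2 / (2.0 * h ** 2))
    return (w * np.asarray(sample_y)[:, None]).sum(0) / (w.sum(0) + 1e-12)

# toy usage: interpolate sparse "sea surface temperature" samples along a transect
x_obs = [0.0, 1.3, 4.0, 7.5]
t_obs = [18.2, 18.9, 21.5, 24.0]
grid = np.linspace(0.0, 8.0, 9)
print(np.round(normal_diffusion_estimate(x_obs, t_obs, grid, h=1.0), 2))
```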
Because all the known integrable models possess Schwarzian forms with Möbius transformation invariance, starting from suitable Möbius transformation invariant equations may be one of the best ways to find new integrable models. In this paper, we study the Painlevé integrability of some special (3+1)-dimensional Schwarzian models.
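For reference, the Schwarzian derivative and the Möbius invariance it enjoys, which is the invariance property referred to above, can be written as:

```latex
% Schwarzian derivative of f with respect to x and its Mobius invariance
\[
  \{f; x\} \;=\; \frac{f'''}{f'} \;-\; \frac{3}{2}\left(\frac{f''}{f'}\right)^{2},
  \qquad
  \left\{\frac{af+b}{cf+d};\, x\right\} \;=\; \{f; x\}, \quad ad-bc \neq 0 .
\]
```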
The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can fit the chaotic time series more precisely. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on the Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction but also the prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning.
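The mixture is built from standard Gaussian process regressors; the sketch below shows a single GP predictor on lagged samples of a time series and does not implement the mixture assignment or the SHC-EM learning itself.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """Standard GP regression with an RBF kernel: predictive mean and variance."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f ** 2 * np.exp(-0.5 * d2 / length ** 2)

    K = rbf(X_train, X_train) + sigma_n ** 2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = sigma_f ** 2 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, var

# toy usage: one-step-ahead prediction from lagged samples of a noisy sine series
t = np.linspace(0, 10, 60)
series = np.sin(t) + 0.05 * np.random.default_rng(0).standard_normal(60)
X, y = series[:-1, None], series[1:]
mu, v = gp_predict(X[:40], y[:40], X[40:])
```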
A method to detect traffic dangers based on a visual attention model with sparse sampling was proposed. The hemispherical sparse sampling model was used to decrease the amount of calculation, which increases the detection speed. A Bayesian probability model and a Gaussian kernel function were applied to calculate the saliency of traffic videos. Multiscale saliency was used, with the final saliency taken as the average over all scales, which increased the detection rate considerably. The detection results on several typical traffic dangers show that the proposed method achieves higher detection rates and speed, meeting the requirement of real-time detection of traffic dangers.
Cyber-Physical Systems are very vulnerable to sparse sensor attacks, but current protection mechanisms employ linear and deterministic models which cannot detect attacks precisely. Therefore, in this paper, we propose a new non-linear generalized model to describe Cyber-Physical Systems. This model includes unknown multivariable discrete and continuous-time functions and different multiplicative noises to represent the evolution of physical processes and random effects in the physical and computational worlds. Besides, the digitalization stage in hardware devices is represented too. Attackers and the most critical sparse sensor attacks are described through a stochastic process. The reconstruction and protection mechanisms are based on a weighted stochastic model. The error probability in data samples is estimated through different indicators commonly employed in non-linear dynamics (such as the Fourier transform, first-return maps, or the probability density function). A decision algorithm calculates the final reconstructed value considering this error probability. An experimental validation based on simulation tools and real deployments is also carried out, studying both the performance and the scalability of the new technology. Results prove that the proposed solution protects Cyber-Physical Systems against up to 92% of attacks and perturbations, with a computational delay below 2.5 s. The proposed model shows linear complexity, as recursive or iterative structures are not employed, just algebraic and probabilistic functions. In conclusion, the new model and reconstruction mechanism can successfully protect Cyber-Physical Systems against sparse sensor attacks, even in dense or pervasive deployments and scenarios.
Sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of the sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterer positions, and provide an effective approach to improve SAR image resolution. Based on the attributed scattering center model, several experiments were performed under different practical considerations to evaluate the performance of five representative SR techniques, namely sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA); the parameter settings of the five SR algorithms were also discussed. The performances of these algorithms in different situations were discussed as well. Through the comparison of MSE and failure rate in each algorithm simulation, FBMP and SpaRSA are found suitable for dealing with problems in SAR imaging based on the attributed scattering center model. Although SBL is time-consuming, it consistently achieves better performance in terms of failure rate at high SNR.
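Among the compared solvers, FISTA has a particularly compact form; this generic l1-regularized least-squares version is shown for illustration and is not tuned to the attributed-scattering-center experiments.

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding operator, the proximal map of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (one of the compared SR solvers)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        x_new = soft(y - (A.T @ (A @ y - b)) / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# toy usage: a few strong "scatterers" among many candidate positions
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 200))
x_true = np.zeros(200); x_true[[10, 77, 150]] = [2.0, -1.5, 3.0]
x_hat = fista(A, A @ x_true + 0.01 * rng.standard_normal(40), lam=0.1)
```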
This paper studies the re-adjusted cross-validation method and a semiparametric regression model called the varying index coefficient model. We use the profile spline modal estimator method to estimate the coefficients of the parametric part of the varying index coefficient model (VICM), while the unknown function part is expanded with B-splines. Moreover, we combine the above two estimation methods under the assumption of high-dimensional data. The results of data simulation and empirical analysis show that, for the varying index coefficient model, the re-adjusted cross-validation method is better in terms of accuracy and stability than traditional methods based on ordinary least squares.
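The B-spline expansion of the unknown function part can be sketched as a design-matrix construction; the knot layout and the target function below are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import BSpline

def bspline_design_matrix(x, knots, degree=3):
    """Evaluate every B-spline basis function at the points x (one column per basis)."""
    n_basis = len(knots) - degree - 1
    cols = []
    for i in range(n_basis):
        coef = np.zeros(n_basis)
        coef[i] = 1.0                       # pick out the i-th basis function
        cols.append(BSpline(knots, coef, degree)(x))
    return np.column_stack(cols)

# toy usage: expand an unknown univariate function on [0, 1] with a cubic B-spline basis
x = np.linspace(0.0, 1.0, 50)
interior = np.linspace(0.0, 1.0, 8)
knots = np.r_[[0.0] * 3, interior, [1.0] * 3]   # clamped knot vector
B = bspline_design_matrix(x, knots)
coef_hat, *_ = np.linalg.lstsq(B, np.sin(2 * np.pi * x), rcond=None)   # least-squares fit
```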
Model averaging has attracted increasing attention in recent years for the analysis of high-dimensional data. By suitably weighting several competing statistical models, model averaging attempts to achieve stable and improved prediction. To obtain a better understanding of the available model averaging methods, their properties, and the relationships between them, this paper reviews some recent progress in high-dimensional model averaging from the frequentist perspective. Some future research topics are also discussed.
Locality preserving projection (LPP) is a newly emerging fault detection method which can discover the local manifold structure of a data set to be analyzed, but its linear assumption may lead to monitoring performance degradation for complicated nonlinear industrial processes. In this paper, an improved LPP method, referred to as sparse kernel locality preserving projection (SKLPP), is proposed for nonlinear process fault detection. Based on the LPP model, the kernel trick is applied to construct a nonlinear kernel model. Furthermore, to reduce the computational complexity of the kernel model, a feature sample selection technique is adopted to make the kernel LPP model sparse. Lastly, two monitoring statistics of the SKLPP model are built to detect process faults. Simulations on a continuous stirred tank reactor (CSTR) system show that SKLPP is more effective than LPP in terms of fault detection performance.
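For orientation, a plain linear LPP projection (the baseline that SKLPP makes kernelized and sparse) can be computed from a generalized eigenproblem, as sketched below with hypothetical parameter choices.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, n_components=2, k=5, sigma=1.0):
    """Linear locality preserving projection (the baseline that SKLPP extends).

    X: (n_samples, n_features). Returns a projection matrix (n_features, n_components)
    from the generalized eigenproblem X^T L X a = lambda X^T D X a (smallest eigenvalues).
    """
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]          # k nearest neighbours of each sample
    for i in range(n):
        W[i, idx[i]] = np.exp(-d2[i, idx[i]] / sigma)  # heat-kernel weights
    W = np.maximum(W, W.T)                            # symmetrize the adjacency graph
    D = np.diag(W.sum(1))
    L = D - W
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-8 * np.eye(X.shape[1])       # small ridge for numerical stability
    vals, vecs = eigh(A, B)                           # generalized symmetric eigenproblem
    return vecs[:, :n_components]                     # directions with smallest eigenvalues

# toy usage: project process data and monitor a simple squared-norm statistic
X = np.random.default_rng(0).standard_normal((100, 6))
P = lpp(X)
scores = ((X @ P) ** 2).sum(1)                        # a simple T^2-like statistic
```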
Since orthogonal time-frequency space (OTFS) modulation can effectively handle the problems caused by the Doppler effect in high-mobility environments, it has gradually become a promising candidate modulation scheme for the next generation of mobile communication. However, the inter-Doppler interference (IDI) problem caused by fractional Doppler poses great challenges to channel estimation. To avoid this problem, this paper proposes a joint time and delay-Doppler (DD) domain channel estimation algorithm based on sparse Bayesian learning (SBL). First, we derive the original channel response (OCR) from the time-domain channel impulse response (CIR), which can reflect the channel variation during one OTFS symbol. Compared with the traditional channel model, the OCR can avoid the IDI problem. After that, the dimension of the OCR is reduced by using the basis expansion model (BEM) and the relationship between the time and DD domain channel models, turning the underdetermined problem into an overdetermined one. Finally, exploiting the sparsity of the channel in the delay domain, the SBL algorithm is used to estimate the basis coefficients in the BEM without any prior information about the channel. The simulation results show the effectiveness and superiority of the proposed channel estimation algorithm.
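A generic SBL (relevance-vector-style) coefficient estimator looks as follows; this sketch works on an arbitrary sparse linear model and omits the OTFS-specific BEM construction and noise-variance learning.

```python
import numpy as np

def sbl(Phi, y, noise_var=1e-2, n_iter=50):
    """Generic sparse Bayesian learning (EM form) for y = Phi @ w + noise.

    Each coefficient w_i has a zero-mean Gaussian prior with precision alpha_i;
    the EM update alpha_i = 1 / (mu_i^2 + Sigma_ii) drives most precisions to
    infinity, pruning the corresponding coefficients toward zero.
    """
    n_basis = Phi.shape[1]
    alpha = np.ones(n_basis)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(Phi.T @ Phi / noise_var + np.diag(alpha))
        mu = Sigma @ Phi.T @ y / noise_var
        alpha = 1.0 / (mu ** 2 + np.diag(Sigma))
    return mu

# toy usage: a sparse coefficient vector observed through a random dictionary
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 60))
w_true = np.zeros(60); w_true[[5, 20, 41]] = [1.0, -0.7, 0.5]
w_hat = sbl(Phi, Phi @ w_true + 0.05 * rng.standard_normal(30))
```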
Many methods based on deep learning have achieved impressive results in image sentiment analysis. However, these existing methods usually pursue high accuracy while ignoring the effect on model training efficiency; when faced with large-scale sentiment analysis tasks, a high accuracy rate often requires a long experimental time. In view of this weakness, a method that can greatly improve experimental efficiency with only small fluctuations in model accuracy is proposed. Singular value decomposition (SVD) is used to find the sparse features of the image, which are sparse vectors with strong discriminativeness that effectively reduce redundant information. The authors also propose the Fast Dictionary Learning algorithm (FDL), which combines a neural network with sparse representation. This method is based on K-Singular Value Decomposition and, through iteration, can effectively reduce the calculation time and greatly improve training efficiency with only a small fluctuation of accuracy. Moreover, the effectiveness of the proposed method is evaluated on the FER2013 dataset. By adding singular value decomposition, the accuracy on the test set increased by 0.53% and the total experiment time was shortened by 8.2%; Fast Dictionary Learning shortened the total experiment time by 36.3%.
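The SVD-based compact feature step can be illustrated with a plain truncated SVD projection; the chosen rank and the random data below are assumptions for demonstration.

```python
import numpy as np

def svd_features(images, rank=16):
    """Project flattened images onto their top singular directions.

    images: (n_samples, n_pixels). Keeping only the leading right singular
    vectors discards low-energy directions, which is the redundancy-reduction
    role SVD plays here (the exact rank is a tunable choice, not taken from the paper).
    """
    X = images - images.mean(axis=0)            # centre the data
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:rank].T                      # (n_samples, rank) compact features

# toy usage with random 48x48 "images" (FER2013-sized grids)
imgs = np.random.default_rng(0).random((100, 48 * 48))
feats = svd_features(imgs, rank=32)
print(feats.shape)
```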
In order to solve the reliability problem of slope engineering under complex uncertainties, the Monte Carlo simulation method is adopted. Based on the characteristics of sparse grids, an interpolation algorithm that can be applied to high-dimensional problems is introduced. A surrogate model of the high-dimensional implicit function is established, which makes the Monte Carlo method more adaptable. Finally, a reliability analysis method is proposed to evaluate the reliability of slope engineering and is applied to the Sau Mau Ping slope project in Hong Kong. The reliability analysis method has great theoretical and practical significance for engineering quality evaluation and natural disaster assessment.
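Once a cheap surrogate of the implicit limit-state function is available, the Monte Carlo reliability step reduces to sampling and counting failures, as in this sketch with purely illustrative distributions (not the Sau Mau Ping data).

```python
import numpy as np

def mc_failure_probability(surrogate_g, sample_inputs, n_samples=100_000, seed=0):
    """Crude Monte Carlo estimate of P[g(X) < 0] using a (cheap) surrogate of g.

    surrogate_g: callable replacing the expensive implicit limit-state function.
    sample_inputs: callable(rng, n) returning an (n, d) array of random inputs.
    """
    rng = np.random.default_rng(seed)
    X = sample_inputs(rng, n_samples)
    g = surrogate_g(X)
    return float(np.mean(g < 0.0))

# toy usage: two lognormal "resistance" variables against a fixed demand
# (purely illustrative numbers, not the case study in the paper)
sample = lambda rng, n: rng.lognormal(mean=[0.2, 0.1], sigma=0.25, size=(n, 2))
surrogate = lambda X: X[:, 0] + 0.8 * X[:, 1] - 1.9    # stand-in for the interpolated surrogate
print(mc_failure_probability(surrogate, sample))
```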
In this paper, we consider an extragradient thresholding algorithm for finding sparse solutions of mixed complementarity problems (MCPs). We establish a relaxed l1-regularized projection minimization model for the original problem and design an extragradient thresholding algorithm (ETA) to solve the regularized model. Furthermore, we prove that any cluster point of the sequence generated by ETA is a solution of the MCP. Finally, numerical experiments show that the ETA algorithm can effectively solve the l1-regularized projection minimization model and obtain a sparse solution of the mixed complementarity problem.
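The two elementary operations such a scheme combines, an extragradient (predictor-corrector) projection step and an l1 shrinkage, can be sketched as follows; this is a generic illustration, not the authors' ETA or its convergence-guaranteed parameter choices.

```python
import numpy as np

def soft_threshold(v, tau):
    """Shrinkage operator: proximal map of tau * ||.||_1 (promotes sparsity)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def project_nonneg(v):
    """Projection onto the nonnegative orthant, a typical feasible set for a complementarity problem."""
    return np.maximum(v, 0.0)

def extragradient_thresholding_step(x, F, step=0.1, lam=0.01):
    """One generic extragradient step followed by l1 shrinkage (illustrative only).

    Predictor:  y  = P_C(x - step * F(x))
    Corrector:  x+ = P_C(x - step * F(y)), using F at the predictor point,
    followed by soft-thresholding to push small entries to zero.
    """
    y = project_nonneg(x - step * F(x))
    x_new = project_nonneg(x - step * F(y))
    return soft_threshold(x_new, lam)

# toy usage: linear complementarity-style map F(x) = M x + q
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 0.3])
F = lambda x: M @ x + q
x = np.ones(2)
for _ in range(200):
    x = extragradient_thresholding_step(x, F)
print(np.round(x, 3))
```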
Biological slices are an effective tool for studying the physiological structure and evolution mechanism of biological systems. However, due to the complexity of the preparation technology and the presence of many uncontrollable factors during preparation, problems arise such as difficulty in preparing slice images and breakage of slice images. Therefore, we propose a biological slice image small-scale corruption inpainting algorithm with interpretability, based on multi-layer deep sparse representation, achieving high-fidelity reconstruction of slice images. We further discuss the relationship between deep convolutional neural networks and sparse representation, which underpins the high-fidelity characteristic of the algorithm. A novel deep wavelet dictionary is proposed that can better capture image priors and possesses learnable features, and multi-layer deep sparse representation is used to implement dictionary learning, acquiring better signal expression. Compared with methods such as NLABH, Shearlet, Partial Differential Equation (PDE), K-Singular Value Decomposition (K-SVD), Convolutional Sparse Coding, and Deep Image Prior, the proposed algorithm achieves better subjective reconstruction and objective evaluation under the condition of small-scale image data, realizing high-fidelity inpainting. The O(n²)-level time complexity makes the proposed algorithm practical. The proposed algorithm can be effectively extended to other cross-sectional image inpainting problems, such as magnetic resonance images and computed tomography images.
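Dictionary learning alternates sparse coding with dictionary updates; the single-layer orthogonal matching pursuit sketch below shows only the sparse-coding building block, not the paper's multi-layer deep representation or wavelet dictionary.

```python
import numpy as np

def omp(D, y, n_nonzero=5):
    """Orthogonal matching pursuit: greedy sparse coding of y over dictionary D.

    This is the elementary single-layer sparse-coding step that dictionary-learning
    schemes (e.g., K-SVD) alternate with dictionary updates; it is shown here only
    as a building block, not as the paper's multi-layer algorithm.
    """
    residual = y.copy()
    support = []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        j = np.argmax(np.abs(D.T @ residual))        # atom most correlated with residual
        support.append(j)
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x[support] = coeffs
    return x

# toy usage: code a patch that is an exact 3-atom combination
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms
x_true = np.zeros(256); x_true[[3, 90, 200]] = [1.0, -0.5, 2.0]
x_hat = omp(D, D @ x_true, n_nonzero=3)
```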
The usual (1+1)-dimensional Schwartz Boussinesq equation is extended to the (1+1)-dimensional space-time symmetric form and the general (n+1)-dimensional space-time symmetric form. These extensions are Painlevé integrable in the sense that they possess the Painlevé property. The single soliton solutions and the periodic travelling wave solutions for the arbitrary-dimensional space-time symmetric form are obtained by the Painlevé-Bäcklund transformation.