As the signal bandwidth and the number of channels increase, synthetic aperture radar (SAR) imaging systems produce huge amounts of data according to the Shannon-Nyquist theorem, placing a heavy burden on data transmission. This paper concerns coprime sampling and nested sparse sampling, which were proposed recently but have never been applied to real-world target detection, and proposes a novel approach that uses these new sub-Nyquist sampling structures for SAR sampling in azimuth and reconstructs the sampled SAR data by compressive sensing (CS). Both simulated and real data are processed to test the algorithm, and the results indicate that combining these undersampling structures with CS achieves effective SAR imaging with much less data than regular methods require. Finally, the influence of small sampling jitter on SAR imaging is analyzed both theoretically and experimentally, and the analysis concludes that small sampling jitter has no effect on SAR image quality.
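The undersampling-plus-CS pipeline this abstract describes can be illustrated with a toy one-dimensional experiment. The sketch below is a minimal illustration, not the paper's algorithm: the coprime strides, the signal length, and the use of orthogonal matching pursuit (OMP) as the CS solver are all assumptions. It keeps only the samples lying on two coprime grids and recovers a frequency-sparse signal from them:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A @ x."""
    residual, support, coef = y.astype(complex), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

# Coprime sampling pattern: the union of two uniform grids with coprime strides.
M, N = 5, 7
L = M * N                                   # underlying Nyquist grid length
idx = np.unique(np.r_[np.arange(0, L, M), np.arange(0, L, N)])

# A 2-sparse test signal in the frequency domain.
n = np.arange(L)
x_full = sum(np.exp(2j * np.pi * f * n / L) for f in (4, 19))

# Measurement matrix = rows of the IDFT (synthesis) matrix at the kept indices.
F = np.exp(2j * np.pi * np.outer(n, n) / L) / L
spectrum = omp(F[idx, :], x_full[idx], k=2)
recovered = sorted(int(i) for i in np.flatnonzero(np.abs(spectrum) > 1e-6))
```

With only 11 of the 35 Nyquist-grid samples retained, the two-tone support is recovered; real SAR data would of course require the paper's full azimuth processing chain.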
A method to detect traffic dangers based on a visual attention model with sparse sampling was proposed. A hemispherical sparse sampling model was used to decrease the amount of computation, which increases the detection speed. A Bayesian probability model and a Gaussian kernel function were applied to calculate the saliency of traffic videos. A multiscale saliency method was used, with the final saliency taken as the average over all scales, which increased the detection rate considerably. Detection results on several typical traffic dangers show that the proposed method achieves higher detection rates and speed, meeting the requirements of real-time detection of traffic dangers.
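The multiscale-averaging idea can be sketched with a simple stand-in. The code below is not the paper's Bayesian saliency model: the center-surround measure, the Gaussian scales, and the normalization are all illustrative assumptions; only the "average the saliency maps over all scales" step is taken from the abstract.

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur implemented with plain NumPy convolutions."""
    size = int(6 * sigma) | 1                      # odd kernel length
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2.0 * sigma**2))
    k /= k.sum()
    smooth = np.apply_along_axis(np.convolve, 0, img, k, mode='same')
    return np.apply_along_axis(np.convolve, 1, smooth, k, mode='same')

def multiscale_saliency(frame, sigmas=(2, 4, 8)):
    """Center-surround saliency computed at several scales; the final map is
    the average over all scales, normalized to [0, 1]."""
    frame = frame.astype(float)
    maps = [np.abs(frame - _gaussian_blur(frame, s)) for s in sigmas]
    sal = np.mean(maps, axis=0)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)

frame = np.zeros((64, 64))
frame[32, 32] = 1.0                                # one conspicuous point
sal = multiscale_saliency(frame)
```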
Finite rate of innovation sampling is a novel sub-Nyquist sampling method that can reconstruct a signal from sparse sampling data. The application of this method in ultrasonic testing greatly reduces the signal sampling rate and the quantity of sampling data. However, the pulse number of the signal must be known beforehand for the signal reconstruction procedure. The accuracy of this prior information directly affects the accuracy of the estimated signal parameters and influences the assessment of flaws, leading to a lower defect detection ratio. Although the pulse number can be given in advance by theoretical analysis, that approach still cannot handle actual defects with complex random orientations. Therefore, this paper proposes a new method that uses singular value decomposition (SVD) to estimate the pulse number from the sparse sampling data, avoiding the need to provide the pulse number in advance for signal reconstruction. Once the sparse sampling data have been acquired from the ultrasonic signal, they are transformed into discrete Fourier coefficients. A Hankel matrix is then constructed from these coefficients, and SVD is performed on the matrix. The decomposition coefficients retain the information about the pulse number: after the coefficients attributable to noise (according to the noise level) are removed, the number of remaining coefficients is the signal pulse number. The feasibility of the proposed method was verified through simulation experiments, and its applicability was tested in ultrasonic experiments on sample flawed pipelines. Results from both simulations and real experiments demonstrate the efficiency of the method.
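The Hankel/SVD procedure described above is concrete enough to sketch directly. In the sketch below, the number of Fourier coefficients, the noise-threshold rule (a fraction of the largest singular value), and the test delays and amplitudes are illustrative choices, not values from the paper:

```python
import numpy as np

def estimate_pulse_count(coeffs, noise_ratio=0.1):
    """Count pulses from consecutive Fourier coefficients of a pulse stream:
    form a Hankel matrix, take its SVD, and count the singular values that
    rise above a noise-level threshold."""
    c = np.asarray(coeffs)
    rows = len(c) // 2
    # Hankel matrix H[i, j] = c[i + j]
    H = np.array([c[i:i + len(c) - rows + 1] for i in range(rows)])
    s = np.linalg.svd(H, compute_uv=False)
    return int(np.sum(s > noise_ratio * s[0]))

# Fourier coefficients of K = 3 Diracs: X[m] = sum_k a_k exp(-2j*pi*m*t_k/T)
T = 1.0
t_k = np.array([0.12, 0.47, 0.80])                 # illustrative delays
a_k = np.array([1.0, 0.6, 0.9])                    # illustrative amplitudes
m = np.arange(16)
X = (a_k * np.exp(-2j * np.pi * np.outer(m, t_k) / T)).sum(axis=1)
```

For noiseless data the Hankel matrix has rank exactly equal to the pulse number, so the thresholded singular-value count returns 3 here.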
To achieve sparse sampling of a coded ultrasonic signal, the finite rate of innovation (FRI) sparse sampling technique is applied to a binary frequency-coded (BFC) ultrasonic signal. A framework of FRI-based sparse sampling for an ultrasonic signal pulse is presented. Differences between the pulse and the coded ultrasonic signal are analyzed, and a mathematical model of the coded ultrasonic signal's response is established. A time-domain transform algorithm, called the high-order moment method, is applied to obtain a pulse stream signal that assists BFC ultrasonic signal sparse sampling. The output signal is then sampled at a uniform interval after the pulse stream signal is modulated by a sampling kernel. FRI-based sparse sampling is performed with a self-made circuit on an aluminum alloy sample. Experimental results show that the sampling rate is reduced to 0.5 MHz, compared with the at least 12.8 MHz required in Nyquist sampling mode. The echo peak amplitude and the time of flight are estimated from the sparse sampling data with maximum errors of 9.324% and 0.031%, respectively. This research provides a theoretical basis and a practical reference for reducing the sampling rate and data volume in coded ultrasonic testing.
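The final parameter-estimation step (recovering time of flight and amplitude from the sparse samples) can be illustrated with the textbook FRI annihilating-filter method. This is a generic sketch, not the paper's high-order-moment pipeline: the pulse count, delays, amplitudes, and number of Fourier coefficients are all assumed for illustration.

```python
import numpy as np

def fri_estimate(X, K, T=1.0):
    """Textbook FRI annihilating-filter recovery: from consecutive Fourier
    coefficients X[0..M-1] of a K-pulse stream, estimate delays t_k and
    amplitudes a_k."""
    M = len(X)
    # The filter h (K+1 taps) must annihilate X: sum_l h[l] X[m-l] = 0.
    A = np.array([X[m - np.arange(K + 1)] for m in range(K, M)])
    h = np.linalg.svd(A)[2][-1].conj()      # null-space vector of A
    u = np.roots(h)                         # roots u_k = exp(-2j*pi*t_k/T)
    t = np.sort(np.mod(-np.angle(u) * T / (2 * np.pi), T))
    V = np.power.outer(np.exp(-2j * np.pi * t / T), np.arange(M)).T
    a, *_ = np.linalg.lstsq(V, X, rcond=None)       # Vandermonde fit
    return t, a.real

# Two synthetic echoes with known delays (times of flight) and amplitudes.
t_true, a_true = np.array([0.21, 0.64]), np.array([1.0, 0.5])
m = np.arange(8)
X = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)
t_est, a_est = fri_estimate(X, K=2)
```

For noiseless data both delays and amplitudes are recovered to machine precision; the paper's reported 9.324% and 0.031% maximum errors come from real hardware measurements.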
Face recognition has been widely used and has developed rapidly in recent years. Methods based on sparse representation have made great breakthroughs, with collaborative representation-based classification (CRC) as a typical representative. However, CRC cannot distinguish similar samples well, which easily leads to misclassification. As an improvement on CRC, the two-phase test sample sparse representation (TPTSSR) removes the samples that contribute little to the representation of the test sample. Nevertheless, a single removal is not sufficient, since some useless samples may still be retained while some useful samples may be removed at random. In this work, a novel classifier, called the discriminative sparse parameter (DSP) classifier with iterative removal, is proposed for face recognition. The proposed DSP classifier uses the sparse parameter to measure the representation ability of training samples directly. Moreover, to avoid useful samples being removed at random in a single pass, the DSP classifier removes most uncorrelated samples gradually over several iterations. Extensive experiments on face datasets with typical pose, expression, and noise variations are conducted to assess the performance of the proposed DSP classifier. The experimental results demonstrate that the DSP classifier achieves a better recognition rate than the well-known SRC, CRC, RRC, RCR, SRMVS, RFSR, and TPTSSR classifiers in various situations.
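The iterative-removal idea can be sketched in a few lines. This is a hedged sketch in the spirit of DSP/TPTSSR, not the authors' exact formulation: the ridge-regularized representation, the removal ratio, the iteration count, and the residual-based class decision are all assumptions.

```python
import numpy as np

def dsp_classify(train, labels, probe, keep_ratio=0.8, iters=2, lam=0.01):
    """Solve a ridge-regularized representation of the probe over the training
    columns, repeatedly drop the samples contributing least, then assign the
    class whose retained samples reconstruct the probe with smallest residual."""
    X, y = np.asarray(train, float), np.asarray(labels)
    idx = np.arange(X.shape[1])

    def represent(X):
        G = X.T @ X + lam * np.eye(X.shape[1])
        return np.linalg.solve(G, X.T @ probe)

    for _ in range(iters):
        w = represent(X)
        keep = np.argsort(-np.abs(w))[:max(1, int(keep_ratio * len(w)))]
        X, idx = X[:, keep], idx[keep]          # iterative removal
    w = represent(X)
    classes = np.unique(y)
    res = [np.linalg.norm(probe - X[:, y[idx] == c] @ w[y[idx] == c])
           for c in classes]
    return int(classes[np.argmin(res)])

# Toy data: two well-separated classes of 2-D "faces", probe near class 0.
train = np.array([[1.0, 0.95, 0.9, 0.0, 0.05, 0.1],
                  [0.0, 0.05, 0.1, 1.0, 0.95, 0.9]])
labels = [0, 0, 0, 1, 1, 1]
probe = np.array([0.92, 0.08])
pred = dsp_classify(train, labels, probe)
```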
Video reconstruction quality largely depends on the ability of the employed sparse domain to adequately represent the underlying video in Distributed Compressed Video Sensing (DCVS). In this paper, we propose a novel dynamic global Principal Component Analysis (PCA) sparse representation algorithm for video, based on the sparse-land model and nonlocal similarity. First, grouping by matching is performed at the decoder using key frames that have already been recovered. Second, we apply PCA to each group (sub-dataset) to compute the principal components from which the sub-dictionary is constructed. Finally, the non-key frames are reconstructed from random measurement data using a Compressed Sensing (CS) reconstruction algorithm with sparse regularization. Experimental results show that our algorithm outperforms the DCT and K-SVD dictionaries.
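The per-group PCA sub-dictionary step can be sketched directly. The group construction and atom count below are illustrative assumptions; the real algorithm forms groups by patch matching across recovered key frames.

```python
import numpy as np

def pca_subdictionary(group, n_atoms):
    """Sub-dictionary for one group of similar patches (columns): the mean
    patch plus the top principal components of the centered group."""
    mean = group.mean(axis=1, keepdims=True)
    U = np.linalg.svd(group - mean, full_matrices=False)[0]
    return mean.ravel(), U[:, :n_atoms]

def reconstruct(patch, mean, D):
    """Project a patch onto the sub-dictionary and reconstruct it."""
    return mean + D @ (D.T @ (patch - mean))

# Patches lying on a 2-D affine subspace of R^5.
base = np.ones(5)
b1 = np.array([1.0, 0, 0, 0, -1])
b2 = np.array([0.0, 1, 0, -1, 0])
group = np.column_stack([base + a * b1 + b * b2
                         for a, b in [(1, 0), (0, 1), (-1, 1), (2, -1)]])
mean, D = pca_subdictionary(group, n_atoms=2)
patch = base + 3 * b1 - 2 * b2                 # a new patch on the subspace
recon = reconstruct(patch, mean, D)
```

Because the group genuinely spans a 2-D affine subspace, two principal components reconstruct any patch on that subspace exactly; with real video patches the sub-dictionary is only approximately adapted.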
Dual Energy CT (DECT) has recently gained significant research interest owing to its ability to discriminate materials, and it is therefore widely applied in nuclear safety and security inspection. With current technology, DECT is typically realized with two sets of detectors, one for detecting lower-energy X-rays and another for detecting higher-energy X-rays. This makes the imaging system expensive and limits its practical deployment. In 2009, our group performed a preliminary study of a new low-cost system design that uses a complete data set for the lower energy level and only a sparse data set for the higher energy level. This design can significantly reduce the cost of the system, as it contains a much smaller number of detector elements; the reconstruction method is its key component. In the present study, we further validate this system and propose a robust method involving three main steps: (1) estimate the missing data iteratively under total variation (TV) constraints; (2) use the reconstruction from the complete lower-energy CT data set to form an initial estimate of the projection data for the higher energy level; (3) use ordered views to accelerate the computation. Numerical simulations with different numbers of detector elements have also been examined. The results demonstrate that 1 + 14% CT data is sufficient to provide a rather good reconstruction of both the effective atomic number and the electron density distribution of the scanned object, instead of two complete CT data sets.
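Step (1), iteratively estimating the missing higher-energy data, can be illustrated with a much simpler smoothness prior. The sketch below uses harmonic (neighborhood-average) inpainting as a stand-in for the paper's TV-constrained estimation; the data, mask, and iteration count are all illustrative.

```python
import numpy as np

def inpaint_missing(data, mask, iters=500):
    """Fill missing entries (mask == False) by iterative neighborhood
    averaging, re-imposing the known samples after every sweep. This is a
    harmonic-smoothness stand-in for TV-constrained estimation."""
    u = np.where(mask, data, data[mask].mean())
    for _ in range(iters):
        avg = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
        u = np.where(mask, data, avg)          # data consistency step
    return u

# A smooth "sinogram" (a linear ramp) with two interior samples missing.
data = np.add.outer(np.arange(8.0), np.arange(8.0))
mask = np.ones((8, 8), dtype=bool)
mask[3, 3] = mask[4, 5] = False
filled = inpaint_missing(data, mask)
```

Because the ramp is harmonic, the interior gaps are filled exactly from their known neighbors; a TV prior would instead favor piecewise-constant structure, which is why it suits real projection data better.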
Funding: Supported by the National Natural Science Foundation of China (Grants No. 61571388 and No. U1233109).
Funding: Project (50808025) supported by the National Natural Science Foundation of China; Project (20090162110057) supported by the Doctoral Fund of the Ministry of Education of China.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 51375217).
Funding: The National Natural Science Foundation of China (No. 51375217).
Funding: Project (2019JJ40047) supported by the Hunan Provincial Natural Science Foundation of China; Project (kq2014057) supported by the Changsha Municipal Natural Science Foundation, China.
Funding: Supported by the Innovation Project of Graduate Students of Jiangsu Province, China under Grants No. CXZZ12_0466 and No. CXZZ11_0390; the National Natural Science Foundation of China under Grants No. 61071091, No. 61271240, No. 61201160, and No. 61172118; the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province, China under Grant No. 12KJB510019; the Science and Technology Research Program of Hubei Provincial Department of Education under Grants No. D20121408 and No. D20121402; and the Program for Research Innovation of Nanjing Institute of Technology under Grant No. CKJ20110006.
Funding: Supported by the National Natural Science Foundation of China (No. 11305073).