Geophysicists have paid greater attention to vintage-merge processing of seismic data and to extracting more valuable information from it. A matching filter is used in many important areas, such as splicing seismic data, matching seismic data of different vintages and sources, 4-D seismic monitoring, and so on. The traditional matching filter method is subject to many restrictions and usually struggles to overcome the impact of noise. Building on the traditional matching filter, we propose a wavelet-domain L1-norm optimal matching filter. In this paper, two different types of seismic data are decomposed into the wavelet domain, different detailed effective information is extracted for L1-norm optimal matching, and ideal results are achieved. Based on the model test, we find that the L1-norm optimal matching filter attenuates the noise, and the waveform, amplitude, and phase coherence of the resulting signals are better than with the conventional method. The field data test shows that, with our method, the seismic events in the filtered results have better continuity, which meets the requirements of high-precision seismic matching.
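The core of the method above is an L1-norm optimal match between decomposed signals. As a minimal illustration of why the L1 criterion resists noise bursts, here is a hedged numpy sketch that fits a single matching scale factor in the L1 sense via a weighted median; the paper matches per wavelet subband with a full filter, so the function name and the scalar-only setup are illustrative assumptions:

```python
import numpy as np

def l1_match_scale(x, y):
    # L1-optimal scalar a for min_a sum_i |a*x_i - y_i|:
    # the weighted median of the ratios y_i/x_i with weights |x_i|.
    x, y = np.asarray(x, float), np.asarray(y, float)
    keep = x != 0
    r, w = y[keep] / x[keep], np.abs(x[keep])
    order = np.argsort(r)
    r, w = r[order], w[order]
    cw = np.cumsum(w)
    return r[np.searchsorted(cw, 0.5 * cw[-1])]

rng = np.random.default_rng(0)
ref = rng.standard_normal(200)        # "reference" trace
target = 2.5 * ref                    # same trace with a different gain
target[:5] += 50.0                    # a few strong noise bursts
a = l1_match_scale(ref, target)       # robust to the bursts
```

A least-squares fit of the same scale would be pulled by the bursts; the weighted median stays on the bulk of the samples.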
Fisher Linear Discriminant Analysis (FLDA) is a classic supervised feature-extraction method that seeks the optimal projection matrix by maximizing the Fisher criterion. The standard Fisher criterion is based on the L2-norm metric, which generally lacks robustness and is sensitive to outliers. To improve robustness, an FLDA based on the L1-norm metric, together with an algorithm for solving the resulting optimization problem, is introduced. Experimental results show that, in many cases, the L1-norm FLDA achieves better classification accuracy and robustness than the traditional L2-norm FLDA.
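As a toy illustration of the L2- versus L1-based Fisher criterion discussed above, the sketch below scores candidate projection directions under both metrics. The function name and the two-class 2-D setup are illustrative assumptions, not the paper's algorithm, which optimizes the L1 criterion rather than merely evaluating it:

```python
import numpy as np

def fisher_score(w, X1, X2, norm=2):
    # Fisher-style criterion along direction w: between-class separation
    # over within-class scatter, with an L2 (squared) or L1 measure.
    w = w / np.linalg.norm(w)
    p1, p2 = X1 @ w, X2 @ w
    between = abs(p1.mean() - p2.mean())
    if norm == 2:
        within = np.sum((p1 - p1.mean()) ** 2) + np.sum((p2 - p2.mean()) ** 2)
        return between ** 2 / within
    within = np.sum(np.abs(p1 - p1.mean())) + np.sum(np.abs(p2 - p2.mean()))
    return between / within

rng = np.random.default_rng(1)
X1 = rng.normal([0.0, 0.0], 0.5, (100, 2))
X2 = rng.normal([3.0, 0.0], 0.5, (100, 2))
X1[0] = [0.0, 40.0]  # one extreme outlier along the non-discriminative axis
good = np.array([1.0, 0.0])   # direction that separates the classes
bad = np.array([0.0, 1.0])    # direction that does not
```

Under both norms the discriminative direction scores higher here; the L1 within-class term grows only linearly with the outlier, which is the robustness the abstract refers to.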
This paper focuses on the 2-median location improvement problem on tree networks: modify the weights of edges at minimum cost such that the overall sum of the weighted distances of the vertices to the closer of two prescribed vertices in the modified network is bounded above by a given value. The l1 norm and l∞ norm are used to measure the total modification cost. These two problems have strong practical application backgrounds and important theoretical research value. It is shown that such problems can be transformed into a series of sum-type and bottleneck-type continuous knapsack problems, respectively. Based on properties of the optimal solution, two O(n^2) algorithms for solving the two problems are proposed, where n is the number of vertices of the tree.
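The reduction above lands on continuous knapsack subproblems. A minimal greedy solver for the sum-type continuous knapsack, in its standard textbook form rather than the paper's specific subproblem instances, can be sketched as:

```python
def continuous_knapsack(values, weights, capacity):
    # Greedy by value density; optimal for the continuous (fractional)
    # knapsack, O(n log n) from the sort.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, frac = 0.0, [0.0] * len(values)
    for i in order:
        if capacity <= 0:
            break
        amount = min(weights[i], capacity)
        frac[i] = amount / weights[i]
        total += values[i] * frac[i]
        capacity -= amount
    return total, frac

best, frac = continuous_knapsack([60.0, 100.0, 120.0], [10.0, 20.0, 30.0], 50.0)
```

The first two items fit whole and two-thirds of the third fills the remaining capacity, for a total of 240.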
The aim of this paper is to estimate density functions or distribution functions measured by the Wasserstein metric, a typical statistical distance that is often required in statistical learning. A scheme based on the classical Bernstein approximation is presented. To obtain error estimates for the scheme, the problem reduces to estimating the L1 norm of the Bernstein approximation error for monotone C^1 functions, which was rarely discussed in classical approximation theory. Finally, we obtain a probability estimate in terms of the statistical distance.
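The Bernstein approximant and its L1 error are easy to check numerically. A small sketch, assuming a simple monotone C^1 test function on [0, 1] and a midpoint-rule estimate of the L1 norm (both choices are illustrative, not the paper's analysis):

```python
import math

def bernstein(f, n, x):
    # Degree-n Bernstein approximant: sum_k f(k/n) * C(n,k) x^k (1-x)^(n-k).
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def l1_error(f, n, m=2000):
    # Midpoint-rule estimate of the L1 distance between f and B_n f on [0, 1].
    return sum(abs(f(t) - bernstein(f, n, t))
               for t in ((j + 0.5) / m for j in range(m))) / m

f = lambda t: t * t   # a monotone C^1 function on [0, 1]
err10, err40 = l1_error(f, 10), l1_error(f, 40)
```

For this f the identity B_n f - f = t(1 - t)/n gives an exact L1 error of 1/(6n), so the estimate should shrink like 1/n.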
The traditional compressed-sensing method for improving resolution operates in the frequency domain. This method is affected by noise, which limits the signal-to-noise ratio and resolution, resulting in poor inversion. To solve this problem, we improve the objective function by extending the frequency domain to the Gaussian frequency domain, which has denoising and smoothing characteristics. Moreover, the reconstruction of the sparse reflection coefficient is implemented by a mixed L1_L2-norm algorithm, which converts the L0-norm problem into an L1-norm problem. Additionally, a fast iterative thresholding algorithm is introduced to speed up convergence, and the conjugate-gradient algorithm is used for debiasing, eliminating the threshold constraint and amplitude error. The model test indicates that the proposed method is superior to the conventional OMP and BPDN methods. It not only has better denoising and smoothing effects but also improves the recognition accuracy of thin interbeds. The field-data application also shows that the new method can effectively broaden the seismic frequency band and improve seismic data resolution, so the method is conducive to the identification of thin interbeds in beach-bar sand reservoirs.
For a SISO linear discrete-time system with a specified input signal, a novel method to realize optimal l1 regulation control is presented. Using the technique of converting a polynomial equation into its corresponding matrix equation, a linear programming problem for the optimal l1 norm of the system's output error map is developed; it includes the first term and the last term of the map sequence in the objective function and in the right-hand vector of its constraint matrix equation, respectively. The adjustability of the width of the constraint matrix makes it possible to trade off the order of the optimal regulator against the value of the minimum objective norm, in particular to obtain the optimal regulator of minimum order. By norm-scaling rules for the constraint matrix equation, the optimal solution can be scaled directly or obtained by solving a linear programming problem with an l1-norm objective.
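Minimizing an l1 norm via linear programming, as described above, follows the standard epigraph reformulation. A hedged sketch with scipy.optimize.linprog for a generic min ||Ax - b||_1, not the paper's specific constraint matrix equation:

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    # Epigraph trick:  min ||Ax - b||_1  <=>  min sum(t)
    # subject to  A x - t <= b  and  -A x - t <= -b, variables (x, t).
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (n + m))
    return res.x[:n]

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
b = np.array([1.0, 2.0, 3.0, -1.0])  # consistent: x = (1, 2) fits exactly
x = l1_min(A, b)
```

Because the toy system is consistent, the LP drives every auxiliary t to zero and recovers the exact solution.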
An improved earthquake location technique presented by Prugger and Gendzwill (1988) is introduced in this paper. Its characteristics are: 1) adopting the difference between the mean of the observed arrival times and the mean of the calculated travel times as the reference origin time of the event when computing travel-time residuals, thus yielding the 'true' minimum of the travel-time residuals; 2) choosing the L1-norm statistic of the residuals, which is better suited to earthquake location; 3) using a simplex optimization algorithm to search for the minimum residual directly and iteratively; it requires no derivative calculations, avoids matrix inversions, can be used with any velocity structure and different network geometries, and solves for the hypocentral parameters (λ, φ, h) rapidly and accurately; 4) the origin time is derived separately afterwards, so the trade-off between focal depth and origin time is avoided. All these features allow us to obtain more accurate Tibetan earthquake locations under sparse-network conditions. In this paper, we examined these schemes for our mobile and permanent networks in Tibet with artificial data sets; then, using these methods, we determined the hypocentral parameters of some events observed during the field-work period of this project, from July 1991 to September 1991, and of the seven problematic earthquakes during 1989-1990. The hypocentral location errors are estimated to be less than approximately 3.6 km. The events with focal depth greater than 40 km appear to be distributed parallel to the Qinghai-Sichuan-Yunnan arc structural zone.
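Points 1) through 3) above can be mimicked in a few lines: eliminate the origin time by de-meaning both sides, then let a derivative-free simplex search minimize the L1 residual. The sketch below uses scipy's Nelder-Mead on a flat 2-D toy geometry with a constant velocity; the stations, velocity, and names are illustrative assumptions, not the paper's velocity model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy flat geometry: four stations (km) and a constant velocity -- assumptions.
stations = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
v = 6.0  # km/s

def l1_residual(xy, arrivals):
    # De-meaning both sides removes the unknown origin time (scheme 1),
    # and the L1 norm gives the robust residual statistic (scheme 2).
    tt = np.linalg.norm(stations - xy, axis=1) / v
    return np.sum(np.abs((arrivals - arrivals.mean()) - (tt - tt.mean())))

true_xy = np.array([12.0, 18.0])
arrivals = np.linalg.norm(stations - true_xy, axis=1) / v + 5.0  # origin t0 = 5 s
res = minimize(l1_residual, x0=np.array([15.0, 15.0]), args=(arrivals,),
               method="Nelder-Mead")  # derivative-free simplex search (scheme 3)
```

Once the epicenter is found, the origin time follows separately as the mean observed arrival minus the mean computed travel time, matching scheme 4.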
Motion deblurring is a basic problem in the field of image processing and analysis. This paper proposes a new method for single-image blind deblurring that benefits both kernel estimation and non-blind deconvolution. Experiments show that image details corrupt the structure of the estimated kernel, especially when the blur kernel is large, so we extract the image structure with salient edges by a method based on relative total variation (RTV). In addition, the traditional sparse-prior approach to motion-blur kernel estimation is conducive to obtaining a sparse blur kernel, but such priors do not ensure the continuity of the kernel and sometimes yield noisy estimates. We therefore propose an L0-based kernel refinement method to overcome these shortcomings. For non-blind deconvolution we adopt an L1/L2 regularization term. Compared with the traditional method, the L1/L2-norm method adapts better to image structure, and the constructed energy functional better describes the sharp image. For this model, an effective algorithm based on alternating minimization is presented.
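The preference that L1/L2 regularization encodes, namely that sharp images have sparser gradients, can be seen on a 1-D toy signal. A small sketch, illustrative only and not the paper's deconvolution energy:

```python
import numpy as np

def l1_over_l2(signal):
    # Normalized sparsity of the gradient: sparser gradients (crisp
    # edges) give a smaller ratio, which an L1/L2 prior rewards.
    g = np.diff(signal)
    return np.sum(np.abs(g)) / np.linalg.norm(g)

sharp = np.r_[np.zeros(50), np.ones(50)]           # one crisp step edge
kernel = np.ones(9) / 9.0                          # box "motion blur"
blurred = np.convolve(sharp, kernel, mode="same")  # smeared edge
```

The sharp step concentrates its gradient in a single sample (ratio exactly 1), while blurring spreads the same total variation over many small samples and raises the ratio, so minimizing L1/L2 of the gradients pushes the estimate back toward the sharp image.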
The goal of this paper is to develop a computational model and a corresponding efficient algorithm for obtaining a sparse representation of a surface fitted to given scattered data. The basic idea of the model is to utilize a principal shift-invariant (PSI) space and l_1-norm minimization. To obtain approximation solutions of different sparsity, the problem is posed as a multilevel LASSO (MLASSO) model with different regularization parameters. The MLASSO model can be solved efficiently by the alternating direction method of multipliers. Numerical experiments indicate that, compared to the AGLASSO model and the basic MBA algorithm, the MLASSO model provides an acceptable compromise between minimizing the data-mismatch term and the sparsity of the solution. Moreover, the MLASSO solution can reflect the regions of the underlying surface where high gradients occur.
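The MLASSO subproblems are LASSO-type and, as stated above, are solved by the alternating direction method of multipliers. A generic ADMM-for-LASSO sketch in scaled-dual form; the names and the single-level setup are illustrative, not the multilevel PSI-space model itself:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=200):
    # ADMM (scaled dual) for  min 0.5||Ax - b||^2 + lam||z||_1  s.t. x = z.
    n = A.shape[1]
    factor = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cache the x-update solve
    Atb = A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        x = factor @ (Atb + rho * (z - u))                               # ridge
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrink
        u = u + x - z                                                    # dual
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((80, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = [2.0, -1.5]   # sparse coefficients to recover
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

A multilevel variant would re-solve this with a decreasing sequence of regularization parameters, warm-starting each level from the previous solution.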
We present a variational method for subdivision surface reconstruction from a noisy dense mesh. A new set of subdivision rules with continuous sharpness control is introduced into Loop subdivision for better modeling of subdivision surface features such as semi-sharp creases, creases, and corners. The key idea is to assign a sharpness value to each edge of the control mesh to continuously control the surface features. Based on the new subdivision rules, a variational model with an L_1 norm is formulated to find the control mesh and the corresponding sharpness values of the subdivision surface that best fits the input mesh. An iterative solver based on the augmented Lagrangian method and particle swarm optimization is used to solve the resulting non-linear, non-differentiable optimization problem. Experimental results show that our method handles meshes with sharp/semi-sharp features and noise well.
Funding (seismic matching-filter paper): sponsored by the Natural Science Foundation of China (No. 41074075) and the Graduate Innovation Fund of Jilin University (No. 20121070).
Funding (2-median location improvement paper): the National Natural Science Foundation of China (No. 10801031).
Funding (Bernstein approximation estimation paper): supported by the 973 Project of China (2006cb303102) and the National Science Foundation of China (11461161006, 11201079).
Funding (compressed-sensing resolution paper): National Science and Technology Major Project (Nos. 2016ZX05006-002 and 2017ZX05072-001).
Funding (optimal l1 regulation paper): this work was supported by the National Science Foundation of China (No. 60274036).
Funding (motion deblurring paper): partially supported by the National Natural Science Foundation of China (No. 61173102).
Funding (MLASSO surface fitting paper): supported by the National Natural Science Foundation of China (Grant Nos. 11526098, 11001037, 11290143 and 11471066), the Research Foundation for Advanced Talents of Jiangsu University (Grant No. 14JDG034), the Natural Science Foundation of Jiangsu Province (Grant No. BK20160487), and the Fundamental Research Funds for the Central Universities (Grant No. DUT15LK44).
Funding (subdivision surface reconstruction paper): supported by the National Natural Science Foundation of China (No. 61602015), an MOE AcRF Tier 1 Grant of Singapore (RG26/15), the Beijing Natural Science Foundation (No. 4162019), the open funding project of the State Key Lab of Virtual Reality Technology and Systems at Beihang University (No. BUAAVR-16KF-06), and the Research Foundation for Young Scholars of Beijing Technology and Business University.