Abstract
The L1/2 regularizer yields sparser solutions than the L1 regularizer and is easier to solve than the L0 regularizer. This paper introduces L1/2 regularization into seismic data reconstruction and proposes a reconstruction method based on smoothed L1/2 regularization. First, an L1/2-regularized seismic data reconstruction model is established, and the L1/2 regularization term is approximated by a smooth asymptotic function, which overcomes the numerical oscillation encountered when solving the L1/2-regularized problem. Next, the dictionary-learning algorithm is modified according to the smoothed L1/2 regularization theory, which greatly improves the training efficiency of the redundant dictionary. Finally, the seismic data are reconstructed by combining the trained redundant dictionary with the iterative half-thresholding algorithm. Applied to a seismic shot gather with 232 traces and 751 samples per trace, the proposed method improves the signal-to-noise ratio of the reconstruction by 3.3 dB compared with the L1-regularized K-SVD (K-singular value decomposition) method. In terms of computational efficiency, its dictionary training takes only about one third of the time required by L1-regularized K-SVD, and its reconstruction takes about half.
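For orientation only, the sketch below writes out the generic L1/2-regularized sparse reconstruction model and the component-wise half-thresholding operator underlying the iterative half-thresholding algorithm named in the abstract. It is a minimal compilable LaTeX note, not material from the paper: the notation (observed data y, redundant dictionary D, sparse coefficients x, regularization weight λ, step size μ) and the operator form are the standard ones from the L1/2-regularization literature, and the paper's specific smoothing function and dictionary-learning modifications are not reproduced here.

% Sketch only: generic L1/2 model and half-thresholding operator
% (standard notation and operator form; not taken from the paper).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
L1/2-regularized reconstruction over a redundant dictionary $D$:
\[
  \min_{x}\ \tfrac{1}{2}\lVert y - Dx \rVert_2^2 + \lambda \lVert x \rVert_{1/2}^{1/2},
  \qquad
  \lVert x \rVert_{1/2}^{1/2} = \sum_i \lvert x_i \rvert^{1/2}.
\]
Iterative half-thresholding update with step size $\mu$, applying the
half-thresholding operator $h_{\lambda\mu,1/2}$ component-wise:
\[
  x^{k+1} = h_{\lambda\mu,1/2}\!\bigl(x^{k} + \mu D^{\mathsf T}(y - Dx^{k})\bigr),
\]
\[
  h_{\lambda,1/2}(t) =
  \begin{cases}
    \dfrac{2t}{3}\left(1 + \cos\!\left(\dfrac{2\pi}{3}
      - \dfrac{2}{3}\arccos\!\left(\dfrac{\lambda}{8}
        \Bigl(\dfrac{\lvert t \rvert}{3}\Bigr)^{-3/2}\right)\right)\right),
      & \lvert t \rvert > \dfrac{\sqrt[3]{54}}{4}\,\lambda^{2/3}, \\[1.2em]
    0, & \text{otherwise}.
  \end{cases}
\]
\end{document}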
Authors
张繁昌
兰南英
张珩
ZHANG Fanchang; LAN Nanying; ZHANG Heng (School of Geosciences, China University of Petroleum (East China), Qingdao, Shandong 266580, China)
Source
《中国矿业大学学报》
2019, No. 5, pp. 1045-1052 (8 pages)
Journal of China University of Mining & Technology
Funding
National Natural Science Foundation of China (41874146)
Keywords
seismic data reconstruction
smoothed L1/2 regularization
dictionary learning
iterative half thresholding