Multi-source seismic technology is an efficient seismic acquisition method that requires a group of blended seismic data to be separated into single-source seismic data for subsequent processing. The separation of blended seismic data is a linear inverse problem. According to the relationship between the number of shots and the number of simultaneous sources in the acquisition system, this separation divides into an easily solved determined or overdetermined linear inverse problem and an underdetermined linear inverse problem that is difficult to solve. For the latter, this paper presents an optimization method that imposes a sparsity constraint on the wavefields to construct the objective function of the inversion, which is then solved with an iterative thresholding method. For the most extreme underdetermined separation problem, single-shot acquisition with multiple sources, this paper presents a method of pseudo-deblending with random-noise filtering: approximate common-shot gathers are obtained through the pseudo-deblending process, and the random noise that appears when these gathers are sorted into common-receiver gathers is eliminated by filtering. The proposed separation methods are applied to three types of numerically simulated data (noise-free data, data with random noise, and data with linear regular noise) and obtain satisfactory results. The noise-suppression performance of these methods, particularly on single-shot blended seismic data, verifies their effectiveness.
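The sparsity-constrained inversion solved by iterative thresholding can be illustrated with a minimal sketch on a generic underdetermined linear system. This is not the authors' implementation: the random matrix standing in for the blending operator, the regularization weight, and the iteration count are all illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding, the proximal step for an l1 penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter):
    """Iterative thresholding for min 0.5*||A x - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                       # underdetermined: 40 rows, 100 unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for the blending operator
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                             # "blended" measurements
x_hat = ista(A, y, lam=0.01, n_iter=2000)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

With a sufficiently sparse wavefield the iteration typically recovers the unknowns to a small relative error despite having fewer equations than unknowns, which is the mechanism the abstract relies on.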
In this paper, we consider the data separation problem, where the original signal is composed of two distinct subcomponents, via a dual-frames-based split-analysis approach. We show that the two subcomponents, which are sparse in two different general frames respectively, can be exactly recovered with high probability when the measurement matrix is a Weibull random matrix (not Gaussian) and the two frames satisfy a mutual coherence property. Our result may be significant for analysing the split-analysis model for data separation.
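The two-frame separation setting can be made concrete with a toy sketch: a signal built from a few spikes plus a few DCT atoms is separated by l1 minimization over the concatenated dictionary. This uses the identity and an orthonormal DCT as the two "frames" and plain iterative thresholding, not the paper's Weibull-measurement split-analysis model; all sizes and sparsity levels are illustrative.

```python
import numpy as np

n = 128
# Two illustrative dictionaries: spikes (identity) and an orthonormal DCT basis.
j = np.arange(n)[:, None]; f = np.arange(n)[None, :]
D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * f / (2 * n))
D[:, 0] /= np.sqrt(2.0)

s = np.zeros(n); s[[10, 90]] = [4.0, -5.0]   # spike component
c = np.zeros(n); c[[3, 7]] = [5.0, -4.0]     # DCT-sparse component
x = s + D @ c                                 # observed mixture

# l1 surrogate of split-analysis: iterative thresholding over [I, D].
A = np.hstack([np.eye(n), D])
L = 2.0                                       # ||A||_2^2 = 2 since D is orthonormal
z = np.zeros(2 * n)
for _ in range(3000):
    g = z + A.T @ (x - A @ z) / L
    z = np.sign(g) * np.maximum(np.abs(g) - 0.01 / L, 0.0)
s_hat, c_hat = z[:n], z[n:]
```

Because the spike/DCT pair has low mutual coherence and the total sparsity is small, the two components come apart almost exactly, which is the qualitative content of the recovery guarantee.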
The main aim of this work is to improve the security of data hiding for secret image sharing. The privacy and security of digital information have become a primary concern nowadays due to the enormous usage of digital technology. The security and privacy of users' images are ensured through reversible data hiding techniques. Existing data hiding techniques do not provide optimum performance with multiple end nodes. These issues are addressed by the Separable Data Hiding and Adaptive Particle Swarm Optimization (SDHAPSO) algorithm to attain optimal performance. Image encryption, data embedding, and data extraction/image recovery are the main phases of the proposed approach. The DFT is used to extract the transform-coefficient matrix from the original image. DFT coefficients are in floating-point format, so the round function is used to bring the image into integer format. After the data hider obtains the encrypted image, additional data are embedded into the high-frequency coefficients. SDHAPSO is mainly utilized to improve performance through optimal selection of pixel locations within the image for concealing secret bits. In addition, the embedding capacity for secret data is enhanced while maintaining the visual quality of the image. The simulation results show that the proposed SDHAPSO technique offers high-level security with higher PSNR, higher security level, lower MSE, and higher correlation than existing techniques. Hence, enhanced protection of sensitive information is attained, which improves overall system performance.
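The embedding step and the PSNR/MSE evaluation used above can be sketched generically. The sketch below is plain LSB substitution into integer coefficients plus the standard image-quality metrics; it is not the paper's DFT/PSO pipeline, and the random "high-frequency coefficients" are an illustrative stand-in.

```python
import numpy as np

def embed_lsb(coeffs, bits):
    """Replace the least significant bit of each integer coefficient with a payload bit."""
    return (coeffs & ~1) | bits

def extract_lsb(coeffs):
    """Read the payload bits back out of the coefficients."""
    return coeffs & 1

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak * peak / m)

rng = np.random.default_rng(4)
coeffs = rng.integers(0, 256, size=64)   # stand-in "high-frequency" coefficients
bits = rng.integers(0, 2, size=64)       # secret payload
stego = embed_lsb(coeffs, bits)
```

Each coefficient changes by at most 1, so the PSNR of the stego data against the original stays high, which is the trade-off the abstract's metrics quantify.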
As global warming continues, monitoring changes in terrestrial water storage becomes increasingly important, since it plays a critical role in understanding global change and in water resource management. In North America, as elsewhere in the world, changes in water resources strongly impact agriculture and animal husbandry. From a combination of Gravity Recovery and Climate Experiment (GRACE) gravity and Global Positioning System (GPS) data, it was recently found that water storage from August 2002 to March 2011 recovered after the extreme Canadian Prairies drought between 1999 and 2005. In this paper, we use GRACE Release 5 monthly gravity data to track the water storage change from August 2002 to June 2014. In the Canadian Prairies and the Great Lakes areas, the total water storage is found to have increased during the last decade at a rate of 73.8 ± 14.5 Gt/a, which is larger than that found in the previous study due to the longer time span of GRACE observations used and the reduction of the leakage error. We also find a long-term decrease of water storage at a rate of -12.0 ± 4.2 Gt/a in the Ungava Peninsula, possibly due to permafrost degradation and less snow accumulation during the winter in the region. In addition, the effect of the total mass gain in the surveyed area on present-day sea level amounts to -0.18 mm/a and thus should be taken into account in studies of global sea-level change.
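The storage rates quoted above are linear trends fitted to a monthly mass series. A minimal sketch of that fit, on synthetic data rather than real GRACE solutions, is a least-squares regression with a bias, a trend, and annual sine/cosine terms (a standard design for such series); the noise level and seasonal amplitude below are made up for illustration, with the paper's 73.8 Gt/a used only as the synthetic truth.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 12, 1 / 12.0)            # 12 years of monthly epochs, in years
true_rate = 73.8                          # Gt/a, taken from the text as synthetic truth
series = (5.0 + true_rate * t
          + 30.0 * np.sin(2 * np.pi * t)  # annual cycle (illustrative amplitude)
          + rng.normal(0.0, 10.0, t.size))  # observation noise (illustrative)

# Design matrix: bias + linear trend + annual sine/cosine.
G = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(G, series, rcond=None)
rate = coef[1]                            # estimated trend in Gt/a
```

Including the seasonal terms in the same regression keeps the annual cycle from leaking into the trend estimate, which is why the fitted rate lands close to the value used to generate the series.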
There are four serious problems in discriminant analysis. We developed an optimal linear discriminant function (optimal LDF) based on the minimum number of misclassifications (minimum NM, or MNM) using integer programming (IP). We call this LDF Revised IP-OLDF. Only this LDF can discriminate the cases on the discriminant hyperplane (Problem 1). This LDF and a hard-margin SVM (H-SVM) can discriminate linearly separable data (LSD) exactly; other LDFs may not discriminate the LSD theoretically (Problem 2). When Revised IP-OLDF discriminates the Swiss banknote data with six variables, we find that the MNM of a two-variable model such as (X4, X6) is zero. Because MNMk decreases monotonically (MNMk >= MNM(k+1)), the sixteen models including (X4, X6) all have MNMs of zero. Because there had been no research on LSD until now, we surveyed three other linearly separable data collections: 18 exam-score data sets, the Japanese 44-cars data, and six microarray data sets. When we discriminate the exam scores with MNM = 0, we find that the generalized inverse matrix technique causes the serious Problem 3, and we confirmed this fact with the cars data. Finally, we claim that discriminant analysis is not inferential statistics, because there are no standard errors (SEs) of error rates and discriminant coefficients (Problem 4). Therefore, we proposed the "100-fold cross-validation for the small sample" method (the Method). With this breakthrough, we can choose the best model having the minimum mean error rate (M2) in the validation sample and obtain two 95% confidence intervals (CIs) of the error rate and the discriminant coefficients. When we discriminate the exam scores by this new method, we obtain the surprising result that seven LDFs, except for Fisher's LDF, are almost the same as the trivial LDFs.
In this research, we discriminate the Japanese 44-cars data because it lets us discuss the four problems. There are six independent variables to discriminate 29 regular cars and 15 small cars. These data are linearly separable by the emission rate (X1) and the number of seats (X3). We examine the validity of the new model-selection procedure of discriminant analysis, in which the model with the minimum mean error rate (M2) in the validation samples is the best model. We had examined this procedure with the exam scores and obtained good results. Moreover, the 95% CIs of the eight LDFs offer a real perception of discriminant theory. However, the exam scores are different from the ordinal data. Therefore, we apply our theory and procedure to the Japanese 44-cars data and confirm the same conclusion.
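The MNM = 0 condition above simply says a linear function exists that classifies every case correctly. A minimal way to exhibit such a zero-error linear discriminant on separable data is the classical perceptron; this is not Revised IP-OLDF (which uses integer programming), and the synthetic two-group data below, loosely echoing the 29/15 split, is an illustrative assumption.

```python
import numpy as np

def perceptron_separable(X, y, max_epochs=1000):
    """Return (w, b) with zero misclassifications if found, else None.
    A zero-error fit corresponds to MNM = 0, i.e. linearly separable data."""
    w = np.zeros(X.shape[1]); b = 0.0
    for _ in range(max_epochs):
        errors = 0
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) <= 0:     # misclassified (or on the hyperplane)
                w += yi * xi; b += yi; errors += 1
        if errors == 0:
            return w, b
    return None

rng = np.random.default_rng(2)
X0 = rng.normal([-2.0, -2.0], 0.5, (29, 2))   # e.g. "regular cars" (illustrative)
X1 = rng.normal([2.0, 2.0], 0.5, (15, 2))     # e.g. "small cars" (illustrative)
X = np.vstack([X0, X1])
y = np.r_[-np.ones(29), np.ones(15)]
res = perceptron_separable(X, y)
```

For truly separable data the perceptron is guaranteed to terminate with every margin strictly positive; cases that land exactly on the hyperplane are counted as errors here, which is the boundary issue Problem 1 is about.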
In this paper, we study the compressed data separation (CDS) problem, i.e., sparse data separation from a few linear random measurements. We propose the nonconvex ℓ_q-split analysis with an ℓ_∞ constraint, for 0 < q ≤ 1, and call the algorithm the ℓ_q-split-analysis Dantzig selector (ℓ_q-split-analysis DS). We show that two distinct subcomponents that are approximately sparse in terms of two different dictionaries can be stably approximated via the ℓ_q-split-analysis DS, provided that the measurement matrix satisfies either a classical D-RIP (Restricted Isometry Property with respect to dictionaries and the ℓ_2 norm) or a relatively new (D, q)-RIP (RIP with respect to dictionaries and the ℓ_q quasi-norm) condition, and that the two dictionaries satisfy a mutual coherence condition between them. For Gaussian random measurements, the number of measurements needed for the (D, q)-RIP condition is far smaller than that needed for the D-RIP and (D, 1)-RIP conditions when q is small enough.
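The mutual coherence condition between the two dictionaries is easy to compute directly: it is the largest absolute inner product between unit-norm columns taken across the two dictionaries. A small sketch, using the spike/DCT pair as an assumed example rather than the paper's dictionaries:

```python
import numpy as np

def mutual_coherence(D1, D2):
    """Largest absolute inner product between unit-norm columns of D1 and D2."""
    D1 = D1 / np.linalg.norm(D1, axis=0)
    D2 = D2 / np.linalg.norm(D2, axis=0)
    return float(np.max(np.abs(D1.T @ D2)))

n = 64
spikes = np.eye(n)                              # first dictionary: spikes
j = np.arange(n)[:, None]; f = np.arange(n)[None, :]
dct = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j + 1) * f / (2 * n))
dct[:, 0] /= np.sqrt(2.0)                       # second dictionary: orthonormal DCT
mu = mutual_coherence(spikes, dct)              # close to sqrt(2/n) for this pair
```

Low coherence (here about sqrt(2/n) ≈ 0.18) is what keeps the two subcomponents distinguishable; as μ grows toward 1, the dictionaries share near-identical atoms and separation guarantees degrade.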
XML data can be represented by a tree or graph, and query processing for XML data requires structural information among nodes. Designing an efficient labeling scheme for the nodes of order-sensitive XML trees is one of the important routes to excellent management of XML data. Previous labeling schemes such as region and prefix schemes often sacrifice updating performance and suffer from growing label space when new nodes are inserted. To overcome these limitations, in this paper we propose a new labeling idea: separating structure from order. Following this idea, a novel Prime-based Middle Fraction Labeling Scheme (PMFLS) is designed, together with a series of algorithms to obtain the structural relationships among nodes and to support updates. PMFLS combines the advantages of both prefix and region schemes, expressing structural information and sequential information separately. PMFLS also supports order-sensitive updates without relabeling or recalculation, and its label space is stable. Experiments and analysis on several benchmarks show that PMFLS handles updates efficiently and also significantly improves query-processing performance with good scalability.
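The "middle fraction" idea behind order labels that never need recalculation can be sketched with exact fractions: a new sibling's order label is taken strictly between its neighbours (here via the mediant), so existing labels are untouched no matter how many insertions occur. This only illustrates the order component; PMFLS's actual prime-based structural labels are a separate mechanism.

```python
from fractions import Fraction

def label_between(a, b):
    """A new order label strictly between a and b (the mediant of the two
    fractions), so neighbouring labels never need relabeling on insert."""
    return Fraction(a.numerator + b.numerator, a.denominator + b.denominator)

labels = [Fraction(1), Fraction(2)]     # initial order labels of two siblings
for _ in range(5):                      # repeatedly insert a node after the first
    labels.insert(1, label_between(labels[0], labels[1]))
```

Every insertion produces a fresh label between the chosen neighbours, document order stays a simple numeric comparison, and no existing label is rewritten, which is the update property the abstract claims for PMFLS.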
To realize distributed storage and management of a secret halftone image in a blockchain, a secure separable reversible data hiding (RDH) scheme for halftone images in blockchain (SSRDHB) was proposed. A secret halftone image is used as the original image to generate multiple share images, which can be stored in a distributed manner at each node of the blockchain, and additional data can be hidden to manage each share image. Firstly, the secret halftone image is encrypted with the Zu Chongzhi (ZUC) algorithm using an encryption key (EK). Secondly, a method of hiding data in the parity (odd or even) of the share data is proposed, and a share data set is generated by a polynomial operation. Thirdly, multiple share images are obtained by selecting share data; different additional data can be hidden by controlling the parity of the share data, and the additional data are protected by a data-hiding key (DK). After the sharing process, a receiver holding both keys can recover the halftone image and reveal the additional data, and the two processes are separable. Experimental results show that multiple share images carrying hidden additional data can be obtained through SSRDHB, that the halftone image can be recovered with 100% fidelity by picking any sufficient part of the share images, and that one piece of additional data can be revealed with 100% accuracy from any one share image.
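The "polynomial operation" style of share generation, where any sufficient subset of shares recovers the secret exactly, is the idea behind classical Shamir threshold sharing. The sketch below shares a single byte value over a prime field; the paper's actual scheme (ZUC encryption, parity-based hiding, halftone images) is more involved, and all parameters here are illustrative.

```python
import random

P = 257  # prime just above the byte range, so any pixel value 0..255 fits

def make_shares(secret, k, n):
    """Split one value into n shares; any k of them recover it (Shamir)."""
    rng = random.Random(0)                      # fixed seed for reproducibility
    coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P) to read back the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123, k=3, n=5)
```

Because the secret is the degree-(k-1) polynomial's value at x = 0, any k of the n shares interpolate it exactly, matching the abstract's claim of 100% recovery from any sufficient part of the shares.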
Funding (dual-frames split-analysis data separation paper): supported by the National Natural Science Foundation of China (11171299 and 91130009).
Funding (GRACE terrestrial water storage paper): supported by the National Natural Science Foundation of China (Grant Nos. 41431070, 41174016, 41274026, 41274024, 41321063), the National Key Basic Research Program of China (973 Program, 2012CB957703), the CAS/SAFEA International Partnership Program for Creative Research Teams (KZZD-EW-TZ-05), and the Chinese Academy of Sciences.
Funding (compressed data separation, ℓ_q-split-analysis paper): supported by the National Key Research and Development Program of China (Grant No. 2021YFA1003500) and the NSFC (Grant Nos. U21A20426, 11971427, 12071426 and 11901518).
Funding (PMFLS XML labeling paper): supported by the National Science Foundation of China (Grant Nos. 61272067, 61370229), the National Key Technology R&D Program of China (Grant Nos. 2012BAH27F05, 2013BAH72B01), the National High Technology R&D Program of China (Grant No. 2013AA01A212), and the S&T Projects of Guangdong Province (Grant Nos. 2016B010109008, 2014B010117007, 2015A030401087, 2015B010109003, 2015B010110002).
Funding (SSRDHB halftone image sharing paper): supported by the Beijing City Board of Education Science and Technology Key Project (KZ201710015010), the Scientific Research Common Program of Beijing Municipal Commission of Education (KM202110015004), the Beijing Institute of Graphic Communication Excellent Course Construction Project for Postgraduates (21090121021), the Beijing Institute of Graphic Communication Project (Ec202007, Eb202004), and the Initial Funding for the Doctoral Program of Beijing Institute of Graphic Communication (27170120003/022).