In this article, multiple attribute decision-making problems are solved using the vague normal set (VNS). It is possible to generalize the vague set (VS) and q-rung fuzzy set (FS) into the q-rung vague set (VS). A log q-rung normal vague weighted averaging (log q-rung NVWA), a log q-rung normal vague weighted geometric (log q-rung NVWG), a log generalized q-rung normal vague weighted averaging (log Gq-rung NVWA), and a log generalized q-rung normal vague weighted geometric (log Gq-rung NVWG) operator are discussed in this article. A description is provided of the scoring function, accuracy function, and operational laws of the log q-rung VS. The algorithms underlying these functions are also described. A numerical example is provided to extend the Euclidean distance and the Hamming distance. Additionally, idempotency, boundedness, commutativity, and monotonicity of the log q-rung VS are examined, as they facilitate recognizing the optimal alternative more quickly and help clarify conceptualization. We chose five anemia patients with four types of symptoms (seizures, emotional shock or hysteria, brain cause, and high fever) who had either retrograde amnesia, anterograde amnesia, transient global amnesia, post-traumatic amnesia, or infantile amnesia. Natural numbers q are used to express the results of the models. To demonstrate the effectiveness and accuracy of the models under investigation, we compare several existing models with those that have been developed.
The Hamming distances of all negacyclic codes of length 2^s over the Galois ring GR(2^α, m) are given. In particular, the Lee distances of all negacyclic codes over Z4 of length 2^s are obtained. The Gray images of such negacyclic codes over Z4 are also determined under the Gray map.
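As a quick illustration of the relationship the abstract relies on, here is a small sketch using the standard Gray map on Z4 (0→00, 1→01, 2→11, 3→10) and the Lee weight min(x, 4−x); these conventions are assumed here, not taken from the paper itself. Under this map, the Lee distance between Z4 words equals the Hamming distance between their Gray images.

```python
# Standard Gray map on Z4 (an assumed convention, not the paper's notation).
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def gray_image(word):
    """Concatenate the Gray images of the symbols of a Z4 word."""
    return [bit for x in word for bit in GRAY[x]]

def lee_distance(u, v):
    """Lee distance over Z4: sum of min(d, 4 - d) per coordinate."""
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(u, v))

def hamming_distance(u, v):
    return sum(a != b for a, b in zip(u, v))

u, v = [0, 1, 2, 3], [2, 1, 3, 3]
# The Gray map is an isometry from (Z4^n, Lee) to (F2^{2n}, Hamming).
assert lee_distance(u, v) == hamming_distance(gray_image(u), gray_image(v))
```

This isometry is why determining Lee distances of Z4 codes and Hamming distances of their binary Gray images are equivalent tasks.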
Given a generalized minimum cost flow problem, the corresponding inverse problem is to find a minimal adjustment of the cost function so that the given generalized flow becomes optimal to the problem. In this paper, we consider both types of weighted Hamming distances for measuring the adjustment. In the sum-type case, it is shown that the inverse problem is APX-hard. In the bottleneck-type case, we present a polynomial time algorithm.
By using the generalized MacWilliams theorem, we give new representations for the expectation and variance of the Hamming distance between two i.i.d. random vectors. By using the new representations, we derive a lower bound for the variance, and present a simple and direct proof of the inequality of [1].
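For the expectation (only — the paper's variance bound is not reproduced here), a short sketch under the assumption of i.i.d. coordinates drawn from a common distribution p over a finite alphabet: each coordinate disagrees with probability 1 − Σᵢ pᵢ², so E[d_H(X, Y)] = n(1 − Σᵢ pᵢ²). The brute-force check below verifies the closed form by enumeration.

```python
from itertools import product

def expected_hamming(n, p):
    """Closed form: E[d_H] = n * (1 - sum_i p_i^2) for i.i.d. coordinates."""
    return n * (1 - sum(q * q for q in p))

def expected_hamming_bruteforce(n, p):
    """Enumerate all pairs of words, weighting each pair by its probability."""
    k = len(p)
    total = 0.0
    for x in product(range(k), repeat=n):
        for y in product(range(k), repeat=n):
            prob = 1.0
            for a, b in zip(x, y):
                prob *= p[a] * p[b]
            total += prob * sum(a != b for a, b in zip(x, y))
    return total

p = [0.5, 0.25, 0.25]
assert abs(expected_hamming(2, p) - expected_hamming_bruteforce(2, p)) < 1e-9
```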
In spatial modulation systems, the reliability of the active antenna detection is of vital importance, since the modulated symbols tend to be correctly demodulated when the active antennas are accurately identified. In this paper, we propose a spatial coded modulation (SCM) scheme, which improves the accuracy of the active antenna detection by coding over the transmit antennas. Specifically, the antenna activation pattern in the SCM corresponds to a codeword in a properly designed codebook with a larger minimum Hamming distance than the conventional spatial modulation. As the minimum Hamming distance increases, the reliability of the active antenna detection is directly enhanced, which yields a better system reliability. In addition to the reliability, the proposed SCM scheme also achieves a higher capacity with the identical antenna configuration compared to the conventional counterpart. The optimal maximum likelihood detector is first formulated. Then, a low-complexity suboptimal detector is proposed to reduce the computational complexity. Theoretical derivations of the channel capacity and the bit error rate are presented in various channel scenarios. Further derivation on performance bounds is also provided to reveal the insight into the benefit of increasing the minimum Hamming distance. Numerical results validate the analysis and demonstrate that the proposed SCM outperforms conventional spatial modulation techniques in both channel capacity and system reliability.
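The core quantity in the argument above is the minimum pairwise Hamming distance of the activation-pattern codebook. The sketch below computes it for a toy setup; the patterns shown are hypothetical examples, not the codebook designed in the paper. Conventional spatial modulation uses one-hot patterns (minimum distance 2); a coded codebook can trade pattern count for a larger minimum distance.

```python
from itertools import combinations

def min_hamming_distance(codebook):
    """Minimum pairwise Hamming distance over all codeword pairs."""
    return min(sum(a != b for a, b in zip(u, v))
               for u, v in combinations(codebook, 2))

# Conventional SM over 4 transmit antennas: one active antenna per use.
sm_patterns = [(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)]
# A toy coded codebook: fewer patterns, larger minimum distance.
coded_patterns = [(0, 0, 0, 0), (1, 1, 1, 0)]

assert min_hamming_distance(sm_patterns) == 2
assert min_hamming_distance(coded_patterns) == 3
```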
Gray mapping is a well-known way to improve the performance of regular constellation modulation, but it is challenging to apply directly to irregular alternatives. To address this issue, in this paper a unified bit-to-symbol mapping method is designed for generalized constellation modulation (i.e., regular and irregular shaping). The objective of the proposed approach is to minimize the average bit error probability by reducing the Hamming distance (HD) of symbols with larger values of pairwise error probability. Simulation results show that conventional constellation modulations (i.e., phase shift keying and quadrature amplitude modulation (QAM)) with the proposed mapping rule yield the same performance as classical Gray mapping. Moreover, the recently developed golden angle modulation (GAM) with the proposed mapping method is capable of providing around 1 dB gain over the conventional mapping counterpart and offers comparable performance to QAM with Gray mapping.
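The principle being generalized here can be seen in the regular case: with a Gray labeling of 8-PSK, the labels of adjacent constellation points (the dominant pairwise error events) differ in exactly one bit, so the Hamming distance between likely-confused symbols is minimal. A minimal sketch of that property, using the standard binary-reflected Gray code:

```python
def gray_code(n):
    """Standard binary-reflected Gray code of the integer n."""
    return n ^ (n >> 1)

def hd(x, y):
    """Hamming distance between two integer labels."""
    return bin(x ^ y).count("1")

# Labels placed in Gray order around the 8-PSK circle.
labels = [gray_code(i) for i in range(8)]

# Every pair of circularly adjacent labels differs in exactly one bit.
assert all(hd(labels[i], labels[(i + 1) % 8]) == 1 for i in range(8))
```

For irregular constellations there is no such natural circular ordering, which is precisely why a unified mapping rule driven by pairwise error probabilities is needed.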
A new approach to improve the test efficiency of random testing is presented in this paper. In conventional random testing, each test pattern is selected randomly, regardless of the tests previously generated. This paper introduces the concept of random-like testing. The method provided appears to share the concepts used in random testing, but actually takes an opposite approach in order to improve the efficiency of random testing. In a random-like testing sequence, the total distance among all test patterns is chosen to be maximal, so that the fault sets detected by one test pattern are as different as possible from those detected by the tests previously applied. The procedure to construct a random-like testing sequence (RLTS) is described in detail. Theorems to justify the effectiveness and usefulness of the procedure are developed. Experimental results on benchmark circuits, as well as on other circuits, are also given to evaluate the performance of the new approach.
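A hedged sketch of the selection rule only (the exact RLTS construction in the paper is more involved): greedily pick, from a random candidate pool, the next test pattern that maximizes its total Hamming distance to the patterns already chosen. The pool size and seed below are arbitrary choices for illustration.

```python
import random

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def random_like_sequence(n_bits, n_tests, pool_size=64, seed=1):
    """Greedy max-total-distance selection from random candidate pools."""
    rng = random.Random(seed)
    chosen = [tuple(rng.randint(0, 1) for _ in range(n_bits))]
    while len(chosen) < n_tests:
        pool = [tuple(rng.randint(0, 1) for _ in range(n_bits))
                for _ in range(pool_size)]
        # Keep the candidate farthest (in total Hamming distance) from
        # everything already applied.
        best = max(pool, key=lambda c: sum(hamming(c, t) for t in chosen))
        chosen.append(best)
    return chosen

seq = random_like_sequence(16, 8)
```

Successive patterns in `seq` therefore tend to be far apart, so each new pattern targets fault sets as different as possible from those already covered.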
An adaptive quantum-inspired evolutionary algorithm based on Hamming distance (HD-QEA) was presented to optimize the network coding resources in multicast networks. In the HD-QEA, the diversity among individuals was taken into consideration, and a suitable rotation angle step (RAS) was assigned to each individual according to the Hamming distance. Performance comparisons were conducted among the HD-QEA, a basic quantum-inspired evolutionary algorithm (QEA), and an individual's fitness based adaptive QEA. A solid demonstration was provided that the proposed HD-QEA is better than the other two algorithms in terms of convergence speed and global optimization capability when they are employed to optimize the network coding resources in multicast networks.
When the number of runs is large, searching for uniform designs in the sense of low discrepancy is an NP-hard problem. The number of runs of most of the available uniform designs is small (≤50). In this article, the authors employ a so-called Hamming distance method to construct two- or three-level uniform designs such that some of the resulting uniform designs have a large number of runs. Several infinite classes for the existence of uniform designs with the same Hamming distance between any distinct rows are also obtained simultaneously. Two measures of uniformity, the centered L2-discrepancy (CD, for short) and the wrap-around L2-discrepancy (WD, for short), are employed.
To quickly find documents with high similarity in existing documentation sets, a fingerprint group merging retrieval algorithm is proposed to address both sides of the problem: a given similarity threshold cannot be too low, and fewer fingerprints can lead to low accuracy. It can be proved that the efficiency of similarity retrieval is improved by the fingerprint group merging retrieval algorithm with a lower similarity threshold. Experiments with the lower similarity threshold r=0.7 and high fingerprint bits k=400 demonstrate that the CPU time cost decreases from 1921 s to 273 s. Theoretical analysis and experimental results verify the effectiveness of this method.
Ransomware is considered one of the most threatening cyberattacks. Existing solutions have focused mainly on discriminating ransomware by analyzing the apps themselves, but they have overlooked possible ways of hiding ransomware apps and making them difficult to detect and then analyze. Therefore, this paper proposes a novel ransomware hiding model by utilizing a block-based High-Efficiency Video Coding (HEVC) steganography approach. The main idea of the proposed steganography approach is the division of the secret ransomware data and cover HEVC frames into different blocks. After that, a Least Significant Bit (LSB) based Hamming Distance (HD) calculation is performed between the secret data's divided blocks and the cover frames. Finally, the secret data bits are hidden in the marked bits of the cover HEVC frame blocks based on the calculated HD value. The main advantage of the suggested steganography approach is its minor impact on the cover HEVC frames after embedding the ransomware, while preserving the histogram attributes of the cover video frame with high imperceptibility. This is due to the utilization of an adaptive steganography cost function during the embedding process. The proposed ransomware hiding approach was heavily examined using subjective and objective tests, applying different HEVC streams with diverse resolutions and different secret ransomware apps of various sizes. The obtained results prove the efficiency of the proposed steganography approach by achieving high capacity and a successful embedding process while ensuring the hidden ransomware's undetectability within the video frames. For example, in terms of embedding quality, the proposed model achieved a high peak signal-to-noise ratio that reached 59.3 dB and a low mean-square error of 0.07 for the examined HEVC streams. Also, out of 65 antivirus engines, no engine could detect the existence of the embedded ransomware app.
Mapping design criteria of bit-interleaved coded modulation with iterative decoding (BICM-ID) with square 16QAM are analyzed. Three of the existing criteria are analyzed and compared with each other. Through the comparison, two main characteristics of the mapping design criteria are found: the harmonic mean of the minimum squared Euclidean distance, and the average of the Hamming distances at the nearest Euclidean distance. Based on these two characteristics, a novel mapping design criterion is proposed, and a label mapping named mixed mapping is found according to it. Simulation results show that mixed mapping performs better than the other mappings in a BICM-ID system.
A double-pattern associative memory neural network with a "pattern loop" is proposed. It can store 2N-bit bipolar binary patterns up to the order of 2^(2N), retrieve part or all of the stored patterns that have the minimum Hamming distance to the input pattern, completely eliminate spurious patterns, and has higher storage efficiency and reliability than conventional associative memories. The length of a pattern stored in this associative memory can easily be extended from 2N to kN.
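A minimal sketch of the retrieval behaviour described above (not the double-pattern network itself): given bipolar stored patterns, return all of them at minimum Hamming distance from the probe, which is the "retrieve part or all of the stored patterns" semantics.

```python
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def recall(stored, probe):
    """Return every stored pattern at minimum Hamming distance from probe."""
    d = [hamming(p, probe) for p in stored]
    dmin = min(d)
    return [p for p, di in zip(stored, d) if di == dmin]

stored = [(1, -1, 1, -1), (1, 1, 1, 1), (-1, -1, -1, -1)]
# The probe is equidistant from the first two patterns, so both are recalled.
assert recall(stored, (1, -1, 1, 1)) == [(1, -1, 1, -1), (1, 1, 1, 1)]
```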
Currently, many biometric systems maintain the user's biometrics and templates in plaintext format, which brings great privacy risks to users' biometric information. Biometrics are unique and almost unchangeable, so it is a great concern for users whether their biometric information will be leaked. To address this issue, this paper proposes a confidential comparison algorithm for iris feature vectors with masks, and develops a privacy-preserving iris verification scheme based on the ElGamal encryption scheme. In our scheme, the multiplicative homomorphism of encrypted features is used to compare iris features and their mask information. Also, this paper improves the Hamming distance of iris features, which makes the similarity matching work better than existing ones. Experimental results confirm the practicality of our proposed schemes in real-world applications; that is, for iris feature vectors and masks of 2048 bits, nearly 12 comparisons can be performed per second.
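For context, a common plaintext form of the masked iris Hamming distance is HD = ||(A ⊕ B) ∧ maskA ∧ maskB|| / ||maskA ∧ maskB||, i.e., the fraction of jointly valid bits on which the two iris codes disagree. This formula is stated here as an assumption about the underlying comparison; the paper's contribution is performing such a comparison homomorphically over encrypted features.

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance over the jointly unmasked bit positions."""
    valid = [ma & mb for ma, mb in zip(mask_a, mask_b)]
    diff = [(a ^ b) & v for a, b, v in zip(code_a, code_b, valid)]
    n_valid = sum(valid)
    # Convention when no bit is jointly valid: report maximal distance.
    return sum(diff) / n_valid if n_valid else 1.0

a  = [1, 0, 1, 1, 0, 0]
b  = [1, 1, 1, 0, 0, 1]
ma = [1, 1, 1, 1, 0, 1]
mb = [1, 1, 1, 1, 1, 0]
# 4 jointly valid bits, of which a and b disagree on 2 -> HD = 0.5
assert masked_hamming(a, b, ma, mb) == 0.5
```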
With regard to the previous forecasting verification results of the various similarity measurements, starting from the formulas, the fact that the analog deviation and the Hamming distance share the same features was analyzed and then proved. The limitations of the analog deviation used in similarity forecasting were discussed. Data from 96 stations at 850 hPa in East Asia during May 1-30, 2010 were used for similarity selection experiments among several frequently used similarity measurements and the new one proposed by the authors of this paper. The results indicated that the analog deviation and the Hamming distance were very similar, with over 80% of the selected samples being the same. The analog deviation differed most from the similarity coefficient, with over 70% of the selected samples being different. The new analog quantity was more similar to the correlation coefficient, with 60% of the selected samples being the same. The analog deviation and the Hamming distance reflect how similar the distances of the samples are to each other, while the correlation coefficient and the new analog quantity reflect how similar the shapes of the selected samples are to each other.
This paper combines interval-valued intuitionistic fuzzy sets and rough sets. It studies roughness in interval-valued intuitionistic fuzzy sets and proposes a kind of interval-valued intuitionistic fuzzy rough set model under the equivalence relation in crisp sets, which extends the classical rough sets defined by Pawlak.
In the present paper, the geometry of the Boolean space B^n in terms of Hausdorff distances between subsets and subset sums is investigated. The main results are the algebraic and analytical expressions for representing classical figures in B^n and the functions of distances between them. In particular, equations in sets are considered and their interpretations in combinatorial terms are given.
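The central notion above can be sketched directly: the Hausdorff distance between two subsets of B^n under the Hamming metric, i.e., the larger of the two directed distances max over one set of the minimum distance to the other. This is the standard textbook definition, assumed here to match the paper's usage.

```python
def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def hausdorff(A, B):
    """Hausdorff distance between subsets A, B of B^n (Hamming metric)."""
    d_ab = max(min(hamming(a, b) for b in B) for a in A)  # A -> B direction
    d_ba = max(min(hamming(a, b) for a in A) for b in B)  # B -> A direction
    return max(d_ab, d_ba)

A = [(0, 0, 0), (1, 1, 1)]
B = [(0, 0, 1)]
# (1,1,1) is 2 away from the nearest point of B, which dominates.
assert hausdorff(A, B) == 2
```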
In this paper, we propose two-stage constructions for rate-compatible shortened polar (RCSP) codes. For the Stage-I construction, the shortening pattern and the frozen bits are jointly designed so that the shortened bits are completely known by the decoder. Besides, a distance-greedy algorithm is presented to improve the minimum Hamming distance of the codes. To design the remaining Stage-II frozen bits, three different construction algorithms are further presented, called the Reed-Muller (RM) construction, the Gaussian Approximation (GA) construction, and the RM-GA construction. We then give numerical results on the row weight distribution of the generator matrix after the Stage-I and Stage-II constructions, which show that the proposed constructions can efficiently increase the minimum Hamming distance. Simulation results show that the proposed RCSP codes have excellent frame error rate (FER) performance at different code lengths and code rates. More specifically, the RM-GA construction performs best and can achieve at most a 0.8 dB gain compared to the Wang14 and quasi-uniform puncturing (QUP) schemes. The RM construction is designed completely by the distance constraint without channel evaluation and thus has the simplest structure. Interestingly, it still has better FER performance than the existing shortening/puncturing schemes, especially in the high signal-to-noise ratio (SNR) region.
A uniform experimental design (UED) is a widely used, powerful and efficient methodology for designing experiments with high-dimensional inputs, limited resources and unknown underlying models. A UED enjoys the following two significant advantages: (i) it is a robust design, since it does not require specifying a model before experimenters conduct their experiments; and (ii) it provides uniformly scattered design points in the experimental domain, thus giving a good representation of this domain with fewer experimental trials (runs). Many real-life experiments involve hundreds or thousands of active factors, and thus large UEDs are needed. Constructing large UEDs using the existing techniques is an NP-hard problem and an extremely time-consuming heuristic search process, and a satisfactory result is not guaranteed. This paper presents a new, effective and easy technique, the adjusted Gray map technique (AGMT), for constructing (nearly) UEDs with large numbers of four-level factors and runs by converting designs with s two-level factors and n runs into (nearly) UEDs with 2^(t−1)s four-level factors and 2^t n runs for any t≥0, using two simple transformation functions. Theoretical justifications for the uniformity of the resulting four-level designs are given, which provide some necessary and/or sufficient conditions for obtaining (nearly) uniform four-level designs. The results show that the AGMT is much easier and better than the existing widely used techniques, and it can be effectively used to simply generate new recommended large (nearly) UEDs with four-level factors.
Funding: supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (No. RS-2023-00218176), the Korea Institute for Advancement of Technology (KIAT) Grant funded by the Korea government (MOTIE) (P0012724, The Competency Development Program for Industry Specialists), and the Soonchunhyang University Research Fund.
Funding: the National Natural Science Foundation of China under Grant No. 60673074.
Funding: supported in part by the National Key Research and Development Program of China under Grant 2021YFB2900502, in part by the National Science Foundation of China under Grant 62001179, and in part by the Fundamental Research Funds for the Central Universities under Grant 2020kfyXJJS111.
Funding: supported by the National Natural Science Foundation of China (61473179), the Doctor Foundation of Shandong Province (BS2013DX032), and the Youth Scholars Development Program of Shandong University of Technology (2014-09).
Funding: this work was partially supported by the NNSF of China (10441001), the Project sponsored by the SRF for ROCS (SEM), and the NSF of Hubei Province. The second author's research was also partially supported by the Pre-studies Project of the NBRP (2003CCA2400).
Funding: Project (60873081) supported by the National Natural Science Foundation of China; Project (NCET-10-0787) supported by the Program for New Century Excellent Talents in University, China; Project (11JJ1012) supported by the Natural Science Foundation of Hunan Province, China.
文摘To quickly find documents with high similarity in existing documentation sets, fingerprint group merging retrieval algorithm is proposed to address both sides of the problem:a given similarity threshold could not be too low and fewer fingerprints could lead to low accuracy. It can be proved that the efficiency of similarity retrieval is improved by fingerprint group merging retrieval algorithm with lower similarity threshold. Experiments with the lower similarity threshold r=0.7 and high fingerprint bits k=400 demonstrate that the CPU time-consuming cost decreases from 1 921 s to 273 s. Theoretical analysis and experimental results verify the effectiveness of this method.
文摘Ransomware is considered one of the most threatening cyberattacks.Existing solutions have focused mainly on discriminating ransomware by analyzing the apps themselves,but they have overlooked possible ways of hiding ransomware apps and making them difficult to be detected and then analyzed.Therefore,this paper proposes a novel ransomware hiding model by utilizing a block-based High-Efficiency Video Coding(HEVC)steganography approach.The main idea of the proposed steganography approach is the division of the secret ransomware data and cover HEVC frames into different blocks.After that,the Least Significant Bit(LSB)based Hamming Distance(HD)calculation is performed amongst the secret data’s divided blocks and cover frames.Finally,the secret data bits are hidden into the marked bits of the cover HEVC frame-blocks based on the calculated HD value.The main advantage of the suggested steganography approach is the minor impact on the cover HEVC frames after embedding the ransomware while preserving the histogram attributes of the cover video frame with a high imperceptibility.This is due to the utilization of an adaptive steganography cost function during the embedding process.The proposed ransomware hiding approach was heavily examined using subjective and objective tests and applying different HEVC streams with diverse resolutions and different secret ransomware apps of various sizes.The obtained results prove the efficiency of the proposed steganography approach by achieving high capacity and successful embedding process while ensuring the hidden ransomware’s undetectability within the video frames.For example,in terms of embedding quality,the proposed model achieved a high peak signal-to-noise ratio that reached 59.3 dB and a low mean-square-error of 0.07 for the examined HEVC streams.Also,out of 65 antivirus engines,no engine could detect the existence of the embedded ransomware app.
Abstract: Mapping design criteria for bit-interleaved coded modulation with iterative decoding (BICM-ID) with square 16QAM are analyzed. Three existing criteria are analyzed and compared with each other. Through the comparison, two main characters of the mapping design criteria are identified: the harmonic mean of the minimum squared Euclidean distance, and the average Hamming distance to the nearest neighbors in Euclidean distance. Based on these two characters, a novel mapping design criterion is proposed, and a label mapping named mixed mapping is found by searching according to it. Simulation results show that mixed mapping outperforms the other mappings in the BICM-ID system.
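The second character can be computed directly for any labeling. As a sketch, the code below builds a hypothetical Gray labeling of square 16QAM (not the paper's mixed mapping) and evaluates the average Hamming distance between each label and its nearest Euclidean-distance neighbors:

```python
import math

# Hypothetical Gray labelling of square 16QAM: each 4-bit label is split
# into two Gray-coded 2-bit halves, one per axis.
LEVELS = [-3, -1, 1, 3]
GRAY2 = [0, 1, 3, 2]  # 2-bit Gray sequence along each axis

constellation = {}
for i, gi in enumerate(GRAY2):
    for j, gj in enumerate(GRAY2):
        constellation[(gi << 2) | gj] = (LEVELS[i], LEVELS[j])

def avg_hamming_to_nearest(points):
    """Average Hamming distance between each label and the labels of its
    nearest neighbours in Euclidean distance (the second 'character')."""
    total = count = 0
    for a, pa in points.items():
        dmin = min(math.dist(pa, pb) for b, pb in points.items() if b != a)
        for b, pb in points.items():
            if b != a and math.isclose(math.dist(pa, pb), dmin):
                total += bin(a ^ b).count("1")
                count += 1
    return total / count
```

For a Gray labeling every nearest-neighbor pair differs in exactly one bit, so this measure is 1.0; anti-Gray-style labelings raise it, which the criterion trades off against the Euclidean-distance term.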
Abstract: A double-pattern associative memory neural network with a "pattern loop" is proposed. It can store 2N-bit bipolar binary patterns up to the order of 2^(2N), retrieve some or all of the stored patterns that have the minimum Hamming distance to the input pattern, completely eliminate spurious patterns, and has higher storage efficiency and reliability than conventional associative memories. The length of a pattern stored in this associative memory can easily be extended from 2N to kN.
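The retrieval rule stated above (return every stored pattern at minimum Hamming distance from the probe) can be sketched directly; this is only the specification of the recall behavior, not the network's pattern-loop mechanism:

```python
def hamming(a, b):
    """Hamming distance between two equal-length bipolar patterns."""
    return sum(x != y for x, y in zip(a, b))

def recall(stored, probe):
    """Return all stored patterns at minimum Hamming distance from the probe.
    Returning the full minimum-distance set (rather than one winner) is what
    rules out spurious patterns in the retrieval result."""
    dmin = min(hamming(p, probe) for p in stored)
    return [p for p in stored if hamming(p, probe) == dmin]
```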
Funding: partially supported by the National Natural Science Foundation of China (Grant Nos. 61772150, 61862012), the National Cryptography Development Fund of China under project MMJJ20170217, the Guangxi Key R&D Fund under project AB17195025, the Guangxi Natural Science Foundation under grant 2018GXNSFAA281232, the open project of the Guangxi Key Laboratory of Cryptography and Information Security (Grant Nos. GCIS201622, GCIS201702), and the GUET Excellent Graduate Thesis Program (16YJPYSS23).
Abstract: Currently, many biometric systems maintain users' biometrics and templates in plaintext format, which brings great privacy risk to users' biometric information. Biometrics are unique and almost unchangeable, so whether their biometric information will be leaked is a great concern for users. To address this issue, this paper proposes a confidential comparison algorithm for iris feature vectors with masks, and develops a privacy-preserving iris verification scheme based on the El Gamal encryption scheme. In our scheme, the multiplicative homomorphism of encrypted features is used to compare iris features and their mask information. This paper also improves the Hamming distance of iris features, which makes the similarity matching work better than existing schemes. Experimental results confirm the practicality of the proposed schemes in real-world applications: for iris feature vectors and masks of 2048 bits, nearly 12 comparisons can be performed per second.
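For reference, the plaintext-domain quantity being protected is the classic masked fractional Hamming distance for iris codes: disagreements are counted only over bits that both masks mark as valid. The sketch below shows that computation in the clear; the paper's contribution is evaluating an improved variant of it under El Gamal homomorphic encryption:

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two iris codes (as integers),
    restricted to bit positions both masks mark as valid."""
    valid = mask_a & mask_b
    n_valid = bin(valid).count("1")
    if n_valid == 0:
        raise ValueError("no commonly valid bits to compare")
    disagreements = bin((code_a ^ code_b) & valid).count("1")
    return disagreements / n_valid
```

With 2048-bit codes and masks this is a few word-wide XOR/AND/popcount operations per comparison in plaintext; the roughly 12 comparisons per second reported above reflects the cost of doing the equivalent work on ciphertexts.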
Abstract: Motivated by the previous forecast verification results of various similarity measures, and starting from their formulas, it is analyzed and then proved that analog deviation and Hamming distance share the same features. The limitations of analog deviation in similarity forecasting are discussed. Data from 96 stations at 850 hPa in East Asia during May 1-30, 2010 were used in similarity selection experiments comparing several frequently used similarity measures and a new one proposed by the authors of this paper. The results indicate that analog deviation and Hamming distance are very similar, with over 80% of the selected samples being the same. Analog deviation differs most from the similarity coefficient, with over 70% of the selected samples being different. The new analog quantity is closer to the correlation coefficient, with 60% of the selected samples being the same. Analog deviation and Hamming distance reflect how close the samples are in distance, while the correlation coefficient and the new analog quantity reflect how similar the shapes of the selected samples are.
Funding: supported by grants from the National Natural Science Foundation of China (Nos. 10971185 and 10971186) and the Natural Science Foundation of Fujian Province in China (No. 2008F5066).
Abstract: This paper combines interval-valued intuitionistic fuzzy sets and rough sets. It studies roughness in interval-valued intuitionistic fuzzy sets and proposes a class of interval-valued intuitionistic fuzzy rough set models under equivalence relations on crisp sets, which extends the classical rough sets defined by Pawlak.
Abstract: In the present paper, the geometry of the Boolean space B<sup>n</sup> in terms of Hausdorff distances between subsets and subset sums is investigated. The main results are algebraic and analytical expressions for representing classical figures in B<sup>n</sup> and the functions of distances between them. In particular, equations in sets are considered and their interpretations in combinatorial terms are given.
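The central distance notion can be stated concretely: for subsets A, B of B^n with the Hamming metric, the Hausdorff distance is the larger of the two directed distances max over one set of the min distance to the other. A direct sketch (brute force, suitable only for small subsets):

```python
def hamming(u, v):
    """Hamming distance between two points of the Boolean cube B^n."""
    return sum(x != y for x, y in zip(u, v))

def hausdorff(A, B):
    """Hausdorff distance between nonempty subsets of B^n under the
    Hamming metric: max of the two directed distances."""
    d_ab = max(min(hamming(a, b) for b in B) for a in A)
    d_ba = max(min(hamming(a, b) for a in A) for b in B)
    return max(d_ab, d_ba)
```

Note the asymmetry of the directed distances: a point of B far from all of A can dominate even when every point of A is near B, which is why both directions are taken.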
Funding: This work was supported by the Interdisciplinary Scientific Research Foundation of GuangXi University (No. 2022JCC015), the National Natural Science Foundation of China (Nos. 61761006, 61961004, and 61762011), and the Natural Science Foundation of Guangxi of China (Nos. 2017GXNSFAA198263 and 2018GXNSFAA2940).
Abstract: In this paper, we propose two-stage constructions for rate-compatible shortened polar (RCSP) codes. In the Stage-I construction, the shortening pattern and the frozen bits are jointly designed so that the shortened bits are completely known by the decoder. Besides, a distance-greedy algorithm is presented to improve the minimum Hamming distance of the codes. To design the remaining Stage-II frozen bits, three construction algorithms are further presented, called the Reed-Muller (RM) construction, the Gaussian Approximation (GA) construction, and the RM-GA construction. We then give numerical results for the row weight distribution of the generator matrix after the Stage-I and Stage-II constructions, which show that the proposed constructions can efficiently increase the minimum Hamming distance. Simulation results show that the proposed RCSP codes have excellent frame error rate (FER) performance at different code lengths and code rates. More specifically, the RM-GA construction performs best and achieves up to 0.8 dB gain over the Wang14 and quasi-uniform puncturing (QUP) schemes. The RM construction is designed entirely by the distance constraint without channel evaluation and thus has the simplest structure; interestingly, it still has better FER performance than existing shortening/puncturing schemes, especially in the high signal-to-noise ratio (SNR) region.
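The quantity the distance-greedy algorithm optimizes is the minimum Hamming distance of the resulting linear code. As a reference sketch (not the paper's algorithm), it can be computed exactly for small codes by enumerating all nonzero codewords of a binary generator matrix:

```python
from itertools import product

def min_hamming_distance(G):
    """Minimum Hamming distance of the binary linear code generated by the
    rows of G, by enumerating all 2^k - 1 nonzero codewords (small k only)."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product((0, 1), repeat=k):
        if not any(msg):
            continue  # skip the all-zero codeword
        cw = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                cw = [c ^ r for c, r in zip(cw, row)]
        best = min(best, sum(cw))
    return best
```

For a linear code the minimum distance equals the minimum nonzero codeword weight, which is what the enumeration exploits; practical constructions instead reason about the generator matrix row weights, as in the row-weight-distribution results above.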
Funding: supported by the UIC Research Grants (Nos. R201912 and R202010), the Curriculum Development and Teaching Enhancement project (No. UICR0400046-21CTL), the Guangdong Provincial Key Laboratory of Interdisciplinary Research and Application for Data Science, BNU-HKBU United International College (No. 2022B1212010006), and the Guangdong Higher Education Upgrading Plan (2021-2025) (No. UICR0400001-22).
Abstract: A uniform experimental design (UED) is a widely used, powerful and efficient methodology for designing experiments with high-dimensional inputs, limited resources and unknown underlying models. A UED enjoys two significant advantages: (i) it is a robust design, since it does not require a model to be specified before experimenters conduct their experiments; and (ii) it provides uniformly scattered design points in the experimental domain, thus giving a good representation of the domain with fewer experimental trials (runs). Many real-life experiments involve hundreds or thousands of active factors, so large UEDs are needed. Constructing large UEDs with existing techniques is an NP-hard problem and an extremely time-consuming heuristic search process, and a satisfactory result is not guaranteed. This paper presents a new, effective and easy technique, the adjusted Gray map technique (AGMT), for constructing (nearly) UEDs with large numbers of four-level factors and runs by converting designs with s two-level factors and n runs into (nearly) UEDs with 2^(t-1)s four-level factors and 2^t n runs for any t ≥ 0, using two simple transformation functions. Theoretical justifications for the uniformity of the resulting four-level designs are given, which provide some necessary and/or sufficient conditions for obtaining (nearly) uniform four-level designs. The results show that the AGMT is much easier and better than the existing widely used techniques and can be effectively used to generate new recommended large (nearly) UEDs with four-level factors.
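The core two-level-to-four-level step can be sketched with the standard Gray map, which sends each pair of two-level columns to one four-level column. This shows only the plain Gray map; the paper's "adjusted" technique applies further transformation functions that are not reproduced here:

```python
# Standard Gray map: a pair of binary entries -> one four-level entry.
GRAY = {(0, 0): 0, (0, 1): 1, (1, 1): 2, (1, 0): 3}

def gray_map_design(design):
    """Convert a two-level design (0/1 entries, even number of columns)
    into a four-level design by applying the Gray map to column pairs."""
    return [
        [GRAY[(row[2 * j], row[2 * j + 1])] for j in range(len(row) // 2)]
        for row in design
    ]
```

A design with s two-level factors and n runs thus yields s/2 four-level factors on the same n runs; the AGMT's extra transformations are what scale this up to 2^(t-1)s factors and 2^t n runs while controlling uniformity.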