Steganography is a technique for hiding secret messages within a cover item while sending and receiving communications. From ancient times to the present, the security of secret or vital information has been a significant problem, and the development of secure communication methods that keep data transmissions readable only by the intended recipient has always been an area of interest. Researchers have therefore developed several approaches, including steganography, to enable safe data transit. In this review, we discuss image steganography based on the Discrete Cosine Transform (DCT) algorithm, among others, as well as image steganography based on multiple hashing algorithms such as the Rivest–Shamir–Adleman (RSA) method, the Blowfish technique, and the hash-least significant bit (LSB) approach. A novel method of hiding information in images is also developed, with minimal variance in image bits, making the method secure and effective. A cryptography mechanism is used in this strategy: before the data is encoded and embedded into a carrier image, it is verified to have been encrypted. Because embedded text in images usually conveys crucial information about the content, this review applies hash-table encryption to the message before hiding it within the picture, providing a more secure method of data transport; if the message is intercepted by a third party, several mechanisms can prevent its recovery. A second level of security involves encrypting and decrypting the steganographic images using different hashing algorithms.
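To make the hash-LSB idea concrete, the following minimal Python sketch hides an already-encrypted byte string in the least significant bits of a cover image's pixels and reads it back. The array layout, function names, and the use of NumPy are illustrative assumptions, not the reviewed method itself.

```python
# Minimal LSB sketch: hide the bits of a (pre-encrypted) message in the least
# significant bit of each pixel byte. Layout and names are illustrative only.
import numpy as np

def lsb_embed(pixels: np.ndarray, message: bytes) -> np.ndarray:
    """Embed message bits into the LSBs of a flat uint8 pixel array."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    if bits.size > pixels.size:
        raise ValueError("cover image too small for message")
    stego = pixels.copy()
    stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits  # overwrite the lowest bit
    return stego

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Read back n_bytes worth of bits from the LSBs."""
    bits = stego[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

if __name__ == "__main__":
    cover = np.random.randint(0, 256, size=64 * 64, dtype=np.uint8)  # stand-in cover image
    secret = b"ciphertext"                                           # assume already encrypted
    stego = lsb_embed(cover, secret)
    assert lsb_extract(stego, len(secret)) == secret
```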
The easy generation, storage, transmission and reproduction of digital images have caused serious abuse and security problems. Assurance of rightful ownership, integrity, and authenticity is a major concern to academia as well as industry. On the other hand, efficient search of the huge number of images available has become a great challenge. Image hashing is a technique suitable for image authentication and content-based image retrieval (CBIR). In this article, we review some representative image hashing techniques proposed in recent years, with emphasis on how to meet the conflicting requirements of perceptual robustness and security. Following a brief introduction to some earlier methods, we focus on a typical two-stage structure and some geometric-distortion-resilient techniques. We then introduce two image hashing approaches developed in our own research, and reveal security problems in some existing methods due to the absence of secret keys at certain stages of image feature extraction, or the availability of a large quantity of images, keys, or the hash function to the adversary. More research effort is needed to develop truly robust and secure image hashing techniques.
There has been a steep increase in data encoded as symmetric positive definite (SPD) matrices over the past decade. The set of SPD matrices forms a Riemannian manifold that constitutes an open convex cone in the vector space of matrices, which we sometimes call the SPD manifold. One of the fundamental problems in applications of the SPD manifold is to find the nearest neighbor of a queried SPD matrix. Hashing is a popular method for nearest neighbor search; however, it cannot be applied directly to the SPD manifold because of its non-Euclidean intrinsic geometry. Inspired by the idea of the kernel trick, a new hashing scheme for the SPD manifold based on random projection and quantization in an expanded data space is proposed in this paper. Experimental results on large-scale near-duplicate image detection show the effectiveness and efficiency of the proposed method.
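As a rough illustration of hashing data that lives on the SPD manifold, the sketch below flattens an SPD matrix with the matrix logarithm (a standard log-Euclidean device) and then applies sign random projection. This stands in for, and is not, the paper's kernel-trick expansion and quantization scheme.

```python
# Illustrative sketch (not the paper's method): hash an SPD matrix by mapping it
# to a flat vector with the matrix logarithm, then applying sign random projection.
import numpy as np

def spd_log_vector(S: np.ndarray) -> np.ndarray:
    """Matrix logarithm of an SPD matrix via eigendecomposition, flattened."""
    w, V = np.linalg.eigh(S)
    logS = (V * np.log(w)) @ V.T          # V diag(log w) V^T
    return logS.ravel()

def sign_random_projection(x: np.ndarray, n_bits: int, seed: int = 0) -> np.ndarray:
    """Binary code: signs of random Gaussian projections."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((n_bits, x.size))
    return (R @ x > 0).astype(np.uint8)

if __name__ == "__main__":
    A = np.random.randn(8, 8)
    S = A @ A.T + 8 * np.eye(8)           # a well-conditioned SPD matrix
    code = sign_random_projection(spd_log_vector(S), n_bits=32)
    print(code)
```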
Image hashing is a useful multimedia technology for many applications, such as image authentication, image retrieval, image copy detection and image forensics. In this paper, we propose a robust image hashing method based on random Gabor filtering and the discrete wavelet transform (DWT). Specifically, robust and secure image features are first extracted from the normalized image by Gabor filtering and a chaotic map called the skew tent map, and are then compressed via a single-level 2-D DWT. The image hash is finally obtained by concatenating the DWT coefficients of the LL sub-band. Many experiments on open image datasets are carried out, and the results illustrate that our hashing is robust, discriminative and secure. Receiver operating characteristic (ROC) curve comparisons show that our hashing achieves a better trade-off between robustness and discrimination than some popular image hashing algorithms.
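Two of the building blocks named in this abstract are easy to sketch: the skew tent chaotic map (used here, as an assumption, to derive a secret-keyed permutation of the features) and the LL sub-band of a one-level Haar DWT, approximated below by 2x2 block sums. The Gabor filtering stage and the authors' exact pipeline are omitted.

```python
# Rough sketch of two ingredients: a skew tent chaotic sequence (as a keying step)
# and the LL band of a one-level Haar DWT (2x2 block sums, up to scale).
import numpy as np

def skew_tent_sequence(x0: float, p: float, n: int) -> np.ndarray:
    """Iterate the skew tent map: x -> x/p if x < p, else (1 - x)/(1 - p)."""
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = x / p if x < p else (1.0 - x) / (1.0 - p)
        seq[i] = x
    return seq

def haar_ll(img: np.ndarray) -> np.ndarray:
    """LL sub-band of a one-level Haar DWT: sum of each 2x2 block divided by 2."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(float)
    return (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0

if __name__ == "__main__":
    image = np.random.rand(64, 64)
    ll = haar_ll(image)
    key = np.argsort(skew_tent_sequence(0.37, 0.499, ll.size))  # secret-keyed permutation
    hash_vector = ll.ravel()[key]                                # keyed, compressed features
    print(hash_vector[:8])
```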
Hashing technology has the advantages of reducing data storage and improving the efficiency of the learning system, making it more and more widely used in image retrieval. Multi-view data describes image information more comprehensively than traditional single-view methods, but how to use hashing to combine multi-view data for image retrieval is still a challenge. In this paper, a multi-view fusion hashing method based on RKCCA (Random Kernel Canonical Correlation Analysis) is proposed. To describe image content more accurately, we construct multiple views by combining deep DenseNet (dense convolutional network) features with GIST features or BoW_SIFT (Bag-of-Words model + SIFT) features. The algorithm uses the RKCCA method to fuse the multi-view features into association features and applies them to image retrieval, and it generates binary hash codes with minimal distortion error by designing quantization regularization terms. A large number of experiments on benchmark datasets show that this method is superior to other multi-view hashing methods.
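The fuse-then-quantize pattern can be sketched with ordinary CCA from scikit-learn standing in for the paper's RKCCA. The feature dimensions, the median-threshold binarization, and the synthetic "views" below are assumptions made only for illustration.

```python
# Sketch of multi-view fusion followed by binarization, using plain CCA in place of RKCCA:
# project two feature views into a shared correlated space, concatenate, and threshold.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
deep_view = rng.standard_normal((200, 64))                            # stand-in deep features
gist_view = deep_view @ rng.standard_normal((64, 32)) \
            + 0.1 * rng.standard_normal((200, 32))                    # correlated second view

cca = CCA(n_components=16)
cca.fit(deep_view, gist_view)
u, v = cca.transform(deep_view, gist_view)                            # correlated projections
fused = np.hstack([u, v])
codes = (fused > np.median(fused, axis=0)).astype(np.uint8)           # 32-bit codes per image
print(codes.shape)
```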
In recent years, the development of deep learning has further improved hash retrieval technology. Most existing hashing methods use Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) to process image and text information, respectively. This subjects images or texts to local constraints, and inherent label matching cannot capture fine-grained information, often leading to suboptimal results. Driven by the development of the transformer model, we propose a framework called ViT2CMH, based mainly on the Vision Transformer, to handle deep cross-modal hashing tasks instead of CNNs or RNNs. Specifically, we use a BERT network to extract text features and the Vision Transformer as the image network of the model. Finally, the features are transformed into hash codes for efficient and fast retrieval. We conduct extensive experiments on Microsoft COCO (MS-COCO) and Flickr30K, comparing against baseline hashing methods and image-text matching methods, and show that our method achieves better performance.
Lung medical image retrieval based on content similarity plays an important role in computer-aided diagnosis of lung cancer. In recent years, binary hashing has become a hot topic in this field due to its compressed storage and fast query speed. Traditional hashing methods often rely on high-dimensional hand-crafted features, which might not be optimally compatible with lung nodule images. Moreover, different hashing bits contribute differently to image retrieval, so treating the bits equally harms retrieval accuracy. Hence, an image retrieval method for lung nodule images is proposed based on convolutional neural networks and hashing. First, a pre-trained and fine-tuned convolutional neural network is employed to learn multilevel semantic features of the lung nodules, and principal component analysis is utilized to remove redundant information while preserving informative semantic features. Second, the proposed method relies on nine sign labels of lung nodules for the training set, and the semantic features are combined to construct hashing functions. Finally, the returned lung nodule images can easily be ranked with a query-adaptive search method based on weighted Hamming distance. Extensive experiments and evaluations on the dataset demonstrate that the proposed method significantly improves the expressive ability of lung nodule images, which further validates its effectiveness.
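The query-adaptive ranking step is simple to illustrate: with a per-bit weight vector (learned in the paper, random here), mismatched bits on more informative positions cost more, and the database is sorted by the resulting weighted Hamming distance. The code sizes and weights below are placeholders.

```python
# Sketch of query-adaptive ranking by weighted Hamming distance: each hash bit has a
# weight, so mismatches on informative bits cost more. Weights here are made up.
import numpy as np

def weighted_hamming(query: np.ndarray, codes: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Distance of a binary query code to each row of a binary code matrix."""
    mismatches = codes != query            # broadcast over the database rows
    return mismatches @ weights

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    database = rng.integers(0, 2, size=(1000, 48), dtype=np.uint8)    # 48-bit codes
    query = rng.integers(0, 2, size=48, dtype=np.uint8)
    weights = rng.random(48)                                          # per-bit importance
    ranking = np.argsort(weighted_hamming(query, database, weights))  # best matches first
    print(ranking[:5])
```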
It is well known that robustness, fragility, and security are three important criteria of image hashing; however, how to build a system that strongly meets all three criteria is still a challenge. In this paper, a content-based image hashing scheme using wave atoms is proposed which satisfies the above criteria. Compared with traditional transforms such as the wavelet transform and the discrete cosine transform (DCT), the wave atom transform is adopted for its sparser expansion and better texture feature extraction, which yields better performance in both robustness and fragility. In addition, multi-frequency detection is presented to provide an application-defined trade-off. To ensure the security of the proposed approach and its resistance to a chosen-plaintext attack, a randomized pixel modulation based on the Rényi chaotic map is employed, combined with the nonlinear wave atom transform. The experimental results reveal that the proposed scheme is robust against content-preserving manipulations and has good discriminative capability against malicious tampering.
The homomorphic hash algorithm (HHA) is introduced to help verify, on the fly, wireless sensor network (WSN) over-the-air programming (OAP) data based on rateless codes. The receiver calculates the hash value of a group of data with the homomorphic hash function and then compares it with the received message digest. Because the feedback channel is deliberately removed during the distribution process, rateless codes are often vulnerable to security issues such as packet contamination or attack. The proposed method prevents contamination of or attacks on rateless codes and reduces the potential risk of decoding failure. Compared with SHA1 and MD5, HHA has a much shorter message digest and can therefore deliver more data. The simulation results show that, to transmit and verify the same amount of OAP data, the HHA method sends 17.9% to 23.1% fewer packets than MD5 and SHA1 under different packet loss rates.
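The core homomorphic property can be shown with a toy example: with H(x) defined as a product of fixed generators raised to the block's symbols modulo a prime, the hash of a coded packet (a modular sum of source blocks) equals the product of the source hashes, so the receiver can verify encoded packets without seeing the originals. The tiny parameters below are chosen only so the arithmetic is visible; they are nowhere near a secure choice and are not the paper's HHA parameters.

```python
# Toy homomorphic hash: H(x) = prod(g_i ** x_i) mod p over a block of symbols in Z_q,
# so the hash of a coded packet (a + b mod q) equals H(a) * H(b) mod p.
p = 1019                      # hash group modulus (q divides p - 1)
q = 509                       # symbol field modulus
g = [4, 9, 25, 49]            # per-position generators of the order-q subgroup (squares mod p)

def hh(block):
    """Homomorphic hash of a block of symbols in Z_q."""
    h = 1
    for gi, xi in zip(g, block):
        h = (h * pow(gi, xi, p)) % p
    return h

a = [3, 141, 59, 26]
b = [53, 58, 97, 93]
combo = [(x + y) % q for x, y in zip(a, b)]            # a coded packet: a + b
assert hh(combo) == (hh(a) * hh(b)) % p                # verify without re-hashing the sources
print("coded packet verified")
```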
In recent years, the nearest neighbor search (NNS) problem has been widely used in various interesting applications. Locality-sensitive hashing (LSH), a popular algorithm for approximate nearest neighbor search, has proved to be an efficient method for solving the NNS problem in high-dimensional and large-scale databases. Based on the scheme of p-stable LSH, this paper introduces an improved algorithm called randomness-based locality-sensitive hashing (RLSH). Our proposed algorithm modifies the query strategy so that it randomly selects a single hash table onto which to project the query point, instead of mapping the query point into all hash tables during the nearest neighbor query, and it reconstructs the candidate points for finding the nearest neighbors. This strategy ensures that RLSH spends less time searching for the nearest neighbors than the p-stable LSH algorithm while keeping a high recall. Moreover, this strategy is shown to promote the diversity of the candidate points even with fewer hash tables. Experiments are executed on a synthetic dataset and an open dataset. The results show that our method requires less time and less space than p-stable LSH while achieving the same recall.
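A minimal sketch of the underlying p-stable LSH family (h(v) = floor((a·v + b)/w) with Gaussian a) follows, with the RLSH-flavored twist of probing a single randomly chosen table at query time. The bucket layout, parameter values, and the omitted candidate-reconstruction step are simplifications, not the paper's implementation.

```python
# Sketch of a p-stable (Gaussian, p = 2) LSH family; at query time only one randomly
# chosen table is probed, in the spirit of RLSH's modified query strategy.
import numpy as np

class PStableLSHTable:
    def __init__(self, dim: int, n_funcs: int = 4, w: float = 4.0, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((n_funcs, dim))   # 2-stable (Gaussian) projections
        self.b = rng.uniform(0.0, w, size=n_funcs)
        self.w = w
        self.buckets = {}

    def key(self, v: np.ndarray) -> tuple:
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

    def insert(self, idx: int, v: np.ndarray) -> None:
        self.buckets.setdefault(self.key(v), []).append(idx)

    def query(self, v: np.ndarray) -> list:
        return self.buckets.get(self.key(v), [])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = rng.standard_normal((500, 16))
    tables = [PStableLSHTable(16, seed=s) for s in range(6)]
    for t in tables:
        for i, v in enumerate(data):
            t.insert(i, v)
    q = data[7] + 0.01 * rng.standard_normal(16)       # a near-duplicate query
    table = tables[rng.integers(len(tables))]          # RLSH-style: probe one random table
    print(table.query(q))                              # candidate indices from that table
```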
In the past decade there has been an increasing need for designs that address the time and cost efficiency issues of various computer network applications, such as general IP address lookup and specific network intrusion detection. Hashing techniques have been widely adopted for this purpose, among which XOR-operation-based hashing is one of the most popular due to its relatively small hash process delay. In most commonly used XOR-hashing algorithms, each of the hash key bits is usually explicitly XORed at most once in the hash process, which may limit the amount of potential randomness that can be introduced by the hashing process. In [1] a series of bit-duplication techniques are proposed that systematically duplicate one row of key bits. This paper further looks into various ways of duplicating and reusing key bits to maximize the randomness in the hashing process and thereby further enhance overall performance. Our simulation results show that, even with a slight increase in hardware requirements, a very significant reduction in the amount of hash collision can be obtained by the proposed technique.
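A toy version of XOR-folding with key-bit reuse is sketched below: the 32-bit key is folded into 8-bit chunks that are XORed together, and one chunk is duplicated, rotated, and XORed in again to inject extra mixing. The specific duplication rule is an illustrative assumption, not the scheme evaluated in the paper or in [1].

```python
# Toy XOR-folding hash with key-bit reuse: fold the key into index-width chunks,
# XOR them together, then XOR in one duplicated (rotated) chunk a second time.
def xor_fold_hash(key: int, key_bits: int = 32, index_bits: int = 8,
                  duplicate: bool = True) -> int:
    mask = (1 << index_bits) - 1
    h = 0
    first_chunk = key & mask
    for shift in range(0, key_bits, index_bits):
        h ^= (key >> shift) & mask
    if duplicate:
        # reuse the first chunk once more, rotated left by one bit
        rotated = ((first_chunk << 1) | (first_chunk >> (index_bits - 1))) & mask
        h ^= rotated
    return h

if __name__ == "__main__":
    for ip in (0x0A000001, 0x0A000002, 0xC0A80101):    # sample 32-bit addresses
        print(hex(ip), "->", xor_fold_hash(ip))
```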
Image retrieval has become more and more important because of the explosive growth of images on the Internet. Traditional image retrieval methods offer limited retrieval performance due to the poor expressive ability and high dimensionality of visual features. Hashing is a widely used method for Approximate Nearest Neighbor (ANN) search due to its speed and efficiency, while Convolutional Neural Networks (CNNs) have strong discriminative characteristics for image classification. In this paper, we propose a CNN architecture based on an improved deep supervised hashing (IDSH) method, by which binary compact codes can be generated directly. The main contributions of this paper are as follows: first, we add a Batch Normalization (BN) layer before each activation layer to prevent the gradient from vanishing and to improve training speed; second, we use a Divide-and-Encode module to map image features to approximate hash codes; finally, we adopt a center loss to optimize training. Extensive experimental results on four large-scale datasets (MNIST, CIFAR-10, NUS-WIDE and SVHN) demonstrate the effectiveness of the proposed method compared with other state-of-the-art hashing methods.
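Two of the listed contributions translate naturally into a small PyTorch sketch: a backbone block with Batch Normalization placed before the activation, and a Divide-and-Encode head that maps disjoint slices of the feature vector to individual approximate hash bits. Layer widths, the 48-bit code length, and the sigmoid-plus-threshold binarization are assumptions for illustration, not the authors' IDSH network.

```python
# Minimal PyTorch sketch: BN before the activation, plus a Divide-and-Encode head
# that maps feature slices to approximate hash bits. Sizes are arbitrary.
import torch
import torch.nn as nn

class DivideAndEncode(nn.Module):
    def __init__(self, feat_dim: int = 384, n_bits: int = 48):
        super().__init__()
        assert feat_dim % n_bits == 0
        self.slice_dim = feat_dim // n_bits
        # one tiny projection per hash bit, each fed by its own slice of the feature
        self.projections = nn.ModuleList([nn.Linear(self.slice_dim, 1) for _ in range(n_bits)])

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        slices = feats.split(self.slice_dim, dim=1)
        bits = [torch.sigmoid(p(s)) for p, s in zip(self.projections, slices)]
        return torch.cat(bits, dim=1)          # values in (0, 1); threshold at 0.5 for retrieval

class SmallHashNet(nn.Module):
    def __init__(self, in_dim: int = 1024, n_bits: int = 48):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 384),
            nn.BatchNorm1d(384),               # BN placed before the activation
            nn.ReLU(),
        )
        self.head = DivideAndEncode(384, n_bits)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

if __name__ == "__main__":
    net = SmallHashNet()
    codes = (net(torch.randn(4, 1024)) > 0.5).int()    # approximate binary hash codes
    print(codes.shape)
```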