Background A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user's query image. CBIR is widely used in evidence-based diagnosis, teaching, and research. Although retrieval accuracy has largely improved, there has been limited development toward visualizing the important image features that indicate the similarity of retrieved images. Despite the prevalence of 3D volumetric data in medical imaging such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views for the visualization of retrieved images. Such 2D visualization requires users to browse through the image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on users' experience. Methods In this study, we proposed an importance-aware 3D volume visualization method. The rendering parameters were automatically optimized to maximize the visibility of important structures that were detected and prioritized in the retrieval process. We then integrated the proposed visualization into a CBIR system, thereby complementing the 2D cross-sectional views for relevance feedback and further analyses. Results Our preliminary results demonstrate that 3D visualization can provide additional information using multimodal positron emission tomography and computed tomography (PET-CT) images of a non-small cell lung cancer dataset.
The task of indoor visual localization, which uses camera visual information to compute the user's pose, is a core component of Augmented Reality (AR) and Simultaneous Localization and Mapping (SLAM). Existing indoor localization technologies generally rely on scene-specific 3D representations or are trained on specific datasets, making it challenging to balance accuracy and cost when they are applied to new scenes. Addressing this issue, this paper proposes a universal indoor visual localization method based on efficient image retrieval. First, a Multi-Layer Perceptron (MLP) is employed to aggregate features from intermediate layers of a convolutional neural network, obtaining a global representation of the image; this ensures accurate and rapid retrieval of reference images. Next, a new mechanism based on Random Sample Consensus (RANSAC) is designed to resolve the relative pose ambiguity caused by decomposing the essential matrix estimated with the five-point method. Finally, the absolute pose of the query image is computed, thereby achieving indoor user pose estimation. The proposed method is characterized by its simplicity, flexibility, and excellent cross-scene generalization. Experimental results demonstrate a positioning error of 0.09 m and 2.14° on the 7Scenes dataset, and 0.15 m and 6.37° on the 12Scenes dataset, illustrating the strong performance of the proposed indoor localization method.
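RANSAC itself is a generic hypothesize-and-verify loop: repeatedly fit a model from a minimal sample and keep the hypothesis with the largest consensus set. The abstract does not spell out its pose-disambiguation mechanism, so the sketch below only illustrates the consensus idea on a toy slope-fitting problem; the function name, tolerance, and data are illustrative assumptions, not the paper's method.

```python
import random

def ransac_slope(points, iters=200, tol=0.1, seed=0):
    """Generic RANSAC loop: repeatedly fit a minimal model and keep
    the hypothesis with the largest consensus (inlier) set."""
    rng = random.Random(seed)
    best_k, best_inliers = None, -1
    for _ in range(iters):
        x, y = rng.choice(points)          # minimal sample: one point fixes y = k*x
        if x == 0:
            continue
        k = y / x                          # hypothesis from the minimal sample
        inliers = sum(1 for (px, py) in points if abs(py - k * px) <= tol)
        if inliers > best_inliers:
            best_k, best_inliers = k, inliers
    return best_k, best_inliers

# 8 inliers on y = 2x plus 2 gross outliers
pts = [(i, 2 * i) for i in range(1, 9)] + [(3.0, -5.0), (5.0, 40.0)]
k, n = ransac_slope(pts)
```

The outliers never gather more than one supporter, so the inlier hypothesis (slope 2) wins the vote; the same consensus principle underlies pose verification.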
Fine-grained image classification is a challenging research topic because of the high degree of similarity among categories and the high degree of dissimilarity within a specific category caused by different poses and scales. Cultural heritage images are fine-grained images because the images are highly similar to one another in most cases; using classification techniques to distinguish cultural heritage architecture may therefore be difficult. This study proposes a cultural heritage content retrieval method using adaptive deep learning for fine-grained image retrieval. The key contribution of this research is a retrieval model that can handle incremental streams of new categories while maintaining its past performance on old categories, without losing the old categorization of a cultural heritage image. The goal of the proposed method is to perform a retrieval task for classes. Incremental learning for new classes is conducted to reduce the re-training process; in this step, the original classes are not required for re-training, which we call an adaptive deep learning technique. Cultural heritage, in the case of Thai archaeological site architecture, is retrieved through machine learning and image processing. We analyze the experimental results of incremental learning for fine-grained images using images of Thai archaeological site architecture from world heritage provinces in Thailand, which share a similar architecture. Using a fine-grained image retrieval technique for this group of cultural heritage images in a database can solve the problem of a high degree of similarity among categories and a high degree of dissimilarity within a specific category. The proposed method for retrieving the correct image from a database delivers an average accuracy of 85 percent. Adaptive deep learning for fine-grained image retrieval was used to retrieve cultural heritage content, and it outperformed state-of-the-art methods in fine-grained image retrieval.
Deep convolutional neural networks (DCNNs) are widely used in content-based image retrieval (CBIR) because of their advantages in image feature extraction. However, training deep neural networks requires a large amount of labeled data, which limits their application. Self-supervised learning is a more general approach for unlabeled scenarios. A method of fine-tuning feature extraction networks based on masked learning is proposed: masked autoencoders (MAE) are used to fine-tune the vision transformer (ViT) model, and the scheme for extracting image descriptors is also discussed. The encoder of the MAE uses the ViT to extract global features and performs self-supervised fine-tuning by reconstructing the pixels of masked areas. The method works well on category-level image retrieval datasets, with marked improvements on instance-level datasets. For the instance-level datasets Oxford5k and Paris6k, the retrieval accuracy of the base model is improved by 7% and 17%, respectively, compared to that of the original model.
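The MAE objective described (reconstruct only the hidden patches from the visible ones, and score the reconstruction only at the masked positions) can be illustrated with a deliberately tiny, framework-free sketch. Here "patches" are scalars and the model is a stand-in callable; none of this is the paper's actual ViT pipeline, only the shape of the masked-reconstruction loss.

```python
import random

def masked_reconstruction_loss(patches, reconstruct, mask_ratio=0.75, seed=0):
    """MAE-style objective sketch: hide a random subset of patches and
    score the reconstruction only on the hidden (masked) positions."""
    rng = random.Random(seed)
    n = len(patches)
    masked_idx = set(rng.sample(range(n), int(n * mask_ratio)))
    visible = [p for i, p in enumerate(patches) if i not in masked_idx]
    pred = reconstruct(visible, n)               # model sees only visible patches
    return sum((pred[i] - patches[i]) ** 2       # MSE over masked patches only
               for i in masked_idx) / len(masked_idx)

# A trivial "model" that predicts the mean of the visible patches.
patches = [1.0, 2.0, 3.0, 4.0]
mean_model = lambda vis, m: [sum(vis) / len(vis)] * m
loss = masked_reconstruction_loss(patches, mean_model, mask_ratio=0.5)
```

A model that reconstructed the masked patches exactly would drive this loss to zero, which is what the fine-tuning step optimizes toward.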
The demand for image retrieval with text manipulation exists in many fields, such as e-commerce and Internet search. Most researchers use deep metric learning methods to calculate the similarity between the query and the candidate image by fusing the global feature of the query image with the text feature. However, the text usually corresponds to a local feature of the query image rather than the global feature. Therefore, in this paper, we propose a framework for image retrieval with text manipulation by local feature modification (LFM-IR), which can focus on the related image regions and attributes and perform the modification. A spatial attention module and a channel attention module are designed to realize the semantic mapping between image and text. We achieve excellent performance on three benchmark datasets, namely Color-Shape-Size (CSS), Massachusetts Institute of Technology (MIT) States and Fashion200K (+8.3%, +0.7% and +4.6% in R@1).
In recent years, image retrieval has become a tedious process as image databases have grown very large. The introduction of Machine Learning (ML) and Deep Learning (DL) has made this process easier. In these approaches, pair-wise label similarity is used to find matching images in the database, but this method suffers from limited hash-code discrimination and weak handling of misclassified images. To overcome this problem, a novel triplet-based label that incorporates a context-spatial similarity measure is proposed. A Point Attention Based Triplet Network (PABTN) is introduced to learn hash codes with maximum discriminative ability. To improve the ranking performance, correlated resolutions for classification, triplet labels based on the findings, a spatial-attention mechanism with Region Of Interest (ROI), and a new triplet cross-entropy loss containing small trial information loss are used. The experimental results show that the proposed technique exhibits better results in terms of mean Reciprocal Rank (mRR) and mean Average Precision (mAP) on the CIFAR-10 and NUS-WIDE datasets.
Fine-grained image search is one of the most challenging tasks in computer vision; it aims to retrieve similar images at the fine-grained level for a given query image. The key objective is to learn discriminative fine-grained features by training deep models such that similar images are clustered and dissimilar images are separated in the low-dimensional embedding space. Previous works primarily focused on defining local structure loss functions such as triplet loss and pairwise loss. However, training with these approaches takes a long time and yields poor accuracy, and the representations learned through them tend to tighten up in the embedding space and lose generalizability to unseen classes. This paper proposes a noise-assisted representation learning method for fine-grained image retrieval to mitigate these issues. In the proposed work, class manifold learning is performed in which positive pairs are created with a noise insertion operation instead of tightening class clusters, and other instances within the same cluster are treated as negatives. A loss function is then defined to penalize cases where the distance between instances of the same class becomes too small relative to the noise pair of that class in the embedding space. The proposed approach is validated on the CARS-196 and CUB-200 datasets and achieves better retrieval results (85.38% recall@1 for CARS-196 and 70.13% recall@1 for CUB-200) compared to other existing methods.
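One plausible reading of the penalty described above — a noisy copy of the anchor serves as the positive pair, and other same-class instances are penalized only when they come closer to the anchor than that noise pair — can be sketched in a few lines. The function, noise scale, and vectors below are illustrative assumptions, not the paper's exact loss.

```python
import math
import random

def noise_pair_loss(anchor, same_class, noise_scale=0.1, seed=0):
    """Sketch of noise-assisted manifold learning: the positive is a
    noisy copy of the anchor, and other same-class instances are pushed
    to stay no closer to the anchor than that noise pair."""
    rng = random.Random(seed)
    noisy = [a + rng.gauss(0.0, noise_scale) for a in anchor]  # positive pair
    d_noise = math.dist(anchor, noisy)
    loss = 0.0
    for inst in same_class:                       # in-class instances as "negatives"
        d = math.dist(anchor, inst)
        loss += max(0.0, d_noise - d)             # penalise only if d < d_noise
    return loss / len(same_class)

anchor = [0.0, 0.0]
others = [[0.01, 0.0], [1.0, 1.0]]                # one very close, one far enough
loss = noise_pair_loss(anchor, others)
```

Instances already farther away than the noise pair contribute zero, which is how the cluster avoids collapsing while still keeping a noise-sized breathing room around each point.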
In existing remote sensing image retrieval (RSIR) datasets, the number of images varies dramatically among classes, which leads to a severe class imbalance problem. Some studies propose to train the model with a ranking-based metric (e.g., average precision [AP]), because AP is robust to class imbalance. However, current AP-based methods overlook an important issue: they only optimise the samples ranked before each positive sample, which is limited by the definition of AP and is prone to local optima. To achieve global optimisation of AP, a novel method, namely Optimising Samples after positive ones & AP loss (OSAP-Loss), is proposed in this study. Specifically, a novel superior ranking function is designed to make the AP loss differentiable while providing a tighter upper bound. Then, a novel loss called Optimising Samples after Positive ones (OSP) loss is proposed to involve all positive and negative samples ranked after each positive one and to provide a more flexible optimisation strategy for each sample. Finally, a graphics processing unit memory-free mechanism is developed to thoroughly address the non-decomposability of AP optimisation. Extensive experimental results on RSIR as well as conventional image retrieval datasets show the superiority and competitive performance of OSAP-Loss compared to the state-of-the-art.
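The AP that this loss relaxes has a standard definition: the mean of precision@k taken at each rank where a relevant item occurs. A minimal reference computation from a binary ranked list:

```python
def average_precision(ranked_relevance):
    """Average precision from a ranked list of 0/1 relevance labels:
    mean of precision@k at each rank k where a relevant item occurs."""
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)          # precision at this positive's rank
    return sum(precisions) / len(precisions) if precisions else 0.0

# Positives at ranks 1, 3 and 5: AP = (1/1 + 2/3 + 3/5) / 3
ap = average_precision([1, 0, 1, 0, 1])
```

Note how only the samples ranked at or before each positive enter the corresponding precision term — exactly the property the abstract identifies as the source of local optima.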
Given one specific image, it would be quite useful to retrieve all pictures that fall into a similar category. However, traditional methods tend to achieve high-quality retrieval by utilizing adequate numbers of learning instances, ignoring the extraction of the image's essential information, which makes it difficult to retrieve similar-category images from just one reference image. To solve this problem, we propose a refined sparse-representation-based similar-category image retrieval model. On the one hand, saliency detection and multi-level decomposition help take salient and spatial information into fuller consideration. On the other hand, the cross mutual sparse coding model aims to extract the image's essential features to the maximum extent possible. Finally, we set up a database containing a large number of multi-source images. Adequate groups of comparative experiments show that our method retrieves similar-category images effectively. Moreover, adequate groups of ablation experiments show that nearly all procedures play their respective roles.
Content-based medical image retrieval (CBMIR) is a technique for retrieving medical images based on automatically derived image features. There are many applications of CBMIR, such as teaching, research, diagnosis and electronic patient records. Several methods have been applied to enhance the retrieval performance of CBMIR systems; developing new and effective similarity measures and feature fusion methods are two of the most powerful strategies for improving these systems. This study proposes the relative difference-based similarity measure (RDBSM) for CBMIR. The new measure was first used in the similarity calculation stage of CBMIR with an unweighted fusion method of traditional color and texture features. Furthermore, the study also proposes a weighted fusion method for medical image features extracted using pre-trained convolutional neural network (CNN) models. Our proposed RDBSM outperformed the standard well-known similarity and distance measures on two popular medical image datasets, Kvasir and PH2, in terms of recall and precision retrieval measures. The effectiveness and quality of our proposed similarity measure are also proved using a significance test and statistical confidence bounds.
The utilization of digital picture search and retrieval has grown substantially in numerous fields for different purposes during the last decade, owing to continuing advances in image processing and computer vision approaches. In multiple real-life applications, for example social media, content-based face picture retrieval is a well-invested technique for large-scale databases, where there is a significant need for reliable retrieval capabilities enabling quick search over a vast number of pictures. Humans widely employ faces for recognizing and identifying people; thus, face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires far more additional storage space than the raw face images themselves occupy, in order to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics, as (Hard BioHash) and (Soft BioHash) respectively, to be used as a search input for retrieving the top-k matching faces. Moreover, we propose the multi-biometric score-level fusion of both face hard and soft BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. Experimental outcomes on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes) demonstrate that the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved the retrieval performance compared to using Hard BioHash or Soft BioHash in isolation: the suggested method provides an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform those of the Hard BioHash method (by 50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method (by 78% on the 1000 samples and 63% on the 5743 samples).
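The retrieval step described — hash a query with LSH and rank gallery faces by code distance — can be sketched with random-hyperplane hashing, a common LSH family for cosine similarity. The vectors, names, and bit length below are toy assumptions, not the paper's BioHash construction.

```python
import random

def lsh_hash(vec, planes):
    """Sign of the dot product with each random hyperplane gives one bit."""
    return tuple(int(sum(v * w for v, w in zip(vec, p)) >= 0) for p in planes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(0)
planes = [[rng.gauss(0, 1) for _ in range(3)] for _ in range(8)]  # 8-bit codes

gallery = {"face_a": [1.0, 0.1, 0.0],
           "face_b": [0.9, 0.2, 0.1],   # near-duplicate of face_a
           "face_c": [-1.0, 0.5, 2.0]}
codes = {name: lsh_hash(v, planes) for name, v in gallery.items()}

query = lsh_hash([1.0, 0.15, 0.05], planes)
top = sorted(codes, key=lambda name: hamming(query, codes[name]))  # rank by Hamming
```

Because nearby vectors fall on the same side of most hyperplanes, similar faces tend to share most hash bits, so ranking by Hamming distance over short codes approximates ranking by the original feature distance at a fraction of the storage cost.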
Existing ciphertext-domain image retrieval systems find it challenging to balance security, retrieval efficiency, and retrieval accuracy. To address this problem, this research proposes a secure image retrieval technique based on searchable encryption and deep hashing that extracts more expressive image features and constructs a secure searchable encryption scheme. First, a deep learning framework based on a residual network and a transfer learning model is designed to extract more representative deep image features. Second, central similarity is used to quantify the features and construct a deep hash sequence, and Paillier homomorphic encryption encrypts the deep hash sequence to build a high-security, low-complexity searchable index. Finally, based on the additive homomorphic property of Paillier encryption, a similarity measurement method suitable for computation in the encrypted domain is designed, ensuring the security of the retrieval system. The experimental results, obtained on the Web Image Database from the National University of Singapore (NUS-WIDE), Microsoft Common Objects in Context (MS COCO), and ImageNet datasets, demonstrate the system's robust security and precise retrieval; the proposed scheme can achieve efficient image retrieval without revealing user privacy. The retrieval accuracy is improved by at least 37% compared to traditional hashing schemes, while the retrieval time is reduced by at least 9.7% compared to the latest deep hashing schemes.
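The additive homomorphism that the scheme relies on — multiplying Paillier ciphertexts adds the underlying plaintexts — can be demonstrated with a toy keypair. The tiny primes below are for illustration only and offer no security whatsoever; a real deployment uses moduli of thousands of bits.

```python
import math
import random

# Toy Paillier keypair with tiny primes (illustration only, not secure).
p, q = 5, 7
n = p * q                      # public modulus
n2 = n * n
lam = math.lcm(p - 1, q - 1)   # private λ = lcm(p-1, q-1)
g = n + 1                      # standard public choice g = n + 1
mu = pow(lam, -1, n)           # private μ = λ⁻¹ mod n (valid for g = n + 1)

def encrypt(m, rng=random.Random(0)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:                   # r must be coprime to n
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return ((x - 1) // n * mu) % n               # L(x) = (x-1)/n, then scale by μ

# Additive homomorphism: multiplying ciphertexts adds plaintexts (mod n).
c = (encrypt(5) * encrypt(7)) % n2
```

This is what lets a server accumulate distances over encrypted hash components without ever seeing the plaintext values.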
A novel image retrieval approach based on color features and anisotropic directional information is proposed for content-based image retrieval (CBIR) systems. The color feature is described by the color histogram (CH), which is translation and rotation invariant. However, the CH does not contain spatial information, which is very important for image retrieval. To overcome this shortcoming, the subband energy of the lifting directionlet transform (L-DT) is proposed to describe the directional information; L-DT is characterized by multi-directional and anisotropic basis functions compared with the wavelet transform. A global similarity measure is designed to fuse the color feature and the anisotropic directionality in the retrieval process. Retrieval experiments using a set of COREL images demonstrate that higher query precision and better visual effect can be achieved.
In this paper, we present a novel and efficient scheme for extracting, indexing and retrieving color images. Our motivation was to reduce the space overhead of partition-based approaches by taking advantage of the fact that only a relatively low number of distinct values of a particular visual feature is present in most images. To extract color features and build indices into our image database, we take into consideration factors such as human color perception and perceptual range, and the image is partitioned into a set of regions using a simple classifying scheme. The compact color feature vector and the spatial color histogram, which are extracted from the segmented image region, are used to represent the color and spatial information in the image. We have also developed region-based distance measures to compare the similarity of two images. Extensive tests on a large image collection were conducted to demonstrate the effectiveness of the proposed approach.
Digital image collections have rapidly increased along with the development of computer networks. Image retrieval systems were developed purposely to provide an efficient tool for obtaining, from a collection of images in a database, the set of images that matches the user's requirements under similarity evaluations such as image content similarity, edge similarity, and color similarity. Retrieving images based on content, namely color, texture, and shape, is called content-based image retrieval (CBIR). The content is effectively the features of an image; these features are extracted and used as the basis for a similarity check between images, and algorithms are used to calculate the similarity between the extracted features. There are two kinds of content-based image retrieval: general image retrieval and application-specific image retrieval. For general image retrieval, the goal of the query is to obtain images with the same object as the query; such CBIR imitates web search engines for images rather than for text. For application-specific retrieval, the purpose is to match a query image to a collection of images of a specific type, such as fingerprint images and x-rays. In this paper, the general architecture, various functional components, and techniques of CBIR systems are discussed. The CBIR techniques discussed in this paper are categorized as CBIR using color, CBIR using texture, and CBIR using shape features. This paper also describes a comparative study of color features, texture features, shape features, and combined features (hybrid techniques) in terms of several parameters: precision, recall and response time.
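The comparison parameters mentioned, precision and recall, have standard set-based definitions that are worth stating concretely:

```python
def precision_recall(retrieved, relevant):
    """Retrieval metrics: precision = fraction of retrieved items that are
    relevant; recall = fraction of relevant items that were retrieved."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# 2 of the 4 retrieved images are relevant; 2 of the 3 relevant were found.
p, r = precision_recall(retrieved=["a", "b", "c", "d"], relevant=["a", "b", "e"])
```

Response time, the third parameter, is simply measured wall-clock latency per query and has no comparable closed form.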
This paper introduces the principles of using color histograms to match images in CBIR, and a prototype CBIR system is designed with a color matching function. A new method using a 2-dimensional color histogram based on hue and saturation to extract and represent the color information of an image is presented. We also improve the Euclidean-distance algorithm by adding a Center of Color to it. The experiments show that the modifications made to the Euclidean distance significantly elevate the quality and efficiency of retrieval.
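A 2D hue-saturation histogram of the kind described can be sketched with the standard library's colorsys conversion: each pixel votes into one (hue, saturation) cell and the value/brightness channel is discarded. The bin counts here are arbitrary illustrative choices, not the paper's.

```python
import colorsys

def hs_histogram(pixels, h_bins=8, s_bins=4):
    """2D hue-saturation histogram: each RGB pixel (0-255 channels) votes
    into one (hue, saturation) cell; value/brightness is discarded."""
    hist = [[0] * s_bins for _ in range(h_bins)]
    for r, g, b in pixels:
        h, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hi = min(int(h * h_bins), h_bins - 1)    # clamp h == 1.0 into last bin
        si = min(int(s * s_bins), s_bins - 1)
        hist[hi][si] += 1
    total = len(pixels)
    return [[c / total for c in row] for row in hist]   # normalise to sum 1

# Pure red and pure green land in different hue bins, same saturation bin.
hist = hs_histogram([(255, 0, 0), (0, 255, 0), (0, 255, 0)])
```

Dropping the value channel is what makes such a histogram relatively robust to illumination changes, which is the usual motivation for working in hue-saturation space.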
The technique of image retrieval is widely used in scientific experiments, military affairs, public security, advertisement, family entertainment, libraries and so on. Existing algorithms are mostly based on the characteristics of color, texture, shape and spatial relationships. This paper introduces an image retrieval algorithm based on matching a weighted EMD (Earth Mover's Distance) distance and a texture distance. The EMD distance is the distance between the histograms of two images in HSV (Hue, Saturation, Value) color space, and the texture distance is the L1 distance between the texture spectra of two images. The experimental results show that the retrieval rate can be increased markedly by using the proposed algorithm.
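For 1D histograms with unit ground distance, EMD reduces to the L1 distance between the cumulative distributions, which makes the weighted color-plus-texture distance easy to sketch. The 1D simplification and the weight value are assumptions for illustration; the paper computes EMD over full HSV histograms.

```python
from itertools import accumulate

def emd_1d(h1, h2):
    """EMD between two normalised 1D histograms: with unit ground distance
    it reduces to the L1 distance between the two CDFs."""
    return sum(abs(a - b) for a, b in zip(accumulate(h1), accumulate(h2)))

def l1(t1, t2):
    """L1 distance between two texture spectra."""
    return sum(abs(a - b) for a, b in zip(t1, t2))

def combined_distance(color1, color2, tex1, tex2, w=0.7):
    """Weighted combination of color EMD and texture L1 distance."""
    return w * emd_1d(color1, color2) + (1 - w) * l1(tex1, tex2)

# Shifting all mass one bin to the right costs exactly one bin of "work".
d = emd_1d([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])
```

Unlike bin-wise distances, EMD charges less for mass that moves to a nearby bin than to a distant one, which is why it tolerates small hue shifts between otherwise similar images.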
Content-based image retrieval (CBIR) techniques have been widely deployed in many applications for seeking the abundant information existing in images. Due to the large storage and computational requirements of CBIR, outsourcing image search work to a cloud provider becomes a very attractive option for many owners with small devices. However, owing to the private content contained in images, directly outsourcing retrieval work to the cloud provider apparently brings about privacy problems, so the images should be protected carefully before outsourcing. This paper presents a secure retrieval scheme for encrypted images in the YUV color space. In this scheme, the discrete cosine transform (DCT) is performed on the Y component; the resulting DC coefficients are encrypted with stream cipher technology, and the resulting AC coefficients, as well as the other two color components, are encrypted with value permutation and position scrambling. The image owner then transmits the encrypted images to the cloud server. When receiving a query trapdoor from a query user, the server extracts an AC-coefficient histogram from the encrypted Y component and extracts two color histograms from the other two color components. The similarity between the query trapdoor and a database image is measured by calculating the Manhattan distance of their respective histograms. Finally, the encrypted images closest to the query image are returned to the query user.
A novel content-based image retrieval (CBIR) algorithm using relevance feedback is presented. The proposed framework has three major contributions: a novel feature descriptor called the color spectral histogram (CSH) to measure the similarity between images; a two-dimensional matrix-based indexing approach proposed for short-term learning (STL); and long-term learning (LTL). In general, image similarities are measured from feature representations that include color quantization, texture, color, shape and edges; however, CSH can describe the image feature with the histogram alone. Typically the image retrieval process starts by finding the similarity between the query image and the images in the database; the major computation involved is that selecting the top-ranking images requires a sorting algorithm with at least a lower bound of O(n log n). A 2D matrix-based indexing of images can enormously reduce the search time in STL. The same structure is used for LTL with the aim of reducing the amount of log information to be maintained. The performance of the proposed framework is analyzed and compared with existing approaches; the quantified results indicate that the proposed feature descriptor is more effective than existing feature descriptors originally developed for CBIR. In terms of STL, the proposed 2D matrix-based indexing minimizes the computational effort of retrieving similar images, and for LTL, the proposed algorithm maintains less log information than existing approaches.
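The O(n log n) bound mentioned applies to a full sort of all similarity scores; when only the top-k results are needed, a bounded-heap selection achieves O(n log k), which is one way to see why avoiding a global sort pays off. The data below are toy values, not the paper's index structure.

```python
import heapq
import random

rng = random.Random(0)
distances = {f"img_{i}": rng.random() for i in range(10_000)}  # toy score table

# Full sort: O(n log n), the naive top-ranking selection noted above.
top5_sorted = sorted(distances, key=distances.get)[:5]

# Partial selection: O(n log k) with a size-bounded heap.
top5_heap = heapq.nsmallest(5, distances, key=distances.get)
```

Both return the same five nearest images in the same order; only the amount of work done on the 9,995 non-answers differs.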
Funding: This research was funded by King Mongkut's University of Technology North Bangkok (Contract no. KMUTNB-62-KNOW-026).
Abstract: Fine-grained image classification is a challenging research topic because of the high degree of similarity among categories and the high degree of dissimilarity within a specific category caused by different poses and scales. A cultural heritage image is one type of fine-grained image because, in most cases, the images share the same similarity. Using classification techniques, distinguishing cultural heritage architecture may be difficult. This study proposes a cultural heritage content retrieval method using adaptive deep learning for fine-grained image retrieval. The key contribution of this research is a retrieval model that can handle incremental streams of new categories while maintaining its past performance on old categories and not losing the old categorization of a cultural heritage image. The goal of the proposed method is to perform a retrieval task for classes. Incremental learning for new classes was conducted to reduce the re-training process; in this step, the original classes are not required for re-training, which we call an adaptive deep learning technique. Cultural heritage, in the case of Thai archaeological site architecture, was retrieved through machine learning and image processing. We analyze the experimental results of incremental learning for fine-grained images using images of Thai archaeological site architecture from world heritage provinces in Thailand, which have similar architecture. Using a fine-grained image retrieval technique for this group of cultural heritage images in a database can solve the problem of a high degree of similarity among categories and a high degree of dissimilarity for a specific category. The proposed method retrieves the correct image from the database with an average accuracy of 85 percent. Adaptive deep learning for fine-grained image retrieval was used to retrieve cultural heritage content, and it outperformed state-of-the-art methods in fine-grained image retrieval.
Funding: Supported by the Project of Introducing Urgently Needed Talents in Key Supporting Regions of Shandong Province, China (No. SDJQP20221805).
Abstract: Deep convolutional neural networks (DCNNs) are widely used in content-based image retrieval (CBIR) because of their advantages in image feature extraction. However, training deep neural networks requires a large amount of labeled data, which limits their application. Self-supervised learning is a more general approach in unlabeled scenarios. A method of fine-tuning feature extraction networks based on masked learning is proposed: a masked autoencoder (MAE) is used to fine-tune a vision transformer (ViT) model. In addition, the scheme for extracting image descriptors is discussed. The encoder of the MAE uses the ViT to extract global features and performs self-supervised fine-tuning by reconstructing the pixels of masked areas. The method works well on category-level image retrieval datasets, with marked improvements on instance-level datasets. For the instance-level datasets Oxford5k and Paris6k, the retrieval accuracy of the base model is improved by 7% and 17%, respectively, compared to that of the original model.
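The masking step the MAE fine-tuning relies on can be sketched as follows (the 75% masking ratio and 14x14 patch grid are assumptions based on common MAE/ViT settings, not taken from the abstract):

```python
import random

def mask_patches(num_patches, mask_ratio=0.75, seed=0):
    """MAE-style masking: keep a random subset of patch indices and mask
    the rest. The encoder sees only the kept patches; the decoder is
    trained to reconstruct the pixels of the masked ones. Sketch of the
    index-selection step only."""
    rng = random.Random(seed)
    ids = list(range(num_patches))
    rng.shuffle(ids)
    num_keep = int(num_patches * (1 - mask_ratio))
    keep = sorted(ids[:num_keep])
    masked = sorted(ids[num_keep:])
    return keep, masked

# 196 patches corresponds to a 14x14 grid for a 224x224 ViT input.
keep, masked = mask_patches(196)
```

Because the encoder processes only the kept 25% of patches, pre-training and fine-tuning are considerably cheaper than running the full token sequence.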
Funding: Shanghai Sailing Program, China (No. 21YF1401300); Shanghai Science and Technology Innovation Action Plan, China (No. 19511101802); Fundamental Research Funds for the Central Universities, China (No. 2232021D-25).
Abstract: The demand for image retrieval with text manipulation exists in many fields, such as e-commerce and Internet search. Deep metric learning methods are used by most researchers to calculate the similarity between the query and the candidate image by fusing the global feature of the query image with the text feature. However, the text usually corresponds to a local feature of the query image rather than the global feature. Therefore, in this paper, we propose a framework for image retrieval with text manipulation by local feature modification (LFM-IR), which can focus on the related image regions and attributes and perform the modification. A spatial attention module and a channel attention module are designed to realize the semantic mapping between image and text. We achieve excellent performance on three benchmark datasets, namely Color-Shape-Size (CSS), Massachusetts Institute of Technology (MIT) States, and Fashion200K (+8.3%, +0.7% and +4.6% in R@1).
Abstract: In recent years, image retrieval has become a tedious process as image databases have grown very large. The introduction of Machine Learning (ML) and Deep Learning (DL) has made this process easier. In these approaches, pair-wise label similarity is used to find matching images in the database; however, this method suffers from limited code expressiveness and weak handling of misclassified images. To overcome this problem, a novel triplet-based label that incorporates a context-spatial similarity measure is proposed. A Point Attention Based Triplet Network (PABTN) is introduced to learn codes with maximum discriminative ability. To improve ranking performance, correlating resolutions for classification, triplet labels based on the findings, a spatial-attention mechanism with Region Of Interest (ROI), and a new triplet cross-entropy loss containing small trial information loss are used. The experimental results show that the proposed technique performs better in terms of mean Reciprocal Rank (mRR) and mean Average Precision (mAP) on the CIFAR-10 and NUS-WIDE datasets.
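The paper's triplet cross-entropy loss is not specified in this abstract; the standard triplet margin loss that triplet networks build on can be sketched as follows (toy 2-D embeddings, illustrative margin):

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: require the positive to be at least
    `margin` closer to the anchor than the negative; zero loss once the
    ranking constraint is satisfied."""
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Toy 2-D embeddings: positive near the anchor, negative far away.
loss_easy = triplet_margin_loss((0, 0), (0.1, 0), (5, 0))  # satisfied
loss_hard = triplet_margin_loss((0, 0), (2, 0), (1, 0))    # violated
```

The cross-entropy variant described above replaces the hinge with a probabilistic comparison of the same three distances, but the anchor/positive/negative structure is identical.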
Abstract: Fine-grained image search is one of the most challenging tasks in computer vision; it aims to retrieve similar images at the fine-grained level for a given query image. The key objective is to learn discriminative fine-grained features by training deep models such that similar images are clustered and dissimilar images are separated in the low-dimensional embedding space. Previous works primarily focused on defining local structure loss functions, such as triplet loss and pairwise loss. However, training via these approaches takes a long time and yields poor accuracy. Additionally, representations learned through them tend to tighten up in the embedding space and lose generalizability to unseen classes. This paper proposes a noise-assisted representation learning method for fine-grained image retrieval to mitigate these issues. In the proposed work, class manifold learning is performed in which positive pairs are created with a noise-insertion operation instead of tightening class clusters, and other instances within the same cluster are treated as negatives. A loss function is then defined to penalize cases where the distance between instances of the same class becomes too small relative to the noise pair of that class in the embedding space. The proposed approach is validated on the CARS-196 and CUB-200 datasets and achieves better retrieval results (85.38% recall@1 for CARS-196 and 70.13% recall@1 for CUB-200) than other existing methods.
Funding: Supported by the National Natural Science Foundation of China (Nos. U1803262, 62176191, 62171325), the Natural Science Foundation of Hubei Province (2022CFB018), and the Hubei Province Key Laboratory of Intelligent Information Processing and Real-time Industrial System (Wuhan University of Science and Technology) (ZNXX2022001).
Abstract: In existing remote sensing image retrieval (RSIR) datasets, the number of images varies dramatically among classes, which leads to a severe class imbalance problem. Some studies propose to train the model with a ranking-based metric (e.g., average precision [AP]), because AP is robust to class imbalance. However, current AP-based methods overlook an important issue: they only optimize samples ranked before each positive sample, which is limited by the definition of AP and is prone to local optima. To achieve global optimization of AP, a novel method, namely Optimising Samples after positive ones & AP loss (OSAP-Loss), is proposed in this study. Specifically, a novel superior ranking function is designed to make the AP loss differentiable while providing a tighter upper bound. Then, a novel loss called Optimising Samples after Positive ones (OSP) loss is proposed to involve all positive and negative samples ranked after each positive one and to provide a more flexible optimization strategy for each sample. Finally, a graphics processing unit (GPU) memory-free mechanism is developed to thoroughly address the non-decomposability of AP optimization. Extensive experimental results on RSIR as well as conventional image retrieval datasets show the superiority and competitive performance of OSAP-Loss compared to the state of the art.
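AP itself, the metric that the OSAP-Loss surrogate targets, is computed from a ranked list of relevance labels. A plain, non-differentiable implementation (the exact metric, not the paper's differentiable surrogate) might look like:

```python
def average_precision(ranked_relevance):
    """AP of one ranked list: the mean of precision@k over the positions
    k at which a relevant item appears (1 = relevant, 0 = irrelevant)."""
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions) if precisions else 0.0

# Relevant items retrieved at ranks 1, 3 and 4.
ap = average_precision([1, 0, 1, 1, 0])
```

Because each precision@k term only counts items ranked at or before position k, optimizing AP directly ignores how samples after each positive are ordered, which is precisely the gap the OSP loss described above is designed to fill.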
Funding: Sponsored by the National Natural Science Foundation of China (Grants 62002200, 61772319) and the Shandong Natural Science Foundation of China (Grant ZR2020QF012).
Abstract: Given one specific image, it would be quite useful if we could simply retrieve all the pictures that fall into a similar category. However, traditional methods tend to achieve high-quality retrieval by utilizing many learning instances, ignoring the extraction of the image's essential information, which makes it difficult to retrieve similar-category images from just one reference image. To solve this problem, we propose a refined sparse-representation-based similar-category image retrieval model. On the one hand, saliency detection and multi-level decomposition allow salient and spatial information to be taken into consideration more fully. On the other hand, a cross mutual sparse coding model aims to extract the image's essential features to the maximum extent possible. Finally, we set up a database containing a large number of multi-source images. Extensive comparative experiments show that our method retrieves similar-category images effectively, and extensive ablation experiments show that nearly all procedures play their respective roles.
Funding: Funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. G:146-830-1441.
Abstract: Content-based medical image retrieval (CBMIR) is a technique for retrieving medical images based on automatically derived image features. There are many applications of CBMIR, such as teaching, research, diagnosis, and electronic patient records. Several methods have been applied to enhance the retrieval performance of CBMIR systems; developing new, effective similarity measures and feature fusion methods are two of the most powerful strategies for improving these systems. This study proposes the relative difference-based similarity measure (RDBSM) for CBMIR. The new measure is first used in the similarity calculation stage of CBMIR with an unweighted fusion of traditional color and texture features. The study also proposes a weighted fusion method for medical image features extracted using pre-trained convolutional neural network (CNN) models. The proposed RDBSM outperforms standard well-known similarity and distance measures on two popular medical image datasets, Kvasir and PH2, in terms of recall and precision. The effectiveness and quality of the proposed similarity measure are also demonstrated using a significance test and statistical confidence bounds.
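The abstract does not give the RDBSM formula. As a generic illustration only, the Canberra distance is a well-known measure of the same "relative difference" family, normalizing each per-dimension difference by the magnitudes being compared (this is not the paper's RDBSM):

```python
def canberra(a, b, eps=1e-12):
    """Canberra distance: per-dimension absolute difference divided by
    the sum of magnitudes, so differences between small feature values
    weigh as much as differences between large ones. Shown only as a
    generic example of a relative-difference measure."""
    return sum(abs(x - y) / (abs(x) + abs(y) + eps) for x, y in zip(a, b))

d_same = canberra([0.2, 0.5, 0.3], [0.2, 0.5, 0.3])
d_diff = canberra([0.2, 0.5, 0.3], [0.5, 0.2, 0.3])
```

Relative normalization of this kind matters for fused color-plus-texture vectors, where feature ranges can differ by orders of magnitude.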
Funding: Supported and funded by the KAU Scientific Endowment, King Abdulaziz University, Jeddah, Saudi Arabia, grant number 077416-04.
Abstract: The utilization of digital image search and retrieval has grown substantially in numerous fields over the last decade, owing to continuing advances in image processing and computer vision. In multiple real-life applications, for example social media, content-based face image retrieval is a well-studied technique for large-scale databases, where there is a significant need for reliable retrieval capabilities enabling quick search across a vast number of pictures. Humans widely employ faces for recognizing and identifying people, so face recognition through formal or personal pictures is increasingly used in various real-life applications, such as helping crime investigators retrieve matching images from face image databases to identify victims and criminals. However, such face image retrieval becomes more challenging in large-scale databases, where traditional vision-based face analysis requires ample additional storage space, beyond that already occupied by the raw face images, to store the extracted lengthy feature vectors, and takes much longer to process and match thousands of face images. This work mainly contributes to enhancing face image retrieval performance in large-scale databases using hash codes inferred by locality-sensitive hashing (LSH) for facial hard and soft biometrics (Hard BioHash and Soft BioHash, respectively), used as search input for retrieving the top-k matching faces. Moreover, we propose a multi-biometric score-level fusion of both hard and soft face BioHashes (Hard-Soft BioHash Fusion) for further augmented face image retrieval. Experimental outcomes on the Labeled Faces in the Wild (LFW) dataset and the related attributes dataset (LFW-attributes) demonstrate that the retrieval performance of the suggested fusion approach (Hard-Soft BioHash Fusion) significantly improved over solely using Hard BioHash or Soft BioHash in isolation: the suggested method provides an augmented accuracy of 87% when executed on 1000 specimens and 77% on 5743 samples. These results remarkably outperform the results of the Hard BioHash method by (50% on the 1000 samples and 30% on the 5743 samples) and the Soft BioHash method by (78% on the 1000 samples and 63% on the 5743 samples).
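The abstract does not show the hashing step itself. A minimal random-projection LSH sketch (hyperplane count, dimensionality, and the toy "face feature" vectors are all arbitrary assumptions) illustrates why similar feature vectors receive nearby hash codes:

```python
import random

def lsh_hash(vec, hyperplanes):
    """One hash bit per random hyperplane: the sign of the dot product.
    Similar vectors fall on the same side of most hyperplanes, so their
    codes agree in most bits."""
    return tuple(int(sum(v * w for v, w in zip(vec, h)) >= 0)
                 for h in hyperplanes)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(42)
dim, bits = 8, 16
planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

base = [rng.gauss(0, 1) for _ in range(dim)]
near = [x + 0.01 * rng.gauss(0, 1) for x in base]  # near-duplicate feature
far = [-x for x in base]                           # opposite direction

h_base, h_near, h_far = (lsh_hash(v, planes) for v in (base, near, far))
```

Retrieval then reduces to comparing short binary codes by Hamming distance instead of matching lengthy feature vectors, which is the storage and speed advantage claimed above.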
Funding: Supported by the National Natural Science Foundation of China (No. 61862041).
Abstract: Existing ciphertext-domain image retrieval systems struggle to balance security, retrieval efficiency, and retrieval accuracy. To solve this problem, this research proposes a secure image retrieval technique based on searchable encryption and deep hashing that extracts more expressive image features and constructs a secure, searchable encryption scheme. First, a deep learning framework based on a residual network and a transfer learning model is designed to extract more representative deep image features. Second, central similarity quantization is used to construct the deep hash sequence of the features, and Paillier homomorphic encryption encrypts the deep hash sequence to build a high-security, low-complexity searchable index. Finally, exploiting the additive homomorphic property of Paillier encryption, a similarity measurement method suitable for computation in the encrypted domain is designed, ensuring the security of the retrieval system. Experimental results obtained on the Web Image Database from the National University of Singapore (NUS-WIDE), Microsoft Common Objects in Context (MS COCO), and ImageNet datasets demonstrate the system's robust security and precise retrieval; the proposed scheme achieves efficient image retrieval without revealing user privacy. Retrieval accuracy is improved by at least 37% compared to traditional hashing schemes, while retrieval time is reduced by at least 9.7% compared to the latest deep hashing schemes.
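The additive homomorphism that makes encrypted-domain similarity computation possible can be demonstrated with textbook Paillier. This sketch uses tiny, insecure fixed primes purely to show the property; real deployments use keys of 2048 bits or more:

```python
import math
import random

def l_func(x, n):
    return (x - 1) // n

def paillier_keygen(p=100003, q=100019):
    """Textbook Paillier keygen with small (insecure) primes, using the
    common shortcut g = n + 1."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1
    mu = pow(l_func(pow(g, lam, n * n), n), -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m, rng):
    n, g = pub
    r = rng.randrange(1, n)  # random blinding factor
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    return (l_func(pow(c, lam, n * n), n) * mu) % n

rng = random.Random(0)
pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 123, rng), encrypt(pub, 456, rng)
c_sum = (c1 * c2) % (pub[0] ** 2)  # multiply ciphertexts = add plaintexts
```

Because ciphertext multiplication adds the underlying plaintexts, a server can accumulate encrypted per-bit or per-component differences of hash sequences without ever decrypting them, which is the basis of the encrypted-domain similarity measure described above.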
Funding: Supported by the National High Technology Research and Development Program of China (863 Program) (2007AA12Z136, 2007AA12Z223), the National Basic Research Program of China (973 Program) (2006CB705707), the National Natural Science Foundation of China (60672126, 60607010), and the Program for Cheung Kong Scholars and Innovative Research Team in University (IRT0645).
Abstract: A novel image retrieval approach based on color features and anisotropic directional information is proposed for content-based image retrieval (CBIR) systems. The color feature is described by the color histogram (CH), which is translation and rotation invariant. However, the CH contains no spatial information, which is very important for image retrieval. To overcome this shortcoming, the subband energy of the lifting directionlet transform (L-DT) is proposed to describe the directional information; compared with the wavelet transform, L-DT is characterized by multi-directional and anisotropic basis functions. A global similarity measure is designed to fuse the color feature and the anisotropic directionality in the retrieval process. Retrieval experiments on a set of COREL images demonstrate that higher query precision and better visual quality can be achieved.
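The translation and rotation invariance of the color histogram follows directly from discarding pixel positions, which this minimal single-channel sketch makes explicit (bin count and toy pixel data are arbitrary; the directionlet part is not sketched here):

```python
def color_histogram(pixels, bins=8):
    """Normalized histogram over one 8-bit channel: pixel positions are
    discarded, so any rearrangement of the same pixels (translation,
    rotation) yields an identical histogram."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Two tiny grayscale "images": identical pixels, different positions.
img_a = [10, 10, 200, 200, 90, 90]
img_b = [200, 200, 10, 10, 90, 90]
```

The same position-blindness is exactly why the histogram carries no spatial information, motivating the L-DT subband energy as a complementary descriptor.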
Abstract: In this paper, we present a novel and efficient scheme for extracting, indexing and retrieving color images. Our motivation was to reduce the space overhead of partition-based approaches by taking advantage of the fact that only a relatively small number of distinct values of a particular visual feature is present in most images. To extract color features and build indices into our image database, we take into consideration factors such as human color perception and perceptual range, and the image is partitioned into a set of regions by a simple classifying scheme. The compact color feature vector and the spatial color histogram, extracted from the segmented image regions, are used to represent the color and spatial information in the image. We have also developed region-based distance measures to compare the similarity of two images. Extensive tests on a large image collection were conducted to demonstrate the effectiveness of the proposed approach.
Abstract: Digital image collections have rapidly increased along with the development of computer networks. Image retrieval systems were developed to provide an efficient tool for obtaining, from a collection of images in a database, the set of images that matches the user's requirements in similarity evaluations such as image content, edge, and color similarity. Retrieving images based on content, namely color, texture, and shape, is called content-based image retrieval (CBIR). The content is the feature of an image; these features are extracted and used as the basis for a similarity check between images, with algorithms used to calculate the similarity between the extracted features. There are two kinds of content-based image retrieval: general image retrieval and application-specific image retrieval. For general image retrieval, the goal of the query is to obtain images containing the same object as the query; such CBIR imitates web search engines for images rather than for text. For application-specific retrieval, the purpose is to match a query image to a collection of images of a specific type, such as fingerprint or X-ray images. In this paper, the general architecture, various functional components, and techniques of CBIR systems are discussed. The CBIR techniques discussed are categorized as CBIR using color, CBIR using texture, and CBIR using shape features. This paper also presents a comparative study of color features, texture features, shape features, and combined features (hybrid techniques) in terms of several parameters: precision, recall, and response time.
Funding: Supported by the Project of the Science & Technology Department of Shanghai (No. 055115001).
Abstract: This paper introduces the principles of using color histograms to match images in CBIR, and a prototype CBIR system with a color matching function is designed. A new method using a 2-dimensional color histogram based on hue and saturation to extract and represent the color information of an image is presented. We also improve the Euclidean-distance algorithm by adding a Center of Color to it. Experiments show that the modifications made to the Euclidean distance significantly elevate the quality and efficiency of retrieval.
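A hue-saturation histogram can be sketched with the standard library's colorsys conversion; the bin counts and the two sample pixels are illustrative assumptions, not values from the paper:

```python
import colorsys

def hs_histogram(rgb_pixels, h_bins=8, s_bins=4):
    """2-D hue-saturation histogram: each RGB pixel is converted to HSV
    and votes into one (hue, saturation) cell. Value (brightness) is
    discarded, giving some robustness to illumination changes."""
    hist = [[0] * s_bins for _ in range(h_bins)]
    for r, g, b in rgb_pixels:
        h, s, _v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        hist[min(int(h * h_bins), h_bins - 1)][min(int(s * s_bins), s_bins - 1)] += 1
    return hist

# A saturated red pixel and a desaturated gray pixel.
hist = hs_histogram([(255, 0, 0), (128, 128, 128)])
```

Two such histograms, flattened into vectors, can then be compared with the (center-of-color-augmented) Euclidean distance described above.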
Abstract: The technique of image retrieval is widely used in scientific experiments, military affairs, public security, advertising, family entertainment, libraries, and so on. Existing algorithms are mostly based on the characteristics of color, texture, shape, and spatial relationships. This paper introduces an image retrieval algorithm based on matching a weighted combination of the EMD (Earth Mover's Distance) and a texture distance, where the EMD is computed between the histograms of two images in HSV (Hue, Saturation, Value) color space and the texture distance is the L1 distance between the texture spectra of the two images. The experimental results show that the retrieval rate can be increased noticeably by the proposed algorithm.
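For 1-D histograms of equal total mass, the EMD reduces to the L1 distance between cumulative distributions, which makes the weighted color-plus-texture matching easy to sketch (the 0.7/0.3 weights are illustrative, not the paper's):

```python
def emd_1d(h1, h2):
    """EMD between two 1-D histograms of equal mass: accumulate the
    running surplus (mass carried to the next bin) and sum its
    magnitude, which equals the L1 distance between the CDFs."""
    dist, carry = 0.0, 0.0
    for a, b in zip(h1, h2):
        carry += a - b
        dist += abs(carry)
    return dist

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def combined_distance(color_a, color_b, tex_a, tex_b, w_color=0.7):
    """Weighted sum of color EMD and texture L1 distance, mirroring the
    weighted matching described above (weights are assumptions)."""
    return w_color * emd_1d(color_a, color_b) + (1 - w_color) * l1(tex_a, tex_b)

# Moving 1 unit of mass across 2 bins costs 2.
d = emd_1d([1, 0, 0], [0, 0, 1])
```

Unlike bin-wise distances, EMD accounts for cross-bin ground distance, so two histograms shifted by one hue bin score as near rather than maximally far.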
Funding: This work is supported in part by the National Natural Science Foundation of China under grant numbers 61672294, 61502242, 61702276, U1536206, U1405254, 61772283, 61602253, 61601236 and 61572258; in part by the Six Peak Talent project of Jiangsu Province (R2016L13); in part by the National Key R&D Program of China under grant 2018YFB1003205; in part by NRF-2016R1D1A1B03933294; in part by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20150925 and BK20151530; in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund; and in part by the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET) fund, China. Zhihua Xia is supported by the BK21+ program of the Ministry of Education of Korea.
Abstract: Content-based image retrieval (CBIR) techniques have been widely deployed in many applications to access the abundant information contained in images. Because of the large storage and computational requirements of CBIR, outsourcing image search to a cloud provider is a very attractive option for many owners with small devices. However, owing to the private content contained in images, directly outsourcing retrieval work to the cloud provider raises privacy problems, so the images should be protected carefully before outsourcing. This paper presents a secure retrieval scheme for encrypted images in the YUV color space. In this scheme, the discrete cosine transform (DCT) is performed on the Y component; the resulting DC coefficients are encrypted with stream cipher technology, while the AC coefficients, as well as the other two color components, are encrypted with value permutation and position scrambling. The image owner then transmits the encrypted images to the cloud server. Upon receiving a query trapdoor from the query user, the server extracts an AC-coefficient histogram from the encrypted Y component and two color histograms from the other two color components. The similarity between the query trapdoor and a database image is measured by the Manhattan distance between their respective histograms. Finally, the encrypted images closest to the query image are returned to the query user.
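One reason histogram-based matching survives this kind of encryption is that position scrambling leaves the multiset of coefficient values, and hence their histogram, unchanged. A sketch of that property under a stated simplification (scrambling step only; the real scheme also substitutes values and stream-encrypts the DC terms):

```python
import random

def position_scramble(values, key=1234):
    """Keyed position scrambling: permute coefficient positions with a
    key-seeded shuffle. The multiset of values is unchanged, so any
    statistic computed over values alone (such as a histogram) is
    preserved on the ciphertext."""
    rng = random.Random(key)
    idx = list(range(len(values)))
    rng.shuffle(idx)
    return [values[i] for i in idx]

def histogram(values, bins, lo, hi):
    hist = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    return hist

plain = [3, 7, 7, 1, 9, 3, 3, 5]
cipher = position_scramble(plain)
```

The server can therefore compute the AC-coefficient and color histograms directly from the encrypted data and compare them by Manhattan distance without learning where any coefficient originally sat.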
Abstract: A novel content-based image retrieval (CBIR) algorithm using relevance feedback is presented. The proposed framework makes three major contributions: a novel feature descriptor called the color spectral histogram (CSH) to measure the similarity between images; a two-dimensional matrix-based indexing approach for short-term learning (STL); and long-term learning (LTL). In general, image similarities are measured from feature representations that include color quantization, texture, color, shape and edges; CSH, however, can describe the image feature with the histogram alone. Typically, the image retrieval process starts by computing the similarity between the query image and the images in the database; the major computation involved is that selecting the top-ranking images requires a sorting algorithm with a lower bound of at least O(n log n). A 2D matrix-based indexing of images can enormously reduce the search time in STL. The same structure is used for LTL, with the aim of reducing the amount of log information to be maintained. The performance of the proposed framework is analyzed and compared with existing approaches; the quantified results indicate that the proposed feature descriptor is more effective than existing feature descriptors originally developed for CBIR. In terms of STL, the proposed 2D matrix-based indexing minimizes the computational effort of retrieving similar images, and for LTL, the proposed algorithm maintains less log information than existing approaches.
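The 2D matrix index can be thought of as a bucket lookup keyed by a quantized 2-D feature, replacing the O(n log n) score-and-sort step with a direct lookup. An illustrative stand-in (the quantization scheme and feature values are assumptions, not the paper's structure):

```python
def build_index(features):
    """Index images by a quantized 2-D feature key: each image lands in
    the matrix cell addressed by its quantized features, so retrieval is
    a direct bucket lookup rather than scoring and sorting the whole
    database."""
    index = {}
    for image_id, (f1, f2) in features.items():
        key = (int(f1 * 10), int(f2 * 10))  # 10x10 quantization grid
        index.setdefault(key, []).append(image_id)
    return index

def lookup(index, query):
    f1, f2 = query
    return index.get((int(f1 * 10), int(f2 * 10)), [])

# Hypothetical per-image 2-D feature summaries.
db = {"a.jpg": (0.12, 0.85), "b.jpg": (0.13, 0.88), "c.jpg": (0.91, 0.10)}
idx = build_index(db)
matches = lookup(idx, (0.11, 0.83))
```

A production version would also probe neighboring cells so that near-boundary queries do not miss adjacent matches, but the lookup itself stays constant-time per cell.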