Computational similarity measures have been evaluated in a variety of ways, but few of the validated measures are based on a high-level, cognitive criterion of objective similarity. In this paper, we evaluate two popular objective similarity measures by comparing them with face matching performance in human observers. The results suggest that these measures are still limited in predicting human behavior, especially rejection behavior, but an objective measure that exploits both global and local face characteristics may improve the prediction. The results also suggest that humans may set different criteria for "hit" and "rejection" decisions, which has implications for biologically inspired computational systems.
Funding: the National Basic Research Program of China (Grant No. 2006CB303101), the National Natural Science Foundation of China (Grant Nos. 60433030, 30600182, and 30500157), and the Royal Society.
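To make the idea of separate decision criteria concrete, here is a minimal sketch (not taken from the paper) that scores a similarity measure against "hit" and "rejection" decisions using two independent thresholds; the score distributions and threshold values are made-up placeholders.

```python
import numpy as np

def hit_and_rejection_rates(same_scores, diff_scores, accept_thr, reject_thr):
    """Hit rate: fraction of same-identity pairs whose similarity exceeds the
    acceptance criterion.  Correct-rejection rate: fraction of different-identity
    pairs whose similarity falls below the (possibly different) rejection criterion."""
    same_scores = np.asarray(same_scores, dtype=float)
    diff_scores = np.asarray(diff_scores, dtype=float)
    hit_rate = np.mean(same_scores >= accept_thr)
    rejection_rate = np.mean(diff_scores < reject_thr)
    return hit_rate, rejection_rate

# Hypothetical similarity scores produced by some objective measure.
rng = np.random.default_rng(0)
same = rng.normal(0.7, 0.1, 500)   # same-identity pairs
diff = rng.normal(0.4, 0.1, 500)   # different-identity pairs
print(hit_and_rejection_rates(same, diff, accept_thr=0.6, reject_thr=0.5))
```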
To compensate for the deficiencies of existing methods for monitoring plane displacement in similarity model tests, such as inadequate real-time monitoring and excessive manual intervention, an effective monitoring method is proposed. Its major steps are as follows: first, time-series images of the similarity model are captured by a camera during the test; second, measuring points marked as artificial targets are automatically tracked and recognized in the time-series images; finally, the real-time plane displacement field is calculated from the fixed magnification between object and image under the specified conditions. An application device for the method was then designed and tested. A sub-pixel location method and a distortion error model are used to improve the measuring accuracy. The results indicate that the method can record the entire test, in particular detailed non-uniform deformation and sudden deformation. Compared with traditional methods, it offers greater measurement accuracy and reliability, less manual intervention, higher automation, strong practicality, and much richer measurement information.
Funding: the Program for New Century Excellent Talents in University (No. NCET-06-0477), the Independent Research Project of the State Key Laboratory of Coal Resources and Mine Safety of China University of Mining and Technology (No. SKLCRSM09X01), and the Fundamental Research Funds for the Central Universities.
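As a rough illustration of the tracking step, the sketch below locates an artificial target in each frame with OpenCV template matching and converts pixel motion to physical displacement through a fixed magnification. It is a stand-in under stated assumptions, not the authors' implementation: the sub-pixel refinement and distortion-error model they describe are omitted, and `mm_per_pixel` is a hypothetical calibration constant.

```python
import cv2
import numpy as np

def track_target(frame_gray, template_gray):
    """Locate an artificial target (template) in one grayscale frame and
    return its top-left pixel coordinates (integer precision)."""
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)
    return np.array(max_loc, dtype=float)  # (x, y) in pixels

def plane_displacement(frames, template, mm_per_pixel):
    """Track one marked point through a list of grayscale frames and convert
    pixel displacements to physical displacements with a fixed magnification."""
    positions = np.array([track_target(f, template) for f in frames])
    return (positions - positions[0]) * mm_per_pixel  # displacement in mm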
Background: Image matching is crucial in numerous computer vision tasks such as 3D reconstruction and simultaneous visual localization and mapping, and matching accuracy significantly affects subsequent processing. When image pairs contain comparable patterns whose feature pairs are positioned differently, their local similarity can cause incorrect matches if global motion consistency is disregarded. Methods: This study proposes an image-matching filtering algorithm based on global motion consistency. It can serve as a subsequent filter for the initial matching results produced by other matching algorithms, following the principle of motion smoothness. A particular matching algorithm first performs the initial matching; then, the rotation and movement information of the global feature vectors is combined to identify outlier matches. The principle is that if a match is accurate, the vectors formed by the matched points should have similar rotation angles and moving distances. Thus, global motion direction consistency and global motion distance consistency are used to reject outliers caused by similar patterns in different locations. Results: Four datasets were used to test the effectiveness of the proposed method: three datasets with similar patterns in different locations, which other algorithms easily mismatch, and one commonly used dataset representing the general image-matching problem. The experimental results suggest that the proposed method is more accurate than other state-of-the-art algorithms in identifying mismatches in the initial matching set. Conclusions: The proposed outlier-rejection matching method significantly improves matching accuracy for images with locally similar feature pairs in different locations and provides more accurate matching results for subsequent computer vision tasks.
Funding: the National Natural Science Foundation of China (Nos. 62072388 and 62276146), the Industry Guidance Project Foundation of the Science and Technology Bureau of Fujian Province (2020H0047), the Natural Science Foundation of the Science and Technology Bureau of Fujian Province (2019J01601), the Creation Fund Project of the Science and Technology Bureau of Fujian Province (JAT190596), and the Putian University Research Project (2022034).
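A minimal sketch of the consistency check described above, assuming the initial matcher returns matched point coordinates as two N×2 arrays; the median-based thresholds are illustrative choices, not the paper's exact statistics.

```python
import numpy as np

def filter_by_global_motion(pts1, pts2, angle_tol_deg=10.0, dist_tol=0.2):
    """Reject matches whose motion vector disagrees with the global motion.

    pts1, pts2 : (N, 2) arrays of matched keypoint coordinates in image 1 and 2.
    A match is kept only if its rotation angle lies within angle_tol_deg of the
    median angle and its length is within dist_tol (relative) of the median length.
    """
    vecs = np.asarray(pts2, float) - np.asarray(pts1, float)
    angles = np.degrees(np.arctan2(vecs[:, 1], vecs[:, 0]))
    lengths = np.linalg.norm(vecs, axis=1)

    med_angle = np.median(angles)
    med_len = np.median(lengths)

    # Wrap angle differences into [-180, 180] before thresholding.
    dang = (angles - med_angle + 180.0) % 360.0 - 180.0
    keep = (np.abs(dang) <= angle_tol_deg) & \
           (np.abs(lengths - med_len) <= dist_tol * max(med_len, 1e-9))
    return keep  # boolean mask over the initial matches
```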
A new algorithm using polar coordinate system similarity (PCSS) for tracking particles in particle tracking velocimetry (PTV) is proposed. The essence of the algorithm is to consider simultaneously the changes in distance and angle of surrounding particles relative to the object particle. Monte Carlo simulations of a solid-body rotational flow and a parallel shearing flow are used to investigate the flows measurable by PCSS and the influence of experimental parameters on the algorithm. The results indicate that the PCSS algorithm can be applied to flows with strong rotation and is not sensitive to experimental parameters, in contrast to the conventional binary image cross-correlation (BICC) algorithm. Finally, PCSS is applied to images from a real experiment.
Funding: the National Natural Science Foundation of China (50206019).
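The following sketch illustrates the polar-coordinate comparison underlying PCSS under simplifying assumptions (a fixed search radius and hand-picked tolerances); it is not the authors' implementation.

```python
import numpy as np

def polar_neighbours(points, idx, radius):
    """Polar coordinates (r, theta) of particles around points[idx]."""
    rel = np.delete(points, idx, axis=0) - points[idx]
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])
    mask = r <= radius
    return np.stack([r[mask], theta[mask]], axis=1)

def pcss_score(p1, i1, p2, i2, radius=20.0, dr=2.0, dtheta=np.deg2rad(10)):
    """Count neighbours of particle i1 in frame 1 that have a counterpart with
    similar distance and angle around candidate particle i2 in frame 2."""
    n1 = polar_neighbours(p1, i1, radius)
    n2 = polar_neighbours(p2, i2, radius)
    score = 0
    for r, th in n1:
        d_ang = np.abs((n2[:, 1] - th + np.pi) % (2 * np.pi) - np.pi)
        if np.any((np.abs(n2[:, 0] - r) < dr) & (d_ang < dtheta)):
            score += 1
    return score  # the candidate with the highest score is taken as the match
```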
Classifying the visual features in images to retrieve a specific image is a significant problem in computer vision, especially when dealing with historical faded color images, and there have been many efforts to automate the classification and retrieve similar images accurately. To this end, we use a VGG19 deep convolutional neural network to extract visual features from the images automatically. The distances among the extracted feature vectors are then measured and a similarity score is generated using a Siamese deep neural network. The Siamese model was first built and trained from scratch but did not yield high evaluation metrics, so it was rebuilt on top of the pre-trained VGG19 model, which produced better results. Three different distance metrics combined with the sigmoid activation function were then compared to find the most accurate way of measuring the similarity among the retrieved images; the cosine distance metric gave the highest scores. Moreover, running the code on a graphics processing unit (GPU) instead of the central processing unit (CPU) further reduced both training and retrieval time. After extensive experimentation, we reached a satisfactory solution with F-scores of 0.98 for classification and 0.99 for retrieval.
Funding: the Deanship of Scientific Research at Umm Al-Qura University (Grant Code: 22UQU4400271DSR01).
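A compact sketch of the feature-extraction and cosine-similarity stages using a pre-trained VGG19 from torchvision (API of torchvision ≥ 0.13); the Siamese training described in the paper is omitted, and the preprocessing constants are the standard ImageNet values, assumed rather than taken from the paper.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Pre-trained VGG19 used as a fixed feature extractor (final classification layer removed).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT)
vgg.classifier = torch.nn.Sequential(*list(vgg.classifier.children())[:-1])
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    """4096-dimensional feature vector for one image file."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return vgg(x).squeeze(0)

def cosine_score(path_a, path_b):
    """Cosine similarity between two image embeddings (1.0 = identical direction)."""
    return F.cosine_similarity(embed(path_a), embed(path_b), dim=0).item()
```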
Invoice document digitization is crucial for efficient management in industry. Scanned invoice images are often noisy for various reasons, which reduces OCR (optical character recognition) accuracy. In this paper, letter data obtained from invoice images are denoised using a modified autoencoder-based deep learning method. A stacked denoising autoencoder (SDAE) is implemented with two hidden layers in each of the encoder and decoder networks. To capture the most salient features of the training samples, an undercomplete autoencoder with non-linear encoder and decoder functions is designed. The autoencoder is regularized for the denoising task with a combined loss function that considers both mean squared error and binary cross-entropy. A dataset of 59,119 letter images, containing English letters (upper and lower case) and digits (0 to 9) and prepared from many scanned invoice images and Windows TrueType (.ttf) files, is used to train the network. Performance is analyzed in terms of signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and universal image quality index (UQI), and compared with other filtering techniques such as the non-local means filter, anisotropic diffusion filter, Gaussian filter, and mean filter. The denoising performance of the proposed SDAE is also compared with an SDAE trained with a single loss function in terms of SNR and PSNR. The results show the superior performance of the proposed method.
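A possible form of the combined loss, sketched in PyTorch under the assumption that reconstructions and targets are scaled to [0, 1] and that the two terms are blended with a weight `alpha`; the paper states only that both terms are used, not how they are weighted.

```python
import torch
import torch.nn as nn

class CombinedDenoisingLoss(nn.Module):
    """Weighted sum of mean-squared error and binary cross-entropy between a
    reconstruction and its clean target (both assumed to lie in [0, 1])."""
    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha
        self.mse = nn.MSELoss()
        self.bce = nn.BCELoss()

    def forward(self, reconstruction, clean_target):
        return self.alpha * self.mse(reconstruction, clean_target) + \
               (1.0 - self.alpha) * self.bce(reconstruction, clean_target)

# Usage with a hypothetical autoencoder whose decoder ends in a sigmoid:
# loss = CombinedDenoisingLoss(alpha=0.5)(model(noisy_batch), clean_batch)
```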
A phishing detection system comprising a client-side filtering plug-in, an analysis center, and protected sites is proposed. An image-based similarity detection algorithm is conceived to calculate the similarity of two web pages. The web pages are first converted into images and then divided into sub-images by iterative dividing and shrinking. The attributes of the sub-images, including color histograms, gray histograms, and size parameters, are then computed to construct the attributed relational graph (ARG) of each page. To match two ARGs, the inner earth mover's distances (EMD) between every pair of nodes, one from each ARG, are first computed, and the similarity of the web pages is then obtained from the outer EMD between the two ARGs to detect phishing pages. The experimental results show that the proposed architecture and algorithm are robust and scalable and can effectively detect phishing.
Funding: the National Basic Research Program of China (973 Program) (2010CB328104, 2009CB320501), the National Natural Science Foundation of China (Nos. 60773103 and 90912002), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 200802860031), and the Key Laboratory of Computer Network and Information Integration of the Ministry of Education of China (No. 93K-9).
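As a simplified stand-in for the nested EMD matching of ARGs, the sketch below compares only the global gray histograms of two rendered pages with a one-dimensional earth mover's distance; the full algorithm operates on sub-image attributes and graph nodes.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def gray_histogram(img_gray, bins=256):
    """Normalized gray-level histogram of a page screenshot (2-D uint8 array)."""
    hist, _ = np.histogram(img_gray, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def histogram_emd(img_a, img_b):
    """1-D earth mover's distance between the gray histograms of two pages.
    Smaller values indicate more visually similar pages."""
    levels = np.arange(256)
    return wasserstein_distance(levels, levels,
                                gray_histogram(img_a), gray_histogram(img_b))
```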
Image classifiers based on deep neural networks (DNNs) have been shown to be easily fooled by well-designed perturbations. Previous defense methods either require expensive computation or reduce the accuracy of the classifiers. In this paper, we propose a novel defense method based on perceptual hashing. Our main goal is to disrupt the generation of perturbations by comparing the similarities of images, thereby achieving defense. To verify the idea, we defended against two main attack methods (a white-box attack and a black-box attack) on different DNN-based image classifiers and show that, with our defense in place, the attack success rate for all classifiers decreases significantly. Specifically, for the white-box attack, the attack success rate is reduced by 36.3% on average; for the black-box attack, the average attack success rates of targeted and non-targeted attacks are reduced by 72.8% and 76.7%, respectively. The proposed method is simple and effective and provides a new way to defend against adversarial samples.
Funding: the National Key Research and Development Program of China (2016QY01W0200) and the National Natural Science Foundation of China (U1636101, U1736211, U1636219).
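The paper does not specify which perceptual hash is used; the sketch below uses a common average hash plus Hamming distance as one plausible way to compare a query image with a reference.

```python
import cv2
import numpy as np

def average_hash(img_gray, hash_size=8):
    """Perceptual (average) hash: shrink the image, threshold against the mean,
    and return a flat boolean array of hash_size * hash_size bits."""
    small = cv2.resize(img_gray, (hash_size, hash_size), interpolation=cv2.INTER_AREA)
    return (small > small.mean()).flatten()

def hamming_distance(hash_a, hash_b):
    """Number of differing bits; small distances indicate near-duplicate images."""
    return int(np.count_nonzero(hash_a != hash_b))

# A query whose hash is very close to a reference image's hash can be treated
# as a (possibly perturbed) copy of that reference.
```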
Similarity measures have long played a critical role and attracted great interest in areas such as pattern recognition and machine perception. Nevertheless, developing an efficient two-dimensional (2D) robust similarity measure for images remains an open issue. Inspired by the properties of subspaces, we develop an effective 2D image similarity measure, named the transformation similarity measure (TSM), for robust face recognition. Specifically, the TSM method robustly determines the similarity between two well-aligned frontal facial images while weakening interference in face recognition through linear transformation and singular value decomposition. We present the mathematical features and properties of TSM to reveal its feasible and robust measurement mechanism. The performance of the TSM method, combined with the nearest-neighbor rule, is evaluated on face recognition under different challenges. Experimental results clearly show the advantages of the TSM method in terms of accuracy and robustness.
Funding: the National Natural Science Foundation of China (No. 61873106), the Natural Science Foundation of Jiangsu Province, China (No. BK20171264), the Jiangsu Qing Lan Project to Cultivate Middle-Aged and Young Science Leaders, the Jiangsu Six Talent Peak Project (Nos. XYDXX-047 and XYDXX-140), the University Science Research General Project of Jiangsu Province (Nos. 18KJB520005 and 19KJB520004), the Innovation Fund Project of the Key Laboratory of Intelligent Perception and Systems for High-Dimensional Information of the Ministry of Education, China (No. JYB201609), the Lianyungang Hai Yan Plan (Nos. 2018-ZD-003, 2018-QD-001, and 2018-QD-012), the Science and Technology Project of Lianyungang High-tech Zone (Nos. ZD201910 and ZD201912), and the Natural Science Foundation Project of Huaihai Institute of Technology (No. Z2017005).
Purpose – The purpose of this paper is to demonstrate the effectiveness and advantages of using perceptual tolerance neighbourhoods in tolerance space-based image similarity measures and their application to content-based image classification and retrieval. Design/methodology/approach – The proposed method is based on a set-theoretic approach in which an image is viewed as a set of local visual elements. The method also includes a tolerance relation that detects the similarity between pairs of elements if the difference between the corresponding feature vectors is less than a threshold ε ∈ (0, 1). Findings – It is shown that tolerance space-based methods can be successfully used in a complete content-based image retrieval (CBIR) system, and that perceptual tolerance neighbourhoods can replace tolerance classes in CBIR, yielding higher accuracy with less computation. Originality/value – The main contribution of this paper is the introduction of perceptual tolerance neighbourhoods in place of tolerance classes in a new form of the Henry-Peters tolerance-based nearness measure (tNM) and in a new neighbourhood-based tolerance-covering nearness measure (tcNM). Moreover, the paper presents a side-by-side comparison of the tolerance space-based methods with other published methods on a test dataset of images.
Funding: Natural Sciences and Engineering Research Council of Canada grant 185986.
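A minimal sketch of the tolerance relation and the resulting neighbourhood, assuming each local visual element is represented by a normalized feature vector; the feature values below are hypothetical.

```python
import numpy as np

def tolerance_neighbourhood(features, x_idx, eps):
    """Indices of all visual elements y such that ||phi(x) - phi(y)|| < eps,
    i.e. the perceptual tolerance neighbourhood of element x_idx.

    features : (N, d) array, one feature vector per local visual element
    eps      : tolerance threshold in (0, 1) for normalized features
    """
    diffs = np.linalg.norm(features - features[x_idx], axis=1)
    return np.flatnonzero(diffs < eps)

# Example with hypothetical normalized feature vectors:
rng = np.random.default_rng(1)
phi = rng.random((100, 4))
print(tolerance_neighbourhood(phi, x_idx=0, eps=0.3))
```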
Compressed sensing (CS) has achieved great success in removing single types of noise, but it cannot efficiently restore images contaminated with mixed noise. This paper introduces nonlocal similarity and cosparsity, inspired by compressed sensing, to overcome the difficulties of mixed noise removal: nonlocal similarity exploits the signal sparsity of similar patches, while cosparsity assumes that the signal is sparse after a possibly redundant transform. Meanwhile, an adaptive scheme based on local variance is designed to balance mixed noise removal against detail preservation. Finally, IRLSM and RACoSaMP are adopted to solve the objective function. Experimental results demonstrate that the proposed method is superior to conventional CS methods such as K-SVD and to the state-of-the-art nonlocally centralized sparse representation (NCSR) method in terms of both visual quality and quantitative measures.
Funding: the National Natural Science Foundation of China (Nos. 61403146 and 61603105), the Fundamental Research Funds for the Central Universities (No. 2015ZM128), and the Science and Technology Program of Guangzhou, China (Nos. 201707010054 and 201704030072).
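A small sketch of the nonlocal-similarity step only: grouping the patches most similar to a reference patch, which is the raw material that sparse and cosparse models then operate on. Patch size, stride, and group size are illustrative choices, not the paper's settings.

```python
import numpy as np

def extract_patches(img, patch=8, stride=4):
    """All overlapping patches of a grayscale image, flattened to row vectors."""
    h, w = img.shape
    coords, rows = [], []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            coords.append((i, j))
            rows.append(img[i:i + patch, j:j + patch].ravel())
    return np.array(rows, dtype=float), coords

def nonlocal_similar_patches(img, ref_index, k=16, patch=8, stride=4):
    """Indices of the k patches most similar (Euclidean distance) to a reference
    patch -- the nonlocal group exploited by the sparse model."""
    patches, _ = extract_patches(img, patch, stride)
    dists = np.linalg.norm(patches - patches[ref_index], axis=1)
    return np.argsort(dists)[:k]  # includes the reference patch itself
```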
Angiomyolipoma (AML) is a benign mesenchymal tumor that has been frequently reported in the kidney but rarely in the liver. AML is composed of fat, vascular, and smooth muscle elements. Because the proportions of these constituents vary, hepatic AML may be clinically, radiologically, and morphologically difficult to distinguish from hepatocellular carcinoma (HCC) or other hepatic lesions. Here we report a case of pathologically confirmed hepatic AML that was previously diagnosed as HCC based on imaging examinations.
The problem of robustly aligning batches of images can be formulated as a low-rank matrix optimization problem, relying on the similarity of well-aligned images. Going further, and observing that the images to be aligned are sampled from a union of low-rank subspaces, we propose a new method based on subspace recovery techniques to provide more robust and accurate alignment. The proposed method seeks a set of domain transformations which, applied to the unaligned images, make the resulting images as similar as possible. The resulting optimization problem can be linearized as a series of convex optimization problems that can be solved by alternating sparsity pursuit techniques. Compared with existing methods such as robust alignment by sparse and low-rank models, the proposed method solves the batch image alignment problem more effectively and extracts more similar structures from the misaligned images.
Funding: the National Natural Science Foundation of China (Grant Nos. 61573150, 61573152, 61370185, 61403085, and 51275094) and Guangzhou Project Nos. 201604016113 and 201604046018.
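To illustrate only the low-rank similarity criterion (not the full optimization with transformations and sparse error terms), the sketch below scores a batch of equally sized images by the nuclear norm of their stacked vectorizations; better-aligned batches of the same scene yield smaller values.

```python
import numpy as np

def misalignment_measure(images):
    """Nuclear norm (sum of singular values) of the matrix whose columns are
    the vectorized images.  Well-aligned images of the same scene are highly
    correlated, so the stacked matrix is closer to low rank and the norm is smaller."""
    stack = np.stack([img.ravel().astype(float) for img in images], axis=1)
    return np.linalg.norm(stack, ord="nuc")

# An alignment search can compare candidate transformations by recomputing
# this measure on the transformed images and preferring the smallest value.
```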
Radiology doctors perform text-based image retrieval when they want to retrieve medical images, but the accuracy and efficiency of such retrieval cannot keep up with requirements. An algorithm is proposed to retrieve similar medical images. First, professional terms are extracted from the ontology structure and used to annotate the CT images. Second, the semantic similarity matrix of the ontology terms is calculated according to the structure of the ontology. Finally, the corresponding semantic distance is calculated from the annotation vectors, which contain the different annotations. We ran the algorithm on 120 real liver CT images (divided into six categories) from a top-tier hospital. The results show that retrieval precision is 80.81%, and the classification AUC (area under the ROC (receiver operating characteristic) curve) is 0.945.
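The paper's exact distance formula is not given here; as one hedged illustration, the sketch below scores two annotation vectors through a term-by-term semantic similarity matrix, so that related (not only identical) ontology terms contribute to the similarity. All names and values are hypothetical.

```python
import numpy as np

def semantic_similarity(a, b, term_sim):
    """Similarity between two images described by binary annotation vectors a
    and b over the same ontology terms, given a term-by-term semantic
    similarity matrix term_sim (entries in [0, 1])."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    denom = np.sqrt(a.sum() * b.sum())
    if denom == 0:
        return 0.0
    return float(a @ term_sim @ b) / denom  # soft overlap weighted by term similarity

# Hypothetical example: 5 ontology terms, two annotated CT images.
S = np.eye(5)
S[0, 1] = S[1, 0] = 0.8      # terms 0 and 1 are close in the ontology
img1 = [1, 0, 1, 0, 0]
img2 = [0, 1, 1, 0, 0]
print(semantic_similarity(img1, img2, S))
```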