In recent years, many medical image fusion methods have been developed to derive useful information from multimodality medical image data, but there has been no appropriate fusion algorithm for anatomical and functional medical images. In this paper, the traditional wavelet fusion method is improved and a new fusion algorithm for anatomical and functional medical images is proposed, in which the high-frequency and low-frequency coefficients are treated separately. When choosing the high-frequency coefficients, the global gradient of each sub-image is calculated to realize adaptive fusion, so that the fused image preserves the functional information; the choice of the low-frequency coefficients is based on an analysis of neighborhood region energy, so that the fused image preserves the anatomical image's edge and texture features. Experimental results and quality evaluation parameters show that the improved fusion algorithm can enhance edge and texture features and effectively retain both the functional and the anatomical information.
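The abstract above does not spell out its fusion rules in code. As a rough illustration only, the sketch below combines a neighborhood-energy rule for the low-frequency band with a coefficient-magnitude rule for the detail bands; all function names, the 3×3 window, and the one-level Haar transform (standing in for the paper's wavelet decomposition and its gradient-based weighting) are our assumptions, not the paper's method:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    LL = (a + b + c + d) / 4.0
    LH = (a + b - c - d) / 4.0
    HL = (a - b + c - d) / 4.0
    HH = (a - b - c + d) / 4.0
    return LL, (LH, HL, HH)

def region_energy(band, k=3):
    """Sum of squared coefficients over a k x k neighborhood (zero-padded)."""
    pad = k // 2
    p = np.pad(band ** 2, pad)
    out = np.zeros_like(band)
    for i in range(k):
        for j in range(k):
            out += p[i:i + band.shape[0], j:j + band.shape[1]]
    return out

def fuse_haar(img_a, img_b, k=3):
    """Fuse two equal-sized images in the Haar domain with two rules."""
    LLa, highs_a = haar_dwt2(img_a)
    LLb, highs_b = haar_dwt2(img_b)
    # Low-frequency rule: keep the coefficient whose neighborhood energy is larger.
    LL = np.where(region_energy(LLa, k) >= region_energy(LLb, k), LLa, LLb)
    # High-frequency rule: keep the larger-magnitude detail coefficient
    # (a simple stand-in for the paper's gradient-based adaptive weighting).
    highs = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)
             for ha, hb in zip(highs_a, highs_b)]
    return LL, highs
```

The fused coefficients would then be passed through the inverse transform to synthesize the output image.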
Fusing medical images is a topic of interest in medical image processing. It is achieved by fusing information from multimodality images in order to increase clinical diagnosis accuracy. This fusion aims to improve image quality and preserve modality-specific features. Medical image fusion methods generally draw on knowledge from many different fields, such as clinical medicine, computer vision, digital imaging, machine learning, and pattern recognition. There are two main approaches to image fusion: the spatial domain approach and the transform domain approach. This paper proposes a new algorithm for fusing multimodal images, based on entropy optimization and the Sobel operator. The wavelet transform is used to split the input images into low- and high-frequency components. Two fusion rules are then used to obtain the fused images. The first rule, based on the Sobel operator, is applied to the high-frequency components. The second rule, based on entropy optimization using the Particle Swarm Optimization (PSO) algorithm, is applied to the low-frequency components. The proposed algorithm is evaluated on images related to central nervous system diseases. The experimental results show that the proposed algorithm outperforms some recent methods in terms of brightness level, contrast, entropy, gradient, Visual Information Fidelity for Fusion (VIFF), and Feature Mutual Information (FMI) indices.
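A Sobel-based rule for the high-frequency components can be sketched as follows. This is an illustration under our own assumptions (the abstract does not give the exact rule): per pixel, the detail coefficient with the stronger edge response wins. The correlation is hand-rolled so the snippet needs only NumPy; the sign flip between correlation and true convolution does not affect the gradient magnitude:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv2_same(img, kernel):
    """Naive 'same'-size 2-D correlation with zero padding."""
    k = kernel.shape[0]; pad = k // 2
    p = np.pad(img, pad)
    out = np.zeros(img.shape, dtype=float)
    for i in range(k):
        for j in range(k):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def sobel_magnitude(band):
    """Edge-strength map of a sub-band."""
    gx = conv2_same(band, SOBEL_X)
    gy = conv2_same(band, SOBEL_Y)
    return np.hypot(gx, gy)

def fuse_high_by_sobel(band_a, band_b):
    """Per pixel, keep the coefficient from the sub-band with the stronger edge response."""
    return np.where(sobel_magnitude(band_a) >= sobel_magnitude(band_b), band_a, band_b)
```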
Medical image fusion is a synthesizing technology that fuses multi-modal medical information using mathematical procedures to generate better visualization of the image content and high-quality image output. Medical image fusion plays an indispensable role in solving complicated medical problems; recent research has favored the preservation of medical image details while leaving color distortion and halo artifacts unaddressed. This paper proposes a novel method for fusing Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) using a hybrid model of the Non-Subsampled Contourlet Transform (NSCT) and Joint Sparse Representation (JSR). This model meets the need for precise integration of medical images of different modalities, an essential requirement in diagnosis and in treating patients accordingly. In the proposed model, the medical image is decomposed using NSCT, an efficient shift-invariant decomposition transform. JSR is used to extract the common features of the medical images for the fusion process. The performance analysis proves that the proposed image fusion technique is more efficient, provides better results, and achieves a high level of distinctness by integrating the advantages of complementary images. The comparative analysis shows that the proposed technique exhibits better quality than existing medical image fusion practices.
In order to meet the requirements of medical research, diagnosis, and treatment, a new algorithm for image fusion based on the wavelet packet transform, evaluated with both subjective and objective assessments, is put forward in this paper. Compared to the wavelet transform, the wavelet packet transform is more intricate and more effective for medical image fusion. As indicated by the experimental results, the assessment parameters of the new algorithm are significantly superior to those of the wavelet transform, demonstrating its practicability and accuracy.
This paper presents a low-complexity, highly energy-efficient MRI image fusion method intended for wireless visual sensor networks, which leads to improved understanding and treatment planning, especially in radiology. This is done by combining the original images in a way that yields a significant reduction in computation time and frequency. The proposed technique overcomes the computation and energy limitations of low-power devices and is evaluated in terms of image quality and energy consumption. Simulations are performed using MATLAB 2018a to quantify the resulting energy savings, and the results show that the proposed algorithm is very fast and consumes only about 1% of the energy of the hybrid fusion schemes. Likewise, the simplicity of our proposed method makes it more suitable for real-time applications.
In this paper, we propose a new image fusion algorithm based on the two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT). The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP) approach, using a trivariate statistical model for the local neighborhoods of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on Principal Component Analysis (PCA) is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over state-of-the-art fusion methods in terms of visual quality and several commonly used metrics. The robustness of the proposed method is further tested against different types of noise. The plots of the fusion metrics establish the accuracy of the proposed fusion method.
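A PCA-based fusion rule of the kind described for the approximation coefficients weights the two bands by the leading eigenvector of their 2×2 covariance matrix. The minimal sketch below is our own illustration of that idea (the function name and normalization are assumptions, not the paper's exact rule):

```python
import numpy as np

def pca_fuse(a, b):
    """Weight two approximation bands by the leading eigenvector of their 2x2 covariance."""
    x = np.stack([a.ravel(), b.ravel()])    # 2 variables x N samples
    cov = np.cov(x)                         # 2x2 covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    w = np.abs(vecs[:, np.argmax(vals)])    # principal eigenvector
    w = w / w.sum()                         # normalize to convex weights
    return w[0] * a + w[1] * b
```

The source that contributes more variance receives the larger weight, so a band that carries most of the signal dominates the fused approximation.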
An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to a person's health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI). CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images alone has little effect on enhancing the accuracy of brain tumor diagnosis; therefore, machine learning methods have been adopted to diagnose brain tumors in recent years. This paper develops a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts by preprocessing the images to reduce noise. Then, fusion rules are applied to obtain the fused image, and a segmentation algorithm is employed to isolate the tumor region from the background. Finally, a machine learning classifier classifies the brain images into benign and malignant tumors. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes are provided, and the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
Breast cancer is the most frequently detected tumor and can result in a significant increase in female mortality globally. According to clinical statistics, one woman in eight is under threat of breast cancer. Lifestyle and inheritance patterns may be a reason behind its spread among women. However, preventive measures such as tests and periodic clinical checks can mitigate its risk, thereby improving survival chances substantially. Early diagnosis and initial-stage treatment can help increase the survival rate. For that purpose, pathologists can gather support from nondestructive and efficient computer-aided diagnosis (CAD) systems. This study explores a breast cancer CAD method relying on multimodal medical imaging and decision-based fusion. In multimodal medical imaging fusion, a deep learning approach is applied, obtaining 97.5% accuracy with a 2.5% miss rate for breast cancer prediction. A deep extreme learning machine technique applied to feature-based data provided 97.41% accuracy. Finally, decision-based fusion applied to both breast cancer prediction models to diagnose its stages resulted in an overall accuracy of 97.97%. The proposed system model provides more accurate results than other state-of-the-art approaches, rapidly diagnosing breast cancer to decrease its mortality rate.
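Decision-level fusion of the kind described above can be as simple as a weighted soft vote over the two models' predicted probabilities followed by a hard threshold. The sketch below is a hypothetical illustration (the weight, threshold, and function name are our assumptions, not the paper's scheme):

```python
import numpy as np

def fuse_decisions(p_image, p_feature, w=0.6, threshold=0.5):
    """Hypothetical decision-level fusion: weighted average of the imaging
    model's and the feature model's probabilities, then a hard threshold."""
    p = w * np.asarray(p_image, dtype=float) + (1 - w) * np.asarray(p_feature, dtype=float)
    labels = (p >= threshold).astype(int)   # 1 = malignant, 0 = benign
    return p, labels
```

In practice the weight would be tuned on validation data so that the fused decision outperforms either model alone.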
The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study addresses the need for precise and interpretable diagnostic tools to improve clinical decision-making for COVID-19 diagnosis. This paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia based on the lung Region of Infection (ROI) within a single-modality radiographic CT scan. First, an efficient U-shaped transformer network is integrated for lung image segmentation. Then, a robust transfer learning technique is introduced to design a feature extractor based on pre-trained lightweight Big Transfer (BiT-L), fine-tuned on medical data to effectively learn the patterns of infection in the input image. Second, this work presents a visual explanation method to guarantee clinical explainability for decisions made by the Conformer Network. Experimental evaluation on real-world CT data demonstrates that the diagnostic accuracy of our model outperforms cutting-edge studies with statistical significance. The Conformer Network achieves 97.40% detection accuracy under cross-validation settings. Our model not only achieves high sensitivity and specificity but also affords visualizations of the salient features contributing to each classification decision, enhancing its overall transparency and trustworthiness. The findings have clear implications for the ability of our model to empower clinical staff by generating transparent intuitions about the features driving diagnostic decisions.
Funding: The National High Technology Research and Development Program of China ('863' Program), grant number 2007AA02Z4A9; the National Natural Science Foundation of China, grant number 30671997.
Funding: Supported by the KIAS (Research No. CG076601) and in part by the Sejong University Faculty Research Fund.
Funding: Funded by King Saud University, Riyadh, Saudi Arabia, Researchers Supporting Project Number (RSP2024R167).