Detecting brain tumours is complex due to the natural variation in their location, shape, and intensity in images. While accurate detection and segmentation of brain tumours would be beneficial, current methods still fall short despite the numerous available approaches. Precise analysis of Magnetic Resonance Imaging (MRI) is crucial for detecting, segmenting, and classifying brain tumours in medical diagnostics, and it requires precise, efficient, and reliable image analysis techniques. The authors developed a Deep Learning (DL) fusion model to classify brain tumours reliably. Because DL models require large amounts of training data to achieve good results, the researchers used data augmentation to increase the dataset size for training. VGG16, ResNet50, and convolutional deep belief networks extracted deep features from MRI images. Softmax was used as the classifier, and the training set was supplemented with synthetically created MRI images of brain tumours in addition to the genuine ones. The features of two DL models were combined to generate a fusion model, which significantly increased classification accuracy. An openly accessible dataset was used to test the model's performance, and the experiments showed that the proposed fusion model achieved a classification accuracy of 98.98%. Finally, the results were compared with existing methods, and the proposed model outperformed them significantly.
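The abstract's core idea, feature-level fusion of two backbones followed by a softmax classifier, can be sketched as below. This is an illustrative toy, not the authors' exact model: the feature dimensions, the random placeholder weights, and the four-class setup are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical pooled feature vectors for one MRI slice from two backbones
# (e.g. VGG16-like and ResNet50-like); sizes are typical but assumed here.
feat_vgg = rng.normal(size=512)
feat_resnet = rng.normal(size=2048)

# Feature-level fusion: simple concatenation of the two feature vectors.
fused = np.concatenate([feat_vgg, feat_resnet])

# A linear layer (random placeholder weights) maps the fused vector to
# 4 hypothetical tumour classes; softmax turns scores into probabilities.
W = rng.normal(scale=0.01, size=(fused.size, 4))
probs = softmax(fused @ W)
print(probs.sum())  # probabilities sum to 1
```

The point of the sketch is that fusion here is just concatenation before the classifier; the reported accuracy gain comes from the complementary features, not from any change to the softmax head.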
An accurate and early diagnosis of brain tumors based on medical imaging modalities is of great interest because brain tumors are a harmful threat to a person's health worldwide. Several medical imaging techniques have been used to analyze brain tumors, including computed tomography (CT) and magnetic resonance imaging (MRI). CT provides information about dense tissues, whereas MRI gives information about soft tissues. However, the fusion of CT and MRI images alone has little effect on enhancing the accuracy of brain tumor diagnosis. Therefore, machine learning methods have been adopted to diagnose brain tumors in recent years. This paper develops a novel scheme to detect and classify brain tumors based on fused CT and MRI images. The proposed approach starts with preprocessing the images to reduce noise. Then, fusion rules are applied to obtain the fused image, and a segmentation algorithm is employed to isolate the tumor region from the background. Finally, a machine learning classifier assigns the brain images to benign or malignant tumor classes. Statistical measures are computed to evaluate the classification potential of the proposed scheme. Experimental outcomes show that the Enhanced Flower Pollination Algorithm (EFPA) system outperforms the other brain tumor classification methods considered for comparison.
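The abstract only says that "fusion rules are applied"; a minimal stand-in for such a rule is pixel-wise maximum selection, which keeps the stronger response from either modality at each pixel. The toy CT and MRI patches below are invented, and maximum selection is only one of many possible rules, not necessarily the paper's.

```python
import numpy as np

# Toy 2x2 patches: CT tends to be bright on dense tissue, MRI on soft tissue.
ct = np.array([[120., 40.], [200., 10.]])
mri = np.array([[80., 90.], [50., 180.]])

# Maximum-selection fusion rule: keep the stronger response per pixel.
fused = np.maximum(ct, mri)
print(fused)  # [[120. 90.] [200. 180.]]
```

In practice fusion rules often operate on transform coefficients (wavelet or pyramid domains) rather than raw pixels, but the per-location selection idea is the same.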
Objective: To explore the efficacy of target positioning by a preoperative CT/MRI image fusion technique in deep brain stimulation. Methods: We retrospectively analyzed the clinical data and images of 79 cases (68 with Parkinson's disease, 11 with dystonia) who received preoperative CT/MRI image fusion for target positioning of the subthalamic nucleus in deep brain stimulation. Deviations of the implanted electrodes from the target nucleus were measured for each patient. Neurological evaluations of each patient before and after the treatment were performed and compared, and complications of the positioning and treatment were recorded. Results: The mean deviations of the implanted electrodes on the X, Y, and Z axes were 0.5 mm, 0.6 mm, and 0.6 mm, respectively. Postoperative scores on the Unified Parkinson's Disease Rating Scale (UPDRS) for Parkinson's disease patients and the Burke-Fahn-Marsden Dystonia Rating Scale (BFMDRS) for dystonia patients improved significantly compared with the preoperative scores (P<0.001). Complications occurred in 10.1% (8/79) of patients; the main side effects were dysarthria and diplopia. Conclusion: Target positioning by a preoperative CT/MRI image fusion technique in deep brain stimulation has high accuracy and good clinical outcomes.
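The abstract reports per-axis deviations of 0.5, 0.6 and 0.6 mm. A natural summary, not stated in the abstract itself, is the Euclidean distance between the planned and actual target implied by those per-axis means:

```python
import math

# Per-axis mean electrode deviations from the abstract (mm).
dx, dy, dz = 0.5, 0.6, 0.6

# Euclidean (straight-line) deviation implied by the per-axis values.
euclidean_mm = math.sqrt(dx**2 + dy**2 + dz**2)
print(round(euclidean_mm, 2))  # ≈ 0.98 mm
```

Note the caveat: combining the *mean* per-axis deviations this way only approximates the mean 3-D deviation; the exact value would require the per-patient coordinates.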
Objective: To analyze the application of the Brain Time Stack image fusion technique in CT. Methods: Fifty patients who underwent CT examination at Hengshui Fourth People's Hospital between March 2021 and September 2022 were selected as study subjects. All patients underwent CT scanning followed by Brain Time Stack post-processing. CT values, standard deviation (SD), and signal-to-noise ratio (SNR) at different anatomical sites were compared among four groups (A-D), subjective image quality scores were compared, and the correlations of CT value, SD, and SNR with the subjective score were analyzed. Results: Compared with group A, group B had significantly lower CT values in the medulla oblongata, frontal grey matter, frontal white matter, medial cerebellum, lateral cerebellum, and temporalis muscle; group C had higher CT values in the medulla oblongata, ventricles, frontal white matter, medial cerebellum, lateral cerebellum, and temporalis muscle; and group D had significantly lower CT values in the medulla oblongata, frontal grey matter, and temporalis muscle but significantly higher values in the ventricles, frontal white matter, and lateral cerebellum. Group C had significantly higher CT values than group B in the medulla oblongata, frontal grey matter, frontal white matter, medial cerebellum, lateral cerebellum, and temporalis muscle; group D was significantly higher than group B in the medulla oblongata, ventricles, frontal white matter, medial cerebellum, lateral cerebellum, and temporalis muscle, significantly lower than group C at those same sites except the ventricles, and significantly higher than group C in the ventricles (P<0.05). SD values in groups B, C, and D were significantly lower than in group A at all measured sites; group C's SD was significantly higher than group B's at all sites except the frontal grey matter, where it was significantly lower; and group D's SD was significantly lower than that of both groups B and C at all sites (P<0.05). SNR in groups B, C, and D was significantly higher than in group A at all sites; groups C and D had significantly higher SNR than group B at all sites except the ventricles, where they were significantly lower; and group D's SNR was significantly higher than group C's at all sites (P<0.05). Group D had the highest subjective image quality score (P<0.05). SD was significantly negatively correlated, and SNR significantly positively correlated, with the subjective quality score (P<0.05). Conclusion: When the Brain Time Stack image fusion technique is used to process head CT images, combining the arterial phase with the preceding and following phases yields images with better quality and less noise.
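The metrics compared in this study (mean CT value in Hounsfield units, SD as a noise proxy, and SNR = mean / SD per region of interest) are standard and can be computed as below; the HU samples are invented for illustration.

```python
import numpy as np

# Hypothetical HU samples from one grey-matter region of interest.
roi = np.array([34.0, 36.0, 33.0, 35.0, 37.0])

ct_value = roi.mean()       # mean CT value (HU) of the ROI
sd = roi.std(ddof=1)        # sample standard deviation, a noise proxy
snr = ct_value / sd         # signal-to-noise ratio of the ROI

print(ct_value, round(sd, 3), round(snr, 2))  # 35.0 1.581 22.14
```

Higher SD means more noise and, as the study's correlation analysis found, lower subjective quality; higher SNR tracks higher subjective quality.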
Laser cleaning is a highly nonlinear physical process. To address the poor detection performance of single-modal (e.g., acoustic or vision) monitoring and the low utilization of inter-modal information, a multi-modal feature fusion network model was constructed based on a laser paint-removal experiment. The alignment of heterogeneous data across modalities was solved by combining piecewise aggregate approximation and the Gramian angular field. Moreover, an attention mechanism was introduced to optimize the dual-path network and dense connection network, enabling the sampled characteristics to be extracted and integrated, so that multi-modal discriminant detection of laser paint removal was realized. According to the experimental results, the verification accuracy of the constructed model on the experimental dataset was 99.17%, which is 5.77% higher than the best single-modal detection result for laser paint removal. With the feature extraction network optimized by the attention mechanism, model accuracy increased by 3.3%. The results verify the improved classification performance of the constructed multi-modal feature fusion model in detecting laser paint removal, the effective integration of acoustic and visual image data, and the accurate detection of laser paint removal.
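The two transforms named in the abstract can be sketched directly: piecewise aggregate approximation (PAA) downsamples a 1-D acoustic signal into segment means, and the Gramian angular field re-encodes the series as a 2-D image so it can be fused with visual data by an image network. The signal, segment count, and the summation variant (GASF) are assumptions.

```python
import numpy as np

def paa(x, segments):
    """Piecewise aggregate approximation: mean of each equal-length chunk
    (len(x) must be divisible by `segments`)."""
    return x.reshape(segments, -1).mean(axis=1)

def gasf(x):
    """Gramian angular summation field of a series rescaled to [-1, 1]."""
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
    phi = np.arccos(x)                               # angular encoding
    return np.cos(phi[:, None] + phi[None, :])       # G[i, j] = cos(phi_i + phi_j)

signal = np.sin(np.linspace(0, 2 * np.pi, 64))  # toy acoustic trace
reduced = paa(signal, 8)     # 64 samples -> 8 segment means
image = gasf(reduced)        # 8x8 pseudo-image for a CNN branch
print(reduced.shape, image.shape)
```

The resulting square matrix is symmetric and image-like, which is what makes the acoustic modality compatible with the visual branch of a fusion network.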
Brain cancer detection and classification is done using distinct medical imaging modalities such as computed tomography (CT) or magnetic resonance imaging (MRI). Automated brain cancer classification using computer aided diagnosis (CAD) models can be designed to assist radiologists. With recent advances in computer vision (CV) and deep learning (DL) models, it is possible to detect tumors from images automatically. This study focuses on the design of an automated Henry Gas Solubility Optimization with Fusion of Handcrafted and Deep Features (HGSO-FHDF) technique for brain cancer classification. The proposed HGSO-FHDF technique aims to detect and classify different stages of brain tumors. It involves a Gabor filtering (GF) technique for removing noise and enhancing the quality of MRI images. In addition, a Tsallis entropy based image segmentation approach is applied to determine injured brain regions in the MRI image. Moreover, a fusion of handcrafted features with deep features from a Residual Network (ResNet) is utilized for feature extraction. Finally, the HGSO algorithm with a kernel extreme learning machine (KELM) model is utilized for identifying the presence of brain tumors. To examine the enhanced brain tumor classification performance, a comprehensive set of simulations was run on the BRATS 2015 dataset.
A de-noising approach is proposed that is based on the combination of Wiener filtering, nonlinear filtering, and wavelet fusion, to de-noise LASCA (LAser Speckle Contrast Analysis) images of blood vessels in small animals. The approach first performs laser speckle contrast analysis on brain blood flow in rats to obtain spatial and temporal contrast images. Then, a de-noising filtering method is proposed to deal with the noise in LASCA. Image restoration is achieved by applying the proposed admixture filtering, and both subjective and objective estimates of the de-noised images are given. As our experimental results show, the proposed method provides a clearer subjective impression and improves PSNR to over 25 dB.
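PSNR, the objective metric the abstract reports (over 25 dB after de-noising), has a standard definition that is worth spelling out; the 8-bit image patches below are made up.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

ref = np.full((4, 4), 100.0)   # toy reference patch
noisy = ref + 5.0              # constant error of 5 grey levels -> MSE = 25
print(round(psnr(ref, noisy), 2))  # 34.15 dB
```

Higher PSNR means the de-noised image is closer to the reference; the 25 dB figure in the abstract is thus a lower bound on restoration quality across their test images.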
Seeing through dense occlusions and reconstructing scene images is an important but challenging task. Traditional frame-based image de-occlusion methods may lead to fatal errors when facing extremely dense occlusions, due to the lack of valid information available from the limited occluded input frames. Event cameras are bio-inspired vision sensors that record the brightness changes at each pixel asynchronously with high temporal resolution. However, synthesizing images solely from event streams is ill-posed, since only brightness changes are recorded in the event stream and the initial brightness is unknown. In this paper, we propose an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information. An event stream encoder based on a spiking neural network (SNN) is proposed to encode and denoise the event stream efficiently, and a comparison loss is proposed to generate clearer results. Experimental results on a large-scale event-based and frame-based image de-occlusion dataset demonstrate that our proposed method achieves state-of-the-art performance.
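The ill-posedness the abstract mentions can be made concrete with a one-pixel toy: events encode log-brightness *changes* in steps of a contrast threshold, so integrating them recovers the image only up to an unknown initial value. The threshold and polarity sequence below are assumptions.

```python
# One pixel of an idealized event camera: each event carries a polarity
# (+1 or -1), and each event means the log brightness changed by +/- c.
c = 0.2                       # contrast threshold per event (assumed)
events = [+1, +1, -1, +1]     # polarities observed at this pixel

log_change = c * sum(events)  # total change in log brightness: 0.4

# Reconstruction needs an initial log brightness L0 that the events alone
# cannot provide: different L0 values are consistent with the same events.
for L0 in (0.0, 1.0):
    print(L0 + log_change)    # two different "images", identical event data
```

This is exactly why the proposed network fuses frames (which pin down absolute intensity, color, and texture) with events (which survive dense occlusion thanks to their temporal resolution).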
To address the blurred detail and poor edge quality of fused images produced by existing brain medical image fusion algorithms, a dilated-pyramid feature extraction algorithm is designed, consisting of three parts: a feature extractor, a feature fuser, and a feature reconstructor. The feature extractor uses a dilated pyramid feature module to extract and combine shallow and deep image features, preventing the loss of image detail; the feature fuser adopts an improved Functional Energy Ratio (FER) fusion strategy to enhance edge information in the fused image; and the feature reconstructor consists of four convolutional layers that produce a normalized image. Experimental results show that, compared with current general-purpose brain image fusion algorithms, the proposed algorithm delivers better visual quality and detail and performs better on objective evaluation metrics.
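The abstract names an improved Functional Energy Ratio (FER) fusion strategy without defining it. As a loose sketch in the same spirit (and only that; the exact FER rule is not given in the abstract), an energy-ratio strategy weights each source's feature map by its share of the total energy:

```python
import numpy as np

def energy_ratio_fuse(a, b):
    """Weight each feature map by its share of total energy (a sketch of an
    energy-ratio rule, not the paper's exact FER definition)."""
    ea, eb = np.sum(a ** 2), np.sum(b ** 2)  # feature-map energies
    wa = ea / (ea + eb)                      # energy-ratio weight for a
    return wa * a + (1 - wa) * b

a = np.array([[2.0, 0.0], [0.0, 2.0]])  # toy feature map, modality A
b = np.array([[0.0, 1.0], [1.0, 0.0]])  # toy feature map, modality B
fused = energy_ratio_fuse(a, b)
print(fused)  # A dominates: its energy (8) outweighs B's (2)
```

The intuition matches the abstract's goal: regions where one modality carries more energy (stronger edges or detail) contribute more to the fused result.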
The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study addresses the need for precise and interpretable diagnostic tools to improve clinical decision-making for COVID-19 diagnosis. The paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia based on the lung Region of Infection (ROI) within a single-modality radiographic CT scan. First, an efficient U-shaped transformer network is integrated for lung image segmentation. Then, a robust transfer learning technique is introduced to build a feature extractor based on a pre-trained lightweight Big Transfer (BiT-L) model, fine-tuned on medical data to learn the patterns of infection in the input image effectively. Second, this work presents a visual explanation method to guarantee the clinical explainability of decisions made by the Conformer Network. Experimental evaluation on real-world CT data demonstrated that the diagnostic accuracy of our model outperforms cutting-edge studies with statistical significance, achieving 97.40% detection accuracy under cross-validation. Our model not only achieves high sensitivity and specificity but also affords visualizations of the salient features contributing to each classification decision, enhancing its overall transparency and trustworthiness. These findings demonstrate the model's ability to empower clinical staff by providing transparent intuitions about the features driving diagnostic decisions.
Breast cancer is the most frequently detected tumor and could eventually result in a significant increase in female mortality globally. According to clinical statistics, one woman in eight is under threat of breast cancer. Lifestyle and inheritance patterns may be reasons behind its spread among women. However, preventive measures such as tests and periodic clinical checks can mitigate its risk and thereby improve survival chances substantially; early diagnosis and initial-stage treatment help increase the survival rate. For that purpose, pathologists can draw support from nondestructive and efficient computer-aided diagnosis (CAD) systems. This study explores a breast cancer CAD method relying on multimodal medical imaging and decision-based fusion. In multimodal medical imaging fusion, a deep learning approach is applied, obtaining 97.5% accuracy with a 2.5% miss rate for breast cancer prediction. A deep extreme learning machine technique applied to feature-based data provided 97.41% accuracy. Finally, decision-based fusion applied to both breast cancer prediction models to diagnose its stages resulted in an overall accuracy of 97.97%. The proposed system model provides more accurate results than other state-of-the-art approaches, rapidly diagnosing breast cancer to decrease its mortality rate.
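Decision-based fusion, the final stage described above, can be sketched as combining the class-probability outputs of the two models and taking the fused argmax. Averaging is one common rule, used here only for illustration; the probabilities are invented, and the paper's actual combination rule may differ.

```python
import numpy as np

# Invented per-class probabilities from the two models described above:
# index 0 = benign, index 1 = malignant.
p_imaging = np.array([0.30, 0.70])   # imaging-fusion deep learning model
p_features = np.array([0.45, 0.55])  # deep extreme learning machine model

# Decision-level fusion by simple averaging of the two probability vectors.
p_fused = (p_imaging + p_features) / 2
decision = int(np.argmax(p_fused))
print(p_fused, decision)  # [0.375 0.625] 1  -> fused decision: malignant
```

The design rationale is that the two models err on different cases, so even a simple average of their decisions can beat either model alone, which matches the accuracy ordering reported in the abstract (97.5% and 97.41% individually, 97.97% fused).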
The development of experimental animal models for head and neck tumors generally relies on bioluminescence imaging to achieve dynamic monitoring of tumor growth and metastasis, owing to the complicated anatomical structures involved. Since bioluminescence imaging is strongly affected by the intracellular luciferase expression level and the external D-luciferin concentration, its imaging accuracy requires further confirmation. Here, a new triple fusion reporter gene, consisting of a herpes simplex virus type 1 thymidine kinase (TK) gene for radioactive imaging, a far-red fluorescent protein (mLumin) gene for fluorescent imaging, and a firefly luciferase gene for bioluminescence imaging, was introduced for in vivo observation of head and neck tumors through multi-modality imaging. Results show that fluorescence and bioluminescence signals from mLumin and luciferase, respectively, were clearly observed in tumor cells, and that TK could activate the suicide pathway of the cells in the presence of the nucleotide analog ganciclovir (GCV), demonstrating the effectiveness of each gene's individual function. Moreover, subcutaneous and metastatic animal models of head and neck tumors were established using cell lines expressing the fusion reporter gene, allowing multi-modality imaging in vivo. Together, the established head and neck cancer tumor models based on the newly developed triple fusion reporter gene are ideal for monitoring tumor growth, assessing drug therapeutic efficacy, and verifying the effectiveness of new treatments.
Funding: The brain tumour DL fusion study was supported by the Ministry of Education, Youth and Sports of the Czech Republic (Grants SP2023/039 and SP2023/042) and by the European Union under REFRESH (Grant CZ.10.03.01/00/22_003/0000048). The laser paint-removal study was supported by the National Natural Science Foundation of China (Project 51875491) and the Fujian Science and Technology Plan STS Project, China (Project 2021T3069). The HGSO-FHDF brain cancer study was funded by Institutional Fund Projects (Grant IFPHI-180-612-2020), with technical and financial support from the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia. The image de-occlusion study was supported by the National Natural Science Funds of China (Nos. 62088102 and 62021002) and the Beijing Natural Science Foundation, China (No. 4222025). The COVID-19 Conformer Network study was funded by King Saud University, Riyadh, Saudi Arabia (Researchers Supporting Project RSP2024R167). The breast cancer CAD study was supported by the KIAS (Research No. CG076601) and in part by the Sejong University Faculty Research Fund. The head and neck tumor reporter-gene study was supported by the National Science and Technology Support Program of China (Grant No. 2012BAI23B02), the China-Canada Joint Health Research Initiative (NSFC-30911120489, CIHR CCI-102936), and the 111 Project of China (B07038).