Abstract: Objective: To explore the significance of the dual-energy CT non-linear fusion technique in improving the quality of CTA images of renal cancer. Methods: The CTA images of 100 patients with pathologically confirmed renal cancer were collected and randomly divided into an experimental group and a control group of 50 cases each. The two groups received non-ionic contrast agent with iodine concentrations of 300 mg/ml and 350 mg/ml, respectively, at a dose of 1.5 ml/kg and an injection rate of 4 ml/s; contrast injection was triggered with the bolus-tracking technique. The control group underwent conventional CTA scanning with a reference tube voltage/tube current of 100 kV/ref 150 mAs. The experimental group underwent dual-energy scanning with two tubes, tube A at 100 kV/ref 250 mAs and tube B at Sn 150 kV/ref 125 mAs. The experimental group images were non-linearly fused to obtain Mono+ 55 keV single-energy images. The CT value, SNR, and CNR of the abdominal aorta, renal artery, and tumor tissue were compared among the experimental group 100 kV images, the Mono+ 55 keV images, and the control group images, and objective and subjective evaluations of image quality were performed for the three image sets. Results: The 100 kV images of the experimental group differed significantly from the control group images in CT value, SNR, and CNR (P < 0.05), whereas there was no statistically significant difference between the non-linear fusion Mono+ 55 keV images and the control group images in CT value, SNR, and CNR (P > 0.05). Subjective evaluation showed no significant difference in image quality between the Mono+ 55 keV images and the control group images, and the quality of the Mono+ 55 keV images was higher than that of the experimental group 100 kV images. Conclusion: The dual-energy CT non-linear fusion technique can improve the quality of CTA images in patients with renal cancer, making it possible to obtain high-quality CTA images with a low-iodine-concentration contrast agent.
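For reference, the objective metrics compared above (CT value, SNR, CNR) are simple ROI statistics. The sketch below is a minimal, illustrative computation assuming a homogeneous background ROI is used to estimate noise; the function names and ROI placement are assumptions, not the measurement protocol of the study.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of CT values (HU) inside a boolean ROI mask."""
    values = image[mask]
    return float(values.mean()), float(values.std(ddof=1))

def snr(image, vessel_mask, noise_mask):
    """SNR = mean CT value of the vessel ROI / SD of a homogeneous background ROI."""
    mean_vessel, _ = roi_stats(image, vessel_mask)
    _, sd_noise = roi_stats(image, noise_mask)
    return mean_vessel / sd_noise

def cnr(image, vessel_mask, tissue_mask, noise_mask):
    """CNR = (mean vessel CT value - mean adjacent tissue CT value) / background SD."""
    mean_vessel, _ = roi_stats(image, vessel_mask)
    mean_tissue, _ = roi_stats(image, tissue_mask)
    _, sd_noise = roi_stats(image, noise_mask)
    return (mean_vessel - mean_tissue) / sd_noise
```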
Abstract: This study aimed to propose a road crack detection method based on infrared image fusion technology. By analyzing the characteristics of road crack images, the method applies a variety of infrared image fusion techniques to process different types of images. Detecting road cracks in this way not only reduces the professional requirements for inspectors but also improves detection accuracy. Based on infrared image processing technology and an in-depth analysis of infrared image features, a road crack detection method is proposed that can accurately identify the location, direction, length, and other characteristics of road cracks. Experiments showed that the method performs well and meets the requirements of road crack detection.
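The abstract does not specify which fusion rules are applied, so the sketch below only illustrates the simplest pixel-level case: a weighted-average fusion of a registered infrared image and a visible-light image. The weighting parameter and the assumption that the inputs are already registered and normalized are illustrative, not the paper's method.

```python
import numpy as np

def weighted_fusion(infrared, visible, alpha=0.6):
    """Pixel-wise weighted-average fusion of two registered grayscale images.

    Both inputs are float arrays in [0, 1] of the same shape; alpha weights the
    infrared channel, which typically carries the thermal signature of cracks.
    """
    assert infrared.shape == visible.shape, "images must be registered to the same grid"
    fused = alpha * infrared + (1.0 - alpha) * visible
    return np.clip(fused, 0.0, 1.0)
```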
Funding: Project (51875491) supported by the National Natural Science Foundation of China; Project (2021T3069) supported by the Fujian Science and Technology Plan STS Project, China.
Abstract: Laser cleaning is a highly nonlinear physical process, and single-modal detection (e.g., acoustic or vision) suffers from poor performance and low utilization of inter-modal information. In this study, a multi-modal feature fusion network model was constructed based on a laser paint removal experiment. The alignment of heterogeneous data across modalities was solved by combining piecewise aggregate approximation and the Gramian angular field. Moreover, an attention mechanism was introduced to optimize the dual-path network and dense connection network, enabling the sampled features to be extracted and integrated, thereby realizing multi-modal discriminant detection of laser paint removal. According to the experimental results, the verification accuracy of the constructed model on the experimental dataset was 99.17%, which is 5.77% higher than the best single-modal detection result for laser paint removal. Optimizing the feature extraction network with the attention mechanism increased the model accuracy by 3.3%. The results verify the improved classification performance of the constructed multi-modal feature fusion model in detecting laser paint removal, the effective integration of acoustic data and visual image data, and the accurate detection of laser paint removal.
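For readers unfamiliar with the time-series-to-image step mentioned above, the sketch below shows one common formulation of piecewise aggregate approximation (PAA) followed by a Gramian angular summation field; the segment count, scaling, and the use of a synthetic acoustic signal are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise aggregate approximation: average the series over equal-width segments."""
    splits = np.array_split(np.asarray(series, dtype=float), n_segments)
    return np.array([seg.mean() for seg in splits])

def gramian_angular_field(series, n_segments=64):
    """Gramian angular summation field of a 1-D signal (e.g., an acoustic trace).

    The PAA-reduced series is rescaled to [-1, 1], mapped to angles phi = arccos(x),
    and the image is G[i, j] = cos(phi_i + phi_j).
    """
    x = paa(series, n_segments)
    x = 2 * (x - x.min()) / (x.max() - x.min() + 1e-12) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])                 # (n_segments, n_segments)

# Example: turn a synthetic acoustic signal into a 64x64 image for a CNN branch.
signal = np.sin(np.linspace(0, 20 * np.pi, 4000)) + 0.1 * np.random.randn(4000)
image = gramian_angular_field(signal, n_segments=64)
```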
Funding: Supported by KIAS (Research No. CG076601) and in part by the Sejong University Faculty Research Fund.
Abstract: Breast cancer is the most frequently detected tumor and could eventually result in a significant increase in female mortality globally. According to clinical statistics, one woman in eight is under threat of breast cancer. Lifestyle and inheritance patterns may be reasons behind its spread among women. However, preventive measures such as tests and periodic clinical checks can mitigate its risk and thereby improve survival chances substantially. Early diagnosis and initial-stage treatment can help increase the survival rate. For that purpose, pathologists can draw support from nondestructive and efficient computer-aided diagnosis (CAD) systems. This study explores a breast cancer CAD method relying on multimodal medical imaging and decision-based fusion. In multimodal medical imaging fusion, a deep learning approach achieves 97.5% accuracy with a 2.5% miss rate for breast cancer prediction. A deep extreme learning machine applied to feature-based data provides 97.41% accuracy. Finally, decision-based fusion applied to both breast cancer prediction models to diagnose its stages results in an overall accuracy of 97.97%. The proposed system provides more accurate results than other state-of-the-art approaches and diagnoses breast cancer rapidly, helping to decrease its mortality rate.
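The abstract reports fusing the decisions of two separately trained models but does not give the fusion rule; the sketch below shows a generic weighted soft-voting scheme over two class-probability matrices as one plausible instance of decision-level fusion. The equal weights and tie handling are assumptions.

```python
import numpy as np

def decision_fusion(probs_imaging, probs_features, weights=(0.5, 0.5)):
    """Weighted soft voting over per-class probabilities from two models.

    probs_imaging / probs_features: arrays of shape (n_samples, n_classes) whose
    rows sum to 1. Returns the fused probabilities and the predicted class index.
    """
    w1, w2 = weights
    fused = w1 * np.asarray(probs_imaging) + w2 * np.asarray(probs_features)
    fused = fused / fused.sum(axis=1, keepdims=True)  # renormalize each row
    return fused, fused.argmax(axis=1)
```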
Funding: Supported by the National Natural Science Funds of China (Nos. 62088102 and 62021002) and the Beijing Natural Science Foundation, China (No. 4222025).
Abstract: Seeing through dense occlusions and reconstructing scene images is an important but challenging task. Traditional frame-based image de-occlusion methods may lead to fatal errors when facing extremely dense occlusions because of the lack of valid information available from the limited occluded input frames. Event cameras are bio-inspired vision sensors that record the brightness changes at each pixel asynchronously with high temporal resolution. However, synthesizing images solely from event streams is ill-posed, since only brightness changes are recorded in the event stream and the initial brightness is unknown. In this paper, we propose an event-enhanced multi-modal fusion hybrid network for image de-occlusion, which uses event streams to provide complete scene information and frames to provide color and texture information. An event stream encoder based on a spiking neural network (SNN) is proposed to encode and denoise the event stream efficiently, and a comparison loss is proposed to generate clearer results. Experimental results on a large-scale event-based and frame-based image de-occlusion dataset demonstrate that the proposed method achieves state-of-the-art performance.
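The paper's SNN encoder is not detailed here, so the sketch below only illustrates a common preprocessing step in event-frame fusion pipelines: binning an asynchronous event stream (x, y, timestamp, polarity) into a fixed number of temporal channels so it can be consumed alongside conventional frames. The shapes, bin count, and polarity convention are assumptions.

```python
import numpy as np

def events_to_voxel_grid(events, height, width, n_bins=5):
    """Accumulate events into a (n_bins, height, width) tensor of signed polarities.

    events: array of shape (N, 4) with columns (x, y, t, polarity in {-1, +1}).
    Each event is assigned to a temporal bin by its normalized timestamp.
    """
    grid = np.zeros((n_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)   # map timestamps to [0, 1)
    bins = np.minimum((t_norm * n_bins).astype(int), n_bins - 1)
    np.add.at(grid, (bins, y, x), p)                      # signed accumulation per pixel
    return grid
```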
Funding: Supported by the National Science and Technology Support Program of China (Grant No. 2012BAI23B02), the China-Canada Joint Health Research Initiative (NSFC-30911120489, CIHR CCI-102936), and the 111 Project of China (B07038).
Abstract: The development of experimental animal models for head and neck tumors generally relies on bioluminescence imaging for dynamic monitoring of tumor growth and metastasis because of the complicated anatomical structures involved. Since bioluminescence imaging is strongly affected by the intracellular luciferase expression level and the external D-luciferin concentration, its imaging accuracy requires further confirmation. Here, a new triple fusion reporter gene, consisting of a herpes simplex virus type 1 thymidine kinase (TK) gene for radioactive imaging, a far-red fluorescent protein (mLumin) gene for fluorescence imaging, and a firefly luciferase gene for bioluminescence imaging, was introduced for in vivo observation of head and neck tumors through multi-modality imaging. The results show that fluorescence and bioluminescence signals from mLumin and luciferase, respectively, were clearly observed in tumor cells, and that TK could activate the suicide pathway of the cells in the presence of the nucleotide analog ganciclovir (GCV), demonstrating the effectiveness of each individual gene. Moreover, subcutaneous and metastasis animal models for head and neck tumors were established using cell lines expressing the fusion reporter gene, allowing multi-modality imaging in vivo. Together, the established head and neck tumor models based on the newly developed triple fusion reporter gene are ideal for monitoring tumor growth, assessing drug therapeutic efficacy, and verifying the effectiveness of new treatments.
Funding: Funded by Researchers Supporting Project Number (RSP2024R167), King Saud University, Riyadh, Saudi Arabia.
Abstract: The early implementation of treatment therapies necessitates the swift and precise identification of COVID-19 pneumonia through the analysis of chest CT scans. This study addresses the need for precise and interpretable diagnostic tools to improve clinical decision-making in COVID-19 diagnosis. The paper proposes a novel deep learning approach, called Conformer Network, for explainable discrimination of viral pneumonia based on the lung region of infection (ROI) within a single-modality radiographic CT scan. First, an efficient U-shaped transformer network is integrated for lung image segmentation, and a robust transfer learning technique is introduced to build a feature extractor based on a pre-trained lightweight Big Transfer (BiT-L) model fine-tuned on medical data to effectively learn the patterns of infection in the input image. Second, this work presents a visual explanation method to guarantee clinical explainability of the decisions made by the Conformer Network. Experimental evaluation on real-world CT data demonstrates that the diagnostic accuracy of the model outperforms cutting-edge studies with statistical significance, achieving 97.40% detection accuracy under cross-validation settings. The model not only achieves high sensitivity and specificity but also provides visualizations of the salient features contributing to each classification decision, enhancing the overall transparency and trustworthiness of the model. The findings show that the model can empower clinical staff by generating transparent intuitions about the features driving diagnostic decisions.
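As a rough illustration of the transfer-learning step described above, the sketch below freezes a pre-trained backbone and trains only a new classification head for the pneumonia classes. torchvision's ResNet-50 is used here purely as a stand-in for the BiT-L backbone, and the class count, layer choices, and learning rate are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_finetune_model(n_classes=3, freeze_backbone=True):
    """Pre-trained backbone plus a new head; the backbone stands in for BiT-L."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    if freeze_backbone:
        for param in model.parameters():
            param.requires_grad = False            # keep pre-trained features fixed
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # new trainable head
    return model

model = build_finetune_model()
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
criterion = nn.CrossEntropyLoss()
```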