Introduction: Medical imaging is a medical specialty that involves producing images of the human body and interpreting them for diagnostic and therapeutic purposes and for monitoring the progress of pathologies. We aimed to assess the theoretical knowledge of doctors and interns in medical imaging in the northern region of Burkina Faso. Methodology: This was a descriptive cross-sectional survey based on a self-administered questionnaire. Prescribers' knowledge was estimated based on scores derived from questionnaire responses. Results: We collected 106 questionnaires out of 163, i.e. a participation rate of 65.03%. The average knowledge score was 81.71% for the contribution of medical imaging to patient management. It was 60.02% for the indications/contraindications of radiological examinations and 72.56% for the risks associated with radiation exposure during these examinations. The score was 59.83% for the methods used to select the appropriate radiological examination. As regards the completeness of the clinical and biological information on imaging request forms, the score was 96.65%. Specialist doctors had the highest overall level of knowledge (74.68%). Conclusion: Improved technical facilities, good initial and in-service training, and interdisciplinary collaboration will help ensure that imaging tests are properly prescribed, leading to better patient care.
In blood or bone marrow, leukemia is a form of cancer. A person with leukemia has an abnormal expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at an early stage is vital to providing timely patient care. Approaches based on medical image analysis offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques generalize easily and can reduce human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images and Faster SqueezeNet with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer for feature extraction. Lastly, a denoising autoencoder (DAE) methodology is executed to accurately detect and classify leukemia. The hyperparameter tuning process using MPA helps enhance classification performance. Simulation results are compared with other recent approaches on various measures, and the MPADL-LCC algorithm exhibits the best results.
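The pre-processing step above uses bilateral filtering, which smooths noise while preserving edges by weighting neighbours by both spatial distance and intensity similarity. The following is a generic textbook sketch of the filter in pure Python (not the paper's implementation; parameter values are illustrative):

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Naive bilateral filter on a 2D grayscale image (list of lists).

    Each output pixel is a weighted mean of its neighbours, where the
    weight combines spatial closeness (sigma_s) and intensity
    similarity (sigma_r), so edges are preserved while noise is smoothed.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        spatial = math.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                        similar = math.exp(-((img[ny][nx] - img[y][x]) ** 2) / (2 * sigma_r ** 2))
                        wgt = spatial * similar
                        num += wgt * img[ny][nx]
                        den += wgt
            out[y][x] = num / den
    return out
```

On a sharp step edge, the intensity term makes cross-edge weights nearly zero, so the edge survives while small fluctuations within a region are averaged out.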
This article proposes a novel fractional heterogeneous neural network built by coupling a Rulkov neuron with a Hopfield neural network (FRHNN), utilizing memristors to emulate neural synapses. The study first demonstrates the coexistence of multiple firing patterns through phase diagrams, Lyapunov exponents (LEs), and bifurcation diagrams. Second, parameter-related firing behaviors are described through two-parameter bifurcation diagrams. Subsequently, local attraction basins reveal multi-stability phenomena related to initial values. Moreover, the proposed model is implemented on a microcomputer-based ARM platform, and the experimental results correspond to the numerical simulations. Finally, the article explores the application of digital watermarking for medical images, illustrating its excellent imperceptibility, extensive key space, and robustness against attacks including noise and cropping.
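The abstract does not spell out the neuron model's equations. For orientation, the standard chaotic Rulkov map is commonly written as the two-variable iteration below; the equations and parameter values here are the usual literature formulation, assumed rather than taken from the paper:

```python
def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=-1.0):
    """One iteration of the chaotic Rulkov map.

    x is the fast (membrane-potential-like) variable, y the slow variable;
    alpha controls spiking/bursting, mu sets the slow time scale.
    """
    x_next = alpha / (1.0 + x * x) + y
    y_next = y - mu * (x - sigma)
    return x_next, y_next

def simulate(n, x0=0.0, y0=-2.9):
    """Iterate the map n times and return the fast-variable trace."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        x, y = rulkov_step(x, y)
        xs.append(x)
    return xs
```

With these typical parameters the fast variable produces bounded spiking-bursting oscillations, which is the kind of firing behavior the paper's phase diagrams and bifurcation analysis characterize.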
In the intricate network environment, the secure transmission of medical images faces challenges such as information leakage and malicious tampering, significantly impacting the accuracy of disease diagnoses by medical professionals. To address this problem, the authors propose a robust feature watermarking algorithm for encrypted medical images based on multi-stage discrete wavelet transform (DWT), the Daisy descriptor, and discrete cosine transform (DCT). The algorithm initially encrypts the original medical image through DWT-DCT and Logistic mapping. Subsequently, a 3-stage DWT is applied to the encrypted medical image, with the centre point of the LL3 sub-band within its low-frequency component serving as the sampling point, and the Daisy descriptor matrix for this point is computed. Finally, a DCT is performed on the Daisy descriptor matrix, and the low-frequency portion is processed using a perceptual hashing algorithm to generate a 32-bit binary feature vector for the medical image. This scheme utilises cryptographic knowledge and a zero-watermarking technique to embed watermarks without modifying medical images and can extract the watermark from test images without the original image, which meets the basic requirements of medical image watermarking. The embedding and extraction of watermarks are accomplished in a mere 0.160 s and 0.411 s, respectively, with minimal computational overhead. Simulation results demonstrate the robustness of the algorithm against both conventional and geometric attacks, with notable performance in resisting rotation attacks.
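The final step, turning the low-frequency DCT portion into a 32-bit binary feature vector, resembles classic perceptual hashing (pHash). The sketch below applies that idea to a generic matrix rather than a Daisy descriptor, using a naive 2D DCT-II and a median threshold; the exact coefficient selection is an assumption, not the paper's scheme:

```python
import math

def dct2(block):
    """Naive 2D DCT-II of a square matrix (list of lists)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            cu = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
            cv = math.sqrt(1.0 / n) if v == 0 else math.sqrt(2.0 / n)
            out[u][v] = cu * cv * s
    return out

def perceptual_hash_32(matrix):
    """32-bit feature vector: low-frequency DCT coefficients thresholded
    at their median, with the DC term excluded (pHash-style)."""
    coeffs = dct2(matrix)
    low = [coeffs[u][v] for u in range(6) for v in range(6)][1:33]  # skip DC, keep 32
    med = sorted(low)[len(low) // 2]
    return [1 if c > med else 0 for c in low]
```

Because the DC coefficient is excluded and the threshold is relative, the hash is invariant to uniform brightness shifts, which is one reason such features are robust for zero-watermarking.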
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve classification performance. On a brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net performs best in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, generalization experiments show that HMAC-Net generalizes well.
Background: A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user's query image. CBIR is widely used in evidence-based diagnosis, teaching, and research. Although retrieval accuracy has largely improved, there has been limited development toward visualizing the important image features that indicate the similarity of retrieved images. Despite the prevalence of 3D volumetric data in medical imaging such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views for the visualization of retrieved images. Such 2D visualization requires users to browse through image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on users' experience. Methods: In this study, we proposed an importance-aware 3D volume visualization method. The rendering parameters were automatically optimized to maximize the visibility of important structures detected and prioritized in the retrieval process. We then integrated the proposed visualization into a CBIR system, thereby complementing the 2D cross-sectional views for relevance feedback and further analyses. Results: Our preliminary results demonstrate that 3D visualization can provide additional information, using multimodal positron emission tomography and computed tomography (PET-CT) images of a non-small cell lung cancer dataset.
Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in deep neural networks for this task since the notable success of U-Net in 2015. However, applying deep learning models to ocular medical image segmentation poses unique challenges compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with data scarcity. This article provides a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation in ocular imaging. Initially, the article gives an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, there are limitations in the current integration of CNN and Transformer technology in two key aspects. First, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Second, the value of integrating multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address these issues, we present a groundbreaking dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We then design a Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. Within the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) effectively aggregates dual-branch prominent feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significance of the channel features between the up-sampled features and the features generated by the FCF module to enhance decoding details. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy. Compared to other state-of-the-art (SOTA) methods, our segmentation framework exhibits a superior level of competitiveness. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
Deep convolutional neural networks (CNNs) have greatly promoted the automatic segmentation of medical images. However, due to the inherent properties of convolution operations, CNNs usually cannot establish long-distance dependencies, which limits segmentation performance. The Transformer has been successfully applied to various computer vision tasks, using the self-attention mechanism to model long-distance interactions and thereby capture global information. However, self-attention lacks spatial localization and is computationally expensive. To solve these problems, we develop a new medical Transformer with a multi-scale context fusion function for medical image segmentation. The proposed model combines convolution operations and attention mechanisms in a U-shaped framework that captures both local and global information. First, the traditional Transformer module is improved into an advanced module that uses post-layer normalization to obtain mild activation values and scaled cosine attention with a moving window to obtain accurate spatial information. Second, we introduce a deep supervision strategy to guide the model to fuse multi-scale feature information, enabling it to propagate feature information effectively across layers. Thanks to this, the model achieves better segmentation performance while being more robust and efficient. The proposed model is evaluated on multiple medical image segmentation datasets. Experimental results demonstrate that it outperforms existing methods that rely on convolutional neural networks, Transformers, or a combination of both on a challenging dataset (ETIS): the mDice and mIoU indicators increased by 2.74% and 3.3%, respectively.
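The mDice and mIoU figures above are mean values of the standard Dice and intersection-over-union overlap metrics. For binary masks these are computed as follows (standard definitions, sketched in pure Python over flattened masks):

```python
def dice_score(pred, target):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for binary masks (flat 0/1 lists)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def iou_score(pred, target):
    """Intersection over Union |A∩B| / |A∪B| for binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(1 for p, t in zip(pred, target) if p or t)
    return inter / union if union else 1.0
```

For multi-class segmentation, the "m" variants average these scores over classes (and typically over images), which is why papers report mDice/mIoU rather than a single-mask score.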
Progress in medical imaging technology highlights the importance of image quality for effective diagnosis and treatment. Yet noise introduced during capture and transmission can compromise image accuracy and reliability, complicating clinical decisions. The rising interest in diffusion models has led to their exploration for image denoising. We present BeFOI (Better Fluoro Images), a weakly supervised model that uses cine images to denoise fluoroscopic images, both DR types. Trained through precise noise estimation and simulation, BeFOI employs Markov chains to denoise using only the fluoroscopic image as guidance. Our tests show that BeFOI outperforms other methods, reducing noise and enhancing clarity and diagnostic utility, making it an effective post-processing tool for medical images.
Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its impressive results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article introduces a survey of state-of-the-art deep convolutional neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their model architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category. A comparative summary is included to give the reader insights into the architectures used in different applications and datasets. This study focuses on medical image segmentation applications, where the most widely used architectures are illustrated and other promising models that have proven their success in different domains are suggested. Finally, the present work discusses current limitations and solutions, along with future trends in the field.
As medical imaging diagnosis continues to mature and develop, it has become common to transmit images over public networks. How to ensure the security of transmission, cultivate talents who combine medical imaging with information security, and explore new discipline growth points are difficult problems and challenges for schools and educators. To cope with industrial change, a new round of scientific and technological revolution, and the challenges of the further development of artificial intelligence in medicine, this article analyzes the existing problems in the training of postgraduates in medical imaging information security, drawing on the actual conditions and characteristics of universities, and puts forward countermeasures and suggestions to promote technological progress in universities.
Objective: To explore the application effect of a virtual simulation teaching platform in the practical teaching of medical imaging. Methods: A total of 97 students majoring in medical imaging technology from the class of 2022 were selected and divided into two groups by the random number method: a control group (n = 48) and an observation group (n = 49). The observation group was taught under a practical teaching mode based on the virtual simulation teaching platform, while the control group was taught under the traditional multimedia teaching mode. A questionnaire survey and teaching assessment were carried out after the teaching period, and the application effects of the two teaching modes were compared. Results: The reading and theoretical scores of students in the observation group were significantly higher than those of students in the control group (P < 0.01); there were statistically significant differences between the two groups in the questionnaire results (improved learning interest, improved language expression, improved ability to comprehensively analyze problems, and improved teamwork awareness) (P < 0.05); and students in the observation group were markedly more satisfied with the teaching content, teaching methods, and teaching quality than students in the control group (P < 0.05). Conclusion: The medical imaging practical teaching mode based on a virtual simulation platform not only helps improve students' theoretical understanding and practical ability in medical imaging technology, but also improves their learning interest, language expression, ability to comprehensively analyze problems, communication skills, teamwork awareness, and satisfaction with the teaching content, methods, and quality. Therefore, it has wide application value in medical specialty education.
Objective: To explore the application effect of virtual simulation experiments combined with the picture archiving and communication system (PACS) in medical imaging practical teaching. Methods: 97 students from the medical imaging class of 2022 were divided into two groups: the control group (n = 48) was taught by the traditional teaching method, whereas the research group (n = 49) was taught by virtual simulation experiments combined with PACS. The teaching achievements and effects of the two groups were compared to define the advantages of the two teaching modes. Results: Initially, there were no significant differences in basic theory, image analysis, report writing, and differential diagnosis scores between the two groups (P > 0.05); however, after 16 weeks of teaching, the scores of the research group were better than those of the control group (P < 0.05); the pass rate of students in the research group (93.88%) was higher than that in the control group (81.25%); and the scores of students in the research group in clinical inquiry skills, X-ray/computed tomography/magnetic resonance imaging (X-ray/CT/MRI) operation skills, and doctor-patient communication skills were significantly higher than those in the control group (P < 0.05). Conclusion: In medical imaging practical teaching, virtual simulation experiments combined with PACS can effectively address several problems of the traditional teaching mode, including its single teaching method, single teaching content, and lack of innovation, while improving students' basic theoretical knowledge, X-ray/CT/MRI operation skills, consultation skills, and doctor-patient communication skills, thereby effectively improving teaching quality and learning effect.
Medical image steganography aims to increase data security by concealing patient-personal information as well as diagnostic and therapeutic data in the spatial or frequency domain of radiological images. On the other hand, the discipline of image steganalysis generally provides a classification based on whether an image has hidden data or not. Inspired by previous studies on image steganalysis, this study proposes a deep ensemble learning model for medical image steganalysis to detect malicious hidden data in medical images and to support the development of medical image steganography methods aimed at securing personal information. With this purpose in mind, a dataset containing brain Magnetic Resonance (MR) images of healthy individuals and epileptic patients was built. The Spatial Version of the Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Stego (HUGO), and Minimizing the Power of Optimal Detector (MIPOD) techniques used in spatial image steganography were adapted to the problem, and various payloads of confidential data were hidden in medical images. The architectures of the medical image steganalysis networks were transferred separately from eleven Dense Convolutional Network (DenseNet), Residual Neural Network (ResNet), and Inception-based models. The steganalysis outputs of these networks were determined by assembling models separately for each spatial embedding method with different payload ratios. The study demonstrated the success of pre-trained ResNet, DenseNet, and Inception models in the cover-stego mismatch scenario for each hiding technique with different payloads. Due to the high detection accuracy achieved, the proposed model has the potential to lead to the development of novel medical image steganography algorithms that existing deep learning-based steganalysis methods cannot detect. The experiments and evaluations clearly demonstrated this potential.
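For readers unfamiliar with spatial-domain embedding, the toy example below hides a bit string in the least-significant bits of pixel values. This is deliberately the simplest possible scheme, far more detectable than the adaptive S-UNIWARD, HUGO, or MIPOD methods the study actually uses; it only illustrates what a "payload hidden in the spatial domain" means:

```python
def lsb_embed(pixels, bits):
    """Hide a list of 0/1 bits in the least-significant bits of 8-bit pixels.

    Each stego pixel differs from its cover pixel by at most 1, which is
    why the change is visually imperceptible yet statistically detectable.
    """
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | b
    return stego

def lsb_extract(pixels, n):
    """Recover the first n payload bits from the LSBs of the stego pixels."""
    return [p & 1 for p in pixels[:n]]
```

Steganalysis networks like those in the study learn to spot exactly the subtle statistical traces such embedding leaves, even when each pixel changes by at most one intensity level.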
As a mainstream research direction in the field of image segmentation, medical image segmentation plays a key role in the quantification of lesions, three-dimensional reconstruction, region-of-interest extraction, and so on. Compared with natural images, medical images come in a variety of modalities, and the emphasis of the information conveyed by images of different modalities differs considerably. Because manually segmenting medical images relies on professional, experienced doctors and is time-consuming and inefficient, large numbers of automated medical image segmentation methods have been developed. However, researchers have not yet developed a universal method for all types of medical image segmentation. This paper reviews the literature on segmentation techniques that have produced major breakthroughs in recent years, focusing on two categories of methods: improved strategies based on traditional clustering, and progress on improved segmentation network structures based on U-Net. The evidence shows that the performance of deep learning-based methods is significantly better than that of traditional methods. This paper discusses the advantages and disadvantages of different algorithms, details how these methods can be used for the segmentation of lesions or other organs and tissues, and outlines possible technical trends for future work.
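The "traditional clustering" baseline that many of the reviewed strategies improve upon is typically k-means on pixel intensities: pixels are grouped by how close their intensity is to each cluster centre, and the label map is the segmentation. A minimal deterministic sketch (quantile initialization is an implementation choice of this sketch, not prescribed by the reviewed methods):

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on scalar intensities: the classic clustering
    baseline for intensity-based segmentation. Returns (centers, labels)."""
    srt = sorted(values)
    # Deterministic initialization at evenly spaced quantiles.
    centers = [srt[(2 * i + 1) * len(srt) // (2 * k)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[idx].append(v)
        # Keep the old center if a cluster went empty.
        centers = [sum(c) / len(c) if c else centers[i] for i, c in enumerate(clusters)]
    labels = [min(range(k), key=lambda i: abs(v - centers[i])) for v in values]
    return centers, labels
```

With k = 2 this reduces to automatic intensity thresholding (foreground vs. background), which is why clustering was a natural pre-deep-learning segmentation tool.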
In intelligent perception and diagnosis for medical equipment, visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of retinal vessel segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation. More explicitly, we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Image features are then combined at each layer to produce richer semantic information, and the model's capacity is adapted to various input images. Meanwhile, in the down-sampling section, we give up pooling and instead down-sample by strided convolution for information fusion. We also employ an attention module in the decoder stage to filter image noise and lessen the response of irrelevant features. Experiments are performed and compared on the DRIVE and ARIA retinal vessel segmentation datasets. The proposed Dual-Branch-UNet proves superior to five typical state-of-the-art methods.
Medical image segmentation plays a crucial role in clinical diagnosis and therapy systems, yet it still faces many challenges. Building on convolutional neural networks (CNNs), medical image segmentation has achieved tremendous progress. However, owing to the locality of convolution operations, CNNs have an inherent limitation in learning global context. To address this limitation, we propose LGNet, a semantic segmentation network that learns local and global features for fast and accurate medical image segmentation. Specifically, we employ a two-branch architecture consisting of convolution layers in one branch to learn local features and Transformer layers in the other branch to learn global features. LGNet has two key insights: (1) we bridge the two branches to learn local and global features interactively; (2) we present a novel multi-feature fusion model (MSFFM) to leverage the global contextual information from the Transformer and the local representational features from the convolutions. Our method achieves a state-of-the-art trade-off between accuracy and efficiency on several medical image segmentation benchmarks, including Synapse, ACDC, and MOST. Specifically, LGNet achieves state-of-the-art performance with Dice indexes of 80.15% on Synapse, 91.70% on ACDC, and 95.56% on MOST, while the inference speed reaches 172 frames per second at 224x224 input resolution. Extensive experiments demonstrate the effectiveness of the proposed LGNet for fast and accurate medical image segmentation.
Medical image fusion is considered the best method for obtaining a single image with rich details for efficient medical diagnosis and therapy. Deep learning provides high performance in several medical image analysis applications. This paper proposes a deep learning model, based on a Convolutional Neural Network (CNN), for the medical image fusion process. The basic idea of the proposed model is to extract features from both CT and MR images, apply additional processing to the extracted features, and then reconstruct the fused feature map to obtain the resulting fused image. Finally, the quality of the resulting fused image is enhanced by various enhancement techniques such as Histogram Matching (HM), Histogram Equalization (HE), fuzzy techniques, and Contrast Limited Adaptive Histogram Equalization (CLAHE). The performance of the proposed fusion-based CNN model is measured by various metrics of fusion and enhancement quality. Different realistic datasets of different modalities and diseases are tested and implemented in the simulation analysis.
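Of the enhancement steps listed above, global histogram equalization (HE) is the simplest: it remaps intensities through the normalized cumulative histogram so that the output uses the full dynamic range. A standard 8-bit sketch in pure Python (the paper's exact enhancement pipeline is not specified; this is the textbook formula):

```python
def equalize_hist(pixels, levels=256):
    """Global histogram equalization for 8-bit intensities (flat list of ints).

    Maps each value v to round((cdf(v) - cdf_min) * (L - 1) / (N - cdf_min)),
    stretching a low-contrast histogram across the full intensity range.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    n = len(pixels)
    if n == cdf_min:  # constant image: nothing to stretch
        return list(pixels)
    return [round((cdf[p] - cdf_min) * (levels - 1) / (n - cdf_min)) for p in pixels]
```

CLAHE refines this idea by equalizing small tiles with a clipped histogram, which boosts local contrast without over-amplifying noise the way global HE can.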
The synthesis of visual information from multiple medical imaging inputs into a single fused image without any loss of detail or distortion is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images utilizing anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely separating two kinds of features: structural and textural information. The detail and base layers are then combined using a sum-based fusion rule, which maximizes the noise-filtering contrast level while effectively preserving most of the structural and textural details. NSCT is utilized to further decompose these images into low- and high-frequency coefficients, which are then combined using a principal component analysis/Karhunen-Loeve (PCA/KL) based fusion rule, independently substantiating eigenfeature reinforcement in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients. Finally, an inverse NSCT is applied to each coefficient to produce the final fusion result. Experimental results on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities, demonstrate the advantage of the proposed technique. Our approach offers better visual and more robust performance with better objective measurements, since it excellently preserves significant salient features without producing abnormal information in both qualitative and quantitative analysis.
Abstract: Introduction: Medical imaging is a medical specialty that involves producing images of the human body and interpreting them for diagnostic and therapeutic purposes and for monitoring the progress of pathologies. We aimed to assess the theoretical knowledge of doctors and interns in medical imaging in the northern region of Burkina Faso. Methodology: This was a descriptive cross-sectional survey based on a self-administered questionnaire. Prescribers' knowledge was estimated from scores derived from questionnaire responses. Results: We collected 106 questionnaires out of 163, a participation rate of 65.03%. The average knowledge score was 81.71% for the contribution of medical imaging to patient management, 60.02% for the indications/contraindications of radiological examinations, and 72.56% for the risks associated with radiation exposure during these examinations. The score was 59.83% for the methods used to select the appropriate radiological examination. For the completeness of the clinical and biological information on imaging request forms, the score was 96.65%. Specialist doctors had the highest overall level of knowledge (74.68%). Conclusion: Improved technical facilities, good initial and in-service training, and interdisciplinary collaboration will help ensure that imaging tests are properly prescribed, leading to better patient care.
Funding: funded by the Researchers Supporting Program at King Saud University (RSPD2024R809).
Abstract: Leukemia is a cancer of the blood or bone marrow in which white blood cells (WBCs) proliferate abnormally. It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at an early stage is vital for timely patient care. Approaches based on medical image analysis offer safer, quicker, and less costly solutions that avoid the difficulties of invasive procedures. Computer vision (CV) and image-processing techniques generalize well and reduce human error. Many researchers have applied computer-aided diagnostic methods and machine learning (ML) to laboratory image analysis in the hope of overcoming late leukemia detection and determining its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm for medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images, and Faster SqueezeNet, with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer, for feature extraction. Lastly, a denoising autoencoder (DAE) is applied to detect and classify leukemia accurately. The MPA-based hyperparameter tuning enhances classification performance. Simulation results are compared with other recent approaches across various measurements, and the MPADL-LCC algorithm exhibits the best results.
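The bilateral filtering (BF) pre-processing step can be sketched in a few lines of pure Python. This is a generic textbook bilateral filter on a toy grayscale grid, not the paper's implementation; the radius and sigma values are illustrative defaults.

```python
import math

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=25.0):
    """Edge-preserving smoothing: each pixel becomes a weighted mean of its
    neighbours, weighted by both spatial distance and intensity difference."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            num = den = 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        # spatial weight: closer neighbours count more
                        ws = math.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        # range weight: similar intensities count more
                        wr = math.exp(-((img[ni][nj] - img[i][j]) ** 2)
                                      / (2 * sigma_r ** 2))
                        num += ws * wr * img[ni][nj]
                        den += ws * wr
            out[i][j] = num / den
    return out
```

A uniform region passes through unchanged, while an isolated noisy pixel is pulled toward its neighbours without blurring strong edges; that is why BF is a popular pre-processing choice before feature extraction.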
Abstract: This article proposes a novel fractional heterogeneous neural network built by coupling a Rulkov neuron with a Hopfield neural network (FRHNN), utilizing memristors to emulate neural synapses. The study first demonstrates the coexistence of multiple firing patterns through phase diagrams, Lyapunov exponents (LEs), and bifurcation diagrams, and then describes parameter-related firing behaviors through two-parameter bifurcation diagrams. Local attraction basins reveal multi-stability phenomena related to initial values. The proposed model is implemented on a microcomputer-based ARM platform, and the experimental results agree with the numerical simulations. Finally, the article explores an application to digital watermarking of medical images, illustrating excellent imperceptibility, an extensive key space, and robustness against attacks including noise and cropping.
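The Rulkov neuron mentioned above is a two-variable discrete map. A minimal sketch of its standard (non-chaotic) form follows; the parameter values are illustrative defaults from the literature, not those of the FRHNN.

```python
def rulkov_step(x, y, alpha=4.5, mu=0.001, sigma=-0.5):
    """One iteration of the Rulkov map: fast variable x models the membrane
    voltage, slow variable y drifts and switches bursting on and off."""
    x_next = alpha / (1.0 + x * x) + y
    y_next = y - mu * (x - sigma)
    return x_next, y_next

def simulate(n, x0=-1.0, y0=-3.0):
    """Iterate the map n times and return the voltage trace."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        x, y = rulkov_step(x, y)
        xs.append(x)
    return xs
```

Because mu is small, y evolves much more slowly than x, which is what produces the bursting firing patterns that the phase and bifurcation diagrams in the article characterize.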
基金National Natural Science Foundation of China,Grant/Award Numbers:62063004,62350410483Key Research and Development Project of Hainan Province,Grant/Award Number:ZDYF2021SHFZ093Zhejiang Provincial Postdoctoral Science Foundation,Grant/Award Number:ZJ2021028。
Abstract: In complex network environments, the secure transmission of medical images faces challenges such as information leakage and malicious tampering, which significantly affect the accuracy of diagnoses by medical professionals. To address this problem, the authors propose a robust feature watermarking algorithm for encrypted medical images based on multi-stage discrete wavelet transform (DWT), the Daisy descriptor, and the discrete cosine transform (DCT). The algorithm first encrypts the original medical image through DWT-DCT and Logistic mapping. A 3-stage DWT is then applied to the encrypted image, with the centre point of the LL3 sub-band of its low-frequency component serving as the sampling point, and the Daisy descriptor matrix for this point is computed. Finally, a DCT is performed on the Daisy descriptor matrix, and the low-frequency portion is processed with a perceptual hashing algorithm to generate a 32-bit binary feature vector for the medical image. This scheme combines cryptographic techniques with zero-watermarking to embed watermarks without modifying the medical images, and it can extract the watermark from test images without the original image, meeting the basic requirements of medical image watermarking. Watermark embedding and extraction take only 0.160 s and 0.411 s, respectively, with minimal computational overhead. Simulation results demonstrate robustness against both conventional and geometric attacks, with notable performance against rotation attacks.
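The final step, turning low-frequency DCT coefficients into a 32-bit binary feature vector, follows the classic perceptual-hashing recipe: threshold each coefficient against the median. The sketch below illustrates that idea on a small block; the block size, coefficient selection, and median threshold are generic assumptions, not the paper's exact scheme.

```python
import math

def dct2(block):
    """Naive 2-D DCT-II (O(N^4); fine for small blocks)."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for i in range(n):
                for j in range(n):
                    s += (block[i][j]
                          * math.cos(math.pi * (2 * i + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * j + 1) * v / (2 * n)))
            out[u][v] = s
    return out

def phash_bits(block, side=6):
    """32-bit perceptual hash: take low-frequency DCT coefficients (DC term
    excluded) and set each bit by comparison against their median."""
    coeffs = dct2(block)
    low = [coeffs[u][v] for u in range(side) for v in range(side)][1:33]
    med = sorted(low)[len(low) // 2]
    return "".join("1" if c > med else "0" for c in low)
```

Because the bits depend only on coarse low-frequency structure, the vector survives mild distortions of the image, which is what gives the zero-watermarking scheme its robustness.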
Funding: Major Program of the National Natural Science Foundation of China (NSFC12292980, NSFC12292984, NSFC12031016); National Key R&D Program of China (2023YFA1009000, 2023YFA1009004, 2020YFA0712203, 2020YFA0712201); Beijing Natural Science Foundation (BNSFZ210003); Department of Science, Technology and Information of the Ministry of Education (8091B042240).
Abstract: Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network consists of three parallel layers: a global feature extraction layer, a local feature extraction layer, and a multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism with a residual inverse multi-layer perceptron is proposed to prevent gradient vanishing and network degradation and to improve feature representation. A double-branch iterative multi-scale classification block is used to improve classification performance. On a brain glioma risk grading dataset, ablation and comparison experiments show that the proposed HMAC-Net performs best in both qualitative heat-map analysis and quantitative evaluation. Generalization experiments on a skin cancer classification dataset show that HMAC-Net also generalizes well.
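One simple way to sparsify an attention weighting, in the spirit of the redundancy reduction that the linear sparse attention layer aims for, is to keep only the top-k scores before the softmax. The following is a generic top-k sketch, not HMAC-Net's actual mechanism.

```python
import math

def sparse_softmax(scores, k=2):
    """Keep only the top-k scores, softmax over them, zero out the rest --
    a simple way to sparsify attention weights and cut redundancy."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    m = max(scores[i] for i in top)           # subtract max for stability
    exps = {i: math.exp(scores[i] - m) for i in top}
    z = sum(exps.values())
    return [exps[i] / z if i in exps else 0.0 for i in range(len(scores))]
```

Entries outside the top-k receive exactly zero weight, so downstream aggregation ignores low-relevance positions entirely instead of merely down-weighting them.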
Abstract: Background: A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user's query image. CBIR is widely used in evidence-based diagnosis, teaching, and research. Although retrieval accuracy has largely improved, there has been limited development toward visualizing the important image features that indicate the similarity of retrieved images. Despite the prevalence of 3D volumetric data in medical imaging, such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views to visualize retrieved images. Such 2D visualization requires users to browse through image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on the user's experience. Methods: We propose an importance-aware 3D volume visualization method in which the rendering parameters are automatically optimized to maximize the visibility of important structures detected and prioritized in the retrieval process. We then integrated the proposed visualization into a CBIR system, complementing the 2D cross-sectional views for relevance feedback and further analyses. Results: Our preliminary results demonstrate that 3D visualization can provide additional information, using multimodal positron emission tomography and computed tomography (PET-CT) images of a non-small cell lung cancer dataset.
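At its core, CBIR ranks stored images by their feature-space distance to the query. A minimal nearest-neighbour sketch over hypothetical feature vectors (the real system's features and metric are not specified here):

```python
def retrieve(query, database, top_k=3):
    """Rank database feature vectors by Euclidean distance to the query
    and return the indices of the top_k closest entries."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    ranked = sorted(range(len(database)), key=lambda i: dist(query, database[i]))
    return ranked[:top_k]
```

The visualization contribution of the paper sits downstream of this step: once the closest matches are found, the rendering is tuned so the structures that drove the match are visible.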
Abstract: Deep learning has been extensively applied to medical image segmentation, with significant advances in deep neural networks for the task since the notable success of U-Net in 2015. However, applying deep learning models to ocular medical image segmentation poses unique challenges compared to other body parts, owing to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article provides a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation to ocular imaging. It first gives an overview of medical imaging, data processing, and performance evaluation metrics, and then analyzes recent developments in U-Net-based network structures. Finally, the application of deep learning to the segmentation of ocular medical images is reviewed and categorized by type of ocular tissue.
基金supported by the National Key R&D Program of China(2018AAA0102100)the National Natural Science Foundation of China(No.62376287)+3 种基金the International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province(2021CB1013)the Key Research and Development Program of Hunan Province(2022SK2054)the Natural Science Foundation of Hunan Province(No.2022JJ30762,2023JJ70016)the 111 Project under Grant(No.B18059).
Abstract: Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, current CNN-Transformer integration has two key limitations: most methods either overlook or fail to fully exploit the complementary nature of local and global features, and the value of integrating multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address this, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We design a Feature Cross-Fusion (FCF) module to fuse local and global features efficiently: within the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and a Feature Fusion Module (FFM) aggregates prominent dual-branch feature regions from a spatial perspective. Furthermore, in the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the significance of the channel features between the up-sampled features and the features generated by the FCF module, enhancing decoding detail. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy and is highly competitive with other state-of-the-art (SOTA) methods. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in diagnosing lesion areas early.
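A channel attention block of the kind described (CAB) typically squeezes each channel to a scalar statistic and uses it to gate the channel. The squeeze-and-excitation-style toy below is a generic illustration of that pattern, not DCFNet's exact block (which also involves learned weights between the squeeze and the gate).

```python
import math

def channel_attention(feature_maps):
    """Toy channel attention: squeeze each channel to its global mean,
    gate it with a sigmoid, and rescale the whole channel."""
    gated = []
    for ch in feature_maps:               # ch: 2-D list (H x W)
        flat = [v for row in ch for v in row]
        s = sum(flat) / len(flat)         # squeeze: global average pool
        g = 1.0 / (1.0 + math.exp(-s))    # excite: sigmoid gate in (0, 1)
        gated.append([[v * g for v in row] for row in ch])
    return gated
```

Channels whose pooled statistic is strongly positive are passed through nearly unchanged, while weak channels are attenuated, which is the mechanism the abstract describes for emphasizing informative decoding features.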
Abstract: Deep convolutional neural networks (CNNs) have greatly advanced the automatic segmentation of medical images. However, due to the inherent properties of convolution operations, CNNs usually cannot establish long-distance dependencies, which limits segmentation performance. The Transformer has been successfully applied to various computer vision tasks, using self-attention to model long-distance interaction and capture global information; however, self-attention lacks spatial localization and efficient computation. To solve these problems, we develop a new medical Transformer with a multi-scale context fusion function for medical image segmentation. The proposed model combines convolution operations and attention mechanisms in a U-shaped framework that captures both local and global information. First, the traditional Transformer module is improved to an advanced one that uses post-layer normalization to obtain mild activation values and scaled cosine attention with a shifted window to obtain accurate spatial information. Second, we introduce a deep supervision strategy that guides the model to fuse multi-scale feature information, enabling it to propagate feature information effectively across layers and achieve better segmentation performance while being more robust and efficient. The proposed model is evaluated on multiple medical image segmentation datasets. Experimental results demonstrate better performance on a challenging dataset (ETIS) than existing methods that rely only on CNNs, Transformers, or a combination of both: the mDice and mIoU indicators increased by 2.74% and 3.3%, respectively.
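Scaled cosine attention replaces the dot product with the cosine of query and key divided by a temperature, which bounds the raw similarities and stabilizes training. A minimal single-head sketch follows, with tau fixed rather than learnable as an illustrative simplification.

```python
import math

def cosine_attention(queries, keys, values, tau=0.1):
    """Scaled cosine attention: similarity is cos(q, k) / tau, then a
    softmax over the keys weights the value vectors."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a)) or 1.0
        nb = math.sqrt(sum(x * x for x in b)) or 1.0
        return dot / (na * nb)

    out = []
    for q in queries:
        scores = [cos(q, k) / tau for k in keys]
        m = max(scores)                        # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        w = [e / z for e in exps]
        out.append([sum(wi * v[d] for wi, v in zip(w, values))
                    for d in range(len(values[0]))])
    return out
```

Because cosine similarity lies in [-1, 1] regardless of feature magnitude, the attention logits cannot blow up the way raw dot products can, which is the motivation for this variant.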
Abstract: Progress in medical imaging technology highlights the importance of image quality for effective diagnosis and treatment, yet noise introduced during capture and transmission can compromise accuracy and reliability, complicating clinical decisions. The rising interest in diffusion models has led to their exploration for image denoising. We present BeFOI (Better Fluoro Images), a weakly supervised model that uses cine images to denoise fluoroscopic images, both of which are digital radiography (DR) modalities. Trained through precise noise estimation and simulation, BeFOI employs Markov chains to denoise using only the fluoroscopic image as guidance. Our tests show that BeFOI outperforms other methods, reducing noise and enhancing clarity and diagnostic utility, making it an effective post-processing tool for medical images.
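The premise that noise can be estimated and simulated rests on standard statistics: averaging n independent noisy frames shrinks zero-mean noise by a factor of sqrt(n). The toy below demonstrates that effect with additive Gaussian noise; it is not BeFOI's diffusion-based pipeline, just the underlying principle.

```python
import random

def add_gaussian_noise(img, sigma, rng):
    """Simulate acquisition noise: add i.i.d. zero-mean Gaussian noise."""
    return [[v + rng.gauss(0.0, sigma) for v in row] for row in img]

def average_frames(frames):
    """Averaging n noisy frames reduces the noise std by a factor sqrt(n)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[i][j] for f in frames) / n for j in range(w)]
            for i in range(h)]
```

Cine sequences provide exactly this kind of redundancy across frames, which is why they can serve as the cleaner reference signal for weakly supervised denoising of single fluoroscopic frames.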
基金supported by the Information Technology Industry Development Agency (ITIDA),Egypt (Project No.CFP181).
Abstract: Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its remarkable results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article presents a survey of state-of-the-art deep convolutional neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category, and a comparative summary gives the reader insight into the architectures used across different applications and datasets. This study focuses on medical image segmentation applications, illustrating the most widely used architectures and suggesting other promising models that have proven successful in different domains. Finally, the present work discusses current limitations and solutions along with future trends in the field.
Abstract: As medical imaging diagnosis continues to mature, it is common to transmit images over public networks. Ensuring the security of this transmission, cultivating talent who combine medical imaging with information security, and exploring new growth points for the discipline are difficult problems and challenges for schools and educators. To cope with industrial change, a new round of scientific and technological revolution, and the challenges of the further development of artificial intelligence in medicine, this article analyzes the existing problems in training postgraduates in medical imaging information security, based on the actual conditions and characteristics of universities, and puts forward countermeasures and suggestions to promote technological progress in universities.
Funding: This work was supported by the Xinjiang Medical University Education and Teaching Research Project “Virtual Simulation Technology Combined with PACS System in Medical Imaging Practice” (Project No. YG2021044).
Abstract: Objective: To explore the effect of a virtual simulation teaching platform in the practical teaching of medical imaging. Methods: A total of 97 students majoring in medical imaging technology from the class of 2022 were randomly divided into two groups: a control group (n = 48) and an observation group (n = 49). The observation group was taught with a practical teaching mode based on the virtual simulation teaching platform, while the control group was taught with the traditional multimedia teaching mode. A questionnaire survey and teaching assessment were carried out after the teaching period, and the effects of the two teaching modes were compared. Results: The image-reading and theoretical scores of the observation group were significantly higher than those of the control group (P < 0.01); there were statistically significant differences between the two groups in the questionnaire results (improved learning interest, improved language expression, improved ability to comprehensively analyze problems, and improved teamwork awareness) (P < 0.05); and the observation group was markedly more satisfied with the teaching content, methods, and quality than the control group (P < 0.05). Conclusion: The medical imaging practical teaching mode based on a virtual simulation platform not only helps improve students' theoretical understanding and practical ability in medical imaging technology, but also improves their learning interest, language expression, comprehensive problem analysis, communication skills, teamwork awareness, and satisfaction with the teaching content, methods, and quality. It therefore has wide application value in medical specialty education.
基金supported by Xinjiang Medical University Education and Teaching Research Project“Virtual Simulation Technology Combined with PACS System in Medical Imaging Practice”(Project No.YG2021044).
Abstract: Objective: To explore the effect of virtual simulation experiments combined with a picture archiving and communication system (PACS) in medical imaging practical teaching. Methods: 97 students from the medical imaging class of 2022 were divided into two groups: the control group (n = 48) was taught by the traditional teaching method, whereas the research group (n = 49) was taught by virtual simulation experiments combined with PACS. The achievements and teaching effects of the two groups were compared to define the advantages of the two teaching modes. Results: Initially, there were no significant differences between the two groups in basic theory, image analysis, report writing, and differential diagnosis scores (P > 0.05); after 16 weeks of teaching, however, the scores of the research group were better than those of the control group (P < 0.05); the pass rate of the research group (93.88%) was higher than that of the control group (81.25%); and the research group scored significantly higher in clinical inquiry skills, X-ray/computed tomography/magnetic resonance imaging (X-ray/CT/MRI) operation skills, and doctor-patient communication skills (P < 0.05). Conclusion: In medical imaging practical teaching, virtual simulation experiments combined with PACS can effectively address several problems of the traditional teaching mode, including its single teaching method, single teaching content, and lack of innovation, while improving students' basic theoretical knowledge, X-ray/CT/MRI operation skills, consultation skills, and doctor-patient communication skills, thereby effectively improving teaching quality and learning outcomes.
Abstract: Medical image steganography aims to increase data security by concealing patient-personal information as well as diagnostic and therapeutic data in the spatial or frequency domain of radiological images. The discipline of image steganalysis, on the other hand, generally classifies whether an image contains hidden data. Inspired by previous studies on image steganalysis, this study proposes a deep ensemble learning model for medical image steganalysis to detect malicious hidden data in medical images and to support the development of medical image steganography methods aimed at securing personal information. To this end, a dataset containing brain Magnetic Resonance (MR) images of healthy individuals and epileptic patients was built. The Spatial Version of the Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Stego (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) spatial embedding techniques were adapted to the problem, and various payloads of confidential data were hidden in the medical images. The architectures of the steganalysis networks were transferred separately from eleven Dense Convolutional Network (DenseNet), Residual Neural Network (ResNet), and Inception-based models, and the steganalysis outputs were determined by assembling models separately for each spatial embedding method with different payload ratios. The study demonstrated the success of the pre-trained ResNet, DenseNet, and Inception models in the cover-stego mismatch scenario for each hiding technique with different payloads. Owing to the high detection accuracy achieved, the proposed model has the potential to lead to novel medical image steganography algorithms that existing deep-learning-based steganalysis methods cannot detect; the experiments and evaluations clearly support this.
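The embedding schemes above (S-UNIWARD, HUGO, MiPOD) are content-adaptive, but the basic idea of spatial-domain hiding is easiest to see with plain least-significant-bit (LSB) substitution, the naive baseline they improve upon:

```python
def embed_lsb(pixels, message_bits):
    """Hide a bit string in the least-significant bits of pixel values;
    each pixel changes by at most 1, so the image looks unchanged."""
    out = list(pixels)
    for i, bit in enumerate(message_bits):
        out[i] = (out[i] & ~1) | int(bit)
    return out

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the pixel LSBs."""
    return "".join(str(p & 1) for p in pixels[:n_bits])
```

Adaptive schemes differ from this baseline mainly in choosing *where* to flip bits (textured regions rather than flat ones), which is precisely what makes them hard for steganalysis networks to detect.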
基金supported partly by the Open Project of State Key Laboratory of Millimeter Wave under Grant K202218partly by Innovation and Entrepreneurship Training Program of College Students under Grants 202210700006Y and 202210700005Z.
Abstract: As a mainstream research direction in the field of image segmentation, medical image segmentation plays a key role in the quantification of lesions, three-dimensional reconstruction, region-of-interest extraction, and so on. Compared with natural images, medical images come in a variety of modalities, and the information emphasized by images of different modalities differs considerably. Because manual segmentation by professional, experienced doctors is time-consuming and inefficient, a large number of automated medical image segmentation methods have been developed. However, researchers have not yet developed a universal method for all types of medical image segmentation. This paper reviews the literature on segmentation techniques that have produced major breakthroughs in recent years, focusing on two categories of methods: improved strategies based on traditional clustering, and improved segmentation network structure models based on U-Net. The evidence shows that the performance of deep-learning-based methods is significantly better than that of traditional methods. This paper discusses the advantages and disadvantages of the different algorithms, details how these methods can be used to segment lesions or other organs and tissues, and outlines possible technical trends for future work.
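A minimal example of the traditional clustering strategy is Lloyd's k-means on raw intensities, which partitions a scan into k intensity classes. This generic sketch is the baseline that the improved clustering strategies in the review build upon.

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain Lloyd's algorithm on scalar intensities: assign each value to
    its nearest centre, then move each centre to its cluster mean."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]  # spread init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            idx = min(range(k), key=lambda c: abs(v - centers[c]))
            clusters[idx].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

Thresholding each voxel by its nearest final centre yields a crude segmentation; the deep-learning methods the review favours learn far richer decision boundaries than these intensity-only clusters.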
基金supported by National Natural Science Foundation of China(NSFC)(61976123,62072213)Taishan Young Scholars Program of Shandong Provinceand Key Development Program for Basic Research of Shandong Province(ZR2020ZD44).
Abstract: In the intelligent perception and diagnosis of medical equipment, visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension), and intelligent auxiliary diagnosis of these diseases depends on the accuracy of retinal vessel segmentation. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation. Specifically, we enhance the encoder of the original U-Net with a novel parallel encoder made up of various convolutional modules. Image features are then combined at each layer to produce richer semantic information and adapt the model's capacity to various input images. In the downsampling stage, we forgo pooling and instead downsample by strided convolution to control the step size for information fusion. We also employ an attention module in the decoder stage to filter image noise and lessen the response of irrelevant features. Experiments on the DRIVE and ARIA retinal vessel segmentation datasets show that the proposed Dual-Branch-UNet is superior to five typical state-of-the-art methods.
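Replacing pooling with strided convolution, as the encoder above does, keeps the downsampling step learnable: the kernel weights are trained rather than fixed to an average or maximum. In 1-D the idea reduces to sliding a kernel with a step size of 2:

```python
def strided_conv1d(signal, kernel, stride=2):
    """Downsampling by a strided convolution: the output has roughly
    len(signal)/stride samples, and the kernel weights are learnable."""
    k = len(kernel)
    out = []
    for start in range(0, len(signal) - k + 1, stride):
        out.append(sum(signal[start + i] * kernel[i] for i in range(k)))
    return out
```

With a kernel of [0.5, 0.5] this reproduces average pooling exactly, which illustrates why strided convolution is a strict generalization of the pooling it replaces.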
基金supported by the Open-Fund of WNLO (Grant No.2018WNLOKF027)the Hubei Key Laboratory of Intelligent Robot in Wuhan Institute of Technology (Grant No.HBIRL 202003).
Abstract: Medical image segmentation plays a crucial role in clinical diagnosis and therapy systems, yet it still faces many challenges. Building on convolutional neural networks (CNNs), medical image segmentation has achieved tremendous progress; however, owing to the locality of convolution operations, CNNs have an inherent limitation in learning global context. To address this limitation, we propose LGNet, a semantic segmentation network that learns local and global features for fast and accurate medical image segmentation. Specifically, we employ a two-branch architecture with convolution layers in one branch to learn local features and Transformer layers in the other to learn global features. LGNet has two key insights: (1) the two branches learn local and global features interactively; (2) a novel multi-feature fusion model (MSFFM) leverages the global contextual information from the Transformer and the local representational features from the convolutions. Our method achieves a state-of-the-art accuracy-efficiency trade-off on several medical image segmentation benchmarks, including Synapse, ACDC, and MOST, with Dice indices of 80.15% on Synapse, 91.70% on ACDC, and 95.56% on MOST, while inference runs at 172 frames per second at 224×224 input resolution. Extensive experiments demonstrate the effectiveness of the proposed LGNet for fast and accurate medical image segmentation.
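The Dice index reported above measures overlap between a predicted mask and the ground truth, computed as 2|A∩B| / (|A| + |B|) over binary masks:

```python
def dice(pred, target):
    """Dice similarity coefficient between two flattened binary masks;
    1.0 means perfect overlap, 0.0 means none."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0
```

Unlike plain pixel accuracy, Dice is insensitive to the large background class, which is why it is the standard metric on benchmarks such as Synapse and ACDC.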
Abstract: Medical image fusion is considered the best method for obtaining a single image with rich detail for efficient medical diagnosis and therapy, and deep learning provides high performance for several medical image analysis applications. This paper proposes a deep learning model for the medical image fusion process based on a Convolutional Neural Network (CNN). The basic idea is to extract features from both CT and MR images, execute an additional fusion process on the extracted features, and then reconstruct the fused feature map to obtain the resulting fused image. Finally, the quality of the fused image is enhanced by various enhancement techniques such as Histogram Matching (HM), Histogram Equalization (HE), fuzzy enhancement, and Contrast Limited Adaptive Histogram Equalization (CLAHE). The performance of the proposed fusion-based CNN model is measured by various metrics of fusion and enhancement quality, and realistic datasets covering different modalities and diseases are tested in the simulation analysis.
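Of the enhancement techniques listed, histogram equalization (HE) is the simplest: each grey level v is remapped to round((L-1)·CDF(v)), spreading the intensity distribution across the full range. A minimal global-HE sketch follows; CLAHE adds tiling and contrast clipping on top of this same remap.

```python
def equalize(img, levels=256):
    """Global histogram equalization via the CDF remap
    s = round((levels - 1) * CDF(v)) for integer grey levels."""
    flat = [v for row in img for v in row]
    hist = [0] * levels
    for v in flat:
        hist[v] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run / len(flat))       # cumulative distribution in [0, 1]
    lut = [round((levels - 1) * c) for c in cdf]
    return [[lut[v] for v in row] for row in img]
```

Because the remap is monotone, relative brightness ordering is preserved while low-contrast regions are stretched, which is the effect the post-fusion enhancement stage relies on.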
Abstract: The synthesis of visual information from multiple medical imaging inputs into a single fused image, without loss of detail or distortion, is known as multimodal medical image fusion. It improves the quality of biomedical images by preserving detailed features, advancing the clinical utility of medical imaging for the analysis and treatment of medical disorders. This study develops a novel approach to fusing multimodal medical images using anisotropic diffusion (AD) and the non-subsampled contourlet transform (NSCT). First, the method employs anisotropic diffusion to decompose the input images into base and detail layers, coarsely separating structural and textural information. The detail and base layers are combined using a sum-based fusion rule that maximizes contrast while filtering noise, effectively preserving most of the structural and textural details. NSCT is then used to further decompose these images into low- and high-frequency coefficients, which are combined independently using a principal component analysis/Karhunen-Loeve (PCA/KL) fusion rule that reinforces eigenfeatures in the fusion results. An NSCT-based multiresolution analysis is performed on the combined salient feature information and the contrast-enhanced fusion coefficients, and finally an inverse NSCT is applied to each coefficient to produce the fusion result. Experimental results on a publicly accessible dataset, with comparative studies on three pairs of medical images from different modalities and health conditions, demonstrate the advantage of the proposed technique. Our approach offers better visual and more robust performance with better objective measurements for research and development, since it preserves significant salient features with precision and without introducing spurious information in both qualitative and quantitative analyses.
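The PCA/KL fusion rule weighs the two source bands by the normalised leading eigenvector of their 2×2 covariance matrix, so the band carrying more variance contributes more. A minimal 1-D sketch of that standard formulation follows; it is an illustration of the classic rule, not necessarily the authors' exact variant.

```python
def pca_fusion_weights(a, b):
    """Classic PCA fusion rule: weights come from the leading eigenvector
    of the 2x2 covariance matrix of the two source bands, normalised to
    sum to 1."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    caa = sum((x - ma) ** 2 for x in a) / n
    cbb = sum((y - mb) ** 2 for y in b) / n
    cab = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    # leading eigenvalue of [[caa, cab], [cab, cbb]]
    tr, det = caa + cbb, caa * cbb - cab * cab
    lam = tr / 2 + ((tr / 2) ** 2 - det) ** 0.5
    # corresponding eigenvector (cab, lam - caa)
    v1, v2 = cab, lam - caa
    s = v1 + v2
    if s == 0:
        return 0.5, 0.5
    return v1 / s, v2 / s

def fuse(a, b):
    """Weighted fusion of the two bands with the PCA weights."""
    w1, w2 = pca_fusion_weights(a, b)
    return [w1 * x + w2 * y for x, y in zip(a, b)]
```

When the two bands are identical the weights collapse to (0.5, 0.5) and fusion is a no-op, while a band with dominant variance pulls the weights toward itself, which is exactly the eigenfeature reinforcement the abstract describes.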