With the rapid advancement of artificial intelligence (AI) and its application in the Internet of Things (IoT), intelligent technologies are being introduced into the medical field, giving rise to smart healthcare systems. Medical imaging data contains sensitive information that can easily be stolen or tampered with, necessitating encryption schemes designed specifically to protect these images. This paper introduces a novel AI-driven encryption scheme tailored for the secure transmission and storage of high-resolution medical images. The proposed scheme utilizes an AI-based autoencoder to compress high-resolution medical images and to facilitate fast encryption and decryption. The autoencoder retains important diagnostic information even after reducing the image dimensions. The low-resolution images then undergo a four-stage encryption process: the first two stages involve permutation and the next two involve confusion. The permutation stages disrupt the structure of the image, making it secure against statistical attacks, whereas the confusion stages conceal the pixel values, making decryption without the secret keys difficult. The encrypted image is then safe for storage or transmission. The proposed scheme has been extensively evaluated against various attacks and statistical security parameters, confirming its effectiveness in securing medical image data.
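As a rough, hedged illustration of the permutation-then-confusion pattern the abstract describes (not the paper's actual four-stage cipher or key schedule), the NumPy sketch below scrambles pixel positions and then XORs pixel values with a logistic-map keystream standing in for the secret keys.

```python
import numpy as np

def logistic_keystream(length, x0=0.654321, r=3.99):
    """Hypothetical chaotic keystream; the paper's actual key schedule is not specified."""
    xs = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)              # logistic-map iteration
        xs[i] = x
    return xs

def encrypt(image, x0=0.654321):
    flat = image.astype(np.uint8).ravel()
    ks = logistic_keystream(flat.size, x0)
    perm = np.argsort(ks)                  # permutation stage: chaotic reordering of pixel positions
    mask = (ks * 255).astype(np.uint8)     # confusion stage: XOR of pixel values with keystream bytes
    return (flat[perm] ^ mask).reshape(image.shape)

def decrypt(cipher, x0=0.654321):
    ks = logistic_keystream(cipher.size, x0)
    perm = np.argsort(ks)
    mask = (ks * 255).astype(np.uint8)
    flat = cipher.ravel() ^ mask           # undo confusion
    restored = np.empty_like(flat)
    restored[perm] = flat                  # undo permutation
    return restored.reshape(cipher.shape)

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # toy stand-in for a compressed medical image
assert np.array_equal(decrypt(encrypt(img)), img)
```

Decryption only succeeds with the same initial value x0, which plays the role of the secret key in this simplified sketch.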
Leukemia is a form of cancer of the blood or bone marrow in which a person has an expansion of white blood cells (WBCs). It primarily affects children and rarely affects adults. Treatment depends on the type of leukemia and the extent to which the cancer has spread throughout the body. Identifying leukemia at an early stage is vital to providing timely patient care. Medical image-analysis approaches offer safer, quicker, and less costly solutions while avoiding the difficulties of invasive procedures. Computer vision (CV)-based and image-processing techniques generalize easily and help eliminate human error. Many researchers have implemented computer-aided diagnostic methods and machine learning (ML) for laboratory image analysis, aiming to overcome the limitations of late leukemia detection and to determine its subgroups. This study establishes a Marine Predators Algorithm with Deep Learning Leukemia Cancer Classification (MPADL-LCC) algorithm on medical images. The proposed MPADL-LCC system uses a bilateral filtering (BF) technique to pre-process medical images, and Faster SqueezeNet, with the Marine Predators Algorithm (MPA) as a hyperparameter optimizer, for feature extraction. Lastly, a denoising autoencoder (DAE) is used to accurately detect and classify leukemia. The hyperparameter tuning process using MPA helps enhance classification performance. Simulation results are compared with other recent approaches across various measures, and the MPADL-LCC algorithm exhibits the best results.
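The bilateral-filtering pre-processing step maps directly onto OpenCV; the sketch below uses illustrative filter parameters rather than values from the paper.

```python
import cv2
import numpy as np

# Toy grayscale stand-in; in practice this would be a blood-smear microscopy image.
image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# Bilateral filter: smooths noise while preserving cell boundaries (edges).
# d, sigmaColor and sigmaSpace are illustrative values, not taken from the paper.
denoised = cv2.bilateralFilter(image, d=9, sigmaColor=75, sigmaSpace=75)
```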
This article proposes a novel fractional heterogeneous neural network (FRHNN), built by coupling a Rulkov neuron with a Hopfield neural network and utilizing memristors to emulate neural synapses. The study first demonstrates the coexistence of multiple firing patterns through phase diagrams, Lyapunov exponents (LEs), and bifurcation diagrams. Second, the parameter-related firing behaviors are described through two-parameter bifurcation diagrams. Subsequently, local attraction basins reveal multi-stability phenomena related to initial values. Moreover, the proposed model is implemented on a microcomputer-based ARM platform, and the experimental results correspond to the numerical simulations. Finally, the article explores the application of the network to digital watermarking of medical images, illustrating its excellent imperceptibility, extensive key space, and robustness against attacks including noise and cropping.
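For reference, the classical chaotic Rulkov map that serves as the neuron building block can be iterated as in the sketch below; the fractional-order dynamics and memristive coupling of the FRHNN itself are not reproduced here, and the parameter values are purely illustrative.

```python
import numpy as np

def rulkov(alpha=4.5, mu=0.001, sigma=0.14, n_steps=5000, x0=-1.0, y0=-3.0):
    """Iterate the classical (integer-order) chaotic Rulkov map with illustrative parameters."""
    x = np.empty(n_steps)
    y = np.empty(n_steps)
    x[0], y[0] = x0, y0
    for n in range(n_steps - 1):
        x[n + 1] = alpha / (1.0 + x[n] ** 2) + y[n]   # fast, spike-generating variable
        y[n + 1] = y[n] - mu * (x[n] - sigma)         # slow modulation variable
    return x, y

x, y = rulkov()   # x exhibits the bursting/spiking patterns analysed in the paper's diagrams
```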
In the intricate network environment, the secure transmission of medical images faces challenges such as information leakage and malicious tampering, which significantly impact the accuracy of disease diagnoses by medical professionals. To address this problem, the authors propose a robust feature watermarking algorithm for encrypted medical images based on multi-stage discrete wavelet transform (DWT), the Daisy descriptor, and discrete cosine transform (DCT). The algorithm initially encrypts the original medical image through DWT-DCT and Logistic mapping. Subsequently, a 3-stage DWT transformation is applied to the encrypted medical image, with the centre point of the LL3 sub-band within its low-frequency component serving as the sampling point. The Daisy descriptor matrix for this point is then computed. Finally, a DCT transformation is performed on the Daisy descriptor matrix, and the low-frequency portion is processed using a perceptual hashing algorithm to generate a 32-bit binary feature vector for the medical image. This scheme uses cryptographic knowledge and a zero-watermarking technique to embed watermarks without modifying medical images, and can extract the watermark from test images without the original image, which meets the basic requirements of medical image watermarking. The embedding and extraction of watermarks are accomplished in a mere 0.160 s and 0.411 s, respectively, with minimal computational overhead. Simulation results demonstrate the robustness of the algorithm against both conventional and geometric attacks, with notable performance in resisting rotation attacks.
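The feature-hashing step described above (DCT of the Daisy descriptor matrix, then perceptual hashing of its low-frequency part into a 32-bit vector) might look roughly like the sketch below; the descriptor size and the median-threshold hashing rule are assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.fft import dctn

def perceptual_hash_32(block):
    """Hash a 2-D feature matrix into a 32-bit binary vector via its DCT low-frequency part.
    Median thresholding is an assumption; the paper's exact hashing rule is not given."""
    coeffs = dctn(block, norm="ortho")
    low = coeffs[:8, :4].ravel()            # 32 low-frequency coefficients
    return (low > np.median(low)).astype(np.uint8)

# Toy stand-in for the Daisy descriptor matrix computed at the LL3 centre point.
descriptor = np.random.rand(32, 32)
feature_vector = perceptual_hash_32(descriptor)   # 32-bit binary feature of the medical image
```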
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of high-definition (HD) medical images place a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The results of the experiment yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
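A minimal NumPy sketch of the bit-plane decomposition this scheme builds on; the chaotic scrambling that the paper applies to the planes is omitted.

```python
import numpy as np

def bit_planes(image):
    """Split an 8-bit image into its eight bit-planes (plane 7 = most significant)."""
    return [((image >> b) & 1).astype(np.uint8) for b in range(8)]

def from_bit_planes(planes):
    """Reassemble the 8-bit image from its bit-planes."""
    return sum((plane.astype(np.uint8) << b) for b, plane in enumerate(planes)).astype(np.uint8)

img = np.random.randint(0, 256, (128, 128), dtype=np.uint8)   # toy stand-in for a medical image
planes = bit_planes(img)
assert np.array_equal(from_bit_planes(planes), img)
```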
Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve the classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net has the best performance in both qualitative analysis of heat maps and quantitative analysis of evaluation indicators. On a skin cancer classification dataset, the generalization experiment results show that the proposed HMAC-Net generalizes well.
The pancreas is neither part of the five Zang organs (五脏) nor the six Fu organs (六腑). Thus, it has received little attention in Chinese medical literature. In the late 19th century, medical missionaries in China started translating and introducing anatomical and physiological knowledge about the pancreas. As for the word pancreas, an early and influential translation was "sweet meat" (甜肉), proposed by Benjamin Hobson (合信). The translation "sweet meat" is not faithful to the original meaning of "pancreas", but is a term coined by Hobson based on his personal habits, and the word "sweet" appeared by chance. However, in the decades since the term "sweet meat" became popular, Chinese medicine practitioners, such as Tang Zonghai (唐宗海), reinterpreted it by drawing new medical illustrations for "sweet meat" and giving new connotations to the word "sweet". This discussion and interpretation of "sweet meat" in modern China, particularly among Chinese medicine professionals, is not only a dissemination and interpretation of the knowledge of the "pancreas", but also a construction of knowledge around the term "sweet meat".
Background: A medical content-based image retrieval (CBIR) system is designed to retrieve images from large imaging repositories that are visually similar to a user's query image. CBIR is widely used in evidence-based diagnosis, teaching, and research. Although retrieval accuracy has largely improved, there has been limited development toward visualizing the important image features that indicate the similarity of retrieved images. Despite the prevalence of 3D volumetric data in medical imaging such as computed tomography (CT), current CBIR systems still rely on 2D cross-sectional views for the visualization of retrieved images. Such 2D visualization requires users to browse through the image stacks to confirm the similarity of the retrieved images and often involves mental reconstruction of 3D information, including the size, shape, and spatial relations of multiple structures. This process is time-consuming and reliant on users' experience. Methods: In this study, we proposed an importance-aware 3D volume visualization method. The rendering parameters were automatically optimized to maximize the visibility of important structures that were detected and prioritized in the retrieval process. We then integrated the proposed visualization into a CBIR system, thereby complementing the 2D cross-sectional views for relevance feedback and further analyses. Results: Our preliminary results demonstrate that 3D visualization can provide additional information using multimodal positron emission tomography and computed tomography (PET-CT) images of a non-small cell lung cancer dataset.
Deep learning has been extensively applied to medical image segmentation, resulting in significant advancements in the field of deep neural networks for medical image segmentation since the notable success of U-Net in 2015. However, the application of deep learning models to ocular medical image segmentation poses unique challenges, especially compared to other body parts, due to the complexity, small size, and blurriness of such images, coupled with the scarcity of data. This article aims to provide a comprehensive review of medical image segmentation from two perspectives: the development of deep network structures and the application of segmentation in ocular imaging. Initially, the article introduces an overview of medical imaging, data processing, and performance evaluation metrics. Subsequently, it analyzes recent developments in U-Net-based network structures. Finally, for the segmentation of ocular medical images, the application of deep learning is reviewed and categorized by the type of ocular tissue.
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has two key limitations. Firstly, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Secondly, the significance of integrating the multi-scale encoder features from the dual-branch network to enhance the decoding features is often disregarded. To address these issues, we present a groundbreaking dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, a Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and the Feature Fusion Module (FFM) effectively aggregates the prominent feature regions of the two branches from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, our proposed Channel Attention Block (CAB) emphasizes the channel features between the up-sampled features and the features generated by the FCF module to enhance the details of the decoding. Experimental results demonstrate that DCFNet achieves enhanced segmentation accuracy. Compared with other state-of-the-art (SOTA) methods, our segmentation framework is highly competitive. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
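As a rough illustration of the dual-branch fusion idea (not the paper's exact FCF/CCT/FFM design), the sketch below cross-attends CNN-branch features to Transformer-branch features with a standard multi-head attention layer; channel counts and head numbers are assumptions.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Generic cross-attention fusion of two feature branches (illustrative only)."""
    def __init__(self, channels, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, local_feat, global_feat):
        # local_feat, global_feat: (B, C, H, W) from the CNN and Transformer branches respectively.
        b, c, h, w = local_feat.shape
        q = local_feat.flatten(2).transpose(1, 2)    # (B, HW, C): local features query...
        kv = global_feat.flatten(2).transpose(1, 2)  # ...the global features as key/value.
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(fused + q)                 # residual connection and normalization
        return fused.transpose(1, 2).reshape(b, c, h, w)

fusion = CrossAttentionFusion(channels=64)
out = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))   # (1, 64, 32, 32)
```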
Deep convolutional neural networks (CNNs) have greatly promoted the automatic segmentation of medical images. However, due to the inherent properties of convolution operations, a CNN usually cannot establish long-distance interdependence, which limits segmentation performance. The Transformer has been successfully applied to various computer vision tasks, using the self-attention mechanism to model long-distance interactions and thereby capture global information. However, self-attention lacks spatial localization and is computationally demanding. To solve these problems, we develop a new medical Transformer with a multi-scale context fusion function that can be used for medical image segmentation. The proposed model combines convolution operations and the attention mechanism to form a U-shaped framework that can capture both local and global information. First, the traditional Transformer module is improved into an advanced Transformer module, which uses post-layer normalization to obtain mild activation values and scaled cosine attention with a moving window to obtain accurate spatial information. Second, we introduce a deep supervision strategy to guide the model to fuse multi-scale feature information. This further enables the proposed model to effectively propagate feature information across layers, achieving better segmentation performance while being more robust and efficient. The proposed model is evaluated on multiple medical image segmentation datasets. Experimental results demonstrate that it achieves better performance on a challenging dataset (ETIS) than existing methods that rely only on convolutional neural networks, Transformers, or a combination of both; the mDice and mIoU indicators increase by 2.74% and 3.3%, respectively.
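The reported mDice and mIoU gains rest on the standard Dice and IoU overlap measures; for a binary mask they can be computed as in this sketch (mDice/mIoU are then averages over classes or images).

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

pred = np.random.rand(128, 128) > 0.5     # toy predicted mask
target = np.random.rand(128, 128) > 0.5   # toy ground-truth mask
print(dice_and_iou(pred, target))
```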
The progress in medical imaging technology highlights the importance of image quality for effective diagnosis and treatment. Yet noise introduced during capture and transmission can compromise image accuracy and reliability, complicating clinical decisions. The rising interest in diffusion models has led to their exploration for image denoising. We present BeFOI (Better Fluoro Images), a weakly supervised model that uses cine images to denoise fluoroscopic images, both of which are digital radiography (DR) image types. Trained through precise noise estimation and simulation, BeFOI employs Markov chains to denoise using only the fluoroscopic image as guidance. Our tests show that BeFOI outperforms other methods, reducing noise and enhancing clarity and diagnostic utility, making it an effective post-processing tool for medical images.
Introduction: Medical imaging is a medical specialty that involves producing images of the human body and interpreting them for diagnostic and therapeutic purposes and for monitoring the progress of pathologies. We aimed to assess the theoretical knowledge of medical imaging among doctors and interns in the northern region of Burkina Faso. Methodology: This was a descriptive cross-sectional survey based on a self-administered questionnaire. Prescribers' knowledge was estimated based on scores derived from questionnaire responses. Results: We collected 106 questionnaires out of 163, i.e., a participation rate of 65.03%. The average knowledge score was 81.71% for the contribution of medical imaging to patient management. It was 60.02% for the indications/contraindications of radiological examinations and 72.56% for the risks associated with exposure to radiation during these examinations. The score was 59.83% for the methods used to select the appropriate radiological examination. As regards the completeness of the clinical and biological information on the forms requesting imaging examinations, the score was 96.65%. Specialist doctors had the highest overall level of knowledge (74.68%). Conclusion: Improved technical facilities, good initial and in-service training, and interdisciplinary collaboration will help to ensure that imaging tests are properly prescribed, leading to better patient care.
Image segmentation is crucial for various research areas. Many computer vision applications depend on segmenting images to understand the scene, such as autonomous driving, surveillance systems, robotics, and medical imaging. With the recent advances in deep learning (DL) and its results in image segmentation, more attention has been drawn to its use in medical image segmentation. This article introduces a survey of the state-of-the-art deep convolutional neural network (CNN) models and mechanisms utilized in image segmentation. First, segmentation models are categorized based on their model architecture and primary working principle. Then, CNN categories are described, and various models are discussed within each category. Compared with other existing surveys, several applications with multiple architectural adaptations are discussed within each category. A comparative summary is included to give the reader insights into the architectures used in different applications and datasets. This study focuses on medical image segmentation applications, where the most widely used architectures are illustrated, and other promising models that have proven their success in different domains are suggested. Finally, the present work discusses current limitations and solutions along with future trends in the field.
Medical image steganography aims to increase data security by concealing patient-personal information as well as diagnostic and therapeutic data in the spatial or frequency domain of radiological images. On the other hand, the discipline of image steganalysis generally provides a classification based on whether an image contains hidden data or not. Inspired by previous studies on image steganalysis, this study proposes a deep ensemble learning model for medical image steganalysis to detect malicious hidden data in medical images and to support the development of medical image steganography methods aimed at securing personal information. With this purpose in mind, a dataset containing brain Magnetic Resonance (MR) images of healthy individuals and epileptic patients was built. The Spatial Version of the Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Stego (HUGO), and Minimizing the Power of Optimal Detector (MIPOD) techniques used in spatial image steganography were adapted to the problem, and various payloads of confidential data were hidden in the medical images. The architectures of the medical image steganalysis networks were transferred separately from eleven Dense Convolutional Network (DenseNet), Residual Neural Network (ResNet), and Inception-based models. The steganalysis outputs of these networks were determined by ensembling models separately for each spatial embedding method with different payload ratios. The study demonstrated the success of pre-trained ResNet, DenseNet, and Inception models in the cover-stego mismatch scenario for each hiding technique with different payloads. Due to the high detection accuracy achieved, the proposed model has the potential to lead to the development of novel medical image steganography algorithms that existing deep learning-based steganalysis methods cannot detect. The experiments and evaluations clearly demonstrate this.
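A hedged sketch of one ensemble member: a torchvision ResNet-50 pre-trained on ImageNet with its final layer replaced for binary cover-vs-stego classification. The training configuration and the ensembling rule are not taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative transfer-learning setup for a single steganalysis network.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # two classes: cover, stego

images = torch.randn(4, 3, 224, 224)                  # batch of MR slices replicated to 3 channels
logits = backbone(images)
probs = torch.softmax(logits, dim=1)                  # per-image cover/stego probabilities
```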
As a mainstream research direction in the field of image segmentation, medical image segmentation plays a key role in the quantification of lesions, three-dimensional reconstruction, region-of-interest extraction, and so on. Compared with natural images, medical images come in a variety of modalities, and the information emphasized by images of different modalities differs considerably. Because manual segmentation of medical images by professional and experienced doctors is time-consuming and inefficient, large numbers of automated medical image segmentation methods have been developed. However, researchers have not yet developed a universal method for all types of medical image segmentation. This paper reviews the literature on segmentation techniques that have produced major breakthroughs in recent years. Among the many medical image segmentation methods, this paper mainly discusses two categories: improved strategies based on traditional clustering methods, and research progress on improved segmentation network structure models based on U-Net. The evidence shows that the performance of deep learning-based methods is significantly better than that of traditional methods. This paper discusses the advantages and disadvantages of the different algorithms, details how these methods can be used for the segmentation of lesions or other organs and tissues, and outlines possible technical trends for future work.
In the intelligent perception and diagnosis functions of medical equipment, the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation. More explicitly, we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Image features are then combined at each layer to produce richer semantic information, and the model's capacity is adapted to various input images. Meanwhile, in the downsampling stage, we abandon pooling and perform downsampling by convolution operations with controlled stride for information fusion. We also employ an attention module in the decoder stage to filter image noise and lessen the response of irrelevant features. Experiments are conducted and compared on the DRIVE and ARIA datasets for retinal vessel segmentation. The proposed Dual-Branch-UNet proves superior to five other typical state-of-the-art methods.
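The choice to replace pooling with strided convolution for downsampling can be illustrated as below; the channel count and kernel size are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Conventional U-Net downsampling vs. a learnable strided-convolution downsampling step.
pooling_down = nn.MaxPool2d(kernel_size=2)
conv_down = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 64, 128, 128)
print(pooling_down(x).shape, conv_down(x).shape)   # both halve spatial resolution: (1, 64, 64, 64)
```

Unlike pooling, the strided convolution has trainable weights, so the network can learn which information to preserve while reducing resolution.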
Medical image segmentation plays a crucial role in clinical diagnosis and therapy systems, yet it still faces many challenges. Building on convolutional neural networks (CNNs), medical image segmentation has achieved tremendous progress. However, owing to the locality of convolution operations, CNNs have an inherent limitation in learning global context. To address this limitation, we propose LGNet, a semantic segmentation network that learns local and global features for fast and accurate medical image segmentation. Specifically, we employ a two-branch architecture consisting of convolution layers in one branch to learn local features and Transformer layers in the other branch to learn global features. LGNet has two key insights: (1) we bridge the two branches to learn local and global features in an interactive way; (2) we present a novel multi-feature fusion model (MSFFM) to leverage the global contextual information from the Transformer and the local representational features from the convolutions. Our method achieves a state-of-the-art trade-off between accuracy and efficiency on several medical image segmentation benchmarks, including Synapse, ACDC, and MOST. Specifically, LGNet achieves state-of-the-art performance with Dice indexes of 80.15% on Synapse, 91.70% on ACDC, and 95.56% on MOST, while the inference speed reaches 172 frames per second at a 224×224 input resolution. Extensive experiments demonstrate the effectiveness of the proposed LGNet for fast and accurate medical image segmentation.
Medical image fusion is considered the best method for obtaining a single image with rich details for efficient medical diagnosis and therapy. Deep learning provides high performance for several medical image analysis applications. This paper proposes a deep learning model for the medical image fusion process based on a Convolutional Neural Network (CNN). The basic idea of the proposed model is to extract features from both CT and MR images, perform an additional fusion process on the extracted features, and then reconstruct the fused feature map to obtain the resulting fused image. Finally, the quality of the resulting fused image is enhanced by various enhancement techniques such as Histogram Matching (HM), Histogram Equalization (HE), the fuzzy technique, fuzzy type, and Contrast Limited Adaptive Histogram Equalization (CLAHE). The performance of the proposed fusion-based CNN model is measured by various metrics of fusion and enhancement quality. Different realistic datasets of different modalities and diseases are tested and implemented, and real datasets are tested in the simulation analysis.
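The CLAHE enhancement option named above maps directly onto OpenCV's implementation; the clip limit and tile grid used here are illustrative defaults, not the paper's settings.

```python
import cv2
import numpy as np

# Post-enhancement of the fused image with CLAHE (Contrast Limited Adaptive Histogram Equalization).
fused = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in for the CNN-fused CT/MR image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(fused)
```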
Medical images are used as a diagnostic tool, so protecting their confidentiality has long been a topic of study. To this end, we propose a ResNet50-DCT-based zero-watermarking algorithm for use with medical images. To begin, we use ResNet50, a pre-trained network, to extract the deep features of a medical image. The deep features are then transformed by the DCT, and a perceptual hash function is used to generate the feature vector. The original watermark is chaotically scrambled to obtain the encrypted watermark, the watermark information is embedded into the original medical image by an XOR operation, and the resulting logical key vector is obtained and saved. Similarly, the same feature extraction method is used to extract the deep features of the medical image to be tested and to generate its feature vector. The XOR operation is then carried out between this feature vector and the logical key vector, and the encrypted watermark is extracted and decrypted to obtain the restored watermark; the normalized correlation coefficient (NC) between the original watermark and the restored watermark is calculated to determine the ownership and watermark information of the medical image under test. After calculation, most of the NC values are greater than 0.50. The experimental results demonstrate the algorithm's robustness, invisibility, and security, as well as its ability to accurately extract watermark information. The algorithm also shows good resistance to conventional attacks and geometric attacks.
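A minimal sketch of the zero-watermarking arithmetic described above: XOR of a binary image feature with the scrambled watermark to form the logical key, XOR again at verification time, and the normalized correlation (NC) check. The ResNet50-DCT-hash feature extraction is replaced here by a random stand-in vector, and the symmetric NC formula is an assumption.

```python
import numpy as np

def normalized_correlation(w_orig, w_rest):
    """Normalized correlation (NC) between the original and the restored binary watermark."""
    w_orig = w_orig.astype(np.float64).ravel()
    w_rest = w_rest.astype(np.float64).ravel()
    return float(np.sum(w_orig * w_rest) /
                 (np.sqrt(np.sum(w_orig ** 2)) * np.sqrt(np.sum(w_rest ** 2)) + 1e-12))

feature = np.random.randint(0, 2, 256, dtype=np.uint8)     # binary feature of the original image (stand-in)
watermark = np.random.randint(0, 2, 256, dtype=np.uint8)   # already-scrambled binary watermark

key = feature ^ watermark          # embedding: logical key is stored; the image itself is never modified
feature_test = feature.copy()      # feature of the image under test (identical if the image is intact)
restored = feature_test ^ key      # extraction: recover the watermark from the test image's feature
print(normalized_correlation(watermark, restored))   # 1.0 for an unattacked image
```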