To enable fast transmission and processing of medical images without requiring a client or plug-in installation, this paper designs a medical image reading system based on the B/S (browser/server) architecture. The system improves client-side image processing in the existing IWEB PACS framework and implements the medical image service based on the Web completion-port model, achieving fast image loading under high concurrency. Compared with traditional Web PACS, the system requires no client or plug-in installation, while image transmission and processing performance are greatly improved.
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images place a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address the security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
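The abstract names bit-plane decomposition and chaos theory but not the exact cipher. The sketch below is one plausible construction: a logistic-map keystream XORed against the unpacked bit planes. The map parameters and keystream design are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) for n steps."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt(image, x0=0.3141592, r=3.99):
    """XOR the bit planes of an 8-bit image with a chaotic keystream.
    XOR is an involution, so applying encrypt() again decrypts."""
    flat = image.astype(np.uint8).ravel()
    keystream = (logistic_sequence(x0, r, flat.size) * 255).astype(np.uint8)
    planes = np.unpackbits(flat[:, None], axis=1)        # bit-plane decomposition
    key_planes = np.unpackbits(keystream[:, None], axis=1)
    cipher = np.packbits(planes ^ key_planes, axis=1).ravel()
    return cipher.reshape(image.shape)

decrypt = encrypt  # the same keystream XOR undoes itself
```

Because the keystream depends sensitively on `x0` and `r`, tiny key changes produce entirely different ciphertexts, which is the key-sensitivity property the paper reports.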
Medical image processing has become a hot research topic in the healthcare sector for effective decision making and disease diagnosis. Magnetic resonance imaging (MRI) is a widely utilized tool for the classification and detection of prostate cancer. Since the manual screening process for prostate cancer is difficult, automated diagnostic methods become essential. This study develops a novel Deep Learning based Prostate Cancer Classification (DTL-PSCC) model using MRI images. The presented DTL-PSCC technique encompasses an EfficientNet-based feature extractor for the generation of a set of feature vectors. In addition, the fuzzy k-nearest neighbour (FKNN) model is utilized for the classification process, where class labels are allotted to the input MRI images. Moreover, the membership values of the FKNN model can be optimally tuned by the krill herd algorithm (KHA), which results in improved classification performance. To demonstrate the classification performance of the DTL-PSCC technique, a wide range of simulations take place on benchmark MRI datasets. The extensive comparative results confirm the superiority of the DTL-PSCC technique over recent methods, with a maximum accuracy of 85.09%.
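The FKNN classifier can be sketched as follows, using the common formulation in which neighbours vote with memberships weighted by inverse distance. The crisp training memberships and the fuzzifier m = 2 are simplifying assumptions, and the KHA tuning of memberships is omitted.

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-NN: the k nearest neighbours vote with weights proportional
    to 1 / d^(2/(m-1)); returns the top class and the soft memberships."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + 1e-12)   # guard against d == 0
    classes = np.unique(y_train)
    memberships = np.array([w[y_train[idx] == c].sum() for c in classes])
    memberships = memberships / memberships.sum()
    return classes[int(np.argmax(memberships))], memberships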
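```

In the paper's pipeline the feature vectors come from EfficientNet rather than raw pixels; here any numeric feature matrix serves to illustrate the voting rule.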
In the intelligent perception and diagnosis of medical equipment, visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction, based on the traditional U-Net model for medical image segmentation. To be more explicit, we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Image features are then combined at each layer to produce richer semantic information, and the model's capacity is adapted to various input images. Meanwhile, in the downsampling section, we give up pooling and perform downsampling by convolution operations with a controlled stride for information fusion. We also employ an attention module in the decoder stage to filter out image noise and lessen the response of irrelevant features. Experiments are verified and compared on the DRIVE and ARIA datasets for retinal vessel segmentation. The proposed Dual-Branch-UNet proves superior to five typical state-of-the-art methods.
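Replacing pooling with strided convolution, as described above, can be illustrated with a toy single-channel version; the 2x2 averaging kernel, stride 2, and valid padding are hypothetical choices, not the paper's exact configuration.

```python
import numpy as np

def conv2d_downsample(x, kernel, stride=2):
    """Strided 2-D convolution used in place of pooling (valid padding).
    Unlike max pooling, the kernel weights are learnable in a real network."""
    kh, kw = kernel.shape
    h = (x.shape[0] - kh) // stride + 1
    w = (x.shape[1] - kw) // stride + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            patch = x[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With a uniform kernel this reduces to average pooling; training would move the weights away from uniformity, which is exactly the extra flexibility strided convolution buys.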
Medical image compression is one of the essential technologies to facilitate real-time medical data transmission in remote healthcare applications. In general, image compression can introduce undesired coding artifacts, such as blocking artifacts and ringing effects. In this paper, we propose a Multi-Scale Feature Attention Network (MSFAN) with two essential parts, multi-scale feature extraction layers and feature attention layers, to efficiently remove coding artifacts from compressed medical images. The multi-scale feature extraction layers have four Feature Extraction (FE) blocks. Each FE block consists of five convolution layers and one CA block for a weighted skip connection. To optimize the proposed network architecture, a variety of verification tests were conducted using a validation dataset. We used the Computer Vision Center Clinic Database (CVC-ClinicDB), consisting of 612 colonoscopy medical images, to evaluate the enhancement of image restoration. The proposed MSFAN achieves average PSNR gains as high as 0.25 and 0.24 dB compared to DnCNN and DCSC, respectively.
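The abstract does not define its CA block; a common squeeze-and-excitation style channel gate is sketched here as a plausible stand-in, with hypothetical weight shapes (a C/r bottleneck).

```python
import numpy as np

def channel_attention(feat, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map: global average
    pool -> ReLU bottleneck -> sigmoid gate -> channel-wise rescaling."""
    squeeze = feat.mean(axis=(1, 2))                 # (C,) global context
    hidden = np.maximum(w1 @ squeeze, 0.0)           # (C/r,) ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # (C,) sigmoid gate
    return feat * gate[:, None, None]
```

Used inside a weighted skip connection, the gate lets the block suppress channels that mostly carry compression noise before they are added back to the main path.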
Assuring the protection and robustness of medical images is a compulsory necessity nowadays. In this paper, a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform (DWT) with the energy compaction of the Discrete Cosine Transform (DCT). The multi-level Encryption-based Hybrid Fusion Technique (EbHFT) aims to achieve great advances in terms of the imperceptibility and security of medical images. A DWT-decomposed sub-band of a cover image is simultaneously reformed using the DCT. Afterwards, a 64-bit hex key is employed to encrypt the host image as well as to participate in the second key-creation process to encode the watermark. Lastly, a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT. The watermarked image is thus generated by embedding both keys into the DWT and DCT coefficients. The fusion ability of the proposed EbHFT technique makes the best use of the distinct privileges of both the DWT and DCT methods. To validate the proposed technique, a standard dataset of medical images is used. Simulation results show high visual quality (i.e., 57.65) for the watermarked forms of all types of medical images. In addition, the robustness of EbHFT outperforms an existing scheme tested on the same dataset in terms of Normalized Correlation (NC). Finally, the proposed technique offers extra protection for digital images against illegal replication and unapproved tampering.
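The multi-resolution decomposition underlying the DWT stage can be illustrated with a one-level Haar transform; this is only a sketch (the paper does not state which wavelet it uses), with a simple averaging normalisation.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT (averaging normalisation): returns the LL
    approximation and the LH, HL, HH detail sub-bands, each half-sized."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh
```

In a DWT-DCT watermarking scheme the watermark bits typically go into the mid-frequency coefficients of a DCT applied to one of these sub-bands, trading imperceptibility (low-frequency) against robustness to compression (high-frequency).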
Diabetic retinopathy (DR), the main cause of irreversible blindness, is one of the most common complications of diabetes. At present, deep convolutional neural networks have achieved promising performance in automatic DR detection tasks. The convolution operation of these methods is a local cross-correlation operation whose receptive field determines the size of the local neighbourhood being processed. However, retinal fundus photographs contain not only local information but also long-distance dependence between lesion features (e.g. hemorrhages and exudates) scattered throughout the whole image. The proposed method incorporates correlations between long-range patches into the deep learning framework to improve DR detection. Patch-wise relationships are used to enhance the local patch features, since lesions of DR usually appear as plaques. The Long-Range unit in the proposed network, with a residual structure, can be flexibly embedded into other trained networks. Extensive experimental results demonstrate that the proposed approach can achieve higher accuracy than existing state-of-the-art models on the Messidor and EyePACS datasets.
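The Long-Range unit is described only at a high level; one plausible reading is non-local, dot-product attention across patch feature vectors with a residual connection, sketched below (the feature layout and scaling are assumptions).

```python
import numpy as np

def long_range_unit(patches):
    """Self-attention over patch features: each patch is updated with a
    similarity-weighted sum of all patches, plus a residual connection."""
    scores = patches @ patches.T / np.sqrt(patches.shape[1])
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))   # stable softmax
    weights = scores / scores.sum(axis=1, keepdims=True)          # rows sum to 1
    return patches + weights @ patches                            # residual add
```

The residual form means the unit is an identity mapping when the attended update is zero, which is what makes it safe to drop into an already-trained backbone.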
Eye health has become a global health concern and attracted broad attention. Over the years, researchers have proposed many state-of-the-art convolutional neural networks (CNNs) to assist ophthalmologists in diagnosing ocular diseases efficiently and precisely. However, most existing methods were dedicated to constructing sophisticated CNNs, inevitably ignoring the trade-off between performance and model complexity. To alleviate this paradox, this paper proposes a lightweight yet efficient network architecture, the mixed-decomposed convolutional network (MDNet), to recognise ocular diseases. In MDNet, we introduce a novel mixed-decomposed depthwise convolution method, which takes advantage of depthwise convolution and depthwise dilated convolution operations to capture low-resolution and high-resolution patterns with fewer computations and fewer parameters. We conduct extensive experiments on the clinical anterior segment optical coherence tomography (AS-OCT), LAG, University of California San Diego, and CIFAR-100 datasets. The results show that our MDNet achieves a better trade-off between performance and model complexity than efficient CNNs, including MobileNets and MixNets. Specifically, our MDNet outperforms MobileNets by 2.5% in accuracy while using 22% fewer parameters and 30% fewer computations on the AS-OCT dataset.
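The two ingredients named above, depthwise and depthwise dilated convolution, can be sketched in one toy 1-D routine: each channel gets its own kernel, and a dilation greater than 1 enlarges the receptive field at no extra parameter cost. This is an illustration of the standard operations, not MDNet's mixing scheme.

```python
import numpy as np

def depthwise_conv1d(x, kernels, dilation=1):
    """Depthwise convolution on a (C, N) signal: channel ch is filtered only
    by kernels[ch].  Dilation spaces the taps without adding parameters."""
    c, n = x.shape
    k = kernels.shape[1]
    span = (k - 1) * dilation + 1          # effective receptive field
    out = np.zeros((c, n - span + 1))
    for ch in range(c):
        for i in range(n - span + 1):
            out[ch, i] = x[ch, i:i + span:dilation] @ kernels[ch]
    return out
```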
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with the ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, consisting of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA) are inserted between the encoder and decoder, enabling the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of the Dice coefficient (oesophagus 0.8714, heart 0.9516, trachea 0.9286, aorta 0.9510) and Hausdorff distance (oesophagus 0.2541, heart 0.1514, trachea 0.1722, aorta 0.1114), and significantly outperforms the baseline. The approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
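A CS-SA block replaces the usual scaled dot product with cosine similarity between queries and keys; a minimal sketch follows, where the temperature `tau` is an assumed hyper-parameter (cosine scores lie in [-1, 1], so they are rescaled before the softmax).

```python
import numpy as np

def cosine_attention(q, k, v, tau=10.0):
    """Attention with cosine similarity as the query-key score (CS-SA style):
    normalise q and k, scale the cosine scores by tau, softmax, mix values."""
    qn = q / (np.linalg.norm(q, axis=1, keepdims=True) + 1e-8)
    kn = k / (np.linalg.norm(k, axis=1, keepdims=True) + 1e-8)
    scores = tau * (qn @ kn.T)
    scores = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = scores / scores.sum(axis=1, keepdims=True)
    return weights @ v
```

Normalising the vectors makes the score independent of feature magnitude, which is often cited as stabilising attention maps between the encoder and decoder.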
This paper addresses the common orthopedic trauma of spinal vertebral fractures and aims to enhance doctors' diagnostic efficiency. A deep-learning-based automated diagnostic system with multi-label segmentation is proposed to recognize the condition of vertebral fractures. The whole-spine Computed Tomography (CT) image is segmented into fracture, normal, and background regions using U-Net, and the fracture degree of each vertebra is evaluated (Genant semi-qualitative evaluation). The main work of this paper includes the following. First, based on the spatial configuration network (SCN) structure, U-Net is used instead of the SCN feature extraction network. An attention mechanism and residual connections between the convolutional layers are added in the local network (LN) stage. Multiple filtering is added in the global network (GN) stage: each layer of the LN decoder feature map is filtered separately using a dot product, and the filtered features are re-convolved to obtain the GN output heatmap. Second, the network model with the improved SCN (M-SCN) automatically localizes the centre-of-mass position of each vertebra; the voxels around each localized vertebra are then clipped, eliminating a large amount of redundant information (e.g., background and other interfering vertebrae) and keeping the vertebra to be segmented in the centre of the image. Multi-label segmentation of the clipped portion is subsequently performed using U-Net. This paper uses VerSe'19, VerSe'20 (using only data containing vertebral fractures), and private data (provided by Guizhou Orthopedic Hospital) for model training and evaluation. Compared with the original SCN, the M-SCN reduced the prediction error rate by 1.09%, and ablation experiments demonstrated the effectiveness of the improvement. In the vertebral segmentation experiment, the Dice Similarity Coefficient (DSC) reached 93.50% and the Maximum Symmetric Surface Distance (MSSD) was 4.962 mm, with accuracy and recall of 95.82% and 91.73%, respectively. In the experiments, fractured vertebrae were marked in red and normal vertebrae in white, the Genant semi-qualitative assessment results were provided, and spinal localization visualizations and 3D reconstructed views of the spine were produced to analyze the actual predictive ability of the model. The system provides a promising tool for vertebral fracture detection.
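For reference, the Dice similarity coefficient reported above compares a predicted mask against the ground truth as twice the overlap divided by the total mask sizes:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred & target| / (|pred| + |target|), smoothed by eps."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```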
Accurately diagnosing individuals with autism spectrum disorder (ASD) faces great challenges in clinical practice, primarily due to the data's high heterogeneity and limited sample size. To tackle this issue, the authors constructed a deep graph convolutional network (GCN) based on variable multi-graph and multimodal data (VMM-DGCN) for ASD diagnosis. Firstly, the functional connectivity matrix is constructed to extract primary features. Then, the authors devise a variable multi-graph construction strategy to capture multi-scale feature representations of each subject by utilising convolutional filters with varying kernel sizes. Furthermore, the authors bring non-imaging information into the feature representation at each scale and construct multiple population graphs based on multimodal data by fully considering the correlations between subjects. After extracting the deeper features of the population graphs using the deep GCN (DeepGCN), the authors fuse the node features of multiple subgraphs to perform node classification for typical controls and ASD patients. The proposed algorithm was evaluated on the Autism Brain Imaging Data Exchange I (ABIDE I) dataset, achieving an accuracy of 91.62% and an area under the curve of 95.74%. These results demonstrate its outstanding performance compared to other ASD diagnostic algorithms.
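One propagation step over such a population graph can be sketched with the standard symmetric-normalised GCN rule, H' = ReLU(D^{-1/2}(A + I)D^{-1/2} H W) (the Kipf-Welling formulation); the residual connections a deeper DeepGCN adds are not shown.

```python
import numpy as np

def gcn_layer(adj, h, w):
    """One GCN step: add self-loops, symmetrically normalise the adjacency,
    then apply the linear transform w and a ReLU."""
    a_hat = adj + np.eye(adj.shape[0])               # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    a_norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)
```

In the population-graph setting, each node is a subject, `h` holds its imaging features, and the edges of `adj` encode non-imaging similarity (e.g., matching site or sex), so each GCN step mixes a subject's features with those of comparable subjects.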
Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. These complex characteristics make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions; the MSDA module fuses features at different scales and adjusts the attention field of each element dynamically to generate discriminative multi-scale features. Second, a cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions; the CDASC module utilises spatial details from encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
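The core idea of deformable attention, sampling the feature map at dynamically predicted offsets rather than at fixed neighbours, can be shown in a toy 1-D form; in the real module the offsets and weights are predicted by the network per query, which this sketch takes as given inputs.

```python
import numpy as np

def deformable_attention_1d(feature, centers, offsets, weights):
    """Toy 1-D deformable attention: each query samples the feature map at a
    few offset positions around its centre and mixes them with learnt weights."""
    n = feature.shape[0]
    out = np.zeros(len(centers))
    for q, c in enumerate(centers):
        pos = np.clip(c + offsets[q], 0, n - 1)   # dynamically adjusted field
        out[q] = weights[q] @ feature[pos]
    return out
```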
Lightweight deep convolutional neural networks (CNNs) present a good solution for fast and accurate image-guided diagnostic procedures for COVID-19 patients. Recently, the advantages of portable ultrasound (US) imaging, such as simplicity and safe procedures, have attracted many radiologists to scanning suspected COVID-19 cases. In this paper, a new framework of lightweight deep learning classifiers, namely COVID-LWNet, is proposed to identify COVID-19 and pneumonia abnormalities in US images. Compared to traditional deep learning models, lightweight CNNs have shown significant performance in real-time vision applications on mobile devices with limited hardware resources. Four main lightweight deep learning models, namely MobileNets, ShuffleNets, MENet, and MnasNet, are used to identify the health status of lungs from US images. The public image dataset POCUS was used to validate the proposed COVID-LWNet framework successfully. Three classes were investigated in this study: infectious COVID-19, bacterial pneumonia, and the healthy lung. The results showed that the proposed MnasNet classifier achieved the best accuracy score and the shortest training time, at 99.0% and 647.0 s, respectively. This paper demonstrates the feasibility of using the proposed COVID-LWNet framework as a new mobile-based radiological tool for the clinical diagnosis of COVID-19 and other lung diseases.
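The parameter savings that make these models lightweight come largely from depthwise-separable convolutions (the building block of MobileNets and MnasNet); the counting is easy to verify (bias terms omitted):

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a pointwise 1 x 1 convolution."""
    return c_in * k * k + c_in * c_out
```

For a 3x3 layer mapping 32 to 64 channels the separable form needs 2,336 parameters against 18,432 for the standard form, roughly an 8x reduction, which is what makes on-device inference practical.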
Deep learning has been widely used in the field of mammographic image classification owing to its superiority in automatic feature extraction. However, general deep learning models cannot achieve very satisfactory classification results on mammographic images because these models are not specifically designed for such images and do not take their specific traits into account. To exploit the essential discriminant information of mammographic images, we propose a novel classification method based on a convolutional neural network. Specifically, the proposed method designs two branches to extract discriminative features from the mediolateral oblique and craniocaudal (CC) mammographic views. The features extracted from the two-view mammographic images contain complementary information that enables breast cancer to be more easily distinguished. Moreover, an attention block is introduced to capture channel-wise information by adjusting the weight of each feature map, which is beneficial for emphasising the important features of mammographic images. Furthermore, we add a penalty term based on the fuzzy cluster algorithm to the cross-entropy function, which improves the generalisation ability of the classification model by maximising the interclass distance and minimising the intraclass distance of the samples. Experimental results on the Digital Database for Screening Mammography (DDSM), INbreast, and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-of-the-art classification methods.
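The abstract does not give the exact form of the fuzzy-cluster penalty, but a term of the shape "intra-class scatter over inter-class separation" captures the stated goal: minimising it pulls same-class samples together and pushes class centres apart. The ratio form below is an assumption for illustration.

```python
import numpy as np

def cluster_penalty(features, labels):
    """Penalty = mean intra-class scatter / summed inter-centre separation.
    Added to cross-entropy, a smaller value means tighter, better-separated
    class clusters in feature space."""
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = sum(((features[labels == c] - centers[i]) ** 2).sum()
                for i, c in enumerate(classes)) / len(features)
    inter = sum(((centers[i] - centers[j]) ** 2).sum()
                for i in range(len(classes)) for j in range(i + 1, len(classes)))
    return intra / (inter + 1e-8)
```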
Breast cancer is one of the fastest-growing diseases seriously affecting women's health, and it is highly essential to identify and detect it at an earlier stage. This paper applies deep learning algorithms, a more advanced methodology than conventional machine learning algorithms, to classify breast cancer accurately. Deep learning algorithms are fully automatic in learning, extracting, and classifying features and are highly suitable for any image, from natural to medical. Existing methods focused on various conventional and machine learning methods for processing natural and medical images, which is inadequate for images where the coarse structure matters most. Moreover, most input images are downscaled, making it impossible to recover all the hidden details needed for accurate classification. Deep learning algorithms, in contrast, are highly efficient, fully automatic, gain learning capacity through additional hidden layers, extract as much hidden information as possible from the input images, and provide accurate predictions. Hence this paper uses AlexNet, a deep convolutional neural network, for classifying breast cancer in mammogram images. The performance of the proposed convolutional network structure is evaluated by comparing it with existing algorithms.
Detection of epileptic seizures from Electroencephalogram (EEG) recordings is a challenging task due to the complex, non-stationary, and non-linear nature of these biomedical signals. In the existing literature, a number of automatic epileptic seizure detection methods have been proposed that extract useful features from EEG segments and classify them using machine learning algorithms. Some characterizing features of epileptic and non-epileptic EEG signals overlap; therefore, the signals must be analyzed from diverse perspectives. Few studies have analyzed these signals in diverse domains to identify the distinguishing characteristics of epileptic EEG signals. To address this challenge, this paper proposes a fuzzy-based epileptic seizure detection model that incorporates a novel feature extraction and selection method along with fuzzy classifiers. The proposed work extracts pattern features along with time-domain, frequency-domain, and non-linear analysis of the signals. It applies a feature selection strategy to the extracted features to obtain more discriminating features, which are used to build fuzzy machine learning classifiers for the detection of epileptic seizures. The empirical evaluation of the proposed model was conducted on the benchmark Bonn EEG dataset. It shows significant accuracy of 98% to 100% for normal vs. ictal classification, while for the three-class classification of normal vs. inter-ictal vs. ictal, accuracy reaches above 97.5%. The obtained results for ten classification cases (including normal, seizure or ictal, and seizure-free or inter-ictal classes) prove the superior performance of the proposed work compared to other state-of-the-art counterparts.
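Multi-domain feature extraction of the kind described can be sketched as below. The specific features (time-domain statistics, clinical band powers via FFT, line length as a simple non-linear measure) and the Bonn sampling rate of 173.61 Hz are illustrative choices, not the paper's exact feature set.

```python
import numpy as np

def eeg_features(signal, fs=173.61):
    """Small multi-domain feature vector for one EEG segment: time-domain
    statistics, line length, and power in the clinical frequency bands."""
    feats = {
        "mean": signal.mean(),
        "std": signal.std(),
        "line_length": np.abs(np.diff(signal)).sum(),  # simple non-linear measure
    }
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    for name, lo, hi in [("delta", 0.5, 4), ("theta", 4, 8),
                         ("alpha", 8, 13), ("beta", 13, 30)]:
        band = (freqs >= lo) & (freqs < hi)
        feats[f"{name}_power"] = spectrum[band].sum()
    return feats
```

Feature selection would then rank such features across segments, keeping those that separate ictal from inter-ictal and normal classes.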
Brown adipose tissue (BAT) is a kind of adipose tissue engaged in thermoregulatory thermogenesis, metaboloregulatory thermogenesis, and secretion. Current studies have revealed that BAT activity is negatively correlated with adult body weight, and BAT is considered a target tissue for the treatment of obesity and other metabolism-related diseases. Additionally, BAT activity differs somewhat across ages and genders. Clinically, BAT segmentation based on PET/CT data is a reliable method for brown fat research. However, most current BAT segmentation methods rely on the experience of doctors. In this paper, an improved U-Net network, ICA-Unet, is proposed to achieve automatic and precise segmentation of BAT. First, the traditional 2D convolution layer in the encoder is replaced with a depth-wise over-parameterized convolutional (Do-Conv) layer. Second, a channel attention block is introduced between the double-layer convolutions. Finally, an image information entropy (IIE) block is added in the skip connections to strengthen the edge features. The performance of this method is evaluated on a dataset of PET/CT images from 368 patients. The results demonstrate strong agreement between the automatic segmentation of BAT and manual annotation by experts, with an average Dice coefficient (DSC) of 0.9057 and an average Hausdorff distance of 7.2810. These results suggest that the proposed method can achieve efficient and accurate automatic BAT segmentation and satisfy clinical requirements.
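Image information entropy, as used by the IIE block, is the Shannon entropy of the intensity histogram; a sketch for 8-bit images follows (how the block weights features with this quantity is not specified in the abstract).

```python
import numpy as np

def image_entropy(image, bins=256):
    """Shannon entropy (in bits) of an image's intensity histogram.
    0 for a constant image, up to log2(bins) for a uniform histogram."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) terms contribute nothing
    return float(-(p * np.log2(p)).sum())
```

Regions with high entropy tend to contain edges and texture, which is why an entropy-driven block in the skip connections can emphasise boundary detail.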
The extent of the peril associated with cancer can be perceived from the lack of treatment, ineffective early diagnosis techniques, and most importantly its fatality rate. Globally, cancer is the second leading cause of death, and among over a hundred types of cancer, lung cancer is the second most common type as well as the leading cause of cancer-related deaths. An accurate and timely lung cancer diagnosis can nevertheless elevate the likelihood of survival by a noticeable margin, and medical imaging is a prevalent means of cancer diagnosis since it is easily accessible to people around the globe. Nonetheless, this is not eminently efficacious, considering that human inspection of medical images can yield a high false positive rate. Ineffective and inefficient diagnosis is a crucial reason for such a high mortality rate for this malady. However, conspicuous advancements in deep learning and artificial intelligence have stimulated the development of exceedingly precise diagnosis systems. The development and performance of these systems rely prominently on the data used to train them. A standard problem in publicly available medical image datasets is the severe imbalance of data between different classes, which can make a deep learning model biased towards the dominant class and unable to generalize. This study presents an end-to-end convolutional neural network that can accurately differentiate lung nodules from non-nodules and reduce the false positive rate to a bare minimum. To tackle the problem of data imbalance, we oversampled the data by transforming available images in the minority class. The average false positive rate of the proposed method is a mere 1.5 percent, while the average false negative rate is 31.76 percent. The proposed neural network has 68.66 percent sensitivity and 98.42 percent specificity.
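Oversampling by transforming minority-class images can be sketched as follows; the flips and rotations are assumed transforms, since the paper does not enumerate which it used.

```python
import numpy as np

def oversample_minority(images, labels, minority_label):
    """Balance a binary dataset by appending flipped/rotated copies of
    minority-class images until both classes have equal counts."""
    images, labels = list(images), list(labels)
    minority = [im for im, lb in zip(images, labels) if lb == minority_label]
    majority_count = len(labels) - len(minority)
    transforms = [np.fliplr, np.flipud,
                  lambda im: np.rot90(im, 1), lambda im: np.rot90(im, 3)]
    for i in range(majority_count - len(minority)):
        src = minority[i % len(minority)]                 # cycle the originals
        images.append(transforms[i % len(transforms)](src))
        labels.append(minority_label)
    return np.stack(images), np.array(labels)
```

Because the added samples are geometric transforms rather than duplicates, the network sees varied minority examples instead of memorising repeated ones.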
Colon cancer is the third most commonly diagnosed cancer in the world. Most colon AdenoCArcinoma (ACA) arises from pre-existing benign polyps in the mucosa of the bowel. Thus, detecting benign polyps at the earliest stage helps reduce the mortality rate. In this work, a Predictive Modeling System (PMS) is developed for the classification of colon cancer using the Horizontal Voting Ensemble (HVE) method. Identifying different patterns in microscopic images is essential to an effective classification system, and a twelve-layer deep learning architecture has been developed to extract these patterns. The developed HVE algorithm increases the system's performance by combining the models from the last epochs of the proposed architecture. Ten thousand (10,000) microscopic images are used to test the classification performance of the proposed PMS with the HVE method. The microscopic images obtained from colon tissues are classified into ACA or benign by the proposed PMS. Results prove that the proposed PMS achieves an ~8% performance improvement over the architecture without the HVE method. The proposed PMS for colon cancer reduces the misclassification rate and attains 99.2% sensitivity and 99.4% specificity. The overall accuracy of the proposed PMS is 99.3%, compared with only 91.3% without the HVE method.
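Horizontal voting keeps the model snapshots saved over the final training epochs and averages their predictions instead of trusting the last model alone; a sketch over pre-computed class probabilities:

```python
import numpy as np

def horizontal_voting(snapshot_probs):
    """Average class probabilities from several training-epoch snapshots,
    shape (n_snapshots, n_samples, n_classes), then take the argmax."""
    avg = np.mean(snapshot_probs, axis=0)   # (n_samples, n_classes)
    return np.argmax(avg, axis=1)
```

The averaging smooths out the epoch-to-epoch variance of a single model near the end of training, which is where the reported ~8% gain comes from.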
Due to the rising occurrence of skin cancer and inadequate clinical expertise, there is a need to design Artificial Intelligence (AI) based tools to diagnose skin cancer at an earlier stage. Since massive skin lesion datasets exist in the literature, AI-based Deep Learning (DL) models are useful for differentiating benign and malignant skin lesions using dermoscopic images. This study develops an Automated Seeded Region Growing Segmentation with Optimal EfficientNet (ASRGS-OEN) technique for skin lesion segmentation and classification. The proposed ASRGS-OEN technique involves the design of an optimal EfficientNet model in which the hyper-parameter tuning process takes place using the Flower Pollination Algorithm (FPA). In addition, a Multiwheel Attention Memory Network Encoder (MWAMNE) based classification technique is employed for identifying the appropriate class labels of the dermoscopic images. A comprehensive simulation analysis of the ASRGS-OEN technique takes place, and the results are inspected under several dimensions. The simulation results highlight the supremacy of the ASRGS-OEN technique on the applied dermoscopic images compared to recently developed approaches.
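FPA-based hyper-parameter tuning can be sketched as the minimal optimiser below. The heavy-tailed Cauchy step standing in for the Lévy flight, the switch probability p = 0.8, and the toy objective are all assumptions rather than the paper's exact configuration.

```python
import numpy as np

def flower_pollination(objective, bounds, n_flowers=10, iters=200, p=0.8, seed=0):
    """Minimal Flower Pollination Algorithm (minimisation).  Global pollination
    moves a flower towards the current best with a heavy-tailed random step;
    local pollination mixes two random flowers.  Improvements are kept greedily."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(n_flowers, lo.size))
    fit = np.array([objective(x) for x in pop])
    for _ in range(iters):
        best = pop[fit.argmin()].copy()
        for i in range(n_flowers):
            if rng.random() < p:                          # global pollination
                step = 0.1 * rng.standard_cauchy(lo.size)
                cand = pop[i] + step * (best - pop[i])
            else:                                         # local pollination
                j, k = rng.integers(0, n_flowers, size=2)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    return pop[fit.argmin()].copy(), fit.min()
```

In the paper's setting the objective would be validation loss as a function of EfficientNet hyper-parameters; the quadratic bowl below is only a smoke test of the optimiser.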
Abstract: To enable fast transmission and processing of medical images without installing client software or plug-ins, this paper designs a medical image reading system based on the Browser/Server (B/S) architecture. The system improves client-side image processing in the existing IWEB PACS framework and builds the medical image service on the WEB completion-port model to realise fast image loading under high concurrency. Compared with traditional WEB PACS, the system requires no client or plug-in installation while greatly improving image transmission and processing performance.
Abstract: The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address the security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
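As a rough illustration of how bit-plane decomposition and chaos theory combine in lightweight schemes of this kind, the sketch below XORs each of the eight bit planes of an 8-bit image with a keystream generated by the logistic map. This is a generic, hypothetical construction (the parameters x0 and r act as the secret key), not the paper's actual cipher:

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Iterate the logistic map x -> r*x*(1-x) and threshold to bits."""
    x, bits = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        bits[i] = 1 if x > 0.5 else 0
    return bits

def chaos_xor(img, x0=0.3141, r=3.99):
    """Encrypt/decrypt an 8-bit image: split it into 8 bit planes and XOR
    each plane with its own chaotic bit mask (XOR is self-inverse)."""
    flat = img.ravel()
    out = np.zeros_like(flat)
    for plane in range(8):
        bits = (flat >> plane) & 1
        mask = logistic_keystream(x0 + plane * 1e-3, r, flat.size)
        out |= ((bits ^ mask) << plane).astype(flat.dtype)
    return out.reshape(img.shape)

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
enc = chaos_xor(img)
dec = chaos_xor(enc)          # the same keystream restores the image
assert np.array_equal(dec, img)
```

Because XOR is self-inverse, running the same function with the same key decrypts; a key differing even slightly in x0 yields a diverging chaotic orbit and recovers only noise, which is the key-sensitivity property the abstract mentions.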
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (RGP 2/25/43), and to Taif University Researchers Supporting Project Number (TURSP-2020/346), Taif University, Taif, Saudi Arabia.
Abstract: Medical image processing has become a hot research topic in the healthcare sector for effective decision making and diagnosis of diseases. Magnetic resonance imaging (MRI) is a widely utilized tool for the classification and detection of prostate cancer. Since the manual screening process for prostate cancer is difficult, automated diagnostic methods become essential. This study develops a novel Deep Learning based Prostate Cancer Classification (DTL-PSCC) model using MRI images. The presented DTL-PSCC technique uses an EfficientNet-based feature extractor to generate a set of feature vectors. In addition, the fuzzy k-nearest neighbour (FKNN) model is utilized for the classification process, in which class labels are allotted to the input MRI images. Moreover, the membership values of the FKNN model are optimally tuned using the krill herd algorithm (KHA), which results in improved classification performance. To demonstrate the classification performance of the DTL-PSCC technique, a wide range of simulations is carried out on benchmark MRI datasets. The extensive comparative results confirm the superiority of the DTL-PSCC technique over recent methods, with a maximum accuracy of 85.09%.
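The fuzzy k-NN classification step can be sketched as follows: each query receives a membership degree per class, computed as a distance-weighted average over its k nearest neighbours (Keller-style weighting with fuzzifier m). The toy 2-D data are invented for illustration, and the KHA tuning of memberships is not shown:

```python
import numpy as np

def fuzzy_knn(X_train, y_train, x, k=3, m=2.0, eps=1e-9):
    """Fuzzy k-NN: the class membership of x is a distance-weighted
    average of the memberships of its k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)   # inverse-distance weights
    classes = np.unique(y_train)
    u = np.array([w[y_train[nn] == c].sum() for c in classes]) / w.sum()
    return classes[u.argmax()], u

# Toy feature vectors: two tight clusters, one per class.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
label, memberships = fuzzy_knn(X, y, np.array([0.05, 0.05]))
print(label)   # 0 — the query sits among the class-0 points
```

Unlike crisp k-NN, the membership vector `u` quantifies how confidently the query belongs to each class, which is what the KHA optimises in the paper.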
Funding: Supported by the National Natural Science Foundation of China (NSFC) (61976123, 62072213), the Taishan Young Scholars Program of Shandong Province, and the Key Development Program for Basic Research of Shandong Province (ZR2020ZD44).
Abstract: In the intelligent perception and diagnosis of medical equipment, visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction built on the traditional U-Net model for medical image segmentation. More explicitly, we utilise a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Image features are then combined at each layer to produce richer semantic information and to adapt the model's capacity to various input images. Meanwhile, in the downsampling section, we abandon pooling and instead downsample by strided convolution to control the step size for information fusion. We also employ an attention module in the decoder stage to filter image noise and lessen the response of irrelevant features. Experiments are verified and compared on the DRIVE and ARIA datasets for retinal vessel segmentation. The proposed Dual-Branch-UNet proves superior to five other typical state-of-the-art methods.
Funding: This work was supported by a Kyungnam University Foundation Grant, 2020.
Abstract: Medical image compression is one of the essential technologies to facilitate real-time medical data transmission in remote healthcare applications. In general, image compression can introduce undesired coding artifacts, such as blocking artifacts and ringing effects. In this paper, we propose a Multi-Scale Feature Attention Network (MSFAN) with two essential parts, multi-scale feature extraction layers and feature attention layers, to efficiently remove the coding artifacts of compressed medical images. The multi-scale feature extraction layers have four Feature Extraction (FE) blocks, each consisting of five convolution layers and one CA block for weighted skip connection. To optimize the proposed network architecture, a variety of verification tests were conducted using a validation dataset. We used the Computer Vision Center Clinic Database (CVC-ClinicDB), consisting of 612 colonoscopy medical images, to evaluate the enhancement of image restoration. The proposed MSFAN achieves improved PSNR gains as high as 0.25 and 0.24 dB on average compared to DnCNN and DCSC, respectively.
Abstract: Assuring the protection and robustness of medical images is a compulsory necessity nowadays. In this paper, a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform (DWT) with the energy compaction of the Discrete Cosine Transform (DCT). The multi-level Encryption-based Hybrid Fusion Technique (EbHFT) aims to achieve great advances in the imperceptibility and security of medical images. A DWT-disintegrated sub-band of a cover image is reformed simultaneously using the DCT transform. Afterwards, a 64-bit hex key is employed to encrypt the host image as well as to participate in the second key-creation process to encode the watermark. Lastly, a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT. Thus, the watermarked image is generated by embedding both keys into the DWT and DCT coefficients. The fusion ability of the proposed EbHFT technique makes the best use of the distinct advantages of both the DWT and DCT methods. To validate the proposed technique, a standard dataset of medical images is used. Simulation results show high visual quality (i.e., 57.65) for the watermarked forms of all types of medical images. In addition, the robustness of EbHFT outperforms an existing scheme tested on the same dataset in terms of Normalized Correlation (NC). Finally, the proposed technique provides extra protection for digital images against illegal replication and unapproved tampering.
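The first stage of such DWT/DCT hybrid schemes is the wavelet decomposition of the cover image into sub-bands. A minimal one-level 2-D Haar DWT (average/difference filtering along rows then columns) looks like this; the sample image is arbitrary, and the subsequent DCT reformation and key layers of EbHFT are omitted:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns the LL, LH, HL, HH sub-bands.
    LL is the low-frequency approximation typically used for embedding."""
    a = img.astype(float)
    lo = (a[:, ::2] + a[:, 1::2]) / 2.0   # row averages
    hi = (a[:, ::2] - a[:, 1::2]) / 2.0   # row differences
    ll = (lo[::2] + lo[1::2]) / 2.0
    lh = (lo[::2] - lo[1::2]) / 2.0
    hl = (hi[::2] + hi[1::2]) / 2.0
    hh = (hi[::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh

img = np.array([[10, 10, 20, 20],
                [10, 10, 20, 20],
                [30, 30, 40, 40],
                [30, 30, 40, 40]])
ll, lh, hl, hh = haar_dwt2(img)
print(ll)   # [[10. 20.] [30. 40.]] — constant 2x2 blocks leave no detail
```

A watermark bit can then be hidden by perturbing selected LL (or DCT-transformed) coefficients, since small changes there remain imperceptible after reconstruction.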
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 62001141, 62272319; Science, Technology and Innovation Commission of Shenzhen Municipality, Grant/Award Numbers: GJHZ20210705141812038, JCYJ20210324094413037, JCYJ20210324131800002, RCBS20210609103820029; Stable Support Projects for Shenzhen Higher Education Institutions, Grant/Award Number: 20220715183602001.
Abstract: Diabetic retinopathy (DR), the main cause of irreversible blindness, is one of the most common complications of diabetes. At present, deep convolutional neural networks have achieved promising performance in automatic DR detection tasks. The convolution operation of these methods is a local cross-correlation operation, whose receptive field determines the size of the local neighbourhood it processes. However, retinal fundus photographs carry not only local information but also long-distance dependence between lesion features (e.g., hemorrhages and exudates) scattered throughout the whole image. The proposed method incorporates correlations between long-range patches into the deep learning framework to improve DR detection. Patch-wise relationships are used to enhance local patch features, since DR lesions usually appear as plaques. The Long-Range unit in the proposed network has a residual structure and can be flexibly embedded into other trained networks. Extensive experimental results demonstrate that the proposed approach achieves higher accuracy than existing state-of-the-art models on the Messidor and EyePACS datasets.
Funding: Stable Support Plan Program, Grant/Award Number: 20200925174052004; Shenzhen Natural Science Fund, Grant/Award Number: JCYJ20200109140820699; National Natural Science Foundation of China, Grant/Award Number: 82272086; Guangdong Provincial Department of Education, Grant/Award Numbers: 2020ZDZX3043, SJZLGC202202; Guangdong Provincial Key Laboratory, Grant/Award Number: 2020B121201001.
Abstract: Eye health has become a global health concern and has attracted broad attention. Over the years, researchers have proposed many state-of-the-art convolutional neural networks (CNNs) to assist ophthalmologists in diagnosing ocular diseases efficiently and precisely. However, most existing methods are dedicated to constructing sophisticated CNNs and inevitably ignore the trade-off between performance and model complexity. To alleviate this paradox, this paper proposes a lightweight yet efficient network architecture, the mixed-decomposed convolutional network (MDNet), to recognise ocular diseases. In MDNet, we introduce a novel mixed-decomposed depthwise convolution method, which takes advantage of depthwise convolution and depthwise dilated convolution operations to capture low-resolution and high-resolution patterns with fewer computations and fewer parameters. We conduct extensive experiments on the clinical anterior segment optical coherence tomography (AS-OCT), LAG, University of California San Diego, and CIFAR-100 datasets. The results show that our MDNet achieves a better trade-off between performance and model complexity than efficient CNNs including MobileNets and MixNets. Specifically, our MDNet outperforms MobileNets by 2.5% in accuracy while using 22% fewer parameters and 30% fewer computations on the AS-OCT dataset.
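To see why mixing depthwise and depthwise dilated convolutions is cheap, consider a 1-D sketch: each channel is filtered only by its own small kernel, and a dilation of 2 widens the receptive field without adding parameters. The toy signal and kernels below are illustrative, not MDNet's actual operators:

```python
import numpy as np

def depthwise_conv1d(x, kernels, dilation=1):
    """Per-channel 1-D convolution (valid padding): channel c of the
    input is filtered only by kernel c, as in depthwise convolution."""
    C, L = x.shape
    k = kernels.shape[1]
    span = (k - 1) * dilation + 1          # receptive field of one output
    out = np.zeros((C, L - span + 1))
    for c in range(C):
        for i in range(L - span + 1):
            out[c, i] = np.dot(x[c, i:i + span:dilation], kernels[c])
    return out

x = np.tile(np.arange(8.0), (2, 1))        # 2 channels, length 8
k = np.array([[1.0, -1.0], [0.5, 0.5]])    # one kernel per channel
print(depthwise_conv1d(x, k))              # ch0: differences; ch1: moving average
print(depthwise_conv1d(x, k, dilation=2).shape)  # (2, 6) — wider field, same 4 weights
```

A standard convolution would mix all input channels into every output channel; the depthwise form keeps channels separate, which is where the parameter and computation savings come from.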
Funding: The PID2022-137451OB-I00 and PID2022-137629OA-I00 projects funded by MICIU/AEI/10.13039/501100011033 and by ERDF/EU.
Abstract: Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with a ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, consisting of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA) are inserted between the encoder and decoder, enabling the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of the Dice coefficient (oesophagus 0.8714, heart 0.9516, trachea 0.9286, aorta 0.9510) and Hausdorff distance (oesophagus 0.2541, heart 0.1514, trachea 0.1722, aorta 0.1114), and significantly outperforms the baseline. The approach is thus demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
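The core of a CS-SA block — replacing the scaled dot product with cosine similarity as the query-key score — can be sketched as below. The temperature `tau` and the toy tensors are assumptions for illustration; the full two-branch U-Net and consistency regularisation are not shown:

```python
import numpy as np

def cosine_attention(Q, K, V, tau=10.0, eps=1e-8):
    """Self-attention whose query-key score is cosine similarity
    instead of a scaled dot product; tau sharpens the softmax."""
    Qn = Q / (np.linalg.norm(Q, axis=-1, keepdims=True) + eps)
    Kn = K / (np.linalg.norm(K, axis=-1, keepdims=True) + eps)
    scores = tau * (Qn @ Kn.T)                  # bounded in [-tau, tau]
    scores -= scores.max(axis=-1, keepdims=True)  # numerically stable softmax
    A = np.exp(scores)
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = cosine_attention(Q, K, V)
print(out.shape)   # (4, 8)
```

Because cosine similarity is bounded, the attention logits cannot blow up with feature magnitude, which is one motivation for using it in place of the raw dot product.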
Abstract: This paper addresses the common orthopedic trauma of spinal vertebral fractures and aims to enhance doctors' diagnostic efficiency. A deep-learning-based automated diagnostic system with multi-label segmentation is proposed to recognize the condition of vertebral fractures. The whole-spine Computed Tomography (CT) image is segmented into fracture, normal, and background using U-Net, and the fracture degree of each vertebra is evaluated (Genant semi-quantitative evaluation). The main work of this paper includes: First, based on the spatial configuration network (SCN) structure, U-Net is used instead of the SCN feature-extraction network. An attention mechanism and residual connections between the convolutional layers are added in the local network (LN) stage. Multiple filtering is added in the global network (GN) stage: each layer of the LN decoder feature map is filtered separately using a dot product, and the filtered features are re-convolved to obtain the GN output heatmap. Second, the network model with the improved SCN (M-SCN) automatically localizes the centre-of-mass position of each vertebra; the voxels around each localized vertebra are clipped, eliminating a large amount of redundant information (e.g., background and other interfering vertebrae) and keeping the vertebrae to be segmented in the centre of the image. Multi-label segmentation of the clipped portion is subsequently performed using U-Net. This paper uses VerSe'19, VerSe'20 (using only data containing vertebral fractures), and private data (provided by Guizhou Orthopedic Hospital) for model training and evaluation. Compared with the original SCN network, the M-SCN reduced the prediction error rate by 1.09%, and ablation experiments demonstrated the effectiveness of the improvement. In the vertebral segmentation experiment, the Dice Similarity Coefficient (DSC) reached 93.50% and the Maximum Symmetry Surface Distance (MSSD) was 4.962 mm, with accuracy and recall of 95.82% and 91.73%, respectively. Fractured vertebrae were marked red and normal vertebrae white in the experiment, and the Genant semi-quantitative assessment results were provided, together with spinal localization visualizations and 3D reconstructed views of the spine, to analyze the actual predictive ability of the model. The system provides a promising tool for vertebral fracture detection.
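The Dice Similarity Coefficient used to report the segmentation results above is simply twice the mask overlap divided by the total mask sizes; the tiny binary masks below are invented for illustration:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC = 2|P ∩ T| / (|P| + |T|) on binary masks; eps avoids 0/0."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```

A DSC of 93.50%, as reported above, therefore means the predicted and annotated vertebra masks overlap almost completely relative to their combined size.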
Funding: National Natural Science Foundation of China, Grant/Award Number: 62172139; Science Research Project of Hebei Province, Grant/Award Number: CXY2024031; Natural Science Foundation of Hebei Province, Grant/Award Number: F2022201055; Project Funded by the China Postdoctoral Science Foundation, Grant/Award Number: 2022M713361; Natural Science Interdisciplinary Research Program of Hebei University, Grant/Award Number: DXK202102; Open Project Program of the National Laboratory of Pattern Recognition, Grant/Award Number: 202200007.
Abstract: Accurately diagnosing individuals with autism spectrum disorder (ASD) faces great challenges in clinical practice, primarily due to the high heterogeneity of the data and the limited sample size. To tackle this issue, the authors constructed a deep graph convolutional network (GCN) based on a variable multi-graph and multimodal data (VMM-DGCN) for ASD diagnosis. Firstly, the functional connectivity matrix is constructed to extract primary features. Then, a variable multi-graph construction strategy captures the multi-scale feature representations of each subject by utilising convolutional filters with varying kernel sizes. Furthermore, the non-imaging information is brought into the feature representation at each scale, and multiple population graphs are constructed from the multimodal data by fully considering the correlations between subjects. After extracting the deeper features of the population graphs using the deep GCN (DeepGCN), the node features of multiple subgraphs are fused to perform node-classification tasks for typical controls and ASD patients. The proposed algorithm was evaluated on the Autism Brain Imaging Data Exchange I (ABIDE I) dataset, achieving an accuracy of 91.62% and an area under the curve of 95.74%. These results demonstrate its outstanding performance compared to other ASD diagnostic algorithms.
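A single layer of such a population-graph GCN propagates subject features over a symmetrically normalised adjacency matrix with self-loops (the standard Kipf–Welling formulation). The 4-subject graph, one-hot features, and weight matrix below are toy assumptions, not the paper's data:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ X @ W, 0.0)         # ReLU activation

# Hypothetical population graph: 4 subjects, edges encode subject similarity.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.eye(4)                                      # one-hot node features
W = np.full((4, 2), 0.5)                           # toy weight matrix
H = gcn_layer(A, X, W)
print(H.shape)    # (4, 2) — one embedding per subject
```

Stacking several such layers lets each subject's embedding absorb information from progressively larger neighbourhoods of similar subjects before node classification.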
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 62377026, 62201222; Knowledge Innovation Program of Wuhan-Shuguang Project, Grant/Award Number: 2023010201020382; National Key Research and Development Programme of China, Grant/Award Number: 2022YFD1700204; Fundamental Research Funds for the Central Universities, Grant/Award Numbers: CCNU22QN014, CCNU22JC007, CCNU22XJ034.
Abstract: Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. These complex characteristics make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions; the MSDA module fuses features at different scales and dynamically adjusts the attention field of each element to generate discriminative multi-scale features. Second, the cross deformable attention-based skip connection (CDASC) module is designed to model the irregular-edge characteristic of SAH lesions; the CDASC module utilises spatial details from the encoder features to refine the spatial information of the decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
Funding: This research was supported by Taif University Researchers Supporting Project Number (TURSP-2020/147), Taif University, Taif, Saudi Arabia.
Abstract: Lightweight deep convolutional neural networks (CNNs) present a good solution for fast and accurate image-guided diagnostic procedures for COVID-19 patients. Recently, the advantages of portable ultrasound (US) imaging, such as simplicity and safe procedures, have attracted many radiologists to scanning suspected COVID-19 cases. In this paper, a new framework of lightweight deep learning classifiers, namely COVID-LWNet, is proposed to identify COVID-19 and pneumonia abnormalities in US images. Compared to traditional deep learning models, lightweight CNNs have shown significant performance in real-time vision applications on mobile devices with limited hardware resources. Four main lightweight deep learning models, namely MobileNets, ShuffleNets, MENet, and MnasNet, are employed to identify the health status of the lungs from US images. The public image dataset POCUS was used to validate the proposed COVID-LWNet framework. Three classes were investigated in this study: infectious COVID-19, bacterial pneumonia, and the healthy lung. The results show that the proposed MnasNet classifier achieved the best accuracy and shortest training time of 99.0% and 647.0 s, respectively. This paper demonstrates the feasibility of using the proposed COVID-LWNet framework as a new mobile-based radiological tool for the clinical diagnosis of COVID-19 and other lung diseases.
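The parameter savings that make families such as MobileNets and MnasNet lightweight come largely from replacing standard convolutions with depthwise separable ones. The arithmetic for one hypothetical 3x3 layer with 64 input and 128 output channels (layer sizes chosen for illustration only):

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k (one filter per channel) + 1x1 pointwise mixing."""
    return k * k * c_in + c_in * c_out

std = conv_params(64, 128, 3)                  # 9*64*128 = 73728
dws = depthwise_separable_params(64, 128, 3)   # 576 + 8192 = 8768
print(std, dws, round(std / dws, 1))           # ~8.4x fewer parameters
```

The same ratio roughly carries over to multiply-accumulate operations, which is why such backbones run in real time on phones and portable scanners.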
Funding: Guangdong Basic and Applied Basic Research Foundation, Grant/Award Number: 2019A1515110582; Shenzhen Key Laboratory of Visual Object Detection and Recognition, Grant/Award Number: ZDSYS20190902093015527; National Natural Science Foundation of China, Grant/Award Number: 61876051.
Abstract: Deep learning has been widely used in mammographic image classification owing to its superiority in automatic feature extraction. However, general deep learning models cannot achieve very satisfactory classification results on mammographic images because they are not specifically designed for such images and do not take their specific traits into account. To exploit the essential discriminant information of mammographic images, we propose a novel classification method based on a convolutional neural network. Specifically, the proposed method designs two branches to extract discriminative features from the mediolateral oblique and craniocaudal (CC) mammographic views. The features extracted from the two-view mammographic images contain complementary information that enables breast cancer to be distinguished more easily. Moreover, an attention block is introduced to capture channel-wise information by adjusting the weight of each feature map, which is beneficial for emphasising the important features of mammographic images. Furthermore, we add a penalty term based on the fuzzy cluster algorithm to the cross-entropy function, which improves the generalisation ability of the classification model by maximising the inter-class distance and minimising the intra-class distance of the samples. The experimental results on the Digital Database for Screening Mammography (DDSM), INbreast, and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-of-the-art classification methods.
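A penalty of the kind described — rewarding small intra-class scatter and large inter-class separation in feature space — can be sketched as below and added to the cross-entropy loss. This is a generic formulation with invented toy features, not the paper's exact fuzzy-cluster term:

```python
import numpy as np

def cluster_penalty(features, labels):
    """Mean squared distance of samples to their own class centre
    minus the mean squared distance between class centres: minimising
    this tightens clusters and pushes classes apart."""
    classes = np.unique(labels)
    centres = np.stack([features[labels == c].mean(axis=0) for c in classes])
    intra = np.mean([np.sum((features[labels == c] - centres[i]) ** 2)
                     / max(1, (labels == c).sum())
                     for i, c in enumerate(classes)])
    diffs = centres[:, None, :] - centres[None, :, :]
    inter = np.sum(diffs ** 2) / (len(classes) * (len(classes) - 1))
    return intra - inter

# Two tight, well-separated toy clusters give a strongly negative penalty.
f = np.array([[0.0, 0.0], [0.2, 0.0], [3.0, 3.0], [3.2, 3.0]])
y = np.array([0, 0, 1, 1])
print(cluster_penalty(f, y) < 0)   # True
```

In training, a weighted version of this term would be added to the cross-entropy so that the network learns features that are both discriminative and compactly clustered.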
Abstract: Breast cancer is one of the fastest-growing diseases seriously affecting women's health, and it is highly essential to identify and detect it at an early stage. This paper uses deep learning, a methodology more advanced than conventional machine learning algorithms, to classify breast cancer accurately. Deep learning algorithms learn, extract, and classify features fully automatically and are highly suitable for any image, from natural to medical. Existing methods have focused on various conventional and machine learning methods for processing natural and medical images, which is inadequate for images where the coarse structure matters most. Most input images are downscaled, making it impossible to recover all the hidden details needed for accurate classification. Deep learning algorithms, in contrast, are highly efficient, fully automatic, have greater learning capability through more hidden layers, fetch as much hidden information as possible from the input images, and provide accurate predictions. Hence, this paper uses AlexNet, a deep convolutional neural network, for classifying breast cancer in mammogram images. The performance of the proposed convolutional network structure is evaluated by comparing it with existing algorithms.
Funding: This work was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (Grant No. NRF-2020R1I1A3074141), the Brain Research Program through the NRF funded by the Ministry of Science, ICT and Future Planning (Grant No. NRF-2019M3C7A1020406), and the "Regional Innovation Strategy (RIS)" through the NRF funded by the Ministry of Education.
Abstract: Detection of epileptic seizures on the basis of electroencephalogram (EEG) recordings is a challenging task due to the complex, non-stationary, and non-linear nature of these biomedical signals. In the existing literature, a number of automatic epileptic seizure detection methods have been proposed that extract useful features from EEG segments and classify them using machine learning algorithms. Some characterizing features of epileptic and non-epileptic EEG signals overlap; therefore, the signals must be analysed from diverse perspectives, yet few studies have done so to identify the distinguishing characteristics of epileptic EEG signals. To address this challenge, a fuzzy-based epileptic seizure detection model is proposed that incorporates a novel feature extraction and selection method along with fuzzy classifiers. The proposed work extracts pattern features along with time-domain, frequency-domain, and non-linear analyses of the signals. It applies a feature selection strategy to the extracted features to obtain more discriminating features, which are used to build fuzzy machine learning classifiers for the detection of epileptic seizures. The empirical evaluation of the proposed model was conducted on the benchmark Bonn EEG dataset. It shows significant accuracy of 98% to 100% for normal vs. ictal classification, while for three-class classification of normal vs. inter-ictal vs. ictal, accuracy reaches above 97.5%. The results obtained for ten classification cases (including normal, seizure or ictal, and seizure-free or inter-ictal classes) prove the superior performance of the proposed work compared to state-of-the-art counterparts.
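Typical time-domain descriptors extracted from an EEG segment in such pipelines include the mean, variance, line length, and zero-crossing count. The sketch below computes them for a synthetic 5 Hz sinusoid; the feature set and test signal are illustrative, not the paper's exact features:

```python
import numpy as np

def eeg_time_features(x):
    """A few common time-domain descriptors of an EEG segment."""
    return {
        "mean": float(x.mean()),
        "var": float(x.var()),
        "line_length": float(np.abs(np.diff(x)).sum()),   # total variation
        "zero_crossings": int((np.diff(np.signbit(x).astype(int)) != 0).sum()),
    }

# Synthetic 1-second "EEG" segment: a pure 5 Hz oscillation at 200 Hz sampling.
t = np.arange(200) / 200.0
seg = np.sin(2 * np.pi * 5 * t)
feats = eeg_time_features(seg)
print(round(feats["var"], 2))   # 0.5 — the variance of a unit sinusoid
```

During a seizure, descriptors such as line length and variance typically rise sharply, which is why simple features like these remain competitive inputs to fuzzy classifiers.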
Funding: Supported in part by the National Natural Science Foundation of China (61701403, 82122033, 81871379), the National Key Research and Development Program of China (2016YFC0103804, 2019YFC1521103, 2020YFC1523301, 2019YFC1521102), Key R&D Projects in Shaanxi Province (2019ZDLSF07-02, 2019ZDLGY10-01), Key R&D Projects in Qinghai Province (2020-SF-143), the China Postdoctoral Science Foundation (2018M643719), and the Young Talent Support Program of the Shaanxi Association for Science and Technology (20190107).
Abstract: Brown adipose tissue (BAT) is a kind of adipose tissue engaging in thermoregulatory thermogenesis, metaboloregulatory thermogenesis, and secretion. Current studies have revealed that BAT activity is negatively correlated with adult body weight, and BAT is considered a target tissue for the treatment of obesity and other metabolism-related diseases. Additionally, BAT activity differs somewhat between ages and genders. Clinically, BAT segmentation based on PET/CT data is a reliable method for brown fat research. However, most current BAT segmentation methods rely on the experience of doctors. In this paper, an improved U-Net network, ICA-Unet, is proposed to achieve automatic and precise segmentation of BAT. First, the traditional 2D convolution layer in the encoder is replaced with a depth-wise over-parameterized convolution (Do-Conv) layer. Second, a channel attention block is introduced between the double-layer convolutions. Finally, an image information entropy (IIE) block is added to the skip connections to strengthen the edge features. The performance of this method is evaluated on a dataset of PET/CT images from 368 patients. The results demonstrate strong agreement between the automatic segmentation of BAT and manual annotation by experts: the average Dice coefficient (DSC) is 0.9057 and the average Hausdorff distance is 7.2810. Experimental results suggest that the proposed method achieves efficient and accurate automatic BAT segmentation and satisfies the clinical requirements for BAT.
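The image information entropy underlying the IIE block can be computed as the Shannon entropy of the grey-level histogram; for an 8-bit image the maximum is 8 bits per pixel. A minimal sketch with synthetic images (how the paper applies it inside the skip connections is not shown):

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy (bits/pixel) of an 8-bit image's grey-level
    histogram; 8.0 is the maximum, reached when all 256 levels are
    equally likely."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # 0*log(0) is taken as 0
    return float(-(p * np.log2(p)).sum())

uniform = np.arange(256, dtype=np.uint8).reshape(16, 16)  # every level once
flat = np.zeros((16, 16), dtype=np.uint8)                 # a single level
print(image_entropy(uniform))   # 8.0, the 8-bit maximum
```

Regions rich in edges and texture carry higher local entropy than flat background, which is why an entropy-derived weight can emphasise edge features in skip connections.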
Funding: This research was supported through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT (2019M3F2A1073387), and by the Institute for Information & communications Technology Promotion (IITP) (No. 2022-0-00980, Cooperative Intelligence Framework of Scene Perception for Autonomous IoT Device).
Abstract: The extent of the peril associated with cancer can be perceived from the lack of treatment, ineffective early diagnosis techniques, and, most importantly, its fatality rate. Globally, cancer is the second leading cause of death, and among over a hundred types of cancer, lung cancer is the second most common type as well as the leading cause of cancer-related deaths. However, an accurate and timely lung cancer diagnosis can elevate the likelihood of survival by a noticeable margin, and medical imaging is a prevalent means of cancer diagnosis since it is easily accessible to people around the globe. Nonetheless, it is not eminently efficacious, considering that human inspection of medical images can yield a high false positive rate. Ineffective and inefficient diagnosis is a crucial reason for the high mortality rate of this malady. However, the conspicuous advancements in deep learning and artificial intelligence have stimulated the development of exceedingly precise diagnosis systems. The development and performance of these systems rely prominently on the data used to train them. A standard problem witnessed in publicly available medical image datasets is the severe imbalance of data between different classes. This grave imbalance can make a deep learning model biased towards the dominant class and unable to generalize. This study presents an end-to-end convolutional neural network that can accurately differentiate lung nodules from non-nodules and reduce the false positive rate to a bare minimum. To tackle the problem of data imbalance, we oversampled the data by transforming the available images in the minority class. The average false positive rate in the proposed method is a mere 1.5 percent, while the average false negative rate is 31.76 percent. The proposed neural network has 68.66 percent sensitivity and 98.42 percent specificity.
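The oversampling strategy described — balancing classes by appending geometric transforms of minority-class images — can be sketched as follows; the flip/rotate transform set and the toy arrays are assumptions, not the paper's exact augmentation pipeline:

```python
import numpy as np

def oversample_minority(images, labels, minority):
    """Balance a binary image set by appending flipped/rotated copies
    of the minority class until both classes have equal counts."""
    transforms = [np.fliplr, np.flipud, np.rot90]
    pool = [img for img, y in zip(images, labels) if y == minority]
    need = sum(y != minority for y in labels) - len(pool)
    extra = [transforms[i % 3](pool[i % len(pool)]) for i in range(need)]
    return list(images) + extra, list(labels) + [minority] * need

# Toy set: 5 majority "non-nodule" images, 1 minority "nodule" image.
imgs = [np.arange(4).reshape(2, 2)] * 5 + [np.ones((2, 2))]
ys = [0] * 5 + [1]
imgs2, ys2 = oversample_minority(imgs, ys, minority=1)
print(ys2.count(0), ys2.count(1))   # 5 5 — classes now balanced
```

Because the added samples are transformed rather than exact copies, the network sees varied minority examples instead of memorising duplicates, which reduces the bias towards the dominant class.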