Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
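The abstract does not specify the EA's encoding or fitness function, so the following is a minimal sketch under assumed choices: a toy genome of (filter count, kernel size) and a surrogate fitness standing in for validation error, illustrating only the elitist mutate-and-select loop.

```python
import random

def fitness(genome):
    # Toy surrogate for validation error; a real system would train and
    # evaluate a candidate CNN here. The "optimum" (32 filters, kernel 3)
    # is an arbitrary assumption for illustration.
    n_filters, kernel = genome
    return abs(n_filters - 32) + 4 * abs(kernel - 3)

def evolve(pop_size=8, generations=25, seed=0):
    rng = random.Random(seed)
    filters, kernels = [8, 16, 32, 64, 128], [1, 3, 5, 7]
    pop = [(rng.choice(filters), rng.choice(kernels)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]   # elitist truncation selection
        children = []
        for nf, k in parents:
            if rng.random() < 0.5:       # per-gene mutation
                nf = rng.choice(filters)
            if rng.random() < 0.5:
                k = rng.choice(kernels)
            children.append((nf, k))
        pop = parents + children         # parents survive (elitism)
    return min(pop, key=fitness)
```

Because parents survive each generation, the best fitness found is monotone non-increasing over generations.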
Diabetic retinopathy (DR) diagnosis through digital fundus images requires clinical experts to recognize the presence and importance of many intricate features. This task is very difficult for ophthalmologists and time-consuming. Therefore, many computer-aided diagnosis (CAD) systems were developed to automate this screening process of DR. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer-learning-based convolutional neural network (PCNN) to recognize the five stages of DR through retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptual-oriented color space to enhance the DR-related lesions, and then a standard pre-trained PCNN model is improved to obtain high classification results. The architecture of the PCNN model is based on four main phases. Firstly, the training process of the proposed PCNN is accomplished by using the expected gradient length (EGL) to decrease the image-labeling effort during the training of the CNN model. Secondly, the most informative patches and images were automatically selected using a few training labeled samples. Thirdly, the PCNN method generated useful masks for prognostication and identified regions of interest. Fourthly, the DR-related lesions involved in the classification task, such as micro-aneurysms, hemorrhages, and exudates, were detected and then used for recognition of DR. The PCNN model is pre-trained using a high-end graphical processing unit (GPU) on the publicly available Kaggle benchmark. The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art methods in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On the test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. This result indicates that the proposed CAD-DR system is appropriate for screening the severity level of DR.
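The expected gradient length (EGL) criterion mentioned above can be illustrated for a binary logistic model: the log-loss gradient with respect to the weights for example x with label y is (p − y)x, so the gradient norm averaged over the model's own label distribution is 2p(1−p)·‖x‖. The function names below are illustrative, not the paper's.

```python
import math

def expected_gradient_length(p, x):
    """EGL of one example under a binary logistic model.

    p : predicted probability of the positive class
    x : feature vector
    E_y ||(p - y) x|| = p*(1-p)*||x|| + (1-p)*p*||x|| = 2*p*(1-p)*||x||.
    """
    x_norm = math.sqrt(sum(v * v for v in x))
    return 2 * p * (1 - p) * x_norm

def select_most_informative(probs, features, k):
    """Return indices of the k examples with the largest EGL scores."""
    scores = [expected_gradient_length(p, x) for p, x in zip(probs, features)]
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
```

EGL peaks at p = 0.5, so uncertain examples are queried for labeling first.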
Deep neural network (DNN)-based computer-aided breast tumor diagnosis (CABTD) methods play a vital role in the early detection and diagnosis of breast tumors. However, a brightness-mode (B-mode) ultrasound image yields training feature samples that lie close to the infected region. Hence, it is expensive due to a metaheuristic search of features occupying the global region-of-interest (ROI) structures of input images, which may lead to high computational complexity in the pre-trained DNN-based CABTD method. This paper proposes a novel ensemble pre-trained DNN-based CABTD method using global- and local-ROI structures of B-mode ultrasound images. It adds the consideration of local-ROI structures to further enhance the pre-trained DNN-based CABTD method's breast tumor diagnostic performance without degrading its visual quality. The features are extracted at various depths (18, 50, and 101) from the global and local ROI structures and fed to a support vector machine for better classification. From the experimental results, it has been observed that the combined local and global ROI structure of the small-depth residual network ResNet18 (0.8%) produced a significant improvement in pixel ratio compared to ResNet50 (0.5%) and ResNet101 (0.3%). Subsequently, the pre-trained DNN-based CABTD methods were tested by incorporating local and global ROI structures to diagnose two specific breast tumor classes (benign and malignant), improving the diagnostic accuracy (86%) compared to DenseNet, AlexNet, VGG Net, and GoogLeNet. Moreover, the method reduces computational complexity owing to the small-depth residual network ResNet18.
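A minimal sketch of the fusion-and-classify step above, assuming synthetic stand-ins for the global- and local-ROI deep features (the real ones would come from ResNet18/50/101 backbones) and scikit-learn's SVC as the support vector machine:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins for pooled deep features: an 8-dim "global ROI"
# and an 8-dim "local ROI" descriptor per image; two well-separated
# classes are simulated for illustration only.
n = 40
global_feats = np.vstack([rng.normal(0, 1, (n, 8)), rng.normal(3, 1, (n, 8))])
local_feats = np.vstack([rng.normal(0, 1, (n, 8)), rng.normal(3, 1, (n, 8))])
labels = np.array([0] * n + [1] * n)   # 0 = benign, 1 = malignant

# Fuse global and local descriptors by concatenation, then classify.
fused = np.hstack([global_feats, local_feats])
clf = SVC(kernel="rbf").fit(fused, labels)
train_acc = clf.score(fused, labels)
```

Concatenation is the simplest fusion rule; the paper's exact fusion scheme is not specified in the abstract.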
Computer-aided diagnosis (CAD) models exploit artificial intelligence (AI) for chest X-ray (CXR) examination to identify the presence of tuberculosis (TB) and can improve the feasibility and performance of CXR for TB screening and triage. At the same time, CXR interpretation is a time-consuming and subjective process. Furthermore, high resemblance among the radiological patterns of TB and other lung diseases can result in misdiagnosis. Therefore, CAD models using machine learning (ML) and deep learning (DL) can be designed to screen for TB accurately. With this motivation, this article develops a Water Strider Optimization with Deep Transfer Learning Enabled Tuberculosis Classification (WSODTL-TBC) model on chest X-rays (CXR). The presented WSODTL-TBC model aims to detect and classify TB on CXR images. Primarily, the WSODTL-TBC model undergoes image filtering techniques to discard the noise content, followed by U-Net-based image segmentation. Besides, a pre-trained residual network with a two-dimensional convolutional neural network (2D-CNN) model is applied to extract feature vectors. In addition, the WSO algorithm with a long short-term memory (LSTM) model was employed for identifying and classifying TB, where the WSO algorithm is applied as a hyperparameter optimizer of the LSTM methodology, showing the novelty of the work. The performance validation of the presented WSODTL-TBC model is carried out on a benchmark dataset, and the outcomes were investigated in many aspects. The experimental results pointed out the superiority of the WSODTL-TBC model over existing algorithms.
With the rapid increase in new cases and a rising mortality rate, cancer is considered the second most deadly disease globally. Breast cancer is the most prevalent cancer worldwide, with an increasing death-rate percentage. Because radiologists must process mammogram images manually, many computer-aided diagnosis (CAD) systems have been developed to detect breast cancer. Early detection of breast cancer will reduce the death rate worldwide. Early diagnosis of breast cancer using the developed CAD systems still needs to be enhanced by incorporating innovative deep learning technologies to improve the accuracy and sensitivity of the detection system with a reduced false-positive rate. With this consideration, this paper proposes an efficient and optimized deep-learning-based feature selection approach. This model selects the relevant features from the mammogram images that can improve the accuracy of malignancy detection and reduce the false-alarm rate. Transfer learning is used initially in the extraction of features. Next, a convolutional neural network is used to extract the features. The two feature vectors are fused and optimized with enhanced Butterfly Optimization with a Gaussian function (TL-CNN-EBOG) to select the final, most relevant features. The optimized features are applied to a classifier called a deep belief network (DBN) to classify benign and malignant images. The feature extraction and classification process used two datasets, breast and MIAS. Compared to the existing methods, the optimized deep-learning-based model secured an improved accuracy of 98.6% on the breast dataset and 98.85% on the MIAS dataset.
BACKGROUND Upper gastrointestinal endoscopy is critical for esophageal squamous cell carcinoma (ESCC) detection; however, endoscopists require long-term training to avoid missing superficial lesions. AIM To develop a deep learning computer-assisted diagnosis (CAD) system for endoscopic detection of superficial ESCC and investigate its application value. METHODS We configured the CAD system for white-light and narrow-band imaging modes based on the YOLO v5 algorithm. A total of 4447 images from 837 patients and 1695 images from 323 patients were included in the training and testing datasets, respectively. Two expert and two non-expert endoscopists reviewed the testing dataset independently and with computer assistance. The diagnostic performance was evaluated in terms of the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. RESULTS The area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of the CAD system were 0.982 [95% confidence interval (CI): 0.969-0.994], 92.9% (95%CI: 89.5%-95.2%), 91.9% (95%CI: 87.4%-94.9%), and 94.7% (95%CI: 89.0%-97.6%), respectively. The accuracy of CAD was significantly higher than that of non-expert endoscopists (78.3%, P<0.001 compared with CAD) and comparable to that of expert endoscopists (91.0%, P=0.129 compared with CAD). After referring to the CAD results, the accuracy of the non-expert endoscopists significantly improved (88.2% vs 78.3%, P<0.001). Lesions with Paris classification type 0-IIb were more likely to be inaccurately identified by the CAD system. CONCLUSION The diagnostic performance of the CAD system is promising and may assist in improving detectability, particularly for inexperienced endoscopists.
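The sensitivity, specificity, and accuracy figures reported in abstracts like this one derive from confusion-matrix counts; a small helper makes the definitions explicit (the counts in the test are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion counts.

    tp/fn: positives correctly / incorrectly classified
    tn/fp: negatives correctly / incorrectly classified
    """
    sensitivity = tp / (tp + fn)            # true-positive rate
    specificity = tn / (tn + fp)            # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```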
The lung is an important organ of the human body. More and more people are suffering from lung diseases due to air pollution, and these diseases, such as pulmonary tuberculosis and the novel coronavirus COVID-19, are often highly infectious. A lung nodule is a kind of high-density globular lesion in the lung. Physicians need to spend a lot of time and energy observing computed tomography image sequences to make a diagnosis, which is inefficient. For this reason, computer-assisted diagnosis of lung nodules has become the current main trend. In the process of computer-aided diagnosis, how to reduce the false-positive rate while ensuring a low missed-detection rate is a difficulty and focus of current research. To solve this problem, we propose a three-dimensional optimization model to extract suspected regions, improve the traditional deep belief network, and modify the dispersion matrix between classes. We construct a multi-view model that fuses local three-dimensional information into two-dimensional images, thereby reducing the complexity of the algorithm and alleviating the unbalanced-training problem caused by having only a small number of positive samples. Experiments show that the false-positive rate of the proposed algorithm is as low as 12%, which is in line with clinical application standards.
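One common way to fold local 3D context into 2D inputs, as the multi-view model above does, is to extract orthogonal slices through a candidate nodule location; the function name and slicing convention below are assumptions for illustration.

```python
import numpy as np

def orthogonal_views(volume, center):
    """Extract axial, coronal, and sagittal 2D slices through a
    candidate location in a CT volume indexed as (z, y, x)."""
    z, y, x = center
    axial = volume[z, :, :]
    coronal = volume[:, y, :]
    sagittal = volume[:, :, x]
    return axial, coronal, sagittal
```

The three 2D slices can then be fed to a 2D classifier, avoiding the cost of full 3D convolutions.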
BACKGROUND Artificial intelligence, such as convolutional neural networks (CNNs), has been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. The CNN, a machine-learning algorithm used in deep learning, has demonstrated its capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver-mass images for the diagnosis of cancer and to evaluate the accuracy and performance of CNNs. METHODS The databases PubMed, EMBASE, the Web of Science, and research books were systematically searched using related keywords. Studies analysing pathological-anatomy, cellular, and radiological images of HCC or liver masses using CNNs were identified according to the study protocol to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted as per a predefined extraction protocol. The accuracy and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n=6), HCC from cirrhosis or development of new tumours (n=3), and HCC nuclei grading or segmentation (n=2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n=4), classification (n=5), and segmentation (n=2). Several methods were used to assess the accuracy of the CNN models used. CONCLUSION The role of CNNs in analysing images and as tools in the early detection of HCC or liver masses has been demonstrated in these studies. While a few limitations have been identified in these studies, overall there was an optimal level of accuracy of the CNNs used in the segmentation and classification of liver cancer images.
BACKGROUND Identifying genetic mutations in cancer patients has become increasingly important because distinctive mutational patterns can be very informative for determining the optimal therapeutic strategy. Recent studies have shown that deep-learning-based molecular cancer subtyping can be performed directly from standard hematoxylin and eosin (H&E) sections in diverse tumors, including colorectal cancers (CRCs). Since H&E-stained tissue slides are ubiquitously available, mutation prediction from cancer pathology images can be a time- and cost-effective complementary method for personalized treatment. AIM To predict the frequently occurring actionable mutations from H&E-stained CRC whole-slide images (WSIs) with deep-learning-based classifiers. METHODS A total of 629 CRC patients from The Cancer Genome Atlas (TCGA-COAD and TCGA-READ) and 142 CRC patients from Seoul St. Mary Hospital (SMH) were included. Based on the mutation frequency in the TCGA and SMH datasets, we chose the APC, KRAS, PIK3CA, SMAD4, and TP53 genes for the study. The classifiers were trained with 360 × 360-pixel patches of tissue images. The receiver operating characteristic (ROC) curves and areas under the curves (AUCs) for all the classifiers are presented. RESULTS The AUCs for the ROC curves ranged from 0.693 to 0.809 for the TCGA frozen WSIs and from 0.645 to 0.783 for the TCGA formalin-fixed paraffin-embedded WSIs. The prediction performance can be enhanced by expanding the datasets: when the classifiers were trained with both TCGA and SMH data, the prediction performance improved. CONCLUSION APC, KRAS, PIK3CA, SMAD4, and TP53 mutations can be predicted from H&E pathology images using deep-learning-based classifiers, demonstrating the potential for deep-learning-based mutation prediction in CRC tissue slides.
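The AUC values quoted above can be computed without plotting the ROC curve at all, via the rank (Mann-Whitney) statistic: the fraction of (positive, negative) pairs that the classifier orders correctly, counting ties as half.

```python
def auc_from_scores(scores, labels):
    """Area under the ROC curve from raw scores and binary labels.

    Equivalent to the Mann-Whitney U statistic normalized by the
    number of positive-negative pairs; O(P*N), fine for small sets.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```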
BACKGROUND Hepatic steatosis is a major cause of chronic liver disease. Two-dimensional (2D) ultrasound is the most widely used non-invasive tool for screening and monitoring, but the associated diagnoses are highly subjective. AIM To develop a scalable deep learning (DL) algorithm for quantitative scoring of liver steatosis from 2D ultrasound images. METHODS Using multi-view ultrasound data comprising 3310 patients, 19513 studies, and 228075 images from a retrospective cohort of patients who received elastography, we trained a DL algorithm to diagnose steatosis stages (healthy, mild, moderate, or severe) from clinical ultrasound diagnoses. Performance was validated on two multi-scanner unblinded and blinded (initially to the DL developer) histology-proven cohorts (147 and 112 patients) with histopathology fatty-cell-percentage diagnoses and a subset with FibroScan diagnoses. We also quantified reliability across scanners and viewpoints. Results were evaluated using Bland-Altman and receiver operating characteristic (ROC) analyses. RESULTS The DL algorithm demonstrated repeatable measurements with a moderate number of images (three for each viewpoint) and high agreement across three premium ultrasound scanners. High diagnostic performance was observed across all viewpoints: areas under the ROC curve for classifying mild, moderate, and severe steatosis grades were 0.85, 0.91, and 0.93, respectively. The DL algorithm outperformed, or performed at least comparably to, the FibroScan controlled attenuation parameter (CAP), with statistically significant improvements for all levels on the unblinded histology-proven cohort and for "≥ severe" steatosis on the blinded histology-proven cohort. CONCLUSION The DL algorithm provides a reliable quantitative steatosis assessment across views and scanners on two multi-scanner cohorts. Diagnostic performance was high, with comparable or better performance than the CAP.
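The Bland-Altman analysis used above summarizes agreement between two measurement methods by the mean difference (bias) and the 95% limits of agreement; a minimal helper:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement statistics for paired measurements.

    Returns the bias (mean of a - b) and the 95% limits of agreement
    (bias +/- 1.96 * SD of the differences).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```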
Humankind is facing another of the deadliest pandemics in history, caused by COVID-19. Apart from this challenging pandemic, the World Health Organization (WHO) considers tuberculosis (TB) a preeminent infectious disease due to its high infection rate. Both TB and COVID-19 severely affect the lungs, complicating the job of medical practitioners, who can often misidentify these diseases in the current situation. Therefore, the time of need calls for an immediate and meticulous automatic diagnostic tool that can accurately discriminate both diseases. As one of the preliminary smart health systems that examine three clinical states (COVID-19, TB, and normal cases), this study proposes an amalgam of image filtering, a data-augmentation technique, a transfer-learning-based approach, and advanced deep-learning classifiers to effectively segregate these diseases. It first employed a generative adversarial network (GAN) and a Crimmins speckle-removal filter on X-ray images to overcome the issues of limited data and noise. Each pre-processed image is then converted into red, green, and blue (RGB) and Commission Internationale de l'Éclairage (CIE) color spaces, from which deep fused features are formed by extracting relevant features using DenseNet121 and ResNet50. Each feature extractor extracts the 1000 most useful features, which are then fused and finally fed to two variants of recurrent neural network (RNN) classifiers for precise discrimination of the three clinical states. Comparative analysis showed that the proposed bi-directional long short-term memory (Bi-LSTM) model dominated the long short-term memory (LSTM) network, attaining an overall accuracy of 98.22% for the three-class classification task, whereas the LSTM achieved only 94.22% accuracy on the test dataset.
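The abstract converts images into a CIE color space but does not say which one; as one concrete illustration (an assumption, not necessarily the paper's choice), linear sRGB converts to CIE XYZ under a D65 white point with a fixed 3×3 matrix:

```python
import numpy as np

# Linear sRGB -> CIE XYZ (D65 white point). Inputs are assumed to be
# gamma-decoded (linear) channel values in [0, 1].
M = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(image):
    """image: (..., 3) array of linear RGB values; returns (..., 3) XYZ."""
    return image @ M.T
```

Reference white (1, 1, 1) maps to approximately (0.9505, 1.0, 1.089), the D65 white point.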
Malaria is a severe disease caused by Plasmodium parasites, which can be detected through blood-smear images. Early identification of the disease can effectively reduce its severity. Deep learning (DL) models can be widely employed to analyze biomedical images, thereby minimizing the misclassification rate. With this objective, this study developed an intelligent deep-transfer-learning-based malaria parasite detection and classification (IDTL-MPDC) model for blood-smear images. The proposed IDTL-MPDC technique aims to effectively determine the presence of malarial parasites in blood-smear images. The IDTL-MPDC technique applies median filtering (MF) as a pre-processing step. Then, a residual neural network (Res2Net) model is employed for the extraction of feature vectors, and its hyperparameters are optimally adjusted using the differential evolution (DE) algorithm. The k-nearest neighbor (KNN) classifier is used to assign appropriate classes to the blood-smear images. The optimal selection of Res2Net hyperparameters by the DE model helps achieve enhanced classification outcomes. A wide range of simulation analyses of the IDTL-MPDC technique was carried out on a benchmark dataset, and its performance was highly accurate (95.86%), highly sensitive (95.82%), and highly specific (95.98%), with a high F1-score (95.69%) and high precision (95.86%), proving it better than other existing methods.
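The median-filtering pre-processing step can be sketched as a 3×3 sliding-window median with edge replication, the classic remedy for impulse (salt-and-pepper) noise; the window size is an assumption, as the abstract does not specify one.

```python
import numpy as np

def median_filter3(img):
    """3x3 median filter with edge replication for a 2D grayscale image."""
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + 3, j:j + 3])
    return out
```

A single bright outlier pixel is entirely removed, since it is a minority of every 3×3 window it falls in.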
Due to their small size and high occultness, metacarpophalangeal fractures show low accuracy in terms of fracture detection and location in X-ray images. To efficiently detect metacarpophalangeal fractures in X-ray images as a second opinion for radiologists, we proposed a novel one-stage neural network named MPFracNet based on RetinaNet. In MPFracNet, a deformable bottleneck block (DBB) was integrated into the bottleneck to better adapt to the geometric variation of the fractures. Furthermore, an integrated feature fusion module (IFFM) was employed to obtain deeper semantic and shallower detail features. Specifically, Focal Loss and Balanced L1 Loss were introduced to respectively attenuate the imbalance between positive and negative classes and the imbalance between the detection and location tasks. We assessed the proposed model on the test set and achieved an AP of 80.4% for metacarpophalangeal fracture detection. To estimate the detection performance for fractures of different difficulties, the proposed model was tested on the metacarpal, phalangeal, and tiny-fracture test subsets and achieved APs of 82.7%, 78.5%, and 74.9%, respectively. Our proposed framework achieves state-of-the-art performance for detecting metacarpophalangeal fractures and has strong potential application value in practical clinical environments.
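Focal Loss, used above to attenuate the positive/negative class imbalance, has the closed form FL(p_t) = −α(1−p_t)^γ log(p_t). A binary sketch follows; the default α and γ values are those of the original RetinaNet paper, not necessarily this one's.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss.

    p : predicted probability of the positive class
    y : ground-truth label (0 or 1)
    With gamma=0 and alpha=1 this reduces to plain cross-entropy;
    gamma > 0 down-weights easy, well-classified examples.
    """
    p_t = p if y == 1 else 1.0 - p
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)
```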
Breast cancer (BC) is considered the most commonly scrutinized cancer in women worldwide, affecting one in eight women in a lifetime. Mammography screening has become a standard method that is helpful in identifying the malignancy of suspicious masses in BC at an initial level. However, the prior identification of masses in mammograms is still challenging for extremely dense and dense breast categories and needs effective and automatic mechanisms for helping radiologists in diagnosis. Deep learning (DL) techniques are broadly utilized for medical imaging applications, particularly breast-mass classification. The advancements in the DL field paved the way for highly intelligent and self-reliant computer-aided diagnosis (CAD) systems, since the learning capability of machine learning (ML) techniques is constantly improving. This paper presents a new Hyperparameter-Tuned Deep Hybrid Denoising Autoencoder Breast Cancer Classification (HTDHDAE-BCC) model for digital mammograms. The presented HTDHDAE-BCC model examines mammogram images for the identification of BC. In the HTDHDAE-BCC model, the initial stage of image preprocessing is carried out using an average median filter. In addition, the deep convolutional neural network-based Inception v4 model is employed to generate feature vectors. The parameter-tuning process uses the binary spider monkey optimization (BSMO) algorithm. The HTDHDAE-BCC model exploits chameleon swarm optimization (CSO) with the DHDAE model for BC classification. The experimental analysis of the HTDHDAE-BCC model is performed using the MIAS database. The experimental outcomes demonstrate the improvements of the HTDHDAE-BCC model over other recent approaches.
Age-related macular degeneration (AMD) ranks third among the most common causes of blindness. As the most conventional and direct method for identifying AMD, color fundus photography has become prominent owing to its consistency, ease of use, and good quality in extensive clinical practice. In this study, a convolutional neural network (CSPDarknet53) was combined with a transformer to construct a new hybrid model, HCSP-Net. This hybrid model was employed to tri-classify color fundus photographs into normal macula (NM), dry macular degeneration (DMD), and wet macular degeneration (WMD) based on clinical classification manifestations, thus identifying and resolving AMD as early as possible with color fundus photography. To further enhance the performance of this model, grouped convolution was introduced without significantly increasing the number of parameters. HCSP-Net was validated using an independent test set. The average precision of HCSP-Net in the diagnosis of AMD was 99.2%, the recall rate was 98.2%, the F1-score was 98.7%, the positive predictive value (PPV) was 99.2%, and the negative predictive value (NPV) was 99.6%. Moreover, a knowledge-distillation approach was adopted to develop a lightweight student network (SCSP-Net). The experimental results revealed a noteworthy enhancement in the accuracy of SCSP-Net, rising from 94% to 97%, while remarkably reducing the parameter count to a quarter of HCSP-Net's. This makes SCSP-Net a highly suitable candidate for deployment on resource-constrained devices and may provide ophthalmologists with an efficient tool for diagnosing AMD.
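The knowledge-distillation step can be sketched with the standard Hinton-style soft-target loss: the KL divergence between temperature-softened teacher and student distributions, scaled by T². The temperature value is an assumption; the paper does not state one.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 so gradients stay comparable across temperatures."""
    p = softmax(np.asarray(teacher_logits, float) / T)  # soft targets
    q = softmax(np.asarray(student_logits, float) / T)
    return T * T * np.sum(p * (np.log(p) - np.log(q)))
```

The student (here, SCSP-Net's role) would minimize this loss alongside the usual hard-label cross-entropy.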
Clear cell renal cell carcinoma (ccRCC) represents the most frequent form of renal cell carcinoma (RCC), and accurate International Society of Urological Pathology (ISUP) grading is crucial for prognosis and treatment selection. This study presents a new deep network called the Multi-scale Fusion Network (MsfNet), which aims to automate ISUP grading of ccRCC from digital histopathology images. MsfNet overcomes the limitations of the traditional ResNet50 through multi-scale information fusion and dynamic allocation of channel quantity. The model was trained and tested using 90 hematoxylin and eosin (H&E)-stained whole-slide images (WSIs), all cropped into 320 × 320-pixel patches at 40× magnification. MsfNet achieved a micro-averaged area under the curve (AUC) of 0.9807 and a macro-averaged AUC of 0.9778 on the test dataset. Gradient-weighted Class Activation Mapping (Grad-CAM) visually demonstrated MsfNet's ability to distinguish and highlight abnormal areas more effectively than ResNet50. The t-distributed stochastic neighbor embedding (t-SNE) plot indicates our model can efficiently extract critical features from images, reducing the impact of noise and redundant information. The results suggest that MsfNet offers accurate ISUP grading of ccRCC in digital images, emphasizing the potential of AI-assisted histopathological systems in clinical practice.
Background Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analyses. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed based on deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and improve model detection performance. Methods We propose a breast cancer detection framework based on deep learning (a transfer-learning method based on cross-organ cancer detection) and a contrastive-learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results When using cross-organ transfer learning and BI-RADS-based contrastive learning, the average sensitivity of the model increased by a maximum of 16.05%. Conclusion Our experiments demonstrated that the parameters and experience of cross-organ cancer detection can be mutually referenced, and a contrastive-learning method based on BI-RADS can improve the detection performance of the model.
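The BI-RADS-based contrastive objective is not spelled out in the abstract; a generic InfoNCE loss gives the flavor, where the positive candidate is assumed (for illustration) to be a sample sharing the anchor's BI-RADS category and the rest are negatives.

```python
import numpy as np

def info_nce(anchor, candidates, pos_index, temperature=0.1):
    """InfoNCE contrastive loss for one anchor.

    anchor     : (d,) embedding
    candidates : (n, d) embeddings; candidates[pos_index] is the positive
    Pulls the positive toward the anchor and pushes negatives away.
    """
    a = anchor / np.linalg.norm(anchor)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    logits = c @ a / temperature          # cosine similarities / temperature
    logits -= logits.max()                # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[pos_index]
```

The loss is low when the positive is the most similar candidate and high otherwise, which is what drives same-category embeddings together.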
Purpose - The advancements of deep learning (DL) models demonstrate significant performance on accurate pancreatic tumor segmentation and classification. Design/methodology/approach - The presented model involves different stages of operation, namely preprocessing, image segmentation, feature extraction, and image classification. Primarily, the bilateral filtering (BF) technique is applied for image preprocessing to eradicate the noise present in the CT pancreatic image. Besides, the non-interactive GrabCut (NIGC) algorithm is applied for the image segmentation process. Subsequently, the residual network 152 (ResNet152) model is utilized as a feature extractor to originate a valuable set of feature vectors. At last, the red deer optimization algorithm (RDA)-tuned backpropagation neural network (BPNN), called the RDA-BPNN model, is employed as a classification model to determine the existence of a pancreatic tumor. Findings - The experimental results are validated in terms of different performance measures, and a detailed comparative analysis confirmed the superiority of the RDA-BPNN model, with a sensitivity of 98.54%, specificity of 98.46%, accuracy of 98.51%, and F-score of 98.23%. Originality/value - The study also identifies several novel automated deep-learning-based approaches used by researchers to assess the performance of the RDA-BPNN model on a benchmark dataset and analyzes the results in terms of several measures.
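The bilateral-filtering preprocessing step can be sketched directly: each output pixel is a weighted mean of its neighborhood, with weights combining spatial closeness (sigma_s) and intensity similarity (sigma_r), which smooths noise while preserving edges. The parameter values below are assumptions for illustration.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Edge-preserving bilateral filter for a 2D grayscale image."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="edge").astype(float)
    out = np.zeros_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize intensity differences from the center.
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * range_w
            out[i, j] = (weights * patch).sum() / weights.sum()
    return out
```

On a flat region the range weights are all 1 and the image passes through unchanged; across a sharp step the range weights suppress contributions from the far side, preserving the edge.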
BACKGROUND The risk of gastric cancer increases in patients with Helicobacter pylori-associated chronic atrophic gastritis (CAG). X-ray examination can evaluate the condition of the stomach and can be used for gastric cancer mass screening. However, the number of doctors skilled in interpreting X-ray examinations is decreasing owing to the diversification of inspections. AIM To evaluate the effectiveness of stomach regions that are automatically estimated by a deep learning-based model for CAG detection. METHODS We used 815 gastric X-ray images (GXIs) obtained from 815 subjects. The ground truth of this study was the diagnostic results of the X-ray and endoscopic examinations. For a subset of the training GXIs, the stomach regions were manually annotated, and a model for automatic estimation of the stomach regions was trained with them. For the rest, the stomach regions were estimated automatically. Finally, a model for automatic CAG detection was trained with all GXIs in the training set. RESULTS When the stomach regions were manually annotated for only 10 GXIs and 30 GXIs, the harmonic means of the sensitivity and specificity of CAG detection were 0.955±0.002 and 0.963±0.004, respectively. CONCLUSION By estimating stomach regions automatically, our method contributes to reducing the workload of manual annotation and to accurate detection of CAG.
Esophageal cancer poses diagnostic, therapeutic, and economic burdens in high-risk regions. Artificial intelligence (AI) has been developed for diagnosis and outcome prediction using various features, including clinicopathologic, radiologic, and genetic variables, and can achieve inspiring results. One of the most recent tasks of AI is to use state-of-the-art deep learning techniques to detect both early esophageal squamous cell carcinoma and esophageal adenocarcinoma in Barrett's esophagus. In this review, we aim to provide a comprehensive overview of the ways in which AI may help physicians diagnose advanced cancer, make clinical decisions based on predicted outcomes, and combine endoscopic images to detect precancerous lesions or early cancer. Pertinent studies conducted in the past two years have surged in number, with large datasets and external validation from multiple centers, and AI has partly achieved expert-level performance in real time. Improved pre-trained computer-aided diagnosis algorithms in future studies, with larger training and external validation datasets and aiming at real-time video processing, are imperative to produce a diagnostic efficacy similar or even superior to that of experienced endoscopists. Meanwhile, supervised randomized controlled trials in real clinical practice are essential for a solid conclusion that meets patient-centered satisfaction. Notably, ethical and legal issues regarding the black-box nature of computer algorithms should be addressed, for both clinicians and regulators.
基金via funding from Prince Sattam bin Abdulaziz University Project Number (PSAU/2023/R/1444).
文摘Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
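The core loop described above, evolutionary search over CNN design and compression choices, can be sketched generically. The following is a minimal illustrative (mu+lambda)-style loop over a toy two-parameter search space (number of conv blocks, filters per block) with a made-up surrogate fitness that trades accuracy against model size; it is not the paper's actual EA, encoding, or objective.

```python
import random

def evolve(fitness, init, mutate, population=8, generations=20, seed=0):
    """Minimal (mu+lambda)-style evolutionary loop: keep the best half of the
    population, refill with mutated copies of survivors. `fitness` is minimized."""
    rng = random.Random(seed)
    pop = [init(rng) for _ in range(population)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: population // 2]           # elitism: best always survive
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(population - len(survivors))]
    return min(pop, key=fitness)

# Hypothetical search space: (conv blocks, filters per block).
def init(rng):
    return (rng.randint(1, 8), rng.choice([16, 32, 64, 128]))

def mutate(ind, rng):
    blocks, filters = ind
    if rng.random() < 0.5:
        blocks = min(8, max(1, blocks + rng.choice([-1, 1])))
    else:
        filters = rng.choice([16, 32, 64, 128])
    return (blocks, filters)

def fitness(ind):
    blocks, filters = ind
    error = (blocks - 5) ** 2 + (filters - 64) ** 2 / 1024  # surrogate "validation error"
    size_penalty = 0.01 * blocks * filters                  # compression pressure
    return error + size_penalty

best = evolve(fitness, init, mutate)
```

In a real setting `fitness` would train (or partially train) the candidate CNN and combine validation error with a parameter-count term, which is where the design/compression trade-off enters.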
基金Deanship of Scientific Research at Imam Mohammad Ibn Saud Islamic University for funding this work through Research Group no.RG-21-07-01.
文摘Diabetic retinopathy (DR) diagnosis through digital fundus images requires clinical experts to recognize the presence and importance of many intricate features. This task is very difficult for ophthalmologists and time-consuming. Therefore, many computer-aided diagnosis (CAD) systems have been developed to automate this screening process of DR. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained transfer-learning-based convolutional neural network (PCNN) to recognize the five stages of DR through retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptual-oriented color space to enhance the DR-related lesions, and then a standard pre-trained PCNN model is improved to obtain high classification results. The architecture of the PCNN model is based on four main phases. Firstly, the training process of the proposed PCNN is accomplished by using the expected gradient length (EGL) to decrease the image-labeling effort during the training of the CNN model. Secondly, the most informative patches and images are automatically selected using a few labeled training samples. Thirdly, the PCNN method generates useful masks for prognostication and identifies regions of interest. Fourthly, the DR-related lesions involved in the classification task, such as microaneurysms, hemorrhages, and exudates, are detected and then used for recognition of DR. The PCNN model is pre-trained using a high-end graphics processing unit (GPU) on the publicly available Kaggle benchmark. The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art methods in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On the test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. This result indicates that the proposed CAD-DR system is appropriate for screening the severity level of DR.
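The SE/SP/ACC figures reported above are standard confusion-matrix ratios. As a quick reference, here is how they are computed from true/false positive and negative counts (the counts below are illustrative, not taken from the paper):

```python
def sensitivity(tp, fn):
    """Recall on the positive class: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Recall on the negative class: TN / (TN + FP)."""
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    """Fraction of all cases classified correctly."""
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative counts only:
se = sensitivity(tp=932, fn=68)   # 0.932
sp = specificity(tn=961, fp=39)   # 0.961
```

For a five-stage problem like DR grading, these are typically computed one-vs-rest per stage and then averaged, which is how a single "average SE" figure arises.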
文摘The deep neural network (DNN) based computer-aided breast tumor diagnosis (CABTD) method plays a vital role in the early detection and diagnosis of breast tumors. However, a brightness-mode (B-mode) ultrasound image derives training feature samples that make closer isolation toward the infected part. Hence, it is expensive due to a metaheuristic search of features occupying the global region-of-interest (ROI) structures of input images, which may lead to high computational complexity in the pre-trained DNN-based CABTD method. This paper proposes a novel ensemble pre-trained DNN-based CABTD method using global and local ROI structures of B-mode ultrasound images. It conveys the additional consideration of local ROI structures for further enhancing the pre-trained DNN-based CABTD method's breast tumor diagnostic performance without degrading its visual quality. The features are extracted at various depths (18, 50, and 101) from the global and local ROI structures and fed to a support vector machine for better classification. From the experimental results, it has been observed that the combined local and global ROI structure of the small-depth residual network ResNet18 (0.8%) produced a significant improvement in pixel ratio compared to ResNet50 (0.5%) and ResNet101 (0.3%), respectively. Subsequently, the pre-trained DNN-based CABTD methods were tested by influencing local and global ROI structures to diagnose two specific breast tumors (benign and malignant) and improved the diagnostic accuracy (86%) compared to DenseNet, AlexNet, VGG Net, and GoogLeNet. Moreover, the method reduces the computational complexity owing to the small-depth residual network ResNet18.
文摘Computer-aided diagnosis (CAD) models exploit artificial intelligence (AI) for chest X-ray (CXR) examination to identify the presence of tuberculosis (TB) and can improve the feasibility and performance of CXR for TB screening and triage. At the same time, CXR interpretation is a time-consuming and subjective process, and the high resemblance among the radiological patterns of TB and other lung diseases can result in misdiagnosis. Therefore, CAD models using machine learning (ML) and deep learning (DL) can be designed to screen for TB accurately. With this motivation, this article develops a Water Strider Optimization with Deep Transfer Learning Enabled Tuberculosis Classification (WSODTL-TBC) model on chest X-rays. The presented WSODTL-TBC model aims to detect and classify TB on CXR images. Primarily, the WSODTL-TBC model undergoes image filtering techniques to discard the noise content, followed by U-Net-based image segmentation. Besides, a pre-trained residual network with a two-dimensional convolutional neural network (2D-CNN) model is applied to extract feature vectors. In addition, the WSO algorithm with a long short-term memory (LSTM) model is employed for identifying and classifying TB, where the WSO algorithm is applied as a hyperparameter optimizer of the LSTM methodology, showing the novelty of the work. The performance validation of the presented WSODTL-TBC model is carried out on a benchmark dataset, and the outcomes are investigated in many aspects. The experimental results point out the superiority of the WSODTL-TBC model over existing algorithms.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code: (22UQU4310373DSR12).
文摘With the rapid increase of new cases and an increased mortality rate, cancer is considered the second most deadly disease globally. Breast cancer is the most widespread cancer worldwide, with a high death-rate percentage. Because radiologists must process mammogram images, many computer-aided diagnosis (CAD) systems have been developed to detect breast cancer, and early detection of breast cancer will reduce the death rate worldwide. The early diagnosis of breast cancer using the developed CAD systems still needs to be enhanced by incorporating innovative deep learning technologies to improve the accuracy and sensitivity of the detection system with a reduced false positive rate. With this consideration, this paper proposes an efficient and optimized deep learning-based feature selection approach. This model selects the relevant features from the mammogram images that can improve the accuracy of malignant detection and reduce the false alarm rate. Transfer learning is used in the extraction of features initially. Next, a convolutional neural network is used to extract the features. The two feature vectors are fused and optimized with enhanced Butterfly Optimization with Gaussian function (TL-CNN-EBOG) to select the final most relevant features. The optimized features are applied to a classifier called a deep belief network (DBN) to classify the benign and malignant images. The feature extraction and classification process used two datasets, breast and MIAS. Compared to the existing methods, the optimized deep learning-based model secured 98.6% accuracy on the breast dataset and 98.85% accuracy on the MIAS dataset.
基金Supported by Shanghai Science and Technology Innovation Action Program, No. 21Y31900100234 Clinical Research Fund of Changhai Hospital, No. 2019YXK006
文摘BACKGROUND Upper gastrointestinal endoscopy is critical for esophageal squamous cell carcinoma (ESCC) detection; however, endoscopists require long-term training to avoid missing superficial lesions. AIM To develop a deep learning computer-assisted diagnosis (CAD) system for endoscopic detection of superficial ESCC and investigate its application value. METHODS We configured the CAD system for white-light and narrow-band imaging modes based on the YOLO v5 algorithm. A total of 4447 images from 837 patients and 1695 images from 323 patients were included in the training and testing datasets, respectively. Two expert and two non-expert endoscopists reviewed the testing dataset independently and with computer assistance. The diagnostic performance was evaluated in terms of the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. RESULTS The area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity of the CAD system were 0.982 [95% confidence interval (CI): 0.969-0.994], 92.9% (95% CI: 89.5%-95.2%), 91.9% (95% CI: 87.4%-94.9%), and 94.7% (95% CI: 89.0%-97.6%), respectively. The accuracy of CAD was significantly higher than that of the non-expert endoscopists (78.3%, P<0.001 compared with CAD) and comparable to that of the expert endoscopists (91.0%, P=0.129 compared with CAD). After referring to the CAD results, the accuracy of the non-expert endoscopists significantly improved (88.2% vs 78.3%, P<0.001). Lesions with Paris classification type 0-IIb were more likely to be inaccurately identified by the CAD system. CONCLUSION The diagnostic performance of the CAD system is promising and may assist in improving detectability, particularly for inexperienced endoscopists.
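For context on the headline metric: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counting one half). A brute-force sketch of that equivalence, not the paper's evaluation code:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney rank statistic: the probability that a random
    positive scores higher than a random negative (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

The O(n·m) double loop is fine for small sets; production code would use a rank-based O(n log n) computation or a library routine.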
基金This work was supported by Science and Technology Rising Star of Shaanxi Youth(No.2021KJXX-61)The Open Project Program of the State Key Lab of CAD&CG,Zhejiang University(No.A2206)+3 种基金The China Postdoctoral Science Foundation(No.2020M683696XB)Natural Science Basic Research Plan in Shaanxi Province of China(No.2021JQ-455)Natural Science Foundation of China(No.62062003),Key Research and Development Project of Ningxia(Special projects for talents)(No.2020BEB04022)North Minzu University Research Project of Talent Introduction(No.2020KYQD08).
文摘The lung is an important organ of the human body, and more and more people are suffering from lung diseases due to air pollution. These diseases, such as tuberculosis and the novel coronavirus COVID-19, are usually highly infectious. A lung nodule is a kind of high-density globular lesion in the lung. Physicians need to spend a lot of time and energy observing computed tomography image sequences to make a diagnosis, which is inefficient. For this reason, computer-assisted diagnosis of lung nodules has become the current main trend. In the process of computer-aided diagnosis, how to reduce the false positive rate while ensuring a low missed-detection rate is a difficulty and focus of current research. To solve this problem, we propose a three-dimensional optimization model to extract suspected regions, improve the traditional deep belief network, and modify the dispersion matrix between classes. We construct a multi-view model and fuse local three-dimensional information into two-dimensional images, thereby reducing the complexity of the algorithm and alleviating the problem of unbalanced training caused by having only a small number of positive samples. Experiments show that the false positive rate of the proposed algorithm is as low as 12%, which is in line with clinical application standards.
基金Supported by the College of Medicine Research Centre,Deanship of Scientific Research,King Saud University,Riyadh,Saudi Arabia
文摘BACKGROUND Artificial intelligence, such as convolutional neural networks (CNNs), has been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. CNN, a machine-learning algorithm similar to deep learning, has demonstrated its capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver mass images in the diagnosis of cancer and to evaluate the accuracy and performance of CNNs. METHODS The databases PubMed, EMBASE, the Web of Science, and research books were systematically searched using related keywords. Studies analysing pathological, anatomical, cellular, and radiological images of HCC or liver masses using CNNs were identified according to the study protocol to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted as per a predefined extraction protocol. The accuracy and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n=6), HCC from cirrhosis or development of new tumours (n=3), and HCC nuclei grading or segmentation (n=2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n=4), classification (n=5), and segmentation (n=2). Several methods were used to assess the accuracy of the CNN models used. CONCLUSION The role of CNNs in analysing images and as tools for early detection of HCC or liver masses has been demonstrated in these studies. While a few limitations have been identified in these studies, overall there was an optimal level of accuracy of the CNNs used in the segmentation and classification of liver cancer images.
基金Supported by Research Fund of Seoul St. Mary’s Hospital made in the program year of 2018。
文摘BACKGROUND Identifying genetic mutations in cancer patients has become increasingly important because distinctive mutational patterns can be very informative in determining the optimal therapeutic strategy. Recent studies have shown that deep learning-based molecular cancer subtyping can be performed directly from standard hematoxylin and eosin (H&E) sections in diverse tumors, including colorectal cancers (CRCs). Since H&E-stained tissue slides are ubiquitously available, mutation prediction from the pathology images of cancers can be a time- and cost-effective complementary method for personalized treatment. AIM To predict the frequently occurring actionable mutations from H&E-stained CRC whole-slide images (WSIs) with deep learning-based classifiers. METHODS A total of 629 CRC patients from The Cancer Genome Atlas (TCGA-COAD and TCGA-READ) and 142 CRC patients from Seoul St. Mary's Hospital (SMH) were included. Based on the mutation frequency in the TCGA and SMH datasets, we chose the APC, KRAS, PIK3CA, SMAD4, and TP53 genes for the study. The classifiers were trained with 360 × 360 pixel patches of tissue images. The receiver operating characteristic (ROC) curves and areas under the curves (AUCs) for all the classifiers are presented. RESULTS The AUCs for the ROC curves ranged from 0.693 to 0.809 for the TCGA frozen WSIs and from 0.645 to 0.783 for the TCGA formalin-fixed paraffin-embedded WSIs. The prediction performance can be enhanced with the expansion of datasets: when the classifiers were trained with both TCGA and SMH data, the prediction performance improved. CONCLUSION APC, KRAS, PIK3CA, SMAD4, and TP53 mutations can be predicted from H&E pathology images using deep learning-based classifiers, demonstrating the potential for deep learning-based mutation prediction in CRC tissue slides.
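Patch-based WSI pipelines like the one above first enumerate fixed-size tiles across each slide. A minimal sketch of non-overlapping patch-grid coordinates; the 360-pixel default mirrors the patch size quoted above, while the stride and border handling here are assumptions, not the paper's exact tiling:

```python
def patch_grid(width, height, patch=360, stride=360):
    """Top-left (x, y) corners of patch-size tiles that fit fully inside a
    width x height slide, scanned row by row."""
    return [(x, y)
            for y in range(0, height - patch + 1, stride)
            for x in range(0, width - patch + 1, stride)]
```

Real pipelines add a tissue-mask filter so background-only tiles are discarded before training.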
基金Supported by the Maintenance Project of the Center for Artificial Intelligence,No.CLRPG3H0012 and No.SMRPG3I0011.
文摘BACKGROUND Hepatic steatosis is a major cause of chronic liver disease. Two-dimensional (2D) ultrasound is the most widely used non-invasive tool for screening and monitoring, but the associated diagnoses are highly subjective. AIM To develop a scalable deep learning (DL) algorithm for quantitative scoring of liver steatosis from 2D ultrasound images. METHODS Using multi-view ultrasound data comprising 3310 patients, 19513 studies, and 228075 images from a retrospective cohort of patients who received elastography, we trained a DL algorithm to diagnose steatosis stages (healthy, mild, moderate, or severe) from clinical ultrasound diagnoses. Performance was validated on two multi-scanner unblinded and blinded (initially to the DL developer) histology-proven cohorts (147 and 112 patients) with histopathology fatty-cell-percentage diagnoses and a subset with FibroScan diagnoses. We also quantified reliability across scanners and viewpoints. Results were evaluated using Bland-Altman and receiver operating characteristic (ROC) analyses. RESULTS The DL algorithm demonstrated repeatable measurements with a moderate number of images (three for each viewpoint) and high agreement across three premium ultrasound scanners. High diagnostic performance was observed across all viewpoints: areas under the curve of the ROC to classify mild, moderate, and severe steatosis grades were 0.85, 0.91, and 0.93, respectively. The DL algorithm outperformed or performed at least comparably to the FibroScan controlled attenuation parameter (CAP), with statistically significant improvements for all levels on the unblinded histology-proven cohort and for "=severe" steatosis on the blinded histology-proven cohort. CONCLUSION The DL algorithm provides a reliable quantitative steatosis assessment across views and scanners on two multi-scanner cohorts. Diagnostic performance was high, with comparable or better performance than the CAP.
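The Bland-Altman analysis mentioned in METHODS compares two measurement methods via the bias (mean of paired differences) and the 95% limits of agreement (bias ± 1.96 standard deviations). A minimal sketch of that computation, not the paper's analysis code:

```python
import math

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) of agreement between two
    paired measurement series a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences.
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

If roughly 95% of the paired differences fall inside these limits and the limits are clinically acceptable, the two methods are considered interchangeable.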
文摘Humankind is facing another of the deadliest pandemics in history, caused by COVID-19. Apart from this challenging pandemic, the World Health Organization (WHO) considers tuberculosis (TB) a preeminent infectious disease due to its high infection rate. Generally, both TB and COVID-19 severely affect the lungs, thus hardening the job of medical practitioners, who can often misidentify these diseases in the current situation. Therefore, the time of need calls for an immediate and meticulous automatic diagnostic tool that can accurately discriminate both diseases. As one of the preliminary smart health systems that examine three clinical states (COVID-19, TB, and normal cases), this study proposes an amalgam of image filtering, a data-augmentation technique, a transfer learning-based approach, and advanced deep-learning classifiers to effectively segregate these diseases. It first employed a generative adversarial network (GAN) and a Crimmins speckle removal filter on X-ray images to overcome the issues of limited data and noise. Each pre-processed image is then converted into red, green, and blue (RGB) and Commission Internationale de l'Eclairage (CIE) color spaces, from which deep fused features are formed by extracting relevant features using DenseNet121 and ResNet50. Each feature extractor extracts the 1000 most useful features, which are then fused and finally fed to two variants of recurrent neural network (RNN) classifiers for precise discrimination of the three clinical states. Comparative analysis showed that the proposed bi-directional long short-term memory (Bi-LSTM) model dominated the long short-term memory (LSTM) network by attaining an overall accuracy of 98.22% for the three-class classification task, whereas the LSTM hardly achieved 94.22% accuracy on the test dataset.
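As one concrete example of the RGB-to-CIE conversion step (the abstract does not specify which CIE space is used, so shown here is the standard per-pixel sRGB to CIE XYZ transform under the D65 white point):

```python
def srgb_to_xyz(r, g, b):
    """Convert one sRGB pixel (channels in [0, 1]) to CIE XYZ (D65)."""
    def linearize(c):
        # Undo the sRGB gamma curve before applying the linear matrix.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    rl, gl, bl = linearize(r), linearize(g), linearize(b)
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    return x, y, z
```

Mapping images into an alternative color space like this gives the two feature extractors decorrelated views of the same pixel data, which is the rationale for the fused-feature design.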
基金The authors extend their appreciation to the Deanship of Scientific Research at Majmaah University for funding this study under project number R-2022-76.
文摘Malaria is a severe disease caused by Plasmodium parasites, which can be detected through blood smear images. Early identification of the disease can effectively reduce the severity rate. Deep learning (DL) models can be widely employed to analyze biomedical images, thereby minimizing the misclassification rate. With this objective, this study developed an intelligent deep-transfer-learning-based malaria parasite detection and classification (IDTL-MPDC) model for blood smear images. The proposed IDTL-MPDC technique aims to effectively determine the presence of malarial parasites in blood smear images. The IDTL-MPDC technique applies median filtering (MF) as a pre-processing step. In addition, a residual neural network (Res2Net) model is employed for the extraction of feature vectors, and its hyperparameters are optimally adjusted using the differential evolution (DE) algorithm. The k-nearest neighbor (KNN) classifier is used to assign appropriate classes to the blood smear images. The optimal selection of Res2Net hyperparameters by the DE model helps achieve enhanced classification outcomes. A wide range of simulation analyses of the IDTL-MPDC technique was carried out using a benchmark dataset, and its performance proved to be highly accurate (95.86%), highly sensitive (95.82%), and highly specific (95.98%), with a high F1-score (95.69%) and high precision (95.86%), and it has been proven to be better than other existing methods.
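Median filtering, the pre-processing step named above, replaces each pixel with the median of its neighborhood and is particularly effective against salt-and-pepper noise in smear images. A minimal pure-Python sketch with edge replication (a real pipeline would use an image library rather than this nested loop):

```python
def median_filter2d(img, k=3):
    """Apply a k x k median filter to img (a list of rows of numbers),
    replicating edge pixels at the borders."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    # Clamp coordinates so the border replicates edge pixels.
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    window.append(img[yy][xx])
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```

A single bright impulse surrounded by dark pixels is eliminated, since the median of the window ignores the outlier, which is exactly why MF is favored over mean filtering for this noise type.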
基金funded by the Research Fund for Foundation of Hebei University(DXK201914)the President of Hebei University(XZJJ201914)+1 种基金the Post-graduate’s Innovation Fund Project of Hebei University(HBU2022SS003)the Special Project for Cultivating College Students’Scientific and Technological Innovation Ability in Hebei Province(22E50041D).
文摘Due to their small size and high occultness, metacarpophalangeal fractures display low diagnostic accuracy in terms of fracture detection and location in X-ray images. To efficiently detect metacarpophalangeal fractures in X-ray images as a second opinion for radiologists, we proposed a novel one-stage neural network named MPFracNet based on RetinaNet. In MPFracNet, a deformable bottleneck block (DBB) was integrated into the bottleneck to better adapt to the geometric variation of the fractures. Furthermore, an integrated feature fusion module (IFFM) was employed to obtain deeper semantic and shallower detail features. Specifically, Focal Loss and Balanced L1 Loss were introduced to respectively attenuate the imbalance between positive and negative classes and the imbalance between the detection and location tasks. We assessed the proposed model on the test set and achieved an AP of 80.4% for metacarpophalangeal fracture detection. To estimate the detection performance for fractures of different difficulties, the proposed model was tested on the metacarpal, phalangeal, and tiny fracture test subsets and achieved APs of 82.7%, 78.5%, and 74.9%, respectively. Our proposed framework has state-of-the-art performance for detecting metacarpophalangeal fractures and has strong potential application value in practical clinical environments.
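Focal Loss, cited above for the positive/negative imbalance, down-weights easy examples through a (1 − p_t)^γ modulating factor on the cross-entropy. A minimal per-sample binary sketch; the α and γ defaults follow the original RetinaNet paper and are not necessarily MPFracNet's settings:

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction: p is the predicted probability of
    the positive class, y the ground-truth label (0 or 1)."""
    pt = p if y == 1 else 1.0 - p          # probability assigned to the true class
    at = alpha if y == 1 else 1.0 - alpha  # class-balancing weight
    return -at * (1.0 - pt) ** gamma * math.log(pt)
```

With γ = 2, a well-classified example (p_t = 0.9) contributes roughly 100× less loss than a hard one (p_t = 0.1), so the abundant easy negatives no longer swamp the rare fracture boxes; setting γ = 0 and α = 1 recovers plain cross-entropy.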
基金This project was supported by the Deanship of Scientific Research at Prince SattamBin Abdulaziz University under research Project#(PSAU-2022/01/20287).
文摘Breast cancer (BC) is considered the most commonly scrutinized cancer in women worldwide, affecting one in eight women in a lifetime. Mammography screening has become one standard method that is helpful in identifying the malignancy of suspicious BC masses at an initial level. However, the prior identification of masses in mammograms is still challenging for the extremely dense and dense breast categories and needs an effective and automatic mechanism for helping radiotherapists in diagnosis. Deep learning (DL) techniques are broadly utilized for medical imaging applications, particularly breast mass classification. The advancements in the DL field have paved the way for highly intellectual and self-reliant computer-aided diagnosis (CAD) systems, since the learning capability of machine learning (ML) techniques is constantly improving. This paper presents a new Hyperparameter Tuned Deep Hybrid Denoising Autoencoder Breast Cancer Classification (HTDHDAE-BCC) model for digital mammograms. The presented HTDHDAE-BCC model examines mammogram images for the identification of BC. In the HTDHDAE-BCC model, the initial stage of image preprocessing is carried out using an average median filter. In addition, the deep convolutional neural network-based Inception v4 model is employed to generate feature vectors. The parameter tuning process uses the binary spider monkey optimization (BSMO) algorithm. The HTDHDAE-BCC model exploits chameleon swarm optimization (CSO) with the DHDAE model for BC classification. The experimental analysis of the HTDHDAE-BCC model is performed using the MIAS database. The experimental outcomes demonstrate the improvements of the HTDHDAE-BCC model over other recent approaches.
基金Shenzhen Fund for Guangdong Provincial High-Level Clinical Key Specialties(SZGSP014)Sanming Project of Medicine in Shenzhen(SZSM202311012)Shenzhen Science and Technology Planning Project(KCXFZ20211020163813019).
文摘Age-related macular degeneration (AMD) ranks third among the most common causes of blindness. As the most conventional and direct method for identifying AMD, color fundus photography has become prominent owing to its consistency, ease of use, and good quality in extensive clinical practice. In this study, a convolutional neural network (CSPDarknet53) was combined with a transformer to construct a new hybrid model, HCSP-Net. This hybrid model was employed to tri-classify color fundus photographs into normal macula (NM), dry macular degeneration (DMD), and wet macular degeneration (WMD) based on clinical classification manifestations, thus identifying and resolving AMD as early as possible with color fundus photography. To further enhance the performance of this model, grouped convolution was introduced in this study without significantly increasing the number of parameters. HCSP-Net was validated using an independent test set. The average precision of HCSP-Net in the diagnosis of AMD was 99.2%, the recall rate was 98.2%, the F1-score was 98.7%, the positive predictive value (PPV) was 99.2%, and the negative predictive value (NPV) was 99.6%. Moreover, a knowledge distillation approach was adopted to develop a lightweight student network (SCSP-Net). The experimental results revealed a noteworthy enhancement in the accuracy of SCSP-Net, rising from 94% to 97%, while remarkably reducing the parameter count to a quarter of HCSP-Net's. This attribute positions SCSP-Net as a highly suitable candidate for deployment on resource-constrained devices, which may provide ophthalmologists with an efficient tool for diagnosing AMD.
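The knowledge-distillation step, training the lightweight SCSP-Net to mimic HCSP-Net, typically minimizes the divergence between temperature-softened teacher and student outputs. A minimal sketch of that loss term in the style of Hinton et al.'s formulation; the temperature and weighting actually used for SCSP-Net are not given in the abstract, so the values here are assumptions:

```python
import math

def softmax(logits, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on T-softened distributions, scaled by T^2 so the
    gradient magnitude stays comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

In training, this term is usually combined with the ordinary cross-entropy on the hard labels, so the student learns both the ground truth and the teacher's soft inter-class similarities.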
基金supported by the Scientific Research and Innovation Team of Hebei University(IT2023B07)the Natural Science Foundation of Hebei Province(F2023201069)the Postgraduate’s Innovation Fund Project of Hebei University(HBU2024BS021).
文摘Clear cell renal cell carcinoma (ccRCC) represents the most frequent form of renal cell carcinoma (RCC), and accurate International Society of Urological Pathology (ISUP) grading is crucial for prognosis and treatment selection. This study presents a new deep network called Multi-scale Fusion Network (MsfNet), which aims to enhance automatic ISUP grading of ccRCC with digital histopathology images. MsfNet overcomes the limitations of the traditional ResNet50 by multi-scale information fusion and dynamic allocation of channel quantity. The model was trained and tested using 90 hematoxylin and eosin (H&E) stained whole slide images (WSIs), which were all cropped into 320 × 320-pixel patches at 40× magnification. MsfNet achieved a micro-averaged area under the curve (AUC) of 0.9807 and a macro-averaged AUC of 0.9778 on the test dataset. Gradient-weighted Class Activation Mapping (Grad-CAM) visually demonstrated MsfNet's ability to distinguish and highlight abnormal areas more effectively than ResNet50. The t-distributed stochastic neighbor embedding (t-SNE) plot indicates that our model can efficiently extract critical features from images, reducing the impact of noise and redundant information. The results suggest that MsfNet offers accurate ISUP grading of ccRCC in digital images, emphasizing the potential of AI-assisted histopathological systems in clinical practice.
Funding: Macao Polytechnic University Grants (RP/FCSD-01/2022, RP/FCA-05/2022) and the Science and Technology Development Fund of Macao (0105/2022/A).
Abstract: Background: Deep convolutional neural networks have garnered considerable attention in numerous machine learning applications, particularly in visual recognition tasks such as image and video analysis. There is growing interest in applying this technology to diverse applications in medical image analysis. Automated three-dimensional breast ultrasound is a vital tool for detecting breast cancer, and computer-assisted diagnosis software developed with deep learning can effectively assist radiologists in diagnosis. However, the network model is prone to overfitting during training owing to challenges such as insufficient training data. This study attempts to solve the problem caused by small datasets and to improve model detection performance. Methods: We propose a breast cancer detection framework based on deep learning, combining a transfer learning method based on cross-organ cancer detection with a contrastive learning method based on the Breast Imaging Reporting and Data System (BI-RADS). Results: When cross-organ transfer learning and BI-RADS-based contrastive learning were used, the average sensitivity of the model increased by a maximum of 16.05%. Conclusion: Our experiments demonstrate that the parameters and experience of cross-organ cancer detection can be mutually referenced, and that the BI-RADS-based contrastive learning method can improve the detection performance of the model.
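As a rough illustration of the contrastive learning idea, below is a minimal NT-Xent-style loss that pulls a positive pair of embeddings together (for example, two findings sharing a BI-RADS category, under our assumption) and pushes negatives apart. This is a generic contrastive objective, not the paper's exact loss:

```python
import numpy as np

def nt_xent(z_i, z_j, negatives, tau=0.5):
    # Cosine similarity between embeddings
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Positive-pair similarity against all negatives, temperature-scaled
    pos = np.exp(cos(z_i, z_j) / tau)
    neg = sum(np.exp(cos(z_i, n) / tau) for n in negatives)
    return float(-np.log(pos / (pos + neg)))
```

The loss is small when the positive pair is closely aligned relative to the negatives, and grows as a negative becomes more similar to the anchor than the positive is.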
Abstract: Purpose - Advances in deep learning (DL) models have demonstrated significant performance in accurate pancreatic tumor segmentation and classification. Design/methodology/approach - The presented model involves several stages of operation, namely preprocessing, image segmentation, feature extraction, and image classification. First, a bilateral filtering (BF) technique is applied during preprocessing to eradicate the noise present in the CT pancreatic image. Next, the non-interactive GrabCut (NIGC) algorithm is applied for image segmentation. Subsequently, the residual network 152 (ResNet152) model is utilized as a feature extractor to produce a valuable set of feature vectors. Finally, a red deer optimization algorithm (RDA)-tuned backpropagation neural network (BPNN), called the RDA-BPNN model, is employed as the classifier to determine the existence of a pancreatic tumor. Findings - The experimental results are validated in terms of different performance measures, and a detailed comparative analysis confirmed the superiority of the RDA-BPNN model, with a sensitivity of 98.54%, specificity of 98.46%, accuracy of 98.51%, and F-score of 98.23%. Originality/value - The study also surveys several novel automated deep learning-based approaches used by researchers, assesses the performance of the RDA-BPNN model on a benchmark dataset, and analyzes the results in terms of several measures.
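The classifier in this pipeline is a backpropagation neural network (BPNN). As a sketch of plain backpropagation only (without the RDA hyperparameter tuning the abstract describes), a one-hidden-layer network trained by gradient descent might look like this; the class and variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyBPNN:
    """One-hidden-layer backpropagation network with a single sigmoid output."""

    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)
        return sigmoid(self.h @ self.W2)

    def step(self, X, y, lr=0.5):
        out = self.forward(X)                        # forward pass
        err = out - y                                # error on squared loss
        d2 = err * out * (1 - out)                   # backprop through output sigmoid
        d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)  # backprop through hidden layer
        self.W2 -= lr * self.h.T @ d2 / len(X)       # gradient descent updates
        self.W1 -= lr * X.T @ d1 / len(X)
        return float(np.mean(err ** 2))              # loss before the update
```

In the paper's setting, the weights (or hyperparameters) of such a network would additionally be tuned by the red deer optimization algorithm rather than initialized randomly as here.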
Abstract: BACKGROUND: The risk of gastric cancer increases in patients with Helicobacter pylori-associated chronic atrophic gastritis (CAG). X-ray examination can evaluate the condition of the stomach and can be used for gastric cancer mass screening. However, the number of doctors skilled in interpreting X-ray examinations is decreasing owing to the diversification of inspections. AIM: To evaluate the effectiveness of stomach regions automatically estimated by a deep learning-based model for CAG detection. METHODS: We used 815 gastric X-ray images (GXIs) obtained from 815 subjects. The ground truth of this study was the diagnostic results of X-ray and endoscopic examinations. For part of the training GXIs, the stomach regions were manually annotated, and a model for automatic estimation of stomach regions was trained on them. For the remaining GXIs, the stomach regions were automatically estimated by this model. Finally, a model for automatic CAG detection was trained with all training GXIs. RESULTS: When the stomach regions were manually annotated for only 10 GXIs and 30 GXIs, the harmonic means of the sensitivity and specificity of CAG detection were 0.955±0.002 and 0.963±0.004, respectively. CONCLUSION: By estimating stomach regions automatically, our method contributes to reducing the workload of manual annotation while accurately detecting CAG.
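The 0.955 and 0.963 figures above are harmonic means of sensitivity and specificity. Computed from confusion-matrix counts, this metric is simply (function name illustrative):

```python
def harmonic_mean_se_sp(tp, fn, tn, fp):
    # Sensitivity: fraction of actual positives correctly detected
    sensitivity = tp / (tp + fn)
    # Specificity: fraction of actual negatives correctly rejected
    specificity = tn / (tn + fp)
    # Harmonic mean penalizes imbalance between the two rates
    return 2 * sensitivity * specificity / (sensitivity + specificity)
```

Unlike the arithmetic mean, the harmonic mean stays low unless both rates are high, so it rewards balanced detection.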
Funding: Supported by Sichuan Science and Technology Department Key R&D Projects, No. 2019YFS0257, and Chengdu Technological Innovation R&D Projects, No. 2018-YFYF-00033-GX.
Abstract: Esophageal cancer poses diagnostic, therapeutic, and economic burdens in high-risk regions. Artificial intelligence (AI) has been developed for diagnosis and outcome prediction using various features, including clinicopathologic, radiologic, and genetic variables, and can achieve inspiring results. One of the most recent tasks of AI is to use state-of-the-art deep learning techniques to detect both early esophageal squamous cell carcinoma and esophageal adenocarcinoma in Barrett's esophagus. In this review, we aim to provide a comprehensive overview of the ways in which AI may help physicians diagnose advanced cancer and make clinical decisions based on predicted outcomes, and of how endoscopic images can be combined with AI to detect precancerous lesions or early cancer. Pertinent studies conducted in the past two years have surged in number, with large datasets and external validation from multiple centers, and have in part achieved intriguing real-time results approaching expert-level performance. Improved pre-trained computer-aided diagnosis algorithms in future studies, with larger training and external validation datasets and aimed at real-time video processing, are imperative to achieve diagnostic efficacy similar or even superior to that of experienced endoscopists. Meanwhile, supervised randomized controlled trials in real clinical practice are essential for a solid conclusion that meets patient-centered expectations. Notably, ethical and legal issues regarding the black-box nature of computer algorithms should be addressed, for both clinicians and regulators.