Funding: Funded through Researchers Supporting Project Number (RSPD2024R996), King Saud University, Riyadh, Saudi Arabia.
Abstract: Breast cancer detection heavily relies on medical imaging, particularly ultrasound, for early diagnosis and effective treatment. This research addresses the challenges associated with computer-aided diagnosis (CAD) of breast cancer from ultrasound images. The primary challenge is accurately distinguishing between malignant and benign tumors, complicated by factors such as speckle noise, variable image quality, and the need for precise segmentation and classification. The main objective of the paper is to develop an advanced methodology for breast ultrasound image classification, focusing on speckle noise reduction, precise segmentation, feature extraction, and machine learning-based classification. A unique approach is introduced that combines Enhanced Speckle Reduced Anisotropic Diffusion (SRAD) filters for speckle noise reduction, U-Net-based segmentation, Genetic Algorithm (GA)-based feature selection, and Random Forest and Bagging Tree classifiers, resulting in a novel and efficient model. Rigorous experiments were performed to test and validate the hybrid model; the results show that it achieved an accuracy of 99.9%, outperforming other existing techniques while also significantly reducing computational time. This enhanced accuracy, along with improved sensitivity and specificity, makes the proposed hybrid model a valuable addition to CAD systems for breast cancer diagnosis, ultimately enhancing diagnostic accuracy in clinical applications.
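To make the feature-selection and classification stage concrete, here is a minimal Python sketch of GA-driven feature selection feeding Random Forest and Bagging Tree classifiers. It assumes feature vectors have already been extracted from the SRAD-filtered, U-Net-segmented lesions; the population size, generations, mutation rate, and synthetic data are illustrative choices, not the authors' configuration.

```python
# Minimal sketch of GA feature selection + ensemble classification.
# Synthetic stand-in features; real inputs would be texture/shape descriptors
# of the segmented lesion regions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def ga_select_features(X, y, pop_size=12, generations=10, mutation_rate=0.05):
    """Evolve binary masks over feature columns; fitness = CV accuracy of a Random Forest."""
    n_features = X.shape[1]
    population = rng.integers(0, 2, size=(pop_size, n_features))

    def fitness(mask):
        if mask.sum() == 0:
            return 0.0
        clf = RandomForestClassifier(n_estimators=30, random_state=0)
        return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(m) for m in population])
        # Keep the better half as parents, refill by crossover + mutation.
        parents = population[np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_features)
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_features) < mutation_rate
            child[flip] ^= 1
            children.append(child)
        population = np.vstack([parents, children])
    return population[np.argmax([fitness(m) for m in population])].astype(bool)

# Toy data where only a few columns carry signal, so selection is meaningful.
X = rng.normal(size=(200, 24))
y = (X[:, 0] + X[:, 3] - X[:, 7] > 0).astype(int)
mask = ga_select_features(X, y)
for name, clf in [("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("Bagging Tree", BaggingClassifier(n_estimators=100, random_state=0))]:
    acc = cross_val_score(clf, X[:, mask], y, cv=5).mean()
    print(f"{name}: CV accuracy = {acc:.3f}")
```

In practice the fitness function would likely also penalize the number of selected features to keep the final model compact.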
Funding: Taishan Young Scholars Program of Shandong Province; Key Development Program for Basic Research of Shandong Province (ZR2020ZD44).
Abstract: Universal lesion detection (ULD) methods for computed tomography (CT) images play a vital role in modern clinical medicine and intelligent automation. It is well known that single 2D CT slices lack the spatial-temporal characteristics and contextual information of 3D CT blocks. However, 3D CT blocks require significantly more hardware resources during the learning phase. Therefore, efficiently exploiting the temporal correlation and spatial-temporal features of 2D CT slices is crucial for ULD tasks. In this paper, we propose a ULD network with enhanced temporal correlation for this purpose, named TCE-Net. The designed TCE module enriches the discriminative feature representation of multiple sequential CT slices. In addition, we employ multi-scale feature maps to facilitate the localization and detection of lesions of various sizes. Extensive experiments conducted on the DeepLesion benchmark demonstrate that this method achieves 66.84% and 78.18% for FS@0.5 and FS@1.0, respectively, outperforming the compared state-of-the-art methods.
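For readers unfamiliar with the FS@k notation, the snippet below shows one way to compute sensitivity at a fixed budget of false positives per image from a list of scored detections. The data and the matching convention (at most one detection counted per ground-truth lesion) are simplifying assumptions for illustration, not the paper's evaluation code.

```python
# Sensitivity at a fixed number of false positives per image (FS@0.5 / FS@1.0 style).
import numpy as np

def sensitivity_at_fp(all_scores, all_is_tp, n_lesions, n_images, fp_per_image):
    """all_scores: confidence of every detection; all_is_tp: 1 if it matched a lesion."""
    order = np.argsort(-np.asarray(all_scores))
    is_tp = np.asarray(all_is_tp)[order]
    tp_cum = np.cumsum(is_tp)
    fp_cum = np.cumsum(1 - is_tp)
    # Highest-recall operating point whose FP rate stays within the budget.
    within = np.where(fp_cum / n_images <= fp_per_image)[0]
    return tp_cum[within[-1]] / n_lesions if len(within) else 0.0

# Toy example: 6 detections over 4 images containing 5 ground-truth lesions.
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.40]
is_tp  = [1,    0,    1,    0,    0,    1]
for budget in (0.5, 1.0):
    s = sensitivity_at_fp(scores, is_tp, n_lesions=5, n_images=4, fp_per_image=budget)
    print(f"Sensitivity @ {budget} FPs/image: {s:.2f}")
```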
Abstract: Objective: We propose a solution that is backed by cloud computing, combines a series of computer-vision neural networks, is capable of detecting, highlighting, and locating breast lesions in a live ultrasound video feed, provides BI-RADS categorizations, and has reliable sensitivity and specificity. Multiple deep-learning models were trained on more than 300,000 breast ultrasound images to achieve object detection and region-of-interest classification. The main objective of this study was to determine whether the performance of our AI-powered solution was comparable to that of ultrasound radiologists. Methods: The noninferiority evaluation was conducted by comparing, for the same screened women, the examination results of our AI-powered solution and of ultrasound radiologists with over 10 years of experience. The study lasted one and a half years and was carried out in the Duanzhou District Women and Children's Hospital, Zhaoqing, China. A total of 1,133 women between 20 and 70 years old were selected through convenience sampling. Results: The accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 93.03%, 94.90%, 90.71%, 92.68%, and 93.48%, respectively. The area under the curve (AUC) for all positives was 0.91569 and the AUC for all negatives was 0.90461. The comparison indicated that the overall performance of the AI system was comparable to that of ultrasound radiologists. Conclusion: This innovative AI-powered ultrasound solution is cost-effective and user-friendly, and could be applied to mass breast cancer screening.
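All of the reported screening metrics derive from the 2x2 confusion matrix; the small helper below shows those relationships, using made-up counts rather than the study's data.

```python
# Screening metrics from confusion-matrix counts (illustrative counts only).
def screening_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # recall on positives
        "specificity": tn / (tn + fp),
        "PPV":         tp / (tp + fp),   # positive predictive value
        "NPV":         tn / (tn + fn),   # negative predictive value
    }

for name, value in screening_metrics(tp=90, fp=10, tn=85, fn=15).items():
    print(f"{name}: {value:.4f}")
```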
Abstract: AIM: To explore the feasibility of dual camera capsule (DCC) small-bowel (SB) imaging and to examine if two cameras complement each other to detect more SB lesions. METHODS: Forty-one eligible, consecutive patients underwent DCC SB imaging. Two experienced investigators examined the videos and compared the total number of detected lesions to the number of lesions detected by each camera separately. Examination tolerability was assessed using a questionnaire. RESULTS: One patient was excluded. DCC cameras detected 68 positive findings (POS) in 20 (50%) cases. Fifty of them were detected by the "yellow" camera, 48 by the "green", and 28 by both cameras; 44% (n=22) of the "yellow" camera's POS were not detected by the "green" camera, and 42% (n=20) of the "green" camera's POS were not detected by the "yellow" camera. In two cases, only one camera detected significant findings. All participants had 216 findings of unknown significance (FUS). The "yellow", "green", and both cameras detected 171, 161, and 116 FUS, respectively; 32% (n=55) of the "yellow" camera's FUS were not detected by the "green" camera, and 28% (n=45) of the "green" camera's FUS were not detected by the "yellow" camera. There were no complications related to the examination, and 97.6% of the patients would repeat the examination, if necessary. CONCLUSION: DCC SB examination is feasible and well tolerated. The two cameras complement each other to detect more SB lesions.
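The camera-complementarity percentages above are simple set arithmetic over the findings each camera detected; a toy illustration of that bookkeeping (with invented finding IDs, not the study data) is sketched below.

```python
# Per-camera overlap bookkeeping with set arithmetic (synthetic finding IDs).
yellow = {"f1", "f2", "f3", "f4", "f5", "f6"}   # findings seen by the yellow camera
green  = {"f4", "f5", "f6", "f7", "f8"}         # findings seen by the green camera

both = yellow & green
only_yellow = yellow - green
only_green = green - yellow

print(f"total unique findings: {len(yellow | green)}")
print(f"seen by both cameras: {len(both)}")
print(f"share of yellow-camera findings missed by green: {len(only_yellow) / len(yellow):.0%}")
print(f"share of green-camera findings missed by yellow: {len(only_green) / len(green):.0%}")
```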
Funding: Thanks to the Research Training Program (RTP) of the University of Newcastle, Australia, and PGRSS, UON for providing funding. The APC of CMC will be paid by PGRSS, UON funding.
Abstract: The diagnosis of multiple sclerosis (MS) is based on accurate detection of lesions on magnetic resonance imaging (MRI), which also provides essential ongoing information about the progression and status of the disease. Manual detection of lesions is very time consuming and lacks accuracy, and most lesions are difficult to detect manually, especially within the grey matter. This paper proposes a novel and fully automated convolutional neural network (CNN) approach to segment lesions. The proposed system consists of two 2D patch-wise CNNs which segment lesions more accurately and robustly: the first CNN segments lesions, and the second reduces false positives to increase efficiency. The system consists of two parallel convolutional pathways, where one pathway is concatenated to the second and, at the end, the fully connected layer is replaced with a CNN. Three routine MRI sequences, T1-w, T2-w, and FLAIR, are used as input to the CNN; FLAIR is used for segmentation because most lesions appear as bright regions on it, and T1-w and T2-w are used to reduce MRI artifacts. We evaluated the proposed system on two publicly available challenge datasets from MICCAI and ISBI. Quantitative and qualitative evaluation was performed with metrics such as false positive rate (FPR), true positive rate (TPR), and Dice similarity, and compared to current state-of-the-art methods. The proposed method shows consistently higher precision and sensitivity than other methods and can accurately and robustly segment MS lesions from images produced by different MRI scanners, with a precision of up to 90%.
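As a rough illustration of the two-pathway, patch-wise idea, here is a small PyTorch sketch with a fine-detail pathway and a wider-context pathway whose concatenated features are reduced by 1x1 convolutions instead of a fully connected head. The layer widths, kernel sizes, and 32x32 patch size are assumptions for demonstration, not the paper's exact architecture.

```python
# Two-pathway patch-wise CNN sketch over a 3-channel (FLAIR, T1-w, T2-w) patch.
import torch
import torch.nn as nn

class TwoPathwayPatchCNN(nn.Module):
    def __init__(self, in_channels=3):
        super().__init__()
        def pathway(k):  # small conv stack; kernel size differs per pathway
            return nn.Sequential(
                nn.Conv2d(in_channels, 16, k, padding=k // 2), nn.ReLU(),
                nn.Conv2d(16, 32, k, padding=k // 2), nn.ReLU(),
            )
        self.local_path = pathway(3)    # fine detail
        self.context_path = pathway(7)  # wider context
        # "Fully connected layer replaced with CNN": 1x1 convs yield a
        # per-pixel lesion probability map for the patch.
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        features = torch.cat([self.local_path(x), self.context_path(x)], dim=1)
        return torch.sigmoid(self.head(features))

# One forward pass on a batch of random 32x32 patches.
model = TwoPathwayPatchCNN()
patches = torch.randn(8, 3, 32, 32)
print(model(patches).shape)  # torch.Size([8, 1, 32, 32])
```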
Abstract: Objective We developed a universal lesion detector (ULDor) which showed good performance in in-lab experiments. This study aims to evaluate its performance and ability to generalize in a clinical setting via both external and internal validation. Methods The ULDor system consists of a convolutional neural network (CNN) trained on around 80 K lesion annotations from about 12 K CT studies in the DeepLesion dataset and 5 other public organ-specific datasets. The test sets comprised two parts: an external validation dataset of 164 sets of non-contrasted chest and upper abdomen CT scans from a comprehensive hospital, and an internal validation dataset of 187 sets of low-dose helical CT scans from the National Lung Screening Trial (NLST). We ran the model on the two test sets to output lesion detections. Three board-certified radiologists read the CT scans and verified the detection results of ULDor. We used positive predictive value (PPV) and sensitivity to evaluate the performance of the model in detecting space-occupying lesions at all extra-pulmonary organs visualized on CT images, including the liver, kidney, pancreas, adrenal glands, spleen, esophagus, thyroid, lymph nodes, body wall, thoracic spine, etc. Results In the external validation, the lesion-level PPV and sensitivity of the model were 57.9% and 67.0%, respectively. On average, the model detected 2.1 findings per set, of which 0.9 were false positives. ULDor worked well for detecting liver lesions, with a PPV of 78.9% and a sensitivity of 92.7%, followed by kidney, with a PPV of 70.0% and a sensitivity of 58.3%. In the internal validation with the NLST test set, ULDor obtained a PPV of 75.3% and a sensitivity of 52.0% despite the relatively high noise level of soft tissue on the images. Conclusions The performance tests of ULDor with external real-world data have shown its high effectiveness in multi-purpose detection of lesions in certain organs. With further optimisation and iterative upgrades, ULDor may be well suited for extensive application to external data.
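Lesion-level PPV and sensitivity per organ reduce to a simple tally over radiologist-verified detections and missed lesions; a hypothetical example of that bookkeeping is shown below (the records are synthetic, not the validation data).

```python
# Per-organ lesion-level PPV and sensitivity from verified detection records.
from collections import defaultdict

# Each detection: (organ, verified_true); each missed ground-truth lesion: (organ,)
detections = [("liver", True), ("liver", True), ("liver", False),
              ("kidney", True), ("kidney", False), ("kidney", False)]
missed_lesions = [("liver",), ("kidney",)]

tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
for organ, is_true in detections:
    (tp if is_true else fp)[organ] += 1
for (organ,) in missed_lesions:
    fn[organ] += 1

for organ in sorted(set(tp) | set(fp) | set(fn)):
    ppv = tp[organ] / (tp[organ] + fp[organ])
    sens = tp[organ] / (tp[organ] + fn[organ])
    print(f"{organ}: PPV={ppv:.1%}, sensitivity={sens:.1%}")
```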
Abstract: BACKGROUND Limited data currently exist on the clinical utility of Artificial Intelligence Assisted Colonoscopy (AIAC) outside of clinical trials. AIM To evaluate the impact of AIAC on key markers of colonoscopy quality compared to conventional colonoscopy (CC). METHODS This single-centre retrospective observational cohort study included all patients undergoing colonoscopy at a secondary centre in Brisbane, Australia. CC outcomes between October 2021 and October 2022 were compared with AIAC outcomes after the introduction of the Olympus Endo-AID module from October 2022 to January 2023. Endoscopists who conducted over 50 procedures before and after the AIAC introduction were included. Procedures for surveillance of inflammatory bowel disease were excluded. Patient demographics, proceduralist specialisation, indication for colonoscopy, and colonoscopy quality metrics were collected. Adenoma detection rate (ADR) and sessile serrated lesion detection rate (SSLDR) were calculated for both AIAC and CC. RESULTS The study included 746 AIAC procedures and 2162 CC procedures performed by seven endoscopists. Baseline patient demographics were similar, with a median age of 60 years and a slight female predominance (52.1%). Procedure indications, bowel preparation quality, and caecal intubation rates were comparable between groups. AIAC had a slightly longer withdrawal time than CC, but the difference was not statistically significant. The introduction of AIAC did not significantly change the ADR (52.1% for AIAC vs 52.6% for CC, P=0.91) or the SSLDR (17.4% for AIAC vs 18.1% for CC, P=0.44). CONCLUSION The implementation of AIAC failed to improve key markers of colonoscopy quality, including ADR, SSLDR and withdrawal time. Further research is required to assess the utility and cost-efficiency of AIAC for high-performing endoscopists.
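Comparisons such as ADR between the two arms are two-proportion tests; a minimal sketch of that kind of test, using a chi-square test on a 2x2 table, is given below. The counts are illustrative placeholders, not the study's data, and the study's exact statistical test is not restated here.

```python
# Two-proportion comparison of detection rates via a chi-square test (toy counts).
from scipy.stats import chi2_contingency

adenoma_detected = [390, 1140]   # procedures with >=1 adenoma: AIAC, CC
no_adenoma       = [360, 1020]   # procedures without:          AIAC, CC

chi2, p_value, dof, _ = chi2_contingency([adenoma_detected, no_adenoma])
adr_aiac = adenoma_detected[0] / (adenoma_detected[0] + no_adenoma[0])
adr_cc = adenoma_detected[1] / (adenoma_detected[1] + no_adenoma[1])
print(f"ADR (AIAC) = {adr_aiac:.1%}, ADR (CC) = {adr_cc:.1%}, P = {p_value:.2f}")
```

For small cell counts, an exact test (e.g., Fisher's) would typically be preferred over the chi-square approximation.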
Abstract: Recently, computer vision (CV) based disease diagnosis models have been utilized in various areas of healthcare. At the same time, deep learning (DL) and machine learning (ML) models play a vital role in the healthcare sector for the effective recognition of diseases from medical imaging tools. This study develops a novel computer vision with optimal machine learning enabled skin lesion detection and classification (CVOML-SLDC) model. The goal of the CVOML-SLDC model is to determine the appropriate class labels for test dermoscopic images. First, the CVOML-SLDC model applies a Gaussian filtering (GF) approach to pre-process the input images, followed by graph cut segmentation. Next, the firefly algorithm (FFA) with an EfficientNet-based feature extraction module is applied for effective derivation of feature vectors. Moreover, a naïve Bayes (NB) classifier is utilized for skin lesion detection and classification. The application of the FFA helps to effectively adjust the hyperparameter values of the EfficientNet model. The experimental analysis of the CVOML-SLDC model is performed using a benchmark skin lesion dataset. A detailed comparative study of the CVOML-SLDC model reported improved outcomes over recent approaches, with a maximum accuracy of 94.83%.
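A pared-down sketch of the pre-processing and classification stages is shown below, with random stand-ins for the dermoscopic image and the EfficientNet feature vectors; the graph cut segmentation and firefly-based hyperparameter tuning are omitted, and the filter sigma and feature dimensionality are arbitrary assumptions.

```python
# Gaussian filtering + naive Bayes classification sketch with stand-in data.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1) Gaussian filtering of a dermoscopic image (random stand-in here).
image = rng.random((128, 128))
smoothed = gaussian_filter(image, sigma=1.5)

# 2) Stand-in feature vectors (in the paper: EfficientNet features of the
#    graph-cut-segmented lesion region).
X = rng.normal(size=(300, 64))
y = (X[:, :2].sum(axis=1) > 0).astype(int)   # synthetic, learnable labels

# 3) Naive Bayes classification of the lesion feature vectors.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.3f}")
```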
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072135, 61672181).
Abstract: Lesion detection in computed tomography (CT) images is a challenging task in the field of computer-aided diagnosis, and an important issue is locating the area of a lesion accurately. As a branch of convolutional neural networks (CNNs), 3D Context-Enhanced (3DCE) frameworks are designed to detect lesions on CT scans. The false positives (FPs) detected by 3DCE frameworks are usually caused by inaccurate region proposals, which also slow down inference. To solve these problems, a new method is proposed: a dimension-decomposition region proposal network is integrated into the 3DCE framework to improve localization accuracy in lesion detection. Without the restriction of anchors on ratios and scales, anchors are decomposed into independent "anchor strings". Anchor segments are dynamically combined according to probability, and anchor strings of different lengths dynamically compose bounding boxes. Experiments show that the accurate region proposals generated by our model improve sensitivity at given false-positive rates and require less inference time than current methods.
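As a purely conceptual illustration of decomposing 2D anchors into per-axis "anchor strings" and recombining them into scored box proposals, consider the toy sketch below; it is not the paper's region proposal network, and the extents and scores are invented stand-ins for learned quantities.

```python
# Toy recombination of per-axis "anchor strings" into scored box proposals.
import itertools

# Candidate 1-D extents (start, length) along x and y for one feature location,
# each with a score standing in for a learned probability.
x_strings = [((10, 24), 0.9), ((12, 48), 0.6), ((8, 96), 0.3)]
y_strings = [((20, 16), 0.8), ((18, 40), 0.7)]

boxes = []
for ((x0, w), sx), ((y0, h), sy) in itertools.product(x_strings, y_strings):
    boxes.append((x0, y0, x0 + w, y0 + h, sx * sy))  # combine independent axes

# Keep the highest-scoring combinations as region proposals.
boxes.sort(key=lambda b: b[-1], reverse=True)
for x1, y1, x2, y2, score in boxes[:3]:
    print(f"proposal ({x1},{y1})-({x2},{y2}) score={score:.2f}")
```

The point of the decomposition is that the set of box shapes is not fixed in advance: per-axis extents and their scores are predicted independently and only combined at proposal time.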