Abstract:
AIM: To investigate the association between visceral fat accumulation and the risk of small bowel angioectasia.
METHODS: We retrospectively investigated 198 consecutive patients who underwent both capsule endoscopy and CT for investigation of obscure gastrointestinal bleeding (OGIB) from January 2009 to September 2013. The visceral fat area (VFA) and subcutaneous fat area were measured by CT, and information on comorbidities, body mass index, and medications was obtained from the medical records. Logistic regression analysis was used to evaluate associations.
RESULTS: Capsule endoscopy revealed small bowel angioectasia in 18/198 (9.1%) patients with OGIB. Compared with patients without small bowel angioectasia, those with small bowel angioectasia had a significantly higher VFA (96 ± 76.0 cm² vs 63.4 ± 51.5 cm², P = 0.016) and a higher prevalence of liver cirrhosis (61% vs 22%, P < 0.001). The proportion of patients with chronic renal failure tended to be higher in those with small bowel angioectasia, although the difference was not statistically significant (22% vs 9%, P = 0.11). There were no significant differences in subcutaneous fat area or waist circumference. The prevalence of small bowel angioectasia increased progressively with VFA. Multivariate analysis showed that VFA [odds ratio (OR) per 10-cm² increment = 1.1; 95% confidence interval (CI): 1.02-1.19; P = 0.021] and liver cirrhosis (OR = 6.1, 95%CI: 2.2-18.5; P < 0.001) were significant risk factors for small bowel angioectasia.
CONCLUSION: VFA is positively associated with the prevalence of small bowel angioectasia; VFA and liver cirrhosis are independent risk factors for small bowel angioectasia in patients with OGIB.
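As a rough illustration of the multivariate model described above, the following is a minimal Python sketch using statsmodels on synthetic stand-in data. The variable names, the data-generating step, and all coefficients are illustrative assumptions, not values or data from the study; the only elements taken from the abstract are the cohort size and the "OR per 10-cm² increment" convention.

```python
# Minimal sketch of a multivariate logistic regression of the kind reported
# above, on synthetic data. Column names (vfa_cm2, cirrhosis, angioectasia)
# are hypothetical, not from the paper.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 198  # cohort size reported in the abstract

df = pd.DataFrame({
    "vfa_cm2": rng.gamma(shape=4.0, scale=17.0, size=n),  # synthetic VFA values
    "cirrhosis": rng.binomial(1, 0.25, size=n),           # synthetic comorbidity flag
})
# Synthetic outcome, loosely shaped like the reported associations.
logit = -4.0 + 0.01 * df["vfa_cm2"] + 1.8 * df["cirrhosis"]
df["angioectasia"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Rescale VFA so one model unit equals 10 cm², matching the abstract's
# "OR per 10-cm² increment".
df["vfa_per_10cm2"] = df["vfa_cm2"] / 10.0

X = sm.add_constant(df[["vfa_per_10cm2", "cirrhosis"]])
model = sm.Logit(df["angioectasia"], X).fit(disp=0)

# Exponentiated coefficients are the odds ratios; exponentiating the
# coefficient confidence limits gives the OR confidence intervals.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```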
Abstract:
The proposed deep learning algorithm will be integrated as a binary classifier within a multi-class classification tool that facilitates the automated detection of non-healthy deformities, anatomical landmarks, pathological findings, other anomalies, and normal cases in medical endoscopic images of the GI tract. Each binary classifier is trained to detect one specific non-healthy condition. The algorithm analyzed in the present work extends the detection capability of this tool by classifying GI tract image snapshots into two classes, haemorrhage and non-haemorrhage. The proposed algorithm is the result of a collaboration between interdisciplinary specialists in AI and data analysis, computer vision, and gastroenterologists from four university gastroenterology departments of Greek medical schools. The data consist of 195 videos (177 from non-healthy cases and 18 from healthy cases) captured with the PillCam® (Medtronic) device, originating from 195 patients diagnosed with different forms of angioectasia, haemorrhage, and other diseases at different sites of the gastrointestinal (GI) tract, mainly including diagnostically difficult cases. Our AI algorithm is based on a convolutional neural network (CNN) trained on images annotated at the image level with a semantic tag indicating whether or not the image contains traces of angioectasia or haemorrhage. At least 22 CNN architectures were created and evaluated, some of them pre-trained by transfer learning on ImageNet data. All CNN variants were trained on a dataset balanced to 50% prevalence and evaluated on unseen data. On test data, the best results were obtained by the CNN architectures that do not use a transfer-learning backbone (see the sketches below). On a balanced dataset of non-healthy and healthy images drawn from 39 videos from different patients, the algorithm identified the correct diagnosis with 90% sensitivity, 92% specificity, 91.8% precision, 8% false positive rate (FPR), and 10% false negative rate (FNR). In addition, we compared the performance of our best CNN algorithm against our algorithm for the same task based on HSV colorimetric lesion features extracted from pixel-level annotations, with both algorithms trained and tested on the same data. The CNN trained on image-level annotations is 9% less sensitive and achieves 2.6% lower precision, 1.2% lower FPR, and 7% lower FNR than the HSV-filter algorithm built on pixel-level annotated training data.
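The abstract does not publish any of the 22 architectures, so the following Keras sketch only illustrates the two families it compares: a small CNN trained from scratch and a variant with a frozen ImageNet backbone. The input size, layer widths, and the choice of MobileNetV2 as backbone are all assumptions for the sake of a runnable example.

```python
# Generic sketch of a binary haemorrhage/non-haemorrhage frame classifier
# in Keras, showing the two model families compared in the abstract.
# All layer sizes and the backbone choice are illustrative, not the paper's.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (224, 224, 3)  # assumed input size for capsule-endoscopy frames

def scratch_cnn():
    """Small CNN trained from scratch (the family reported to perform best)."""
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(haemorrhage)
    ])

def transfer_cnn():
    """Variant with a frozen ImageNet backbone, like the pre-trained architectures."""
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SHAPE, include_top=False, weights="imagenet")
    base.trainable = False  # keep ImageNet features fixed
    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])

model = scratch_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
# train_ds would be balanced to 50% prevalence, as described in the abstract.
```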
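The five reported figures all derive from a single confusion matrix, and two of them are redundant with the others: FNR = 1 − sensitivity and FPR = 1 − specificity, which matches the reported 90%/10% and 92%/8% pairs. A small helper makes these relationships explicit; the counts in the usage line are illustrative, chosen only to reproduce the reported values on a balanced 1000-frame set.

```python
# Sensitivity, specificity, precision, FPR, and FNR from one confusion matrix.
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),  # recall on haemorrhage frames
        "specificity": tn / (tn + fp),  # recall on healthy frames
        "precision":   tp / (tp + fp),  # share of flagged frames truly haemorrhagic
        "fpr":         fp / (fp + tn),  # healthy frames wrongly flagged
        "fnr":         fn / (fn + tp),  # haemorrhage frames missed
    }

# Illustrative counts consistent with the reported test performance
# (90% sensitivity, 92% specificity, 91.8% precision, 8% FPR, 10% FNR):
print(binary_metrics(tp=450, fp=40, tn=460, fn=50))
```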
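The HSV baseline is likewise not specified in detail. A plausible sketch of an HSV colorimetric detector of this kind thresholds each frame in HSV space and flags it when enough pixels fall in a blood-red band; the threshold ranges and the pixel-ratio cutoff below are placeholders, not the paper's values.

```python
# Hedged sketch of an HSV colorimetric haemorrhage detector: select lesion
# pixels by thresholding in HSV space, flag the frame when enough pixels match.
import cv2
import numpy as np

# Placeholder HSV ranges for fresh-blood red hues (OpenCV hue spans [0, 180],
# so red wraps around both ends of the hue axis).
LOWER_RED_1 = np.array([0, 120, 70]);   UPPER_RED_1 = np.array([10, 255, 255])
LOWER_RED_2 = np.array([170, 120, 70]); UPPER_RED_2 = np.array([180, 255, 255])

def haemorrhage_frame(bgr_frame: np.ndarray, pixel_ratio: float = 0.01) -> bool:
    """Flag a frame as haemorrhagic if >= pixel_ratio of its pixels are red-band."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = (cv2.inRange(hsv, LOWER_RED_1, UPPER_RED_1)
            | cv2.inRange(hsv, LOWER_RED_2, UPPER_RED_2))  # 0/255 per pixel
    return mask.mean() / 255.0 >= pixel_ratio
```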