Journal articles
2 articles found
1. Scale‐wise interaction fusion and knowledge distillation network for aerial scene recognition
Authors: Hailong Ning, Tao Lei, Mengyuan An, Hao Sun, Zhanxuan Hu, Asoke K. Nandi. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, Issue 4, pp. 1178-1190 (13 pages).
Abstract: Aerial scene recognition (ASR) has attracted great attention due to its increasingly essential applications. Most ASR methods adopt a multi-scale architecture because both global and local features play important roles in ASR. However, existing multi-scale methods neglect the effective interactions among different scales and spatial locations when fusing global and local features, limiting their ability to handle the large scale variation and complex backgrounds of aerial scene images. In addition, existing methods may generalise poorly because of their millions of to-be-learnt parameters and the inconsistent predictions between global and local features. To tackle these problems, this study proposes a scale-wise interaction fusion and knowledge distillation (SIF-KD) network for learning robust, discriminative features that are scale-invariant and background-independent. The main highlights of this study include two aspects. On the one hand, a global-local feature collaborative learning scheme is devised to extract scale-invariant features and so tackle the large scale variation in aerial scene images; specifically, a plug-and-play multi-scale context attention fusion module collaboratively fuses the context information between global and local features. On the other hand, a scale-wise knowledge distillation scheme produces more consistent predictions by distilling the predictive distribution between different scales during training. Comprehensive experiments show the proposed SIF-KD network achieves the best overall accuracy, at 99.68%, 98.74% and 95.47% on the UCM, AID and NWPU-RESISC45 datasets, respectively, compared with state-of-the-art methods.
Keywords: deep learning, image analysis, image classification, information fusion
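The scale-wise distillation scheme described in the abstract penalises disagreement between the predictive distributions of different scale branches during training. As a rough illustration of that idea (not the authors' SIF-KD implementation), the sketch below computes a symmetric temperature-scaled KL-divergence loss between the logits of an assumed global branch and local branch; the function name, tensor shapes and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def scale_wise_distillation_loss(global_logits: torch.Tensor,
                                 local_logits: torch.Tensor,
                                 temperature: float = 4.0) -> torch.Tensor:
    """Symmetric KL distillation between two scale branches (a sketch).

    Each branch serves in turn as the teacher for the other, pushing
    both toward consistent predictions. The temperature softens the
    distributions, as in standard knowledge distillation.
    """
    t = temperature
    # Soft targets (probabilities) and log-probabilities at temperature t.
    p_global = F.softmax(global_logits / t, dim=1)
    p_local = F.softmax(local_logits / t, dim=1)
    log_p_global = F.log_softmax(global_logits / t, dim=1)
    log_p_local = F.log_softmax(local_logits / t, dim=1)
    # KL(global || local) + KL(local || global); the t^2 factor keeps
    # gradient magnitudes comparable to the hard-label loss.
    kl_gl = F.kl_div(log_p_local, p_global, reduction="batchmean")
    kl_lg = F.kl_div(log_p_global, p_local, reduction="batchmean")
    return (kl_gl + kl_lg) * (t * t)

# Usage: a batch of 8 images, 45 scene classes as in NWPU-RESISC45.
logits_g = torch.randn(8, 45)
logits_l = torch.randn(8, 45)
loss = scale_wise_distillation_loss(logits_g, logits_l)
```

In practice such a consistency term would be added, with a weighting coefficient, to the usual cross-entropy losses of the individual branches.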
2. An attention-based cascade R-CNN model for sternum fracture detection in X-ray images (cited by 2)
Authors: Yang Jia, Haijuan Wang, Weiguang Chen, Yagang Wang, Bin Yang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2022, Issue 4, pp. 658-670 (13 pages).
Abstract: Fracture is one of the most common and unexpected traumas. If not treated in time, it may cause serious consequences such as joint stiffness, traumatic arthritis, and nerve injury. Using computer vision technology to detect fractures can reduce the workload and misdiagnosis of fractures and also improve detection speed. However, problems remain in sternum fracture detection, such as the low detection rate of small and occult fractures. In this work, the authors constructed a dataset of 1227 labelled X-ray images for sternum fracture detection and designed a fully automatic fracture detection model based on a deep convolutional neural network (CNN). Cascade R-CNN, an attention mechanism, and atrous convolution are used to optimise the detection of small fractures in large X-ray images with big local variations. Compared with YOLOv5, plain cascade R-CNN, and other state-of-the-art models, the network based on cascade and attention mechanisms detects better, reaching an mAP of 0.71, much higher than the YOLOv5 model (mAP = 0.44) and cascade R-CNN (mAP = 0.55).
Keywords: attention mechanism, cascade R-CNN, fracture detection, X-ray image
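The abstract names three ingredients for detecting small fractures in large X-ray images: cascade R-CNN, an attention mechanism, and atrous convolution. The sketch below illustrates the latter two in isolation (the cascade R-CNN detector itself is omitted): a dilated 3x3 convolution that enlarges the receptive field without shrinking the feature map, followed by a squeeze-and-excitation-style channel attention gate. Channel counts, dilation rate and reduction ratio are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class DilatedAttentionBlock(nn.Module):
    """Atrous convolution + channel attention (a generic sketch)."""

    def __init__(self, channels: int = 256, dilation: int = 2,
                 reduction: int = 16):
        super().__init__()
        # Atrous (dilated) convolution: dilation widens the receptive
        # field while padding keeps the spatial size unchanged.
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        # Squeeze-and-excitation-style channel attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.relu(self.bn(self.conv(x)))
        # Reweight channels so informative ones (e.g. fine fracture
        # lines) are emphasised before the detection head.
        return y * self.attn(y)

# Usage: a 256-channel backbone feature map from a large X-ray image.
feat = torch.randn(1, 256, 128, 128)
out = DilatedAttentionBlock()(feat)
assert out.shape == feat.shape
```

A block like this would typically sit between the backbone and the region-proposal stage of a two-stage detector.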