Journal Articles
332 articles found
1. Confusing Object Detection: A Survey
Authors: Kunkun Tong, Guchu Zou, Xin Tan, Jingyu Gong, Zhenyi Qi, Zhizhong Zhang, Yuan Xie, Lizhuang Ma. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 3421-3461 (41 pages)
Confusing object detection (COD), such as glass, mirrors, and camouflaged objects, represents a burgeoning visual detection task centered on pinpointing and distinguishing concealed targets within intricate backgrounds, leveraging deep learning methodologies. Despite garnering increasing attention in computer vision, the focus of most existing works leans toward formulating task-specific solutions rather than delving into in-depth analyses of methodological structures. As of now, there is a notable absence of a comprehensive systematic review that focuses on recently proposed deep learning-based models for these specific tasks. To fill this gap, our study presents a pioneering review that covers both the models and the publicly available benchmark datasets, while also identifying potential directions for future research in this field. Current datasets primarily focus on single confusing object detection at the image level, with some studies extending to video-level data. We conduct an in-depth analysis of deep learning architectures, revealing that the current state-of-the-art (SOTA) COD methods demonstrate promising performance in single object detection. We also compile and provide detailed descriptions of widely used datasets relevant to these detection tasks. Our endeavor extends to discussing the limitations observed in current methodologies, alongside proposed solutions aimed at enhancing detection accuracy. Additionally, we deliberate on relevant applications and outline future research trajectories, aiming to catalyze advancements in the field of glass, mirror, and camouflaged object detection.
Keywords: confusing object detection; mirror detection; glass detection; camouflaged object detection; deep learning
2. Enhancing Dense Small Object Detection in UAV Images Based on Hybrid Transformer (Cited by: 1)
Authors: Changfeng Feng, Chunping Wang, Dongdong Zhang, Renke Kou, Qiang Fu. Computers, Materials & Continua (SCIE, EI), 2024, No. 3, pp. 3993-4013 (21 pages)
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. First, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. The query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model for UAV image object detection.
Keywords: UAV images; transformer; dense small object detection
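As a rough illustration of the query-filter idea in this abstract, the sketch below drops near-duplicate decoder queries with ordinary IoU-based suppression, assuming the queries have already been decoded into boxes and scores. The function name and IoU threshold are illustrative assumptions; the paper's training-aware non-maximum suppression is more involved than plain NMS.

```python
# Hypothetical sketch: suppress near-duplicate decoder queries with NMS.
# Names and thresholds are illustrative, not the paper's implementation.
import torch
from torchvision.ops import nms

def filter_queries(boxes, scores, iou_thr=0.5):
    """boxes: (N, 4) xyxy predictions decoded from N queries; scores: (N,)."""
    keep = nms(boxes, scores, iou_thr)                 # indices of queries to keep
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[keep] = True
    return mask                                        # True = query survives filtering

boxes = torch.tensor([[0., 0., 10., 10.], [1., 1., 11., 11.], [50., 50., 60., 60.]])
scores = torch.tensor([0.9, 0.8, 0.7])
print(filter_queries(boxes, scores))                   # tensor([ True, False,  True])
```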
3. YOLO-MFD: Remote Sensing Image Object Detection with Multi-Scale Fusion Dynamic Head
Authors: Zhongyuan Zhang, Wenqiu Zhu. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 2547-2563 (17 pages)
Remote sensing imagery, acquired at high altitude, presents inherent challenges characterized by multiple scales, limited target areas, and intricate backgrounds. These traits often lead to increased missed and false detections when applying object recognition algorithms tailored for remote sensing imagery, and they contribute to inaccuracies in target localization and hinder precise target categorization. This paper addresses these challenges by proposing the YOLO-MFD model (Remote Sensing Image Object Detection with Multi-Scale Fusion Dynamic Head). Before presenting our method, we examine the prevalent issues in remote sensing imagery analysis, particularly the difficulty existing object recognition algorithms have in comprehensively capturing critical image features amidst varying scales and complex backgrounds. To resolve these issues, we introduce a novel approach. First, we propose a lightweight multi-scale module called CEF. This module significantly improves the model's ability to capture important image features by merging multi-scale feature information, effectively addressing the missed detections and false alarms common in remote sensing imagery. Second, an additional layer of small target detection heads is added, and a residual link is established with the higher-level feature extraction module in the backbone. This allows the model to incorporate shallower information, significantly improving the accuracy of target localization in remotely sensed images. Finally, a dynamic head attention mechanism is introduced, which allows the model to recognize shapes and targets of different sizes with greater flexibility and accuracy, significantly improving detection precision. Experimental results show that the YOLO-MFD model improves on the original YOLOv8 model by 6.3%, 3.5%, and 2.5% in Precision, mAP@0.5, and mAP@0.5:0.95, respectively. These results illustrate the clear advantages of the method.
Keywords: object detection; YOLOv8; multi-scale; attention mechanism; dynamic detection head
4. Enhanced Object Detection and Classification via Multi-Method Fusion
Authors: Muhammad Waqas Ahmed, Nouf Abdullah Almujally, Abdulwahab Alazeb, Asaad Algarni, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, No. 5, pp. 3315-3331 (17 pages)
Advances in machine vision systems have revolutionized applications such as autonomous driving, robotic navigation, and augmented reality. Despite substantial progress, challenges persist, including dynamic backgrounds, occlusion, and limited labeled data. To address these challenges, we introduce a comprehensive methodology to enhance image classification and object detection accuracy. The proposed approach integrates multiple methods in a complementary way. The process commences with the application of Gaussian filters to mitigate the impact of noise interference. The images are then segmented with Fuzzy C-Means segmentation, in parallel with saliency mapping techniques, to find the most prominent regions. Binary Robust Independent Elementary Features (BRIEF) are then extracted from the data derived from the saliency maps and segmented images. For precise object separation, Oriented FAST and Rotated BRIEF (ORB) algorithms are employed. Genetic Algorithms (GAs) are used to optimize the Random Forest classifier parameters, leading to improved performance. Our method stands out due to its comprehensive approach, adeptly addressing challenges such as changing backdrops, occlusion, and limited labeled data concurrently. A significant enhancement has been achieved by integrating Genetic Algorithms to precisely optimize parameters; this adjustment not only boosts the uniqueness of our system but also amplifies its overall efficacy. The proposed methodology has demonstrated notable classification accuracies of 90.9% and 89.0% on the challenging Corel-1k and MSRC datasets, respectively. Furthermore, detection accuracies of 87.2% and 86.6% have been attained. Although our method performed well on both datasets, it may face difficulties with real-world data, especially where datasets have highly complex backgrounds. Despite these limitations, GA integration for parameter optimization shows a notable strength in enhancing the overall adaptability and performance of our system.
Keywords: BRIEF features; saliency map; fuzzy c-means; object detection; object recognition
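A rough sketch of the classical portion of such a pipeline, assuming OpenCV and scikit-learn are available: Gaussian denoising, ORB/BRIEF-style descriptor extraction, and a Random Forest classifier. The fuzzy C-means, saliency-map, and genetic-algorithm stages are omitted, and all parameter values are assumptions, so this illustrates the workflow rather than the authors' implementation.

```python
# Illustrative pipeline fragment: denoise -> ORB descriptors -> Random Forest.
# Function and parameter choices are assumptions, not the paper's settings.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def image_descriptor(path, n_features=256):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = cv2.GaussianBlur(img, (5, 5), 0)          # suppress noise before feature extraction
    orb = cv2.ORB_create(nfeatures=n_features)      # ORB = oriented FAST + rotated BRIEF
    _, desc = orb.detectAndCompute(img, None)       # desc: (k, 32) binary descriptors or None
    if desc is None:
        return np.zeros(32, dtype=np.float32)
    return desc.mean(axis=0).astype(np.float32)     # crude pooling into one fixed-length vector

# X = np.stack([image_descriptor(p) for p in train_paths]); y = train_labels
# clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
# (the paper tunes such hyperparameters with a genetic algorithm instead of fixing them)
```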
5. Robust and Discriminative Feature Learning via Mutual Information Maximization for Object Detection in Aerial Images
Authors: Xu Sun, Yinhui Yu, Qing Cheng. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4149-4171 (23 pages)
Object detection in unmanned aerial vehicle (UAV) aerial images has become increasingly important in military and civil applications. General object detection models are not robust enough against the interclass similarity and intraclass variability of small objects, or against UAV-specific nuisances such as uncontrolled weather conditions. Unlike previous approaches focusing on high-level semantic information, we report the importance of underlying features for improving detection accuracy and robustness from an information-theoretic perspective. Specifically, we propose a robust and discriminative feature learning approach through mutual information maximization (RD-MIM), which can be integrated into numerous object detection methods for aerial images. First, we present a rank sample mining method to reduce underlying feature differences between the natural image domain and the aerial image domain. Then, we design a momentum contrast learning strategy to make object features similar within the same category and dissimilar across different categories. Finally, we construct a transformer-based global attention mechanism to boost object location semantics by leveraging the high interrelation of different receptive fields. We conduct extensive experiments on the VisDrone and Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) datasets to prove the effectiveness of the proposed method. The experimental results show that our approach brings considerable robustness gains to basic detectors and advanced detection methods, achieving relative growth rates of 51.0% and 39.4% in corruption robustness, respectively. Our code is available at https://github.com/cq100/RD-MIM (accessed on 2 August 2024).
Keywords: aerial images; object detection; mutual information; contrast learning; attention mechanism
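A compact sketch of a momentum-contrast-style objective of the kind described above: an InfoNCE loss that pulls features of the same category together and pushes different categories apart, which is the standard way such mutual-information lower bounds are implemented. The tensor names, temperature, and the absence of a memory queue are simplifying assumptions, not the RD-MIM specifics.

```python
# Hypothetical InfoNCE-style contrastive loss over object features.
# `queries` come from the online encoder, `keys` from a momentum encoder;
# the positive for each query is the key at the same index (same object category).
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.07):
    q = F.normalize(queries, dim=1)           # (N, D) online features
    k = F.normalize(keys, dim=1)              # (N, D) momentum features
    logits = q @ k.t() / temperature          # (N, N) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)   # maximizes a lower bound on mutual information

q = torch.randn(8, 128, requires_grad=True)
k = torch.randn(8, 128)
loss = info_nce(q, k)
loss.backward()
```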
6. Probability-Enhanced Anchor-Free Detector for Remote-Sensing Object Detection
Authors: Chengcheng Fan, Zhiruo Fang. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4925-4943 (19 pages)
Anchor-free object detection methods have achieved significant advances in computer vision, particularly for real-time inference. However, in remote sensing object detection, anchor-free methods often lack the capability to separate the foreground from the background. This paper proposes an anchor-free method named the probability-enhanced anchor-free detector (ProEnDet) for remote sensing object detection. First, a weighted bidirectional feature pyramid is used for feature extraction. Second, we introduce probability enhancement to strengthen the classification of the object's foreground and background. The detector uses the log-likelihood as the final score to improve foreground/background classification. ProEnDet is verified on the DIOR and NWPU-VHR-10 datasets, achieving mean average precisions of 61.4 and 69.0, respectively. ProEnDet reaches a speed of 32.4 FPS on the DIOR dataset, which satisfies the real-time requirements of remote-sensing object detection.
Keywords: object detection; anchor-free detector; probabilistic; fully convolutional neural network; remote sensing
7. CAW-YOLO: Cross-Layer Fusion and Weighted Receptive Field-Based YOLO for Small Object Detection in Remote Sensing
Authors: Weiya Shi, Shaowen Zhang, Shiqiang Zhang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 6, pp. 3209-3231 (23 pages)
In recent years, there has been extensive research on object detection methods applied to optical remote sensing images utilizing convolutional neural networks. Despite these efforts, the detection of small objects in remote sensing remains a formidable challenge. Deep network structures cause the loss of object features, nearly eliminating the subtle features associated with small objects in deep layers. Additionally, the features of small objects are susceptible to interference from background features contained within the image, leading to a decline in detection accuracy. Moreover, the sensitivity of small objects to bounding box perturbation further increases the detection difficulty. In this paper, we introduce a novel approach, Cross-Layer Fusion and Weighted Receptive Field-based YOLO (CAW-YOLO), specifically designed for small object detection in remote sensing. To address feature loss in deep layers, we devise a cross-layer attention fusion module. Background noise is effectively filtered through the incorporation of Bi-Level Routing Attention (BRA). To enhance the model's capacity to perceive multi-scale objects, particularly small-scale objects, we introduce a weighted multi-receptive-field atrous spatial pyramid pooling module. Furthermore, we mitigate the sensitivity to bounding box perturbation by incorporating joint Normalized Wasserstein Distance (NWD) and Efficient Intersection over Union (EIoU) losses. The efficacy of the proposed model in detecting small objects in remote sensing has been validated through experiments on three publicly available datasets. The experimental results demonstrate the model's pronounced advantages in small object detection for remote sensing, surpassing the performance of current mainstream models.
Keywords: small object detection; attention mechanism; cross-layer fusion; discrete cosine transform
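A small sketch of the Normalized Wasserstein Distance term mentioned above, following the commonly used formulation in which boxes are modeled as 2-D Gaussians and NWD = exp(−W₂/C). The constant C and the box format are assumptions, and the joint weighting with the EIoU loss used in CAW-YOLO is not shown.

```python
# Normalized Wasserstein Distance between axis-aligned boxes modeled as Gaussians.
# Boxes are (cx, cy, w, h); C is a dataset-dependent constant (assumed here).
import torch

def nwd(box1, box2, C=12.8):
    cx1, cy1, w1, h1 = box1.unbind(-1)
    cx2, cy2, w2, h2 = box2.unbind(-1)
    # Squared 2-Wasserstein distance between N((cx, cy), diag(w/2, h/2)^2) Gaussians
    w2_sq = (cx1 - cx2) ** 2 + (cy1 - cy2) ** 2 \
          + ((w1 - w2) / 2) ** 2 + ((h1 - h2) / 2) ** 2
    return torch.exp(-torch.sqrt(w2_sq) / C)   # in (0, 1]; 1 means identical boxes

pred = torch.tensor([[50., 50., 8., 8.]])
gt   = torch.tensor([[52., 51., 8., 10.]])
loss = 1.0 - nwd(pred, gt)                      # NWD-based regression loss term
```

Because the distance is computed between Gaussian approximations rather than box overlaps, it stays smooth even when tiny predicted and ground-truth boxes do not intersect, which is why it is less sensitive to small positional perturbations than IoU-style losses.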
8. Learning Discriminatory Information for Object Detection on Urine Sediment Image
Authors: Sixian Chan, Binghui Wu, Guodao Zhang, Yuan Yao, Hongqiang Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 1, pp. 411-428 (18 pages)
In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis, and evaluation of kidney and urinary tract disease, providing insight into the specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, have poor generalization capabilities, and struggle to quickly and accurately count urine sediments. Deep learning based object detection methods have the potential to address these challenges, but they require access to large urine sediment image datasets. Unfortunately, only a limited number of publicly available urine sediment datasets are currently available. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated with expert guidance. Two main challenges are associated with our dataset: a large number of small objects and the occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to the urine sediment dataset and addressing the challenges it presents. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and, in doing so, provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which help the detector distinguish urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects in uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments, namely erythrocytes (red blood cells), leukocytes (white blood cells), epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes, achieving the best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5.
Keywords: object detection; attention mechanism; medical image; urine sediment
9. Improving Transferable Targeted Adversarial Attack for Object Detection Using RCEN Framework and Logit Loss Optimization
Authors: Zhiyi Ding, Lei Sun, Xiuqing Mao, Leyu Dai, Ruiyang Ding. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4387-4412 (26 pages)
Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification problems, failing to fully exploit the unique characteristics of object detection models, and thus suffer from poor transferability. Furthermore, previous research has predominantly concentrated on the transferability of non-targeted attacks, whereas enhancing the transferability of targeted adversarial examples presents even greater challenges. Traditional attack techniques typically employ cross-entropy as a loss measure, iteratively adjusting adversarial examples to match target categories; however, their inherent limitations restrict their applicability and transferability across different models. To address these challenges, this study proposes a novel targeted adversarial attack method aimed at enhancing the transferability of adversarial samples across object detection models. Within the framework of iterative attacks, we devise a new objective function designed to mitigate consistency issues arising from cumulative noise and to enlarge the separation between target and non-target categories (the logit margin). Second, a data augmentation framework incorporating random erasing and color transformations is introduced into the targeted adversarial attack, which enhances gradient diversity and prevents overfitting to white-box models. Lastly, perturbations are applied only within the specified object's bounding box to reduce the perturbation range and enhance attack stealthiness. Experiments were conducted on the Microsoft Common Objects in Context (MS COCO) dataset using You Only Look Once version 3 (YOLOv3), You Only Look Once version 8 (YOLOv8), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and RetinaNet. The results demonstrate a significant advantage of the proposed method in black-box settings; among these, the success rate of transfer attacks on RetinaNet reached a maximum of 82.59%.
Keywords: object detection; model security; targeted attack; gradient diversity
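A simplified sketch of an iterative targeted attack with a logit-margin objective and a bounding-box mask, in the spirit of the method above. It is written against a generic classifier-style output rather than a full detector head, and the step size, iteration count, and value ranges are assumptions rather than the paper's settings.

```python
# Illustrative targeted attack: maximize (target logit - best other logit)
# while confining the perturbation to the object's bounding box.
import torch

def targeted_attack(model, image, box, target_cls, steps=10, eps=8/255, alpha=2/255):
    mask = torch.zeros_like(image)
    x1, y1, x2, y2 = box                              # integer pixel coordinates (assumed)
    mask[..., y1:y2, x1:x2] = 1.0                     # perturb only inside the box
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = model(image + delta * mask)          # (1, num_classes) surrogate output
        target = logits[0, target_cls]
        others = torch.cat([logits[0, :target_cls], logits[0, target_cls + 1:]])
        margin = target - others.max()                # logit-margin objective
        margin.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()        # ascend on the margin
            delta.clamp_(-eps, eps)                   # keep the perturbation bounded
        delta.grad.zero_()
    return (image + delta.detach() * mask).clamp(0, 1)  # assumes image values in [0, 1]
```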
10. MSC-YOLO: Improved YOLOv7 Based on Multi-Scale Spatial Context for Small Object Detection in UAV-View
Authors: Xiangyan Tang, Chengchun Ruan, Xiulai Li, Binbin Li, Cebin Fu. Computers, Materials & Continua (SCIE, EI), 2024, No. 4, pp. 983-1003 (21 pages)
Accurately identifying small objects in high-resolution aerial images is a complex and crucial task in the field of small object detection on unmanned aerial vehicles (UAVs). This task is challenging due to variations in UAV flight altitude, differences in object scales, and factors such as flight speed and motion blur. To enhance the detection of small targets in drone aerial imagery, we propose an enhanced You Only Look Once version 7 (YOLOv7) algorithm based on multi-scale spatial context. We build the MSC-YOLO model, which incorporates an additional prediction head, denoted P2, to improve adaptability to small objects. We replace conventional downsampling with a Spatial-to-Depth Convolutional Combination (CSPDC) module to mitigate the loss of intricate feature details related to small objects. Furthermore, we propose a Spatial Context Pyramid with Multi-Scale Attention (SCPMA) module, which captures the spatial and channel-dependent features of small targets across multiple scales. This module enhances the perception of spatial contextual features and the utilization of multi-scale feature information. On the VisDrone2023 and UAVDT datasets, MSC-YOLO achieves remarkable results, outperforming the baseline YOLOv7 by 3.0% in mean average precision (mAP). The MSC-YOLO algorithm proposed in this paper has demonstrated satisfactory performance in detecting small targets in UAV aerial photography, providing strong support for practical applications.
Keywords: small object detection; YOLOv7; multi-scale attention; spatial context
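A minimal sketch of a space-to-depth downsampling block of the kind a spatial-to-depth convolutional combination builds on: each 2×2 spatial neighbourhood is rearranged into channels and followed by a convolution, so resolution is halved without discarding pixels the way strided convolution or pooling does. Channel counts and layer choices are illustrative assumptions, not the CSPDC design.

```python
# Space-to-depth downsampling: 2x spatial reduction without throwing away pixels.
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(4 * in_ch, out_ch, kernel_size=3, padding=1)
        self.act = nn.SiLU()

    def forward(self, x):
        # Slice the feature map into four interleaved sub-grids and stack on channels.
        patches = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                             x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.act(self.conv(patches))

x = torch.randn(1, 64, 80, 80)
print(SpaceToDepthConv(64, 128)(x).shape)   # torch.Size([1, 128, 40, 40])
```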
11. Multi-Label Image Classification Based on Object Detection and Dynamic Graph Convolutional Networks
Authors: Xiaoyu Liu, Yong Hu. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4413-4432 (20 pages)
Multi-label image classification is recognized as an important task within the field of computer vision, a discipline that has experienced a significant escalation in research endeavors in recent years. The widespread adoption of convolutional neural networks (CNNs) has catalyzed the remarkable success of architectures such as ResNet-101 in image classification. However, in multi-label image classification tasks, it is crucial to consider the correlation between labels. To improve the accuracy and performance of multi-label classification and to fully combine visual and semantic features, many existing studies use graph convolutional networks (GCN) for modeling. Object detection and multi-label image classification exhibit a degree of conceptual overlap; however, the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature. In this paper, we propose the Object-GCN framework, a model combining the object detection network YOLOv5 and a graph convolutional network, and we carry out a thorough experimental analysis using a range of well-established public datasets. The designed Object-GCN framework achieves significantly better performance than existing studies on the public datasets COCO2014, VOC2007, and VOC2012, reaching 86.9%, 96.7%, and 96.3% mean average precision (mAP) across the three datasets, respectively.
Keywords: deep learning; multi-label image recognition; object detection; graph convolutional networks
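A bare-bones sketch of the graph-convolution step such label-correlation models rely on: label embeddings are propagated over a normalized label relationship matrix, H' = Â H W. How the adjacency is built from co-occurrence statistics and how the resulting label representations are combined with the detector features are not shown, and all names and sizes are illustrative assumptions.

```python
# One graph-convolution layer over a label relationship graph.
import torch
import torch.nn as nn

class LabelGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):
        # Symmetric normalization: A_hat = D^-1/2 (A + I) D^-1/2
        a = adj + torch.eye(adj.size(0), device=adj.device)
        d_inv_sqrt = a.sum(dim=1).pow(-0.5)
        a_hat = d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)
        return torch.relu(a_hat @ self.weight(h))    # propagate label embeddings

num_labels, dim = 80, 300                            # e.g., COCO labels with word embeddings
h = torch.randn(num_labels, dim)                     # initial label embeddings
adj = torch.rand(num_labels, num_labels)             # stand-in for co-occurrence statistics
out = LabelGCNLayer(dim, 256)(h, adj)                # (80, 256) label representations
```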
12. Two-Layer Attention Feature Pyramid Network for Small Object Detection
Authors: Sheng Xiang, Junhao Ma, Qunli Shang, Xianbao Wang, Defu Chen. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 713-731 (19 pages)
Effective small object detection is crucial in various applications, including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they contain less information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across different layers, leading to suboptimal small object detection. To address this problem, we propose the Two-Layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-Layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses attention to focus the network on the semantic information of the object and fuses it into the lower layers, so that each layer contains similar semantic information, alleviating the problem of small object information being submerged by the semantic gaps between layers. At the same time, SODEM is introduced to strengthen the local features of the object, suppress background noise, and enhance the detail information of small objects, fusing the enhanced features into the other feature layers so that each layer is rich in small object information and small object detection accuracy improves. Our extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MS COCO) and Pattern Analysis Statistical Modelling and Computational Learning, Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared to state-of-the-art detectors.
Keywords: small object detection; two-layer attention module; small object detail enhancement module; feature pyramid network
13. Multi-Stream Temporally Enhanced Network for Video Salient Object Detection
Authors: Dan Xu, Jiale Ru, Jinlong Shi. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 85-104 (20 pages)
Video salient object detection (VSOD) aims at locating the most attractive objects in a video by exploring spatial and temporal features. VSOD poses a challenging task in computer vision, as it involves processing complex spatial data that is also influenced by temporal dynamics. Despite the progress made by existing VSOD models, they still struggle in scenes with great background diversity within and between frames. Additionally, they encounter difficulties with accumulated noise and high time consumption when extracting temporal features over long durations. We propose a multi-stream temporally enhanced network (MSTENet) to address these problems. It investigates saliency cue collaboration in the spatial domain with a multi-stream structure to deal with the great background diversity challenge. A straightforward yet efficient approach for temporal feature extraction is developed to avoid accumulated noise and reduce time consumption. MSTENet is distinguished from other VSOD methods by its incorporation of both foreground supervision and background supervision, facilitating enhanced extraction of collaborative saliency cues. Another notable differentiation is the integration of spatial and temporal features, wherein the temporal module is embedded in the multi-stream structure, enabling comprehensive spatial-temporal interactions within an end-to-end framework. Extensive experimental results demonstrate that the proposed method achieves state-of-the-art performance on five benchmark datasets while maintaining a real-time speed of 27 fps (Titan XP). Our code and models are available at https://github.com/RuJiaLe/MSTENet.
Keywords: video salient object detection; deep learning; temporally enhanced; foreground-background collaboration
14. A Secure and Cost-Effective Training Framework Atop Serverless Computing for Object Detection in Blasting
Authors: Tianming Zhang, Zebin Chen, Haonan Guo, Bojun Ren, Quanmin Xie, Mengke Tian, Yong Wang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 5, pp. 2139-2154 (16 pages)
The data analysis of blasting sites has long been a research goal for relevant researchers. The rise of mobile blasting robots has aroused interest in machine learning methods for target detection in the field of blasting. Serverless computing can provide a variety of computing services to users without dedicated hardware or extensive software development experience, which has raised interest in how it can be used for machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Based on data parallelism, data aggregation and training synchronization are effectively realized in Function as a Service (FaaS). The application also encrypts the dataset, effectively reducing the risk of data leakage. We rent a cloud server and a Lambda deployment and conduct experiments to evaluate the application. Our results indicate the effectiveness, rapidity, and economy of distributed training on FaaS.
Keywords: serverless computing; object detection; blasting
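A toy sketch of the data-parallel aggregation pattern described above: each function invocation computes gradients on its own data shard and a coordinator averages them before applying one synchronized update. The AWS Lambda plumbing and the dataset encryption are deliberately omitted, the toy least-squares loss stands in for a detector's loss, and all names are assumptions.

```python
# Data-parallel aggregation: average per-shard gradients, then apply one update.
# In a FaaS setting each call to `worker_gradients` would run in its own function
# invocation; here they run locally to keep the sketch self-contained.
import numpy as np

def worker_gradients(params, x_shard, y_shard):
    """Least-squares gradient on one data shard (stand-in for a detector's loss)."""
    pred = x_shard @ params
    return x_shard.T @ (pred - y_shard) / len(y_shard)

def aggregate_step(params, shards, lr=0.1):
    grads = [worker_gradients(params, x, y) for x, y in shards]   # one per "invocation"
    return params - lr * np.mean(grads, axis=0)                   # synchronized update

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 5)), rng.normal(size=1000)
shards = [(X[i::4], y[i::4]) for i in range(4)]                   # 4 parallel workers
params = np.zeros(5)
for _ in range(50):
    params = aggregate_step(params, shards)
```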
15. Depth-Guided Vision Transformer With Normalizing Flows for Monocular 3D Object Detection
Authors: Cong Pan, Junran Peng, Zhaoxiang Zhang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 3, pp. 673-689 (17 pages)
Monocular 3D object detection is challenging due to the lack of accurate depth information. Some methods estimate pixel-wise depth maps from off-the-shelf depth estimators and then use them as an additional input to augment the RGB images. Depth-based methods attempt to convert estimated depth maps to pseudo-LiDAR and then use LiDAR-based object detectors, or focus on image and depth fusion learning. However, they demonstrate limited performance and efficiency as a result of depth inaccuracy and complex convolutional fusion modes. Different from these approaches, our proposed depth-guided vision transformer with normalizing flows (NF-DVT) network uses normalizing flows to build priors on depth maps to achieve more accurate depth information. We then develop a novel Swin-Transformer-based backbone with a fusion module that processes RGB image patches and depth map patches in two separate branches and fuses them using cross-attention to exchange information. Furthermore, with the help of pixel-wise relative depth values in depth maps, we develop new relative position embeddings in the cross-attention mechanism to capture more accurate sequence ordering of input tokens. Our method is the first Swin-Transformer-based backbone architecture for monocular 3D object detection. The experimental results on the KITTI and the challenging Waymo Open datasets show the effectiveness of our proposed method and its superior performance over previous counterparts.
Keywords: monocular 3D object detection; normalizing flows; Swin Transformer
16. Local saliency consistency-based label inference for weakly supervised salient object detection using scribble annotations
Authors: Shuo Zhao, Peng Cui, Jing Shen, Haibo Liu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, No. 1, pp. 239-249 (11 pages)
Recently, weak supervision has received growing attention in the field of salient object detection due to the convenience of labelling. However, there is a large performance gap between weakly supervised and fully supervised salient object detectors because scribble annotations can only provide very limited foreground/background information. Therefore, an intuitive idea is to infer annotations that cover more complete object and background regions for training. To this end, a label inference strategy is proposed based on the assumption that pixels with similar colours and close positions should have consistent labels. Specifically, k-means clustering is first performed on both the colours and the coordinates of the original annotations; points whose colours are similar to a colour cluster centre and that lie near a coordinate cluster centre are then assigned the same label. Next, the same annotation is further propagated to pixels with similar colours within each kernel neighbourhood. Extensive experiments on six benchmarks demonstrate that our method significantly improves performance and achieves state-of-the-art results.
Keywords: label inference; salient object detection; weak supervision
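A small sketch of the clustering step described above, assuming scikit-learn: scribble pixels are clustered jointly on colour and position, and unlabelled pixels close enough to a cluster centre inherit that cluster's dominant scribble label. The feature scaling, distance threshold, and the separate kernel-neighbourhood propagation are simplified assumptions rather than the paper's exact procedure.

```python
# Illustrative label inference: cluster scribble pixels on (colour, position),
# then give unlabelled pixels the label of the nearest sufficiently close centre.
import numpy as np
from sklearn.cluster import KMeans

def infer_labels(scribble_feats, scribble_labels, all_feats, k=16, max_dist=0.15):
    """feats are rows of [r, g, b, x, y] scaled to [0, 1]; labels are 0/1 (bg/fg)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scribble_feats)
    # Majority scribble label per cluster centre
    centre_label = np.array(
        [np.round(scribble_labels[km.labels_ == c].mean()) for c in range(k)], dtype=int)
    dists = np.linalg.norm(all_feats[:, None, :] - km.cluster_centers_[None], axis=2)
    nearest = dists.argmin(axis=1)
    inferred = np.where(dists.min(axis=1) < max_dist, centre_label[nearest], -1)
    return inferred                                   # -1 = still unlabelled
```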
17. Real-Time Object Detection and Face Recognition Application for the Visually Impaired
Authors: Karshiev Sanjar, Soyoun Bang, Sookhee Ryue, Heechul Jung. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 3569-3583 (15 pages)
The advancement of navigation systems for the visually impaired has significantly enhanced their mobility by mitigating the risk of encountering obstacles and guiding them along safe, navigable routes. Traditional approaches primarily focus on broad applications such as wayfinding, obstacle detection, and fall prevention. However, there is a notable gap in applying these technologies to more specific scenarios, such as identifying distinct food crop types or recognizing faces. This study proposes a real-time application designed for visually impaired individuals, aiming to bridge this research-application gap. It introduces a system capable of detecting 20 different food crop types and recognizing faces with accuracies of 83.27% and 95.64%, respectively. These results represent a significant contribution to the field of assistive technologies, providing visually impaired users with detailed and relevant information about their surroundings, thereby enhancing their mobility and ensuring their safety. Additionally, the study addresses the vital aspect of social engagement, acknowledging the challenges faced by visually impaired individuals in recognizing acquaintances without auditory or tactile signals, and highlights recent developments in prototype systems aimed at assisting with face recognition tasks. This comprehensive approach not only promises enhanced navigational aids but also aims to enrich the social well-being and safety of visually impaired communities.
Keywords: artificial intelligence; deep learning; real-time object detection; application
18. Co-salient object detection with iterative purification and predictive optimization
Authors: Yang Wen, Yuhuan Wang, Hao Wang, Wuzhen Shi, Wenming Cao. Virtual Reality & Intelligent Hardware (EI), 2024, No. 5, pp. 396-407 (12 pages)
Background: Co-salient object detection (Co-SOD) aims to identify and segment commonly salient objects in a set of related images. However, most current Co-SOD methods encounter issues with the inclusion of irrelevant information in the co-representation, which hampers their ability to locate co-salient objects and significantly restricts detection accuracy. Methods: To address this issue, this study introduces a novel Co-SOD method with iterative purification and predictive optimization (IPPO), comprising a common salient purification module (CSPM), a predictive optimizing module (POM), and a diminishing mixed enhancement block (DMEB). Results: These components are designed to explore noise-free joint representations, assist the model in enhancing the quality of the final prediction results, and significantly improve the performance of the Co-SOD algorithm. Furthermore, through a comprehensive evaluation of IPPO and state-of-the-art algorithms focusing on the roles of CSPM, POM, and DMEB, our experiments confirm that these components are pivotal in enhancing model performance, substantiating the significant advancements of our method over existing benchmarks. Experiments on several challenging benchmark co-saliency datasets demonstrate that the proposed IPPO achieves state-of-the-art performance.
Keywords: co-salient object detection; saliency detection; iterative method; predictive optimization
19. SwinVid: Enhancing Video Object Detection Using Swin Transformer
Authors: Abdelrahman Maharek, Amr Abozeid, Rasha Orban, Kamal ElDahshan. Computer Systems Science & Engineering, 2024, No. 2, pp. 305-320 (16 pages)
What causes object detection in video to be less accurate than in still images? Some video frames are degraded in appearance by fast movement, out-of-focus camera shots, and changes in posture. These factors have made video object detection (VID) a growing area of research in recent years. Video object detection can be used for various healthcare applications, such as detecting and tracking tumors in medical imaging, monitoring the movement of patients in hospitals and long-term care facilities, and analyzing videos of surgeries to improve technique and training. Additionally, it can be used in telemedicine to help diagnose and monitor patients remotely. Existing VID techniques are based on recurrent neural networks or optical flow for feature aggregation to produce reliable features for detection; some of these methods aggregate features at the full-sequence level or from nearby frames. To create feature maps, existing VID techniques frequently use Convolutional Neural Networks (CNNs) as the backbone network. On the other hand, Vision Transformers have outperformed CNNs in various vision tasks, including object detection in still images and image classification. In this research, we propose using the Swin Transformer, a state-of-the-art Vision Transformer, as an alternative to CNN-based backbone networks for object detection in videos. The proposed architecture enhances the accuracy of existing VID methods. The ImageNet VID and EPIC KITCHENS datasets are used to evaluate the suggested methodology. We demonstrate that our proposed method is efficient, achieving 84.3% mean average precision (mAP) on ImageNet VID while using less memory than other leading VID techniques. The source code is available at https://github.com/amaharek/SwinVid.
Keywords: video object detection; vision transformers; convolutional neural networks; deep learning
20. Oriented Bounding Box Object Detection Model Based on Improved YOLOv8
Authors: Zhao Xin-kang, Si Zhan-jun. 印刷与数字媒体技术研究 (CAS, PKU Core), 2024, No. 4, pp. 67-75, 114 (10 pages)
In the study of oriented bounding box (OBB) object detection in high-resolution remote sensing images, missed and wrong detections of small targets occur because the targets are too small and have different orientations. Existing OBB object detection for remote sensing images, although making good progress, mainly focuses on directional modeling, with less consideration given to object size and to missed detections. In this study, a method based on an improved YOLOv8 is proposed for detecting oriented objects in remote sensing images, which improves the detection precision of oriented objects. First, the ResCBAMG module was designed, which better extracts channel and spatial correlation information. Second, an innovative top-down feature fusion layer network structure was proposed in conjunction with the Efficient Channel Attention (ECA) module, which helps capture local cross-channel interaction information appropriately. Finally, we introduced the ResCBAMG module between the different C2f modules and detection heads of the bottom-up feature fusion layer. This structure helps the model focus better on the target area and improves the precision and robustness of oriented target detection. Experimental results on the DOTA-v1.5 dataset show that the detection Precision, mAP@0.5, and mAP@0.5:0.95 metrics of the improved model are better than those of the original model, and the improvement is particularly effective for small targets and complex scenes.
Keywords: remote sensing image; oriented bounding boxes; object detection; small target detection; YOLOv8
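A brief sketch of the Efficient Channel Attention (ECA) module referenced above, following the standard ECA-Net formulation: global average pooling, a 1-D convolution across the channel dimension, and a sigmoid gate. The kernel size is fixed here rather than derived adaptively from the channel count, and the way the module is wired into the proposed fusion layer is not shown.

```python
# Efficient Channel Attention: cheap channel re-weighting via a 1-D convolution.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                              # x: (B, C, H, W)
        y = self.pool(x)                               # (B, C, 1, 1) channel descriptors
        y = self.conv(y.squeeze(-1).transpose(1, 2))   # 1-D conv over the channel axis
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))
        return x * y                                   # re-weight each channel

x = torch.randn(2, 256, 20, 20)
print(ECA()(x).shape)                                  # torch.Size([2, 256, 20, 20])
```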