This article presents an analysis of the patterns of interaction resulting from the positive and negative emotional events that occur in cities, considering them as complex systems. It explores, from the perspective of urban imaginaries, how certain urban objects can act as emotional agents and how these events affect the urban system as a whole. An adaptive complex systems perspective is used to analyze these patterns. The results reveal patterns in the processes and dynamics that occur in cities, based on the objects that affect the emotions of the people who live there. These patterns depend on the characteristics of the emotional charge of urban objects, but they can be generalized in the following process: (1) an immediate reaction by some individuals; (2) emotions generated at the individual level begin to generalize, turning into a collective emotion; (3) a process of reflection is triggered in some individuals by their reading of the collective emotion; (4) integration/signification in the community, at both the individual and collective level, of the concepts, roles, and/or functions that give rise to the process in the system. It is therefore clear that emotions play a significant role in the development of cities, and these aspects should be considered in the design strategies of all kinds of projects for the city. Future extensions of this work could include a deeper analysis of specific emotional events in urban environments, as well as possible implications for urban policy and decision making.
Object detection plays a vital role in video surveillance systems. To enhance security, surveillance cameras are now installed in public areas such as traffic signals, roadways, retail malls, train stations, and banks. However, monitoring the video continually at a quick pace is a challenging job; as a consequence, security cameras are of little use without human monitoring. The primary difficulty in video surveillance is identifying abnormalities such as thefts, accidents, crimes, or other unlawful actions, and such anomalous actions occur far less frequently than usual events. To detect objects in a video, the images are first analyzed pixel by pixel. In digital image processing, segmentation is the process of partitioning an image into its individual parts at the pixel level. Segmentation performance is degraded by irregular and/or low illumination, and these factors strongly affect real-time object detection in video surveillance systems. In this paper, a modified ResNet model (M-ResNet) is proposed to enhance images affected by insufficient light. Experimental results comparing the output of the existing method with the modified ResNet architecture show a considerable improvement in detecting objects in the video stream. The proposed model yields better results in metrics such as precision, recall, and pixel accuracy, and achieves a reasonable improvement in object detection.
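The abstract does not describe the M-ResNet enhancement itself, so as a point of reference for the low-light preprocessing problem it targets, the following is a minimal sketch of a conventional baseline (CLAHE applied to the luminance channel with OpenCV), not the authors' model; the input path is a placeholder.

```python
import cv2

def enhance_low_light(frame_bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """Brighten a dim surveillance frame with CLAHE on the L channel (illustrative baseline)."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    l_eq = clahe.apply(l)
    return cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)

# frame = cv2.imread("night_frame.jpg")          # placeholder input
# detector_input = enhance_low_light(frame)      # pass to any object detector
```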
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. Firstly, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. The query filter is designed to cope with the dense clustering inherent in drone-captured images by suppressing similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model in UAV image object detection. Funding: This research was funded by the Natural Science Foundation of Hebei Province (F2021506004).
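The query filter is described only as a training-aware non-maximum suppression over similar queries; the sketch below shows the core operation (class-agnostic NMS over query boxes and scores) using torchvision, with the training-aware scheduling omitted, and all tensors are illustrative.

```python
import torch
from torchvision.ops import nms

def filter_queries(query_boxes: torch.Tensor, query_scores: torch.Tensor, iou_thr: float = 0.7) -> torch.Tensor:
    """Class-agnostic NMS over object queries: drop queries whose boxes overlap a higher-scoring one.

    query_boxes: (N, 4) float tensor in (x1, y1, x2, y2); query_scores: (N,) confidences.
    Returns the indices of the queries that are kept.
    """
    return nms(query_boxes, query_scores, iou_thr)

boxes = torch.tensor([[0.0, 0.0, 10.0, 10.0], [1.0, 1.0, 11.0, 11.0], [50.0, 50.0, 60.0, 60.0]])
scores = torch.tensor([0.9, 0.8, 0.7])
kept = filter_queries(boxes, scores)   # tensor([0, 2]): the near-duplicate query is suppressed
```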
A concurrency control mechanism for collaborative work is a key element in a mixed reality environment. However, conventional locking mechanisms restrict potential tasks or the support of non-owners, thus increasing the working time because of waiting to avoid conflicts. Herein, we propose an adaptive concurrency control approach that can reduce conflicts and work time. We classify shared object manipulation in mixed reality into detailed goals and tasks. Then, we model the relationships among goal, task, and ownership. As the collaborative work progresses, the proposed system adapts the concurrency control mechanism of shared object manipulation according to the goal–task–ownership model. With the proposed concurrency control scheme, users can hold shared objects and move and rotate them together in a mixed reality environment similar to real industrial sites. Additionally, the system uses MS HoloLens and Myo sensors to recognize user inputs and present results in the mixed reality environment. The proposed method is applied to installing an air conditioner as a case study. Experimental results and user studies show that, compared with the conventional approach, the proposed method reduced the number of conflicts, the waiting time, and the total working time. Funding: supported by the "Regional Innovation Strategy (RIS)" program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (MOE) (2021RIS-004).
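The goal–task–ownership model is not specified in detail; the following toy sketch, with all class and method names hypothetical, illustrates the general idea of granting ownership per task rather than locking the whole shared object.

```python
from dataclasses import dataclass, field

@dataclass
class SharedObject:
    """Toy model: ownership is tracked per task instead of one lock per object."""
    name: str
    task_owners: dict = field(default_factory=dict)  # task -> user

    def request(self, user: str, task: str) -> bool:
        owner = self.task_owners.get(task)
        if owner is None or owner == user:
            self.task_owners[task] = user        # grant fine-grained ownership
            return True
        return False                              # conflicting task is still locked

    def release(self, user: str, task: str) -> None:
        if self.task_owners.get(task) == user:
            del self.task_owners[task]

# Two users can manipulate the same object concurrently if their tasks differ,
# e.g. one translates while the other rotates.
unit = SharedObject("air_conditioner_outdoor_unit")
assert unit.request("userA", "translate")
assert unit.request("userB", "rotate")            # allowed: different task
assert not unit.request("userB", "translate")     # denied: task already owned
```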
This research addresses a challenge in integrating and cleaning data, which is a crucial task in object matching. While an object is detected and then measured, vibration at different light intensities may influence the durability and reliability of mechanical systems or structures and cause problems such as damage, abnormal stopping, and disaster. Recent research has failed to improve the accuracy rate and the computation time in tracking an object and in measuring vibration. To solve these problems, the proposed research simplifies the determination of the scaling factor by assigning a known real-world dimension to a predetermined portion of the image. A novel white sticker of known dimensions, marked with a color dot, is pasted on the surface of the object to obtain the best result in template matching using the Improved Up-Sampled Cross-Correlation (UCC) algorithm. The vibration is calculated using a Finite-Difference Algorithm (FDA) on a machine vision system fitted with a macro lens sensor capable of capturing the image at close range, which does not affect the quality of the displacement measurement obtained from the video frames. A field test was conducted on TAFE (Tractors and Farm Equipment Limited) tractor parts. The percentage of error was between 30% and 50% at very low vibration values close to zero, whereas it was between 5% and 10% at most high accelerations, the essential range for vibration analysis. The suggested system is therefore more suitable for measuring the vibration of stationary machinery with low frequency ranges. The use of a macro lens enables image frames to be captured at very close range. Because a 30% to 50% error percentage was recorded when the vibration amplitude is very small, this study is not suitable for nano-scale vibration analysis.
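The two quantities the measurement relies on, a pixel-to-millimetre scaling factor taken from the sticker of known size and acceleration obtained from displacement by finite differences, can be illustrated numerically; the marker size and frame rate below are assumptions chosen for the example.

```python
import numpy as np

def scale_factor_mm_per_px(marker_width_mm: float, marker_width_px: float) -> float:
    """Known real-world width of the sticker divided by its measured pixel width."""
    return marker_width_mm / marker_width_px

def acceleration_from_displacement(disp_px: np.ndarray, fps: float, mm_per_px: float) -> np.ndarray:
    """Second derivative of displacement via central finite differences (mm/s^2)."""
    disp_mm = disp_px * mm_per_px
    dt = 1.0 / fps
    return (disp_mm[2:] - 2.0 * disp_mm[1:-1] + disp_mm[:-2]) / dt**2

# Illustrative numbers: a 20 mm sticker imaged at 200 px wide, tracked at 240 fps.
mm_per_px = scale_factor_mm_per_px(20.0, 200.0)          # 0.1 mm per pixel
disp = np.sin(2 * np.pi * 5 * np.arange(0, 1, 1 / 240))  # synthetic 5 Hz motion, in pixels
acc = acceleration_from_displacement(disp, fps=240.0, mm_per_px=mm_per_px)
```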
Improved picture quality is critical to the effectiveness of object recognition and tracking. The consistency of images from night-video systems is impacted by the low contrast of high-profile items and by different atmospheric conditions such as mist, fog, and dust; the pictures then shift in intensity, colour, polarity and consistency. A general challenge for computer vision analysis lies in the poor appearance of night images under arbitrary illumination and ambient environments. In recent years, target recognition techniques based on deep learning and machine learning have become standard algorithms for object detection, driven by the exponential growth of computing capabilities. However, identifying objects at night poses further problems because of the distorted backdrop and dim light. A correlation-aware LSTM-based YOLO (You Only Look Once) classifier method for exact object recognition and determination of its properties under night vision was the major inspiration for this work. To create virtual target sets similar to daytime environments, we employ night images as inputs and obtain enhanced images using histogram-based enhancement and an iterative Wiener filter to remove noise. Feature extraction and feature selection are performed to elect the potential features using adaptive internal linear embedding (AILE) and uplift linear discriminant analysis (ULDA). The region-of-interest mask is segmented using recurrent-phase level set segmentation. Finally, we use deep convolutional feature fusion and region-of-interest pooling to integrate a faster long short-term memory (LSTM) network with the YOLO method for the object tracking system. A range of experimental findings demonstrate that our technique achieves high average accuracy, with a precision of 99.7% for object detection on the SSAN dataset, which is considerably higher than that of other standard object detection mechanisms. Our approach may therefore satisfy the true demands of night-scene target detection applications, and we believe that our method will help future research.
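A minimal sketch of the preprocessing stage named in the abstract, histogram-based enhancement followed by Wiener filtering, using standard OpenCV and SciPy routines as stand-ins for the authors' exact pipeline; the input is assumed to be an 8-bit grayscale frame.

```python
import cv2
import numpy as np
from scipy.signal import wiener

def enhance_night_frame(gray: np.ndarray, wiener_window: int = 5) -> np.ndarray:
    """Histogram equalization to lift contrast, then a Wiener filter to suppress noise.

    Expects an 8-bit single-channel (grayscale) frame.
    """
    equalized = cv2.equalizeHist(gray)                         # histogram-based enhancement
    denoised = wiener(equalized.astype(np.float64), mysize=wiener_window)
    return np.clip(denoised, 0, 255).astype(np.uint8)

# night = cv2.imread("night_scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder input
# clean = enhance_night_frame(night)
```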
Advances in machine vision systems have revolutionized applications such as autonomous driving, robotic navigation, and augmented reality. Despite substantial progress, challenges persist, including dynamic backgrounds, occlusion, and limited labeled data. To address these challenges, we introduce a comprehensive methodology to enhance image classification and object detection accuracy. The proposed approach integrates multiple methods in a complementary way. The process commences with the application of Gaussian filters to mitigate the impact of noise interference. These images are then processed for segmentation using Fuzzy C-Means segmentation in parallel with saliency mapping techniques to find the most prominent regions. Binary Robust Independent Elementary Features (BRIEF) characteristics are then extracted from data derived from saliency maps and segmented images. For precise object separation, Oriented FAST and Rotated BRIEF (ORB) algorithms are employed. Genetic Algorithms (GAs) are used to optimize Random Forest classifier parameters, which leads to improved performance. Our method stands out due to its comprehensive approach, adeptly addressing challenges such as changing backdrops, occlusion, and limited labeled data concurrently. A significant enhancement has been achieved by integrating Genetic Algorithms (GAs) to precisely optimize parameters; this adjustment not only boosts the uniqueness of our system but also amplifies its overall efficacy. The proposed methodology has demonstrated notable classification accuracies of 90.9% and 89.0% on the challenging Corel-1k and MSRC datasets, respectively. Furthermore, detection accuracies of 87.2% and 86.6% have been attained. Although our method performed well on both datasets, it may face difficulties with real-world data, especially where datasets have highly complex backgrounds. Despite these limitations, GA integration for parameter optimization shows a notable strength in enhancing the overall adaptability and performance of our system. Funding: a grant from the Basic Science Research Program through the National Research Foundation (NRF) (2021R1F1A1063634) funded by the Ministry of Science and ICT (MSIT), Republic of Korea; Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and the Deanship of Scientific Research at Najran University under the Research Group Funding program, Grant Code (NU/RG/SERC/12/6).
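A minimal sketch of the middle of the described pipeline: Gaussian smoothing, ORB descriptor extraction, and a Random Forest classifier whose hyperparameters the paper tunes with a genetic algorithm; here fixed values stand in for the GA search, and the training data names are placeholders.

```python
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def orb_descriptor(image_bgr: np.ndarray, n_features: int = 256) -> np.ndarray:
    """Gaussian-smooth the image, extract ORB descriptors, and mean-pool them into one 32-D vector."""
    gray = cv2.cvtColor(cv2.GaussianBlur(image_bgr, (5, 5), 0), cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    _, desc = orb.detectAndCompute(gray, None)
    if desc is None:                                  # no keypoints found
        return np.zeros(32, dtype=np.float32)
    return desc.mean(axis=0).astype(np.float32)

# In the paper these hyperparameters would come from a genetic-algorithm search;
# fixed values are used here as placeholders.
clf = RandomForestClassifier(n_estimators=200, max_depth=None)
# features = np.stack([orb_descriptor(img) for img in training_images])   # placeholder data
# clf.fit(features, np.array(training_labels))
```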
Confusing object detection (COD), such as glass, mirrors, and camouflaged objects, represents a burgeoning visual detection task centered on pinpointing and distinguishing concealed targets within intricate backgrounds, leveraging deep learning methodologies. Despite garnering increasing attention in computer vision, the focus of most existing works leans toward formulating task-specific solutions rather than delving into in-depth analyses of methodological structures. As of now, there is a notable absence of a comprehensive systematic review that focuses on recently proposed deep learning-based models for these specific tasks. To fill this gap, our study presents a pioneering review that covers both the models and the publicly available benchmark datasets, while also identifying potential directions for future research in this field. The current dataset primarily focuses on single confusing object detection at the image level, with some studies extending to video-level data. We conduct an in-depth analysis of deep learning architectures, revealing that the current state-of-the-art (SOTA) COD methods demonstrate promising performance in single object detection. We also compile and provide detailed descriptions of widely used datasets relevant to these detection tasks. Our endeavor extends to discussing the limitations observed in current methodologies, alongside proposed solutions aimed at enhancing detection accuracy. Additionally, we deliberate on relevant applications and outline future research trajectories, aiming to catalyze advancements in the field of glass, mirror, and camouflaged object detection. Funding: supported by the National Natural Science Foundation of China (Nos. 62302167, U23A20343), the Shanghai Sailing Program (23YF1410500), and the Chenguang Program of the Shanghai Education Development Foundation and Shanghai Municipal Education Commission (23CGA34).
Discovering floating wastes, especially bottles on water, is a crucial research problem in environmental hygiene. Nevertheless, real-world applications often face challenges such as interference from irrelevant objects and the high cost associated with data collection. Consequently, devising algorithms capable of accurately localizing specific objects within a scene when annotated data is limited remains a formidable challenge. To solve this problem, this paper proposes an object-discovery-by-request problem setting and a corresponding algorithmic framework. The proposed problem setting aims to identify specified objects in scenes, and the associated algorithmic framework comprises pseudo-data generation and an object-discovery-by-request network. Pseudo-data generation produces images resembling natural scenes through various data augmentation rules, using a small number of object samples and scene images. The network uses the pre-trained Vision Transformer (ViT) model as the backbone, employs object-centric methods to learn the latent representations of foreground objects, and applies patch-level reconstruction constraints to the model. During the validation phase, we use the generated pseudo datasets as training sets and evaluate the performance of our model on the original test sets. Experiments show that our method achieves state-of-the-art performance on the Unmanned Aerial Vehicles-Bottle Detection (UAV-BD) dataset and the self-constructed Bottle dataset, especially in multi-object scenarios.
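Pseudo-data generation is described only as pasting a few object samples into scene images under augmentation rules; a minimal copy-paste sketch with Pillow is given below, where the file names are placeholders and the specific augmentations are illustrative.

```python
import random
from PIL import Image, ImageEnhance

def paste_object(scene: Image.Image, obj: Image.Image) -> Image.Image:
    """Paste a randomly rescaled object crop (with alpha) at a random location in a jittered scene."""
    out = ImageEnhance.Brightness(scene.copy()).enhance(random.uniform(0.8, 1.2))
    scale = random.uniform(0.3, 1.0)
    obj = obj.resize((max(1, int(obj.width * scale)), max(1, int(obj.height * scale))))
    x = random.randint(0, max(0, out.width - obj.width))
    y = random.randint(0, max(0, out.height - obj.height))
    out.paste(obj, (x, y), obj if obj.mode == "RGBA" else None)
    return out

# scene = Image.open("water_scene.jpg").convert("RGB")        # placeholder paths
# bottle = Image.open("bottle_sample.png").convert("RGBA")
# pseudo_image = paste_object(scene, bottle)
```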
Aiming at the limitations of existing railway foreign object detection methods based on two-dimensional (2D) images, such as short detection distance, strong influence of the environment and lack of distance information, we propose Rail-PillarNet, a three-dimensional (3D) LIDAR (Light Detection and Ranging) railway foreign object detection method based on an improvement of PointPillars. Firstly, the parallel attention pillar encoder (PAPE) is designed to fully extract the features of the pillars and alleviate the problem of local fine-grained information loss in the PointPillars pillar encoder. Secondly, a fine backbone network is designed to improve the feature extraction capability of the network by combining the coding characteristics of the LIDAR point cloud features with a residual structure. Finally, the initial weight parameters of the model are optimised by a transfer learning training method to further improve accuracy. The experimental results on the OSDaR23 dataset show that the average accuracy of Rail-PillarNet reaches 58.51%, which is higher than that of most mainstream models, with 5.49 M parameters. Compared with PointPillars, the accuracy of each target is improved by 10.94%, 3.53%, 16.96% and 19.90%, respectively, while the number of parameters only increases by 0.64 M, which achieves a balance between the number of parameters and accuracy. Funding: supported by a grant from the National Key Research and Development Project (2023YFB4302100), the Key Research and Development Project of Jiangxi Province (No. 20232ACE01011), and the Independent Deployment Project of the Ganjiang Innovation Research Institute, Chinese Academy of Sciences (E255J001).
Effective small object detection is crucial in various applications, including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they contain less information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across different layers, leading to suboptimal small object detection. To address this problem, we propose the Two-layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses an attention module to make the network focus on the semantic information of the object and fuses it into the lower layers, so that each layer contains similar semantic information; this alleviates the problem of small object information being submerged by the semantic gaps between different layers. At the same time, SODEM is introduced to strengthen the local features of the object, suppress background noise, and enhance the information details of the small object, and the enhanced features are fused into the other feature layers to ensure that each layer is rich in small object information, improving small object detection accuracy. Our extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MS COCO) and Pattern Analysis, Statistical Modelling and Computational Learning Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared to state-of-the-art detectors.
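TAM and SODEM are not specified at the layer level; the following PyTorch sketch illustrates the general idea the abstract describes, weighting a higher-level (semantically richer) feature map with channel attention before fusing it into a lower level, with module names and sizes chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionTopDownFusion(nn.Module):
    """Fuse an upsampled high-level feature map into a low-level one, gated by channel attention."""
    def __init__(self, channels: int):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        high_up = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        gated = high_up * self.channel_gate(high_up)   # emphasize semantically rich channels
        return self.smooth(low + gated)

# Example with FPN-like shapes: low level at 1/8 scale, high level at 1/16 scale, 256 channels.
fuse = AttentionTopDownFusion(256)
p3 = torch.randn(1, 256, 80, 80)
p4 = torch.randn(1, 256, 40, 40)
out = fuse(p3, p4)   # -> (1, 256, 80, 80)
```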
Remote sensing imagery, due to its high altitude, presents inherent challenges characterized by multiple scales, limited target areas, and intricate backgrounds. These inherent traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery, and these complexities contribute to inaccuracies in target localization and hinder precise target categorization. This paper addresses these challenges by proposing the YOLO-MFD model (Remote Sensing Image Object Detection with a Multi-scale Fusion Dynamic Head). Before presenting our method, we delve into the prevalent issues faced in remote sensing imagery analysis; specifically, we emphasize the struggles of existing object recognition algorithms in comprehensively capturing critical image features amidst varying scales and complex backgrounds. To resolve these issues, we introduce a novel approach. First, we propose a lightweight multi-scale module called CEF. This module significantly improves the model's ability to comprehensively capture important image features by merging multi-scale feature information, and it effectively addresses the missed detections and false alarms that are common in remote sensing imagery. Second, an additional layer of small target detection heads is added, and a residual link is established with the higher-level feature extraction module in the backbone. This allows the model to incorporate shallower information, significantly improving the accuracy of target localization in remotely sensed images. Finally, a dynamic head attention mechanism is introduced, which allows the model to exhibit greater flexibility and accuracy in recognizing shapes and targets of different sizes; consequently, the precision of object detection is significantly improved. The trial results show that the YOLO-MFD model improves on the original YOLOv8 model by 6.3%, 3.5%, and 2.5% in Precision, mAP@0.5 and mAP@0.5:0.95, respectively. These results illustrate the clear advantages of the method. Funding: the Scientific Research Fund of Hunan Provincial Education Department (23A0423).
Visual object tracking plays a crucial role in computer vision. In recent years, researchers have proposed various methods to achieve high-performance object tracking. Among these, methods based on Transformers have become a research hotspot due to their ability to globally model and contextualize information. However, current Transformer-based object tracking methods still face challenges such as low tracking accuracy and the presence of redundant feature information. In this paper, we introduce the self-calibration multi-head self-attention Transformer (SMSTracker) as a solution to these challenges. It employs a hybrid tensor-decomposition self-organizing multi-head self-attention transformer mechanism, which not only compresses and accelerates Transformer operations but also significantly reduces redundant data, thereby enhancing the accuracy and efficiency of tracking. Additionally, we introduce a self-calibration attention fusion block to resolve the attention ambiguities and inconsistencies common in traditional tracking methods, ensuring the stability and reliability of tracking performance across various scenarios. By integrating a hybrid tensor decomposition approach with a self-organizing multi-head self-attentive transformer mechanism, SMSTracker enhances the efficiency and accuracy of the tracking process. Experimental results show that SMSTracker achieves competitive performance in visual object tracking, demonstrating its potential to provide more robust and efficient tracking solutions in real-world applications. Funding: supported by the National Natural Science Foundation of China under Grant 62177029 and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX21_0740), China.
Remote sensing image object detection is one of the core tasks of remote sensing image processing. In recent years, with the development of deep learning, great progress has been made in object detection in remote sensing. However, dense small targets, complex backgrounds and poor target positioning accuracy in remote sensing images still make the detection of remote sensing targets difficult. To solve these problems, this research proposes a remote sensing image object detection algorithm based on an improved YOLOX-S. Firstly, the Efficient Channel Attention (ECA) module is introduced to improve the network's ability to extract features in the image and suppress useless information such as the background. Secondly, the loss function is optimized to improve the regression accuracy of the target bounding box. We evaluate the effectiveness of our algorithm on the NWPU VHR-10 remote sensing image dataset; the experimental results show that the detection accuracy of the algorithm reaches 95.5% without increasing the number of parameters. This is a significant improvement over the original YOLOX-S network, and the detection performance is much better than that of several other mainstream remote sensing image detection methods. Besides, our method also shows good generalization performance in experiments on aircraft images in the RSOD dataset. Funding: Supported by the National Natural Science Foundation of China (72174172, 71774134) and the Fundamental Research Funds for Central Universities, Southwest Minzu University (2022NYXXS094).
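ECA is a standard published attention block; a minimal PyTorch sketch of such a layer is shown below (the 1D convolution kernel size is fixed at 3 for brevity rather than derived adaptively from the channel count, and the integration point inside YOLOX-S is not shown).

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pool + 1D conv across channels + sigmoid gate."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> channel descriptor (B, 1, C)
        y = x.mean(dim=(2, 3)).unsqueeze(1)
        y = self.sigmoid(self.conv(y)).transpose(1, 2).unsqueeze(-1)  # (B, C, 1, 1)
        return x * y

feat = torch.randn(2, 128, 40, 40)
out = ECA()(feat)   # same shape, channel-reweighted
```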
In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis and evaluation of kidney and urinary tract disease, providing insight into the specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, which have poor generalization capabilities and are difficult to quickly and accurately detect the number of urine sediments. Deep learning based object detection methods have the potential to address the challenges mentioned above, but these methods require access to large urine sediment image datasets. Unfortunately, only a limited number of publicly available urine sediment datasets are currently available. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated with expert guidance. Two main challenges are associated with our dataset: a large number of small objects and the occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to the urine sediment dataset and addressing the challenges presented by this dataset. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and, in doing so, provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which aid the detector in distinguishing urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects in uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments, such as erythrocytes (red blood cells), leukocytes (white blood cells), epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes, achieving the best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5. Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61906168, U20A20171), the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LY23F020023, LY21F020027), and the Construction of the Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects (Grant No. 2022SDSJ01).
Object detection in unmanned aerial vehicle (UAV) aerial images has become increasingly important in military and civil applications. General object detection models are not robust enough against the interclass similarity and intraclass variability of small objects, or against UAV-specific nuisances such as uncontrolled weather conditions. Unlike previous approaches focusing on high-level semantic information, we report the importance of underlying features to improve detection accuracy and robustness from the information-theoretic perspective. Specifically, we propose a robust and discriminative feature learning approach through mutual information maximization (RD-MIM), which can be integrated into numerous object detection methods for aerial images. Firstly, we present the rank sample mining method to reduce underlying feature differences between the natural image domain and the aerial image domain. Then, we design a momentum contrast learning strategy to make object features similar for the same category and dissimilar for different categories. Finally, we construct a transformer-based global attention mechanism to boost object location semantics by leveraging the high interrelation of different receptive fields. We conduct extensive experiments on the VisDrone and Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) datasets to prove the effectiveness of the proposed method. The experimental results show that our approach brings considerable robustness gains to basic detectors and advanced detection methods, achieving relative growth rates of 51.0% and 39.4% in corruption robustness, respectively. Our code is available at https://github.com/cq100/RD-MIM (accessed on 2 August 2024).
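The momentum contrast strategy is described only at a high level; the sketch below shows a generic supervised InfoNCE-style loss that pulls same-category object features together and pushes different categories apart, written as an illustration rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def category_contrastive_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Supervised InfoNCE-style loss over L2-normalized object features."""
    z = F.normalize(features, dim=1)                                 # (N, D)
    sim = z @ z.t() / tau                                            # pairwise similarities
    mask_pos = (labels[:, None] == labels[None, :]).float()
    mask_pos.fill_diagonal_(0)                                       # exclude self-pairs
    logits = sim - 1e9 * torch.eye(len(z), device=z.device)          # mask self in the denominator
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = mask_pos.sum(dim=1).clamp(min=1)
    return -(mask_pos * log_prob).sum(dim=1).div(pos_count).mean()

# Example: 8 object embeddings of dimension 128 with category labels in {0, 1, 2}.
loss = category_contrastive_loss(torch.randn(8, 128), torch.randint(0, 3, (8,)))
```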
The data analysis of blasting sites has always been a research goal of relevant researchers. The rise of mobile blasting robots has aroused many researchers' interest in machine learning methods for target detection in the field of blasting. Serverless computing can provide a variety of computing services to people without a hardware foundation or rich software development experience, which has aroused interest in how to use it in the field of machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Based on data parallelism, data aggregation and training synchronization in Function as a Service (FaaS) are effectively realized. The application also encrypts the dataset, effectively reducing the risk of data leakage. We rent a cloud server and a Lambda, and then conduct experiments to evaluate our application. Our results indicate the effectiveness, rapidity, and economy of distributed training on FaaS.
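The aggregation step of data-parallel training on FaaS can be illustrated independently of any cloud SDK; in the sketch below the worker invocation is left as an abstract callable because the paper's Lambda interface is not given, and the update rule is a plain averaged gradient step chosen for illustration.

```python
import numpy as np
from typing import Callable, Dict, List

def aggregate_updates(worker_updates: List[Dict[str, np.ndarray]]) -> Dict[str, np.ndarray]:
    """Average per-parameter updates returned by data-parallel workers."""
    keys = worker_updates[0].keys()
    return {k: np.mean([u[k] for u in worker_updates], axis=0) for k in keys}

def training_round(params: Dict[str, np.ndarray],
                   invoke_worker: Callable[[Dict[str, np.ndarray], int], Dict[str, np.ndarray]],
                   num_workers: int, lr: float = 0.1) -> Dict[str, np.ndarray]:
    """One synchronous round: each worker trains on its shard, then the updates are averaged."""
    updates = [invoke_worker(params, shard_id) for shard_id in range(num_workers)]
    avg = aggregate_updates(updates)
    return {k: params[k] - lr * avg[k] for k in params}

# `invoke_worker` would wrap the FaaS call (e.g., an AWS Lambda invocation) and return the
# gradients computed on that worker's encrypted data shard; here it is only a placeholder.
```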
Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built using deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concerns about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification problems, failing to fully exploit the unique characteristics of object detection models, thus resulting in widespread deficiencies in their transferability. Furthermore, previous research has predominantly concentrated on the transferability issues of non-targeted attacks, whereas enhancing the transferability of targeted adversarial examples presents even greater challenges. Traditional attack techniques typically employ cross-entropy as a loss measure, iteratively adjusting adversarial examples to match target categories. However, their inherent limitations restrict their broad applicability and transferability across different models. To address the aforementioned challenges, this study proposes a novel targeted adversarial attack method aimed at enhancing the transferability of adversarial samples across object detection models. Within the framework of iterative attacks, we devise a new objective function designed to mitigate consistency issues arising from cumulative noise and to enhance the separation between target and non-target categories (logit margin). Secondly, a data augmentation framework incorporating random erasing and color transformations is introduced into targeted adversarial attacks. This enhances the diversity of gradients, preventing overfitting to white-box models. Lastly, perturbations are applied only within the specified object's bounding box to reduce the perturbation range, enhancing attack stealthiness. Experiments were conducted on the Microsoft Common Objects in Context (MS COCO) dataset using You Only Look Once version 3 (YOLOv3), You Only Look Once version 8 (YOLOv8), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and RetinaNet. The results demonstrate a significant advantage of the proposed method in black-box settings. Among these, the success rate of RetinaNet transfer attacks reached a maximum of 82.59%.
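The method combines an iterative attack, a logit-margin objective, input augmentation, and perturbations restricted to the object's bounding box; the compressed PyTorch sketch below puts those four ingredients together, with the detector's per-object logit extraction abstracted into a user-supplied differentiable function and the augmentation reduced to a brightness jitter.

```python
import torch

def targeted_bbox_attack(image: torch.Tensor, bbox: tuple, target_cls: int, get_logits,
                         steps: int = 10, eps: float = 8 / 255, alpha: float = 2 / 255) -> torch.Tensor:
    """Iterative targeted attack: raise the target class's logit margin, perturbing only inside the box."""
    x1, y1, x2, y2 = bbox
    mask = torch.zeros_like(image)
    mask[..., y1:y2, x1:x2] = 1.0                              # restrict perturbation to the bounding box
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = (image + delta * mask).clamp(0, 1)
        jitter = 0.9 + 0.2 * torch.rand((), device=image.device)
        logits = get_logits((adv * jitter).clamp(0, 1))        # (num_classes,) for the attacked object
        others = torch.cat([logits[:target_cls], logits[target_cls + 1:]])
        margin = logits[target_cls] - others.max()             # logit-margin objective
        grad = torch.autograd.grad(margin, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign() * mask                # ascend the margin
            delta.clamp_(-eps, eps)
    return (image + delta.detach() * mask).clamp(0, 1)
```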
Automatically detecting and locating remote, occluded, small objects in images of complex traffic environments is a valuable and challenging research problem. Since bounding box localization is not sufficiently accurate and it is difficult to distinguish overlapping and occluded objects, the authors propose a network model with a second-order term attention mechanism and an occlusion loss. First, the backbone network is built on CSPDarkNet53. Then, a feature extraction method is designed based on an item-wise attention mechanism, which uses the filtered weighted feature vector to replace the original residual fusion and adds a second-order term to reduce the information loss in the fusion process and accelerate the convergence of the model. Finally, an object occlusion regression loss function is studied to reduce the missed detections caused by dense objects. Sufficient experimental results demonstrate that the authors' method achieves state-of-the-art performance without reducing the detection speed. The mAP@.5 of the method is 85.8% on the Foggy_cityscapes dataset and 97.8% on the KITTI dataset.
Anchor-free object-detection methods have achieved a significant advancement in the field of computer vision, particularly in the realm of real-time inference. However, in remote sensing object detection, anchor-free methods often lack the capability to separate the foreground from the background. This paper proposes an anchor-free method named the probability-enhanced anchor-free detector (ProEnDet) for remote sensing object detection. First, a weighted bidirectional feature pyramid is used for feature extraction. Second, we introduce probability enhancement to strengthen the classification of the object's foreground and background. The detector uses the log-likelihood as the final score to improve the classification of the foreground and background of the object. ProEnDet is verified on the DIOR and NWPU-VHR-10 datasets. The experiments achieve mean average precisions of 61.4 and 69.0 on the DIOR and NWPU-VHR-10 datasets, respectively. ProEnDet achieves a speed of 32.4 FPS on the DIOR dataset, which satisfies the real-time requirements for remote-sensing object detection.
文摘This article presents an analysis of the patterns of interactions resulting from the positive and negative emotional events that occur in cities,considering them as complex systems.It explores,from the imaginaries,how certain urban objects can act as emotional agents and how these events affect the urban system as a whole.An adaptive complex systems perspective is used to analyze these patterns.The results show patterns in the processes and dynamics that occur in cities based on the objects that affect the emotions of the people who live there.These patterns depend on the characteristics of the emotional charge of urban objects,but they can be generalized in the following process:(1)immediate reaction by some individuals;(2)emotions are generated at the individual level which begins to generalize,permuting to a collective emotion;(3)a process of reflection is detonated in some individuals from the reading of collective emotions;(4)integration/significance in the community both at the individual and collective level,on the concepts,roles and/or functions that give rise to the process in the system.Therefore,it is clear that emotions play a significant role in the development of cities and these aspects should be considered in the design strategies of all kinds of projects for the city.Future extensions of this work could include a deeper analysis of specific emotional events in urban environments,as well as possible implications for urban policy and decision making.
文摘Object detection plays a vital role in the video surveillance systems.To enhance security,surveillance cameras are now installed in public areas such as traffic signals,roadways,retail malls,train stations,and banks.However,monitor-ing the video continually at a quicker pace is a challenging job.As a consequence,security cameras are useless and need human monitoring.The primary difficulty with video surveillance is identifying abnormalities such as thefts,accidents,crimes,or other unlawful actions.The anomalous action does not occur at a high-er rate than usual occurrences.To detect the object in a video,first we analyze the images pixel by pixel.In digital image processing,segmentation is the process of segregating the individual image parts into pixels.The performance of segmenta-tion is affected by irregular illumination and/or low illumination.These factors highly affect the real-time object detection process in the video surveillance sys-tem.In this paper,a modified ResNet model(M-Resnet)is proposed to enhance the image which is affected by insufficient light.Experimental results provide the comparison of existing method output and modification architecture of the ResNet model shows the considerable amount improvement in detection objects in the video stream.The proposed model shows better results in the metrics like preci-sion,recall,pixel accuracy,etc.,andfinds a reasonable improvement in the object detection.
基金This research was funded by the Natural Science Foundation of Hebei Province(F2021506004).
文摘Transformer-based models have facilitated significant advances in object detection.However,their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle(UAV)imagery.Addressing these limitations,we propose a hybrid transformer-based detector,H-DETR,and enhance it for dense small objects,leading to an accurate and efficient model.Firstly,we introduce a hybrid transformer encoder,which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently.Furthermore,we propose two novel strategies to enhance detection performance without incurring additional inference computation.Query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression.Adversarial denoising learning is a novel enhancement method inspired by adversarial learning,which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise.Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach,achieving a significant improvement in accuracy with a reduction in computational complexity.Our method achieves 31.9%and 21.1%AP on the VisDrone and UAVDT datasets,respectively,and has a faster inference speed,making it a competitive model in UAV image object detection.
基金supported by“Regional Innovation Strategy (RIS)”through the National Research Foundation of Korea (NRF)funded by the Ministry of Education (MOE) (2021RIS-004).
文摘A concurrency control mechanism for collaborative work is akey element in a mixed reality environment. However, conventional lockingmechanisms restrict potential tasks or the support of non-owners, thusincreasing the working time because of waiting to avoid conflicts. Herein, wepropose an adaptive concurrency control approach that can reduce conflictsand work time. We classify shared object manipulation in mixed reality intodetailed goals and tasks. Then, we model the relationships among goal,task, and ownership. As the collaborative work progresses, the proposedsystem adapts the different concurrency control mechanisms of shared objectmanipulation according to the modeling of goal–task–ownership. With theproposed concurrency control scheme, users can hold shared objects andmove and rotate together in a mixed reality environment similar to realindustrial sites. Additionally, this system provides MS Hololens and Myosensors to recognize inputs from a user and provides results in a mixed realityenvironment. The proposed method is applied to install an air conditioneras a case study. Experimental results and user studies show that, comparedwith the conventional approach, the proposed method reduced the number ofconflicts, waiting time, and total working time.
文摘This research introduces a challenge in integrating and cleaning the data,which is a crucial task in object matching.While the object is detected and then measured,the vibration at different light intensities may influence the durability and reliability of mechanical systems or structures and cause problems such as damage,abnormal stopping,and disaster.Recent research failed to improve the accuracy rate and the computation time in tracking an object and in the vibration measurement.To solve all these problems,this proposed research simplifies the scaling factor determination by assigning a known real-world dimension to a predetermined portion of the image.A novel white color sticker of the known dimensions marked with a color dot is pasted on the surface of an object for the best result in the template matching using the Improved Up-Sampled Cross-Correlation(UCC)algorithm.The vibration measurement is calculated using the Finite-Difference Algorithm(FDA),a machine vision systemfitted with a macro lens sensor that is capable of capturing the image at a closer range,which does not affect the quality of displacement measurement from the video frames.Thefield test was conducted on the TAFE(Tractors and Farm Equipment Limited)tractor parts,and the percentage of error was recorded between 30%and 50%at very low vibration values close to zero,whereas it was recorded between 5%and 10%error in most high-accelerations,the essential range for vibration analysis.Finally,the suggested system is more suitable for measuring the vibration of stationary machinery having low frequency ranges.The use of a macro lens enables to capture of image frames at very close-ups.A 30%to 50%error percentage has been reported when the vibration amplitude is very small.Therefore,this study is not suitable for Nano vibration analysis.
文摘Improved picture quality is critical to the effectiveness of object recog-nition and tracking.The consistency of those photos is impacted by night-video systems because the contrast between high-profile items and different atmospheric conditions,such as mist,fog,dust etc.The pictures then shift in intensity,colour,polarity and consistency.A general challenge for computer vision analyses lies in the horrid appearance of night images in arbitrary illumination and ambient envir-onments.In recent years,target recognition techniques focused on deep learning and machine learning have become standard algorithms for object detection with the exponential growth of computer performance capabilities.However,the iden-tification of objects in the night world also poses further problems because of the distorted backdrop and dim light.The Correlation aware LSTM based YOLO(You Look Only Once)classifier method for exact object recognition and deter-mining its properties under night vision was a major inspiration for this work.In order to create virtual target sets similar to daily environments,we employ night images as inputs;and to obtain high enhanced image using histogram based enhancement and iterative wienerfilter for removing the noise in the image.The process of the feature extraction and feature selection was done for electing the potential features using the Adaptive internal linear embedding(AILE)and uplift linear discriminant analysis(ULDA).The region of interest mask can be segmen-ted using the Recurrent-Phase Level set Segmentation.Finally,we use deep con-volution feature fusion and region of interest pooling to integrate the presently extremely sophisticated quicker Long short term memory based(LSTM)with YOLO method for object tracking system.A range of experimentalfindings demonstrate that our technique achieves high average accuracy with a precision of 99.7%for object detection of SSAN datasets that is considerably more than that of the other standard object detection mechanism.Our approach may therefore satisfy the true demands of night scene target detection applications.We very much believe that our method will help future research.
基金a grant from the Basic Science Research Program through the National Research Foundation(NRF)(2021R1F1A1063634)funded by the Ministry of Science and ICT(MSIT)Republic of Korea.This research is supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2024R410)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding program Grant Code(NU/RG/SERC/12/6).
文摘Advances in machine vision systems have revolutionized applications such as autonomous driving,robotic navigation,and augmented reality.Despite substantial progress,challenges persist,including dynamic backgrounds,occlusion,and limited labeled data.To address these challenges,we introduce a comprehensive methodology toenhance image classification and object detection accuracy.The proposed approach involves the integration ofmultiple methods in a complementary way.The process commences with the application of Gaussian filters tomitigate the impact of noise interference.These images are then processed for segmentation using Fuzzy C-Meanssegmentation in parallel with saliency mapping techniques to find the most prominent regions.The Binary RobustIndependent Elementary Features(BRIEF)characteristics are then extracted fromdata derived fromsaliency mapsand segmented images.For precise object separation,Oriented FAST and Rotated BRIEF(ORB)algorithms areemployed.Genetic Algorithms(GAs)are used to optimize Random Forest classifier parameters which lead toimproved performance.Our method stands out due to its comprehensive approach,adeptly addressing challengessuch as changing backdrops,occlusion,and limited labeled data concurrently.A significant enhancement hasbeen achieved by integrating Genetic Algorithms(GAs)to precisely optimize parameters.This minor adjustmentnot only boosts the uniqueness of our system but also amplifies its overall efficacy.The proposed methodologyhas demonstrated notable classification accuracies of 90.9%and 89.0%on the challenging Corel-1k and MSRCdatasets,respectively.Furthermore,detection accuracies of 87.2%and 86.6%have been attained.Although ourmethod performed well in both datasets it may face difficulties in real-world data especially where datasets havehighly complex backgrounds.Despite these limitations,GAintegration for parameter optimization shows a notablestrength in enhancing the overall adaptability and performance of our system.
基金supported by the NationalNatural Science Foundation of China Nos.62302167,U23A20343Shanghai Sailing Program(23YF1410500)Chenguang Program of Shanghai Education Development Foundation and Shanghai Municipal Education Commission(23CGA34).
文摘Confusing object detection(COD),such as glass,mirrors,and camouflaged objects,represents a burgeoning visual detection task centered on pinpointing and distinguishing concealed targets within intricate backgrounds,leveraging deep learning methodologies.Despite garnering increasing attention in computer vision,the focus of most existing works leans toward formulating task-specific solutions rather than delving into in-depth analyses of methodological structures.As of now,there is a notable absence of a comprehensive systematic review that focuses on recently proposed deep learning-based models for these specific tasks.To fill this gap,our study presents a pioneering review that covers both themodels and the publicly available benchmark datasets,while also identifying potential directions for future research in this field.The current dataset primarily focuses on single confusing object detection at the image level,with some studies extending to video-level data.We conduct an in-depth analysis of deep learning architectures,revealing that the current state-of-the-art(SOTA)COD methods demonstrate promising performance in single object detection.We also compile and provide detailed descriptions ofwidely used datasets relevant to these detection tasks.Our endeavor extends to discussing the limitations observed in current methodologies,alongside proposed solutions aimed at enhancing detection accuracy.Additionally,we deliberate on relevant applications and outline future research trajectories,aiming to catalyze advancements in the field of glass,mirror,and camouflaged object detection.
文摘Discovering floating wastes,especially bottles on water,is a crucial research problem in environmental hygiene.Nevertheless,real-world applications often face challenges such as interference from irrelevant objects and the high cost associated with data collection.Consequently,devising algorithms capable of accurately localizing specific objects within a scene in scenarios where annotated data is limited remains a formidable challenge.To solve this problem,this paper proposes an object discovery by request problem setting and a corresponding algorithmic framework.The proposed problem setting aims to identify specified objects in scenes,and the associated algorithmic framework comprises pseudo data generation and object discovery by request network.Pseudo-data generation generates images resembling natural scenes through various data augmentation rules,using a small number of object samples and scene images.The network structure of object discovery by request utilizes the pre-trained Vision Transformer(ViT)model as the backbone,employs object-centric methods to learn the latent representations of foreground objects,and applies patch-level reconstruction constraints to the model.During the validation phase,we use the generated pseudo datasets as training sets and evaluate the performance of our model on the original test sets.Experiments have proved that our method achieves state-of-the-art performance on Unmanned Aerial Vehicles-Bottle Detection(UAV-BD)dataset and self-constructed dataset Bottle,especially in multi-object scenarios.
基金supported by a grant from the National Key Research and Development Project(2023YFB4302100)Key Research and Development Project of Jiangxi Province(No.20232ACE01011)Independent Deployment Project of Ganjiang Innovation Research Institute,Chinese Academy of Sciences(E255J001).
文摘Aiming at the limitations of the existing railway foreign object detection methods based on two-dimensional(2D)images,such as short detection distance,strong influence of environment and lack of distance information,we propose Rail-PillarNet,a three-dimensional(3D)LIDAR(Light Detection and Ranging)railway foreign object detection method based on the improvement of PointPillars.Firstly,the parallel attention pillar encoder(PAPE)is designed to fully extract the features of the pillars and alleviate the problem of local fine-grained information loss in PointPillars pillars encoder.Secondly,a fine backbone network is designed to improve the feature extraction capability of the network by combining the coding characteristics of LIDAR point cloud feature and residual structure.Finally,the initial weight parameters of the model were optimised by the transfer learning training method to further improve accuracy.The experimental results on the OSDaR23 dataset show that the average accuracy of Rail-PillarNet reaches 58.51%,which is higher than most mainstream models,and the number of parameters is 5.49 M.Compared with PointPillars,the accuracy of each target is improved by 10.94%,3.53%,16.96%and 19.90%,respectively,and the number of parameters only increases by 0.64M,which achieves a balance between the number of parameters and accuracy.
文摘Effective small object detection is crucial in various applications including urban intelligent transportation and pedestrian detection.However,small objects are difficult to detect accurately because they contain less information.Many current methods,particularly those based on Feature Pyramid Network(FPN),address this challenge by leveraging multi-scale feature fusion.However,existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across different layers,leading to suboptimal small object detection.To address this problem,we propose the Two-layerAttention Feature Pyramid Network(TA-FPN),featuring two key modules:the Two-layer Attention Module(TAM)and the Small Object Detail Enhancement Module(SODEM).TAM uses the attention module to make the network more focused on the semantic information of the object and fuse it to the lower layer,so that each layer contains similar semantic information,to alleviate the problem of small object information being submerged due to semantic gaps between different layers.At the same time,SODEM is introduced to strengthen the local features of the object,suppress background noise,enhance the information details of the small object,and fuse the enhanced features to other feature layers to ensure that each layer is rich in small object information,to improve small object detection accuracy.Our extensive experiments on challenging datasets such as Microsoft Common Objects inContext(MSCOCO)and Pattern Analysis Statistical Modelling and Computational Learning,Visual Object Classes(PASCAL VOC)demonstrate the validity of the proposedmethod.Experimental results show a significant improvement in small object detection accuracy compared to state-of-theart detectors.
基金the Scientific Research Fund of Hunan Provincial Education Department(23A0423).
文摘Remote sensing imagery,due to its high altitude,presents inherent challenges characterized by multiple scales,limited target areas,and intricate backgrounds.These inherent traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery.Additionally,these complexities contribute to inaccuracies in target localization and hinder precise target categorization.This paper addresses these challenges by proposing a solution:The YOLO-MFD model(YOLO-MFD:Remote Sensing Image Object Detection withMulti-scale Fusion Dynamic Head).Before presenting our method,we delve into the prevalent issues faced in remote sensing imagery analysis.Specifically,we emphasize the struggles of existing object recognition algorithms in comprehensively capturing critical image features amidst varying scales and complex backgrounds.To resolve these issues,we introduce a novel approach.First,we propose the implementation of a lightweight multi-scale module called CEF.This module significantly improves the model’s ability to comprehensively capture important image features by merging multi-scale feature information.It effectively addresses the issues of missed detection and mistaken alarms that are common in remote sensing imagery.Second,an additional layer of small target detection heads is added,and a residual link is established with the higher-level feature extraction module in the backbone section.This allows the model to incorporate shallower information,significantly improving the accuracy of target localization in remotely sensed images.Finally,a dynamic head attentionmechanism is introduced.This allows themodel to exhibit greater flexibility and accuracy in recognizing shapes and targets of different sizes.Consequently,the precision of object detection is significantly improved.The trial results show that the YOLO-MFD model shows improvements of 6.3%,3.5%,and 2.5%over the original YOLOv8 model in Precision,map@0.5 and map@0.5:0.95,separately.These results illustrate the clear advantages of the method.
Funding: Supported by the National Natural Science Foundation of China under Grant 62177029 and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX21_0740), China.
Abstract: Visual object tracking plays a crucial role in computer vision. In recent years, researchers have proposed various methods to achieve high-performance object tracking. Among these, Transformer-based methods have become a research hotspot because of their ability to model information globally and in context. However, current Transformer-based trackers still face challenges such as limited tracking accuracy and redundant feature information. In this paper, we introduce the self-calibration multi-head self-attention Transformer (SMSTracker) to address these challenges. It employs a hybrid tensor-decomposition, self-organizing multi-head self-attention mechanism, which not only compresses and accelerates Transformer operations but also significantly reduces redundant data, thereby improving the accuracy and efficiency of tracking. In addition, we introduce a self-calibration attention fusion block to resolve the attention ambiguities and inconsistencies common in traditional tracking methods, ensuring stable and reliable tracking performance across scenarios. Experimental results show that SMSTracker achieves competitive performance in visual object tracking, demonstrating its potential to provide more robust and efficient tracking solutions in real-world applications.
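The hybrid tensor-decomposition attention mentioned above is not described in detail in the abstract. One common way to compress self-attention, shown below purely as an assumption rather than the authors' design, is to factorize each dense projection matrix into two thin low-rank linear maps; the class and parameter names are hypothetical.

import torch
import torch.nn as nn

class LowRankSelfAttention(nn.Module):
    """Hypothetical sketch: each dense projection (dim x dim) is factorized through a small rank r."""
    def __init__(self, dim=256, heads=8, rank=64):
        super().__init__()
        self.heads, self.head_dim = heads, dim // heads
        def low_rank():
            return nn.Sequential(nn.Linear(dim, rank, bias=False),
                                 nn.Linear(rank, dim, bias=False))
        self.to_q, self.to_k, self.to_v, self.proj = (low_rank() for _ in range(4))

    def forward(self, x):                            # x: (batch, tokens, dim)
        b, n, _ = x.shape
        q, k, v = (f(x).view(b, n, self.heads, self.head_dim).transpose(1, 2)
                   for f in (self.to_q, self.to_k, self.to_v))
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(b, n, -1)
        return self.proj(out)

if __name__ == "__main__":
    print(LowRankSelfAttention()(torch.randn(2, 100, 256)).shape)  # torch.Size([2, 100, 256])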
Funding: Supported by the National Natural Science Foundation of China (72174172, 71774134) and the Fundamental Research Funds for Central Universities, Southwest Minzu University (2022NYXXS094).
Abstract: Remote sensing image object detection is one of the core tasks of remote sensing image processing. In recent years, with the development of deep learning, great progress has been made in object detection for remote sensing. However, dense small targets, complex backgrounds, and poor target positioning accuracy still make remote sensing object detection difficult. To solve these problems, this research proposes a remote sensing image object detection algorithm based on an improved YOLOX-S. First, the Efficient Channel Attention (ECA) module is introduced to improve the network's ability to extract image features and to suppress useless information such as background. Second, the loss function is optimized to improve the regression accuracy of the target bounding box. We evaluate the algorithm on the NWPU VHR-10 remote sensing image dataset. The experimental results show that its detection accuracy reaches 95.5% without increasing the number of parameters, a significant improvement over the original YOLOX-S network, and its detection performance is much better than that of several other mainstream remote sensing image detection methods. Our method also shows good generalization performance in experiments on aircraft images from the RSOD dataset.
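ECA is a published channel-attention design: a global average pooling step followed by a 1-D convolution across channels with an adaptively sized kernel. The sketch below follows that standard formulation for reference; it is not taken from this paper's code, and the hyperparameter names are the conventional ones.

import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Standard ECA-style block: channel attention via a 1-D conv over the pooled channel descriptor."""
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs((math.log2(channels) + b) / gamma))
        k = t if t % 2 else t + 1                      # adaptive, odd kernel size
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x):                              # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                         # global average pooling -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)       # 1-D conv across the channel dimension
        return x * torch.sigmoid(y)[:, :, None, None]  # re-weight the feature map per channel

if __name__ == "__main__":
    print(ECA(256)(torch.randn(1, 256, 20, 20)).shape)  # torch.Size([1, 256, 20, 20])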
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61906168, U20A20171), the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LY23F020023, LY21F020027), and the Construction of Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects (Grant No. 2022SDSJ01).
Abstract: In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis, and evaluation of kidney and urinary tract disease, providing insight into its specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, generalize poorly, and make it difficult to quickly and accurately count urine sediments. Deep learning based object detection methods have the potential to address these challenges, but they require large urine sediment image datasets, and only a limited number of such datasets are currently publicly available. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated under expert guidance. Two main challenges are associated with our dataset: a large number of small objects and occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to this dataset and addressing the challenges it presents. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and thereby provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which help the detector distinguish urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects against uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments (erythrocytes/red blood cells, leukocytes/white blood cells, epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes), achieving a best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5.
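As an illustrative stand-in for the background-suppression idea above (the paper's exact module is not described in the abstract), the sketch below applies a generic learned spatial gate computed from channel-pooled statistics, so that responses at uninformative background locations are attenuated. The module and its names are assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class BackgroundSuppression(nn.Module):
    """Hypothetical sketch: learn a spatial mask from channel-pooled statistics and damp background."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):                              # x: (N, C, H, W)
        avg = x.mean(dim=1, keepdim=True)              # (N, 1, H, W) average over channels
        mx, _ = x.max(dim=1, keepdim=True)             # (N, 1, H, W) max over channels
        mask = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * mask                                # background locations are attenuated

if __name__ == "__main__":
    print(BackgroundSuppression()(torch.randn(1, 64, 80, 80)).shape)  # torch.Size([1, 64, 80, 80])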
Funding: Supported by the National Natural Science Foundation of China under Grant 61671219.
Abstract: Object detection in unmanned aerial vehicle (UAV) aerial images has become increasingly important in military and civil applications. General object detection models are not robust enough against the inter-class similarity and intra-class variability of small objects, or against UAV-specific nuisances such as uncontrolled weather conditions. Unlike previous approaches that focus on high-level semantic information, we highlight, from an information-theoretic perspective, the importance of underlying features for improving detection accuracy and robustness. Specifically, we propose a robust and discriminative feature learning approach through mutual information maximization (RD-MIM), which can be integrated into numerous object detection methods for aerial images. First, we present a rank sample mining method to reduce underlying feature differences between the natural image domain and the aerial image domain. Then, we design a momentum contrast learning strategy that makes object features of the same category similar and those of different categories dissimilar. Finally, we construct a transformer-based global attention mechanism to boost object location semantics by leveraging the high interrelation of different receptive fields. We conduct extensive experiments on the VisDrone and Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) datasets to prove the effectiveness of the proposed method. The experimental results show that our approach brings considerable robustness gains to basic detectors and advanced detection methods, achieving relative improvements of 51.0% and 39.4% in corruption robustness, respectively. Our code is available at https://github.com/cq100/RD-MIM (accessed on 2 August 2024).
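Momentum contrast learning is usually built from two generic ingredients: a key encoder updated as an exponential moving average of the query encoder, and an InfoNCE-style loss over a queue of negative features. The sketch below shows that generic recipe under those assumptions; it is not the specific strategy used in RD-MIM, and the function names are illustrative.

import torch
import torch.nn.functional as F

def momentum_update(query_enc, key_enc, m=0.999):
    # Key encoder parameters follow the query encoder as an exponential moving average.
    for q, k in zip(query_enc.parameters(), key_enc.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1 - m)

def info_nce(q, k_pos, queue, temperature=0.07):
    # q, k_pos: (N, D) L2-normalized features; queue: (K, D) stored negatives.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)            # (N, 1) positive similarities
    l_neg = q @ queue.t()                                    # (N, K) negative similarities
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positives sit at index 0
    return F.cross_entropy(logits, labels)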
Abstract: The data analysis of blasting sites has long been a goal of researchers in the field. The rise of mobile blasting robots has drawn many researchers' interest to machine learning methods for target detection in blasting. Serverless computing can provide a variety of computing services to people without a hardware foundation or rich software development experience, which has raised interest in how it can be used for machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Building on data parallelism, we realize data aggregation and training synchronization in Function as a Service (FaaS). The application also encrypts the data set, effectively reducing the risk of data leakage. We rent a cloud server and an AWS Lambda deployment and conduct experiments to evaluate our application. Our results indicate the effectiveness, speed, and economy of distributed training on FaaS.
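Setting the AWS-specific invocation, encryption, and synchronization machinery aside, the core data-parallel step the abstract describes can be pictured as each worker function computing gradients on its data shard and an aggregator averaging them before a single model update. The local simulation below illustrates only that step; all names are hypothetical and no Lambda APIs are invoked.

import torch
import torch.nn as nn

def worker_gradients(model, shard_x, shard_y):
    # One "function invocation": compute gradients of the loss on this worker's data shard.
    loss = nn.functional.mse_loss(model(shard_x), shard_y)
    return torch.autograd.grad(loss, model.parameters())

def aggregate_and_step(model, grads_per_worker, lr=0.01):
    # Aggregator role: average per-parameter gradients across workers, then apply one SGD step.
    with torch.no_grad():
        for p, *gs in zip(model.parameters(), *grads_per_worker):
            p -= lr * torch.stack(gs).mean(dim=0)

if __name__ == "__main__":
    model = nn.Linear(4, 1)
    shards = [(torch.randn(8, 4), torch.randn(8, 1)) for _ in range(3)]   # three simulated workers
    grads = [worker_gradients(model, x, y) for x, y in shards]
    aggregate_and_step(model, grads)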
Abstract: Object detection finds wide application in various sectors, including autonomous driving, industry, and healthcare. Recent studies have highlighted the vulnerability of object detection models built on deep neural networks when confronted with carefully crafted adversarial examples. This not only reveals their shortcomings in defending against malicious attacks but also raises widespread concern about the security of existing systems. Most existing adversarial attack strategies focus primarily on image classification, failing to exploit the unique characteristics of object detection models, which results in poor transferability. Furthermore, previous research has concentrated predominantly on the transferability of non-targeted attacks, whereas enhancing the transferability of targeted adversarial examples is even more challenging. Traditional attack techniques typically employ cross-entropy as the loss measure and iteratively adjust adversarial examples to match the target categories, but their inherent limitations restrict broad applicability and transferability across models. To address these challenges, this study proposes a novel targeted adversarial attack method aimed at enhancing the transferability of adversarial samples across object detection models. Within an iterative attack framework, we devise a new objective function designed to mitigate consistency issues arising from cumulative noise and to enlarge the separation between target and non-target categories (the logit margin). Second, a data augmentation framework incorporating random erasing and color transformations is introduced into targeted adversarial attacks, which increases gradient diversity and prevents overfitting to white-box models. Last, perturbations are applied only within the specified object's bounding box, reducing the perturbation range and enhancing attack stealthiness. Experiments were conducted on the Microsoft Common Objects in Context (MS COCO) dataset using You Only Look Once version 3 (YOLOv3), You Only Look Once version 8 (YOLOv8), Faster Region-based Convolutional Neural Networks (Faster R-CNN), and RetinaNet. The results demonstrate a significant advantage of the proposed method in black-box settings; the success rate of transfer attacks against RetinaNet reached up to 82.59%.
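Two of the ingredients named above, a logit-margin objective and a box-restricted perturbation, can be sketched in a simplified, classification-level form as follows. A real attack on a detector would operate on per-box detector logits and include the augmentation framework; this sketch only illustrates the two ideas, and all names and hyperparameters are assumptions.

import torch

def logit_margin_loss(logits, target):
    # Push the target logit above the best non-target logit (minimizing this enlarges the margin).
    tgt = logits.gather(1, target[:, None]).squeeze(1)
    other = logits.masked_fill(
        torch.nn.functional.one_hot(target, logits.size(1)).bool(), float("-inf")
    ).max(dim=1).values
    return (other - tgt).mean()

def box_masked_attack(model, image, target, box, steps=10, eps=8 / 255, alpha=2 / 255):
    x1, y1, x2, y2 = box
    mask = torch.zeros_like(image)
    mask[..., y1:y2, x1:x2] = 1.0                      # perturb only inside the object's bounding box
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = logit_margin_loss(model(image + delta * mask), target)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta - alpha * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (image + delta * mask).clamp(0, 1)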
Funding: Doctoral Talent Training Project of Chongqing University of Posts and Telecommunications, Grant/Award Number: BYJS202007; Natural Science Foundation of Chongqing, Grant/Award Number: cstc2021jcyj-msxmX0941; National Natural Science Foundation of China, Grant/Award Number: 62176034; Scientific and Technological Research Program of Chongqing Municipal Education Commission, Grant/Award Number: KJQN202101901.
Abstract: Automatically detecting and locating remote, occluded small objects in images of complex traffic environments is a valuable and challenging research problem. Because bounding box localization is not sufficiently accurate and overlapping and occluded objects are difficult to distinguish, the authors propose a network model with a second-order-term attention mechanism and an occlusion loss. First, the backbone network is built on CSPDarkNet53. Then, a feature extraction network is designed around an item-wise attention mechanism, which replaces the original residual fusion with a filtered, weighted feature vector and adds a second-order term to reduce information loss during fusion and accelerate model convergence. Finally, an object occlusion regression loss function is studied to reduce missed detections caused by dense objects. Extensive experimental results demonstrate that the authors' method achieves state-of-the-art performance without reducing detection speed. The mAP@.5 of the method is 85.8% on the Foggy_cityscapes dataset and 97.8% on the KITTI dataset.
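The abstract leaves the fusion details open; one possible reading, offered here only as an illustrative guess and not as the authors' formulation, is that the residual sum is replaced by an attention-weighted combination plus an element-wise second-order interaction term. All names in the sketch are hypothetical.

import torch
import torch.nn as nn

class SecondOrderFusion(nn.Module):
    """Hypothetical sketch: attention-weighted fusion with an element-wise second-order term."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, identity, residual):
        w = self.gate(residual)                       # item-wise (per-element) attention weights
        first_order = w * residual + (1 - w) * identity
        second_order = identity * residual            # element-wise interaction term
        return first_order + second_order

if __name__ == "__main__":
    f = SecondOrderFusion(64)
    a, b = torch.randn(1, 64, 52, 52), torch.randn(1, 64, 52, 52)
    print(f(a, b).shape)                              # torch.Size([1, 64, 52, 52])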
Funding: Supported in part by the National Natural Science Foundation of China (42001408).
Abstract: Anchor-free object detection methods have achieved significant advances in computer vision, particularly for real-time inference. However, in remote sensing object detection, anchor-free methods often lack the capability to separate foreground from background. This paper proposes an anchor-free method named the probability-enhanced anchor-free detector (ProEnDet) for remote sensing object detection. First, a weighted bidirectional feature pyramid is used for feature extraction. Second, we introduce probability enhancement to strengthen the classification of the object's foreground and background: the detector uses the log-likelihood as its final score to improve this classification. ProEnDet is verified on the DIOR and NWPU-VHR-10 datasets, achieving mean average precisions of 61.4 and 69.0, respectively. ProEnDet runs at 32.4 FPS on the DIOR dataset, which satisfies the real-time requirements of remote sensing object detection.
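One simple way to read the log-likelihood scoring described above, shown here as an assumption rather than ProEnDet's exact formulation, is to combine the per-class classification probability with a predicted foreground probability additively in log space, so the final score behaves like a joint log-likelihood of "this location is foreground and belongs to class c". The function and argument names are hypothetical.

import torch

def log_likelihood_score(cls_logits, fg_logit):
    # cls_logits: (N, num_classes) classification logits; fg_logit: (N,) foreground confidence logit.
    log_p_cls = torch.log_softmax(cls_logits, dim=1)
    log_p_fg = torch.nn.functional.logsigmoid(fg_logit)[:, None]
    return log_p_cls + log_p_fg      # per-class log-likelihood used as the detection score

if __name__ == "__main__":
    scores = log_likelihood_score(torch.randn(5, 20), torch.randn(5))
    print(scores.shape)              # torch.Size([5, 20])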