The advancement of navigation systems for the visually impaired has significantly enhanced their mobility by mitigating the risk of encountering obstacles and guiding them along safe, navigable routes. Traditional approaches primarily focus on broad applications such as wayfinding, obstacle detection, and fall prevention. However, there is a notable gap in applying these technologies to more specific scenarios, such as identifying distinct food crop types or recognizing faces. This study proposes a real-time application designed for visually impaired individuals, aiming to bridge this research-application gap. It introduces a system capable of detecting 20 different food crop types and recognizing faces with accuracies of 83.27% and 95.64%, respectively. These results represent a significant contribution to the field of assistive technologies, providing visually impaired users with detailed and relevant information about their surroundings, thereby enhancing their mobility and ensuring their safety. Additionally, the study addresses the vital aspect of social engagement, acknowledging the challenges faced by visually impaired individuals in recognizing acquaintances without auditory or tactile signals, and highlights recent developments in prototype systems aimed at assisting with face recognition tasks. This comprehensive approach not only promises enhanced navigational aids but also aims to enrich the social well-being and safety of visually impaired communities.
Drones, also known as unmanned aerial vehicles (UAVs), have drawn much attention in recent years. The quadcopter, one of the most popular drones, has great potential in both industrial and academic fields. Quadcopter drones are capable of taking off vertically and flying in any direction. Traditional research on drones mainly focuses on their mechanical structures and movement control. The aircraft movement is usually controlled manually by a remote controller, or the trajectory is pre-programmed with specific algorithms. Consumer drones typically use a mobile device together with a remote controller to realize flight control and video transmission. Implementing different functions on mobile devices can indirectly result in different drone behaviors. With the development of deep learning in the computer vision field, commercial drones equipped with cameras can become much more intelligent and even realize autonomous flight. In the past, running deep learning based algorithms on mobile devices was highly computationally intensive and time consuming. This paper utilizes a novel real-time object detection method and deploys the deep learning model on a modern mobile device to realize autonomous object detection and object tracking for drones.
The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots. Real-time identification of strawberries in an unstructured environment is a challenging task. Current instance segmentation algorithms for strawberries suffer from issues such as poor real-time performance and low accuracy. To this end, the present study proposes an Efficient YOLACT (E-YOLACT) algorithm for strawberry detection and segmentation based on the YOLACT framework. The key enhancements of the E-YOLACT encompass the development of a lightweight attention mechanism, pyramid squeeze shuffle attention (PSSA), for efficient feature extraction. Additionally, an attention-guided context-feature pyramid network (AC-FPN) is employed instead of FPN to optimize the architecture's performance. Furthermore, a feature-enhanced model (FEM) is introduced to enhance the prediction head's capabilities, while efficient fast non-maximum suppression (EF-NMS) is devised to improve non-maximum suppression. The experimental results demonstrate that the E-YOLACT achieves a Box-mAP and Mask-mAP of 77.9 and 76.6, respectively, on the custom dataset. Moreover, it exhibits an impressive category accuracy of 93.5%. Notably, the E-YOLACT also demonstrates a remarkable real-time detection capability with a speed of 34.8 FPS. The method proposed in this article presents an efficient approach for the vision system of a strawberry-picking robot.
Printed Circuit Boards (PCBs) are materials used to connect components to one another to form a working circuit, and they play a crucial role in modern electronics. The trend of integrating more components onto PCBs is becoming increasingly common, which presents significant challenges for quality control processes. Given the potential impact that even minute defects can have on signal traces, the surface inspection of PCBs remains pivotal in ensuring overall system integrity. To address the limitations associated with manual inspection, this research endeavors to automate the inspection process using the YOLOv8 deep learning algorithm for real-time fault detection in PCBs. Specifically, we explore the effectiveness of two variants of the YOLOv8 architecture: YOLOv8 Small and YOLOv8 Nano. Through rigorous experimentation and evaluation on our dataset, which was acquired from Peking University's Human-Robot Interaction Lab, we aim to assess the suitability of these models for improving fault detection accuracy within the PCB manufacturing process. Our results reveal the remarkable capabilities of the YOLOv8 Small model in accurately identifying and classifying PCB faults. The model achieved a precision of 98.7%, a recall of 99%, an accuracy of 98.6%, and an F1 score of 0.98. These findings highlight the potential of the YOLOv8 Small model to significantly improve quality control processes in PCB manufacturing by providing a reliable and efficient solution for fault detection.
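To illustrate how such a YOLOv8-based inspection pipeline might be assembled, here is a minimal sketch using the public Ultralytics API to fine-tune the Small variant on a PCB defect dataset; the dataset file, image name, and training settings are placeholders for illustration, not the authors' actual configuration.

```python
# Minimal sketch: fine-tuning YOLOv8-Small for PCB defect detection.
# Assumes the Ultralytics package is installed and a YOLO-format dataset described
# by a hypothetical pcb_defects.yaml (paths, class names, and settings are
# placeholders, not the paper's configuration).
from ultralytics import YOLO

model = YOLO("yolov8s.pt")              # pretrained Small variant as the starting point

model.train(
    data="pcb_defects.yaml",            # dataset config: train/val paths, class names
    epochs=100,
    imgsz=640,
    batch=16,
)

metrics = model.val(data="pcb_defects.yaml")        # precision/recall/mAP on the val split
results = model.predict("board_01.jpg", conf=0.25)  # inference on one board image
for r in results:
    print(r.boxes.xyxy, r.boxes.cls, r.boxes.conf)  # defect boxes, classes, scores
```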
Real-time indoor camera localization is a significant problem in indoor robot navigation and surveillance systems. The scene can change during the image sequence and plays a vital role in the localization performance of robotic applications in terms of accuracy and speed. This research proposed a real-time indoor camera localization system based on a recurrent neural network that detects scene change during the image sequence. The proposed system is trained on an annotated image dataset and predicts the camera pose in real time. The system mainly improved the localization performance of indoor cameras by more accurately predicting the camera pose. It also recognizes scene changes during the sequence and evaluates the effects of these changes. This system achieved high accuracy and real-time performance. The scene change detection process was performed using visual rhythm and the proposed recurrent deep architecture, which performed camera pose prediction and scene change impact evaluation. Overall, this study proposed a novel real-time localization system for indoor cameras that detects scene changes and shows how they affect localization performance.
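To make the recurrent pose-prediction idea concrete, the sketch below pairs a small CNN encoder with a GRU that regresses a 6-DoF camera pose for every frame in a sequence; the architecture, dimensions, and loss are illustrative assumptions rather than the paper's actual network.

```python
# Minimal sketch (assumed architecture, not the paper's exact model): per-frame CNN
# features are fed to a GRU, which regresses a 6-DoF pose (3 translation + 3 rotation
# parameters) for each image in the sequence.
import torch
import torch.nn as nn

class RecurrentPoseNet(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=256):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in CNN encoder
            nn.Conv2d(3, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.gru = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.pose_head = nn.Linear(hidden_dim, 6)   # tx, ty, tz, rx, ry, rz

    def forward(self, frames):                      # frames: (B, T, 3, H, W)
        B, T = frames.shape[:2]
        feats = self.backbone(frames.flatten(0, 1)).view(B, T, -1)
        hidden, _ = self.gru(feats)                 # temporal context across the sequence
        return self.pose_head(hidden)               # (B, T, 6): one pose per frame

model = RecurrentPoseNet()
poses = model(torch.randn(2, 8, 3, 128, 128))       # two sequences of 8 frames
loss = nn.functional.mse_loss(poses, torch.zeros_like(poses))  # L2 pose regression loss
```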
Aiming at the problem of the low accuracy of traditional target detection methods for targets in endoscopes in substation environments, a CNN-based real-time detection method for masked targets is proposed. The method adopts an overall design comprising a backbone network, a detection network and an algorithmic parameter optimisation method, completes model training on a self-constructed occlusion target dataset, and adopts a multi-scale perception method for target detection. The HNM (hard negative mining) algorithm is used to screen positive and negative samples during training, and the non-maximum suppression (NMS) algorithm is used to post-process the prediction results during detection to improve detection efficiency. After experimental validation, the obtained model's multi-class mean average precision (mAP) on the dataset shows general advantages over traditional target detection methods. The detection time for a single target on the FDDB dataset is 39 ms, which meets the need for real-time target detection. In addition, the project team has successfully deployed the method in substations and put it into use in many places in Beijing, which is important for anomaly detection of occluded targets.
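For reference, the NMS post-processing step mentioned above can be illustrated with a standard greedy routine, shown below; this is a generic sketch, not the paper's specific implementation.

```python
# Generic greedy NMS sketch (illustrative; not the paper's exact implementation).
# boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidence values.
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    order = scores.argsort()[::-1]          # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of box i with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        # Keep only boxes that overlap the selected box less than the threshold.
        order = order[1:][iou < iou_threshold]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]], dtype=float)
scores = np.array([0.9, 0.8, 0.75])
print(nms(boxes, scores))   # -> [0, 2]: the near-duplicate box 1 is suppressed
```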
Real-time detection of object size has become a hot topic in the testing field, and image processing is the core algorithm. This paper focuses on the processing and display of collected dynamic images to achieve real-time image processing of moving objects. Firstly, median filtering, gain calibration, image segmentation, image binarization, corner detection and edge fitting are employed to process the images of the moving objects so that the image is close to the real object. Then, the processed images are simultaneously displayed on a real-time basis to make them easier to analyze, understand and identify, thus reducing the computational complexity. Finally, a human-computer interaction (HCI)-friendly interface based on VC++ is designed to accomplish the digital logic transform, image processing and real-time display of the objects. The experiment shows that the proposed algorithm and software design have better real-time performance and accuracy, which can meet industrial needs.
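The described processing chain can be approximated with standard OpenCV primitives. The sketch below strings together median filtering, Otsu binarization, corner detection, and contour fitting for a size estimate; the file name and parameter values are assumptions chosen for illustration, not those of the paper.

```python
# Illustrative OpenCV sketch of the described chain: median filter -> threshold ->
# corner detection -> contour fitting. File names and parameter values are assumptions.
import cv2

frame = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)    # hypothetical input image

denoised = cv2.medianBlur(frame, 5)                       # median filtering
_, binary = cv2.threshold(denoised, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)   # binarization

corners = cv2.goodFeaturesToTrack(denoised, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)  # corner detection

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)    # object outlines
largest = max(contours, key=cv2.contourArea)
rect = cv2.minAreaRect(largest)                            # fitted bounding rectangle
(w, h) = rect[1]
print(f"estimated object size in pixels: {w:.1f} x {h:.1f}")
```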
Moving object detection is one of the challenging problems in video monitoring systems, especially when the illumination changes and shadows exist. A method for real-time moving object detection is described. A new background model is proposed to handle the illumination variation problem. With optical flow technology and background subtraction, a moving object is extracted quickly and accurately. An effective shadow elimination algorithm based on color features is used to refine the moving objects. Experimental results demonstrate that the proposed method can update the background exactly and quickly along with the variation of illumination, and the shadow can be eliminated effectively. The proposed algorithm runs in real time, which lays the foundation for further object recognition and understanding in video monitoring systems.
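A comparable background-subtraction pipeline with shadow handling can be assembled from OpenCV's built-in tools, as in the sketch below; it uses the stock MOG2 subtractor as a stand-in for the paper's own background model and color-based shadow elimination.

```python
# Illustrative sketch using OpenCV's MOG2 background subtractor, which also flags
# shadow pixels; a stand-in for the paper's own background model and color-based
# shadow elimination, not a reproduction of it. The video file name is hypothetical.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")
subtractor = cv2.createBackgroundSubtractorMOG2(history=500,
                                                varThreshold=16,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)      # 255 = foreground, 127 = shadow, 0 = background
    _, fg = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)   # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:    # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```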
Foreground moving object detection is an important process in various computer vision applications such as intelligent visual surveillance, HCI, object-based video compression, etc. One of the most successful moving object detection algorithms is based on the Adaptive Gaussian Mixture Model (AGMM). Although AGMM-based object detection shows very good performance with respect to detection accuracy, AGMM is a very complex model requiring lots of floating-point arithmetic, so it incurs an expensive computational cost. Thus, a direct implementation of AGMM-based object detection for embedded DSPs without floating-point arithmetic hardware support cannot satisfy the real-time processing requirement. This paper presents a novel real-time implementation of the adaptive Gaussian mixture model-based moving object detection algorithm for fixed-point DSPs. In the proposed implementation, in addition to changing data types into fixed-point ones, a magnification of the Gaussian distribution technique is introduced so that integer and fixed-point arithmetic can be easily and consistently utilized instead of real-number and floating-point arithmetic in the processing of the AGMM algorithm. Experimental results show that the proposed implementation has high potential in real-time applications.
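To give a feel for the kind of fixed-point conversion involved, the snippet below keeps a per-pixel Gaussian mean and variance as integers scaled by 256 (a Q8.8-style representation) and updates them with integer arithmetic only; it models a single Gaussian component for brevity, whereas AGMM maintains several per pixel, and the scale factor and learning rate are assumed values.

```python
# Illustrative fixed-point sketch: a single-Gaussian background model per pixel, with
# mean and variance stored as integers scaled by 256 (Q8.8-style) and updated with
# integer arithmetic only. The scale factor and learning rate are assumed values; a
# real DSP port would manage word lengths far more carefully.
import numpy as np

SCALE = 256                        # fixed-point scale factor (assumption)
ALPHA_NUM, ALPHA_DEN = 1, 32       # learning rate alpha = 1/32 as an integer ratio

h, w = 240, 320
mean_fx = np.full((h, w), 128 * SCALE, dtype=np.int64)      # initial mean = 128
var_fx = np.full((h, w), 20 * 20 * SCALE, dtype=np.int64)   # initial variance = 400

def update(frame_u8):
    """Classify foreground and update the model using integer-only arithmetic."""
    pix_fx = frame_u8.astype(np.int64) * SCALE
    diff = pix_fx - mean_fx
    # Foreground if squared deviation exceeds k^2 * variance (k = 2.5, so k^2 = 25/4).
    fg = (diff * diff) // SCALE > (25 * var_fx) // 4
    # Running updates: value += alpha * (target - value), all in integer math.
    mean_fx[:] += (diff * ALPHA_NUM) // ALPHA_DEN
    var_fx[:] += (((diff * diff) // SCALE - var_fx) * ALPHA_NUM) // ALPHA_DEN
    np.maximum(var_fx, 4 * SCALE, out=var_fx)    # floor the variance to avoid collapse
    return fg.astype(np.uint8) * 255

mask = update(np.random.randint(0, 256, (h, w), dtype=np.uint8))
print("foreground pixels:", int(mask.sum() // 255))
```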
Unknown closed spaces are a big challenge for the navigation of robots since there are no global and pre-defined positioning options in the area. One of the simplest and most efficient algorithms, the artificial potential field (APF) algorithm, may provide real-time navigation in such places but falls into local minima in some cases. To overcome this problem and to present alternative escape routes for a robot, possible crossing points in buildings may be detected using object detection and included in the path planning algorithm. This study utilized a proposed sensor fusion method and an improved object classification method for detecting windows, doors, and stairs in buildings, and these objects were classified as valid or invalid for the path planning algorithm. The performance of the approach was evaluated in a simulated environment with a quadrotor that was equipped with camera and laser imaging detection and ranging (LIDAR) sensors to navigate through an unknown closed space and reach a desired goal point. Inclusion of crossing points allows the robot to escape from areas where it is congested. The navigation of the robot has been tested in different scenarios based on the proposed path planning algorithm and compared with other improved APF methods. The results showed that the improved APF methods and the methods reinforced with other path planning algorithms were similar in performance to the proposed method for the same goals in the same room. For goals outside the current room, traditional APF methods were quite unsuccessful in reaching the goals. Even though improved methods were able to reach some outside targets, the proposed method gave approximately 17% better results than the most successful example in achieving targets outside the current room. The proposed method can also work in real time to discover a building and navigate between rooms.
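The basic APF mechanics referenced here can be summarized in a few lines: an attractive force pulls the robot toward the goal while obstacles within an influence radius push it away. The sketch below is a generic 2D formulation with assumed gains and radii, not the study's tuned planner with crossing-point escapes; it also reproduces the local-minimum stall that motivates the study.

```python
# Generic 2D artificial potential field sketch (assumed gains and radii; not the
# study's tuned planner). The resulting force vector is used as the motion command.
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 100.0, 2.0     # attractive gain, repulsive gain, influence radius

def apf_force(robot, goal, obstacles):
    """Sum of the attractive force toward the goal and repulsive forces from obstacles."""
    force = K_ATT * (goal - robot)                       # attractive component
    for obs in obstacles:
        delta = robot - obs
        rho = np.linalg.norm(delta)                      # distance to obstacle
        if 1e-6 < rho < RHO0:                            # only obstacles within range repel
            force += K_REP * (1.0 / rho - 1.0 / RHO0) / rho**2 * (delta / rho)
    return force

robot = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
obstacles = [np.array([2.5, 2.5])]                       # obstacle directly on the path
for _ in range(200):                                     # simple gradient-descent steps
    step = apf_force(robot, goal, obstacles)
    robot += 0.05 * step / (np.linalg.norm(step) + 1e-9)
# The robot stalls short of the obstacle: the local-minimum case plain APF suffers from.
print("final position:", robot)
```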
Statistical and contextual information are typically used to detect moving regions in image sequences from a fixed camera. In this paper, we propose a fast and stable linear discriminant approach based on the Gaussian Single Model (GSM) and Markov Random Field (MRF). The performance of GSM is analyzed first, and then two main improvements corresponding to the drawbacks of GSM are proposed: an update scheme of the background model based on the latest filtered data, and a linear classification judgment rule based on a spatial-temporal feature specified by the MRF. Experimental results show that the proposed method runs more rapidly and accurately when compared with other methods.
An approach to the detection of moving objects in video sequences, with application to video surveillance, is presented. The algorithm combines two kinds of change points, which are detected from the region-based frame difference and adjusted background subtraction. An adaptive threshold technique is employed to automatically choose the threshold value to segment the moving objects from the still background. Experimental results show that the algorithm is effective and efficient in practical situations. Furthermore, the algorithm is robust to the effects of changing lighting conditions and can be applied in video surveillance systems.
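A simple way to see the frame-difference-plus-adaptive-threshold idea in practice is to apply Otsu thresholding to the absolute difference of consecutive frames and to the difference against a slowly updated background, then combine the two masks; the sketch below is a generic illustration rather than the paper's specific region-based combination rule, and the video file name is hypothetical.

```python
# Illustrative sketch: frame difference and background subtraction, each binarized with
# Otsu's adaptive threshold, then combined. A generic stand-in for the paper's
# region-based combination of the two kinds of change points.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")           # hypothetical surveillance clip
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
background = prev.astype(np.float32)            # slowly adapting background estimate

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    frame_diff = cv2.absdiff(gray, prev)                          # consecutive-frame change
    bg_diff = cv2.absdiff(gray, cv2.convertScaleAbs(background))  # change vs. background
    _, m1 = cv2.threshold(frame_diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, m2 = cv2.threshold(bg_diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    motion_mask = cv2.bitwise_or(m1, m2)                          # combine both cues
    cv2.accumulateWeighted(gray, background, 0.02)                # update the background
    prev = gray
cap.release()
```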
Transformer-based models have facilitated significant advances in object detection. However, their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle (UAV) imagery. Addressing these limitations, we propose a hybrid transformer-based detector, H-DETR, and enhance it for dense small objects, leading to an accurate and efficient model. Firstly, we introduce a hybrid transformer encoder, which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently. Furthermore, we propose two novel strategies to enhance detection performance without incurring additional inference computation. Query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression. Adversarial denoising learning is a novel enhancement method inspired by adversarial learning, which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise. Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach, achieving a significant improvement in accuracy with a reduction in computational complexity. Our method achieves 31.9% and 21.1% AP on the VisDrone and UAVDT datasets, respectively, and has a faster inference speed, making it a competitive model in UAV image object detection.
Confusing object detection (COD), such as glass, mirrors, and camouflaged objects, represents a burgeoning visual detection task centered on pinpointing and distinguishing concealed targets within intricate backgrounds, leveraging deep learning methodologies. Despite garnering increasing attention in computer vision, the focus of most existing works leans toward formulating task-specific solutions rather than delving into in-depth analyses of methodological structures. As of now, there is a notable absence of a comprehensive systematic review that focuses on recently proposed deep learning-based models for these specific tasks. To fill this gap, our study presents a pioneering review that covers both the models and the publicly available benchmark datasets, while also identifying potential directions for future research in this field. The current dataset primarily focuses on single confusing object detection at the image level, with some studies extending to video-level data. We conduct an in-depth analysis of deep learning architectures, revealing that the current state-of-the-art (SOTA) COD methods demonstrate promising performance in single object detection. We also compile and provide detailed descriptions of widely used datasets relevant to these detection tasks. Our endeavor extends to discussing the limitations observed in current methodologies, alongside proposed solutions aimed at enhancing detection accuracy. Additionally, we deliberate on relevant applications and outline future research trajectories, aiming to catalyze advancements in the field of glass, mirror, and camouflaged object detection.
Remote sensing imagery, due to its high altitude, presents inherent challenges characterized by multiple scales, limited target areas, and intricate backgrounds. These inherent traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery. Additionally, these complexities contribute to inaccuracies in target localization and hinder precise target categorization. This paper addresses these challenges by proposing a solution: the YOLO-MFD model (Remote Sensing Image Object Detection with Multi-scale Fusion Dynamic Head). Before presenting our method, we delve into the prevalent issues faced in remote sensing imagery analysis. Specifically, we emphasize the struggles of existing object recognition algorithms in comprehensively capturing critical image features amidst varying scales and complex backgrounds. To resolve these issues, we introduce a novel approach. First, we propose the implementation of a lightweight multi-scale module called CEF. This module significantly improves the model's ability to comprehensively capture important image features by merging multi-scale feature information. It effectively addresses the issues of missed detections and false alarms that are common in remote sensing imagery. Second, an additional layer of small target detection heads is added, and a residual link is established with the higher-level feature extraction module in the backbone section. This allows the model to incorporate shallower information, significantly improving the accuracy of target localization in remotely sensed images. Finally, a dynamic head attention mechanism is introduced. This allows the model to exhibit greater flexibility and accuracy in recognizing shapes and targets of different sizes. Consequently, the precision of object detection is significantly improved. The trial results show that the YOLO-MFD model achieves improvements of 6.3%, 3.5%, and 2.5% over the original YOLOv8 model in Precision, mAP@0.5 and mAP@0.5:0.95, respectively. These results illustrate the clear advantages of the method.
Advances in machine vision systems have revolutionized applications such as autonomous driving, robotic navigation, and augmented reality. Despite substantial progress, challenges persist, including dynamic backgrounds, occlusion, and limited labeled data. To address these challenges, we introduce a comprehensive methodology to enhance image classification and object detection accuracy. The proposed approach involves the integration of multiple methods in a complementary way. The process commences with the application of Gaussian filters to mitigate the impact of noise interference. These images are then processed for segmentation using Fuzzy C-Means segmentation in parallel with saliency mapping techniques to find the most prominent regions. Binary Robust Independent Elementary Features (BRIEF) characteristics are then extracted from data derived from saliency maps and segmented images. For precise object separation, Oriented FAST and Rotated BRIEF (ORB) algorithms are employed. Genetic Algorithms (GAs) are used to optimize Random Forest classifier parameters, which leads to improved performance. Our method stands out due to its comprehensive approach, adeptly addressing challenges such as changing backdrops, occlusion, and limited labeled data concurrently. A significant enhancement has been achieved by integrating Genetic Algorithms (GAs) to precisely optimize parameters. This adjustment not only boosts the uniqueness of our system but also amplifies its overall efficacy. The proposed methodology has demonstrated notable classification accuracies of 90.9% and 89.0% on the challenging Corel-1k and MSRC datasets, respectively. Furthermore, detection accuracies of 87.2% and 86.6% have been attained. Although our method performed well on both datasets, it may face difficulties with real-world data, especially where datasets have highly complex backgrounds. Despite these limitations, GA integration for parameter optimization shows notable strength in enhancing the overall adaptability and performance of our system.
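To give a concrete flavor of the feature-plus-classifier stage, the sketch below extracts ORB descriptors with OpenCV, mean-pools them into a fixed-length image descriptor, and trains a Random Forest; the pooling scheme, synthetic data, and fixed hyperparameters are simplifications, and the genetic-algorithm parameter search described in the abstract is not reproduced here.

```python
# Illustrative sketch of the ORB-feature + Random Forest stage. Descriptor pooling,
# synthetic data, and fixed hyperparameters are simplifications; the paper tunes the
# Random Forest with a genetic algorithm instead.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

orb = cv2.ORB_create(nfeatures=500)

def image_descriptor(img_gray):
    """Mean-pooled ORB descriptor as a fixed-length feature vector for one image."""
    _, desc = orb.detectAndCompute(img_gray, None)
    if desc is None:                                   # no keypoints found
        return np.zeros(32, dtype=np.float32)
    return desc.mean(axis=0).astype(np.float32)

# Synthetic stand-in images (the paper uses Corel-1k / MSRC imagery instead).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, (128, 128), dtype=np.uint8) for _ in range(40)]
labels = np.array([i % 2 for i in range(40)])          # two hypothetical classes

X = np.stack([image_descriptor(im) for im in images])

# The paper optimizes these hyperparameters with a GA; fixed values are used here.
clf = RandomForestClassifier(n_estimators=200, max_depth=20, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```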
Aiming at the limitations of the existing railway foreign object detection methods based on two-dimensional (2D) images, such as short detection distance, strong influence of the environment and lack of distance information, we propose Rail-PillarNet, a three-dimensional (3D) LIDAR (Light Detection and Ranging) railway foreign object detection method based on the improvement of PointPillars. Firstly, the parallel attention pillar encoder (PAPE) is designed to fully extract the features of the pillars and alleviate the problem of local fine-grained information loss in the PointPillars pillar encoder. Secondly, a fine backbone network is designed to improve the feature extraction capability of the network by combining the coding characteristics of LIDAR point cloud features and a residual structure. Finally, the initial weight parameters of the model were optimised by the transfer learning training method to further improve accuracy. The experimental results on the OSDaR23 dataset show that the average accuracy of Rail-PillarNet reaches 58.51%, which is higher than most mainstream models, and the number of parameters is 5.49 M. Compared with PointPillars, the accuracy of each target is improved by 10.94%, 3.53%, 16.96% and 19.90%, respectively, and the number of parameters only increases by 0.64 M, which achieves a balance between the number of parameters and accuracy.
The data analysis of blasting sites has always been the research goal of relevant researchers. The rise of mobile blasting robots has aroused many researchers' interest in machine learning methods for target detection in the field of blasting. Serverless Computing can provide a variety of computing services for people without hardware foundations and rich software development experience, which has aroused people's interest in how to use it in the field of machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Based on data parallelism, the data aggregation and training synchronization in Function as a Service (FaaS) are effectively realized. It also encrypts the data set, effectively reducing the risk of data leakage. We rent a cloud server and a Lambda, and then we conduct experiments to evaluate our applications. Our results indicate the effectiveness, rapidity, and economy of distributed training on FaaS.
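As a rough illustration of dispatching data-parallel training steps to FaaS workers, the sketch below invokes an AWS Lambda function for each data shard through boto3 and averages the returned gradients; the function name, payload layout, and aggregation scheme are assumptions for illustration, not the paper's application design.

```python
# Rough sketch of dispatching one data-parallel training step to Lambda workers and
# averaging their gradients on the client. The function name, payload layout, and
# aggregation are illustrative assumptions, not the paper's actual design.
import json
import boto3
import numpy as np
from concurrent.futures import ThreadPoolExecutor

lambda_client = boto3.client("lambda", region_name="us-east-1")
FUNCTION_NAME = "train-worker"          # hypothetical worker Lambda

def run_worker(shard_id, weights):
    """Invoke one worker on its data shard; it is assumed to return a gradient vector."""
    payload = {"shard_id": shard_id, "weights": weights.tolist()}
    resp = lambda_client.invoke(FunctionName=FUNCTION_NAME,
                                InvocationType="RequestResponse",
                                Payload=json.dumps(payload).encode())
    body = json.loads(resp["Payload"].read())
    return np.asarray(body["gradient"])

weights = np.zeros(1000)                # current model parameters
num_shards = 8
for step in range(10):                  # a few synchronous training rounds
    with ThreadPoolExecutor(max_workers=num_shards) as pool:
        grads = list(pool.map(lambda s: run_worker(s, weights), range(num_shards)))
    weights -= 0.01 * np.mean(grads, axis=0)    # averaged-gradient SGD update
```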
Effective small object detection is crucial in various applications including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they contain less information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across different layers, leading to suboptimal small object detection. To address this problem, we propose the Two-layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses the attention module to make the network more focused on the semantic information of the object and fuse it to the lower layer, so that each layer contains similar semantic information, to alleviate the problem of small object information being submerged due to semantic gaps between different layers. At the same time, SODEM is introduced to strengthen the local features of the object, suppress background noise, enhance the information details of the small object, and fuse the enhanced features to other feature layers to ensure that each layer is rich in small object information, to improve small object detection accuracy. Our extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MSCOCO) and Pattern Analysis Statistical Modelling and Computational Learning, Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared to state-of-the-art detectors.
In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis and evaluation of kidney and urinary tract disease, providing insight into the specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, which have poor generalization capabilities and make it difficult to quickly and accurately detect the number of urine sediments. Deep learning based object detection methods have the potential to address the challenges mentioned above, but these methods require access to large urine sediment image datasets. Unfortunately, only a limited number of publicly available urine sediment datasets are currently available. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated with expert guidance. Two main challenges are associated with our dataset: a large number of small objects and the occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to the urine sediment dataset and addressing the challenges presented by this dataset. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and, in doing so, provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which aid the detector in distinguishing urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects in uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments, such as erythrocytes (red blood cells), leukocytes (white blood cells), epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes, achieving the best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5.
基金supported by theKorea Industrial Technology Association(KOITA)Grant Funded by the Korean government(MSIT)(No.KOITA-2023-3-003)supported by the MSIT(Ministry of Science and ICT),Korea,under the ITRC(Information Technology Research Center)Support Program(IITP-2024-2020-0-01808)Supervised by the IITP(Institute of Information&Communications Technology Planning&Evaluation)。
文摘The advancement of navigation systems for the visually impaired has significantly enhanced their mobility by mitigating the risk of encountering obstacles and guiding them along safe,navigable routes.Traditional approaches primarily focus on broad applications such as wayfinding,obstacle detection,and fall prevention.However,there is a notable discrepancy in applying these technologies to more specific scenarios,like identifying distinct food crop types or recognizing faces.This study proposes a real-time application designed for visually impaired individuals,aiming to bridge this research-application gap.It introduces a system capable of detecting 20 different food crop types and recognizing faces with impressive accuracies of 83.27%and 95.64%,respectively.These results represent a significant contribution to the field of assistive technologies,providing visually impaired users with detailed and relevant information about their surroundings,thereby enhancing their mobility and ensuring their safety.Additionally,it addresses the vital aspects of social engagements,acknowledging the challenges faced by visually impaired individuals in recognizing acquaintances without auditory or tactile signals,and highlights recent developments in prototype systems aimed at assisting with face recognition tasks.This comprehensive approach not only promises enhanced navigational aids but also aims to enrich the social well-being and safety of visually impaired communities.
基金This work is supported by the National Key R&D Program of China under grant 2018YFB1003205by the National Natural Science Foundation of China under grant U1836208,U1536206,U1836110,61602253,61672294+2 种基金by the Jiangsu Basic Research Programs-Natural Science Foundation under grant numbers BK20181407by the Priority Academic Program Development of Jiangsu Higher Education Institutions(PAPD)fundby the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology(CICAEET)fund,China。
文摘Drone also known as unmanned aerial vehicle(UAV)has drawn lots of attention in recent years.Quadcopter as one of the most popular drones has great potential in both industrial and academic fields.Quadcopter drones are capable of taking off vertically and flying towards any direction.Traditional researches of drones mainly focus on their mechanical structures and movement control.The aircraft movement is usually controlled by a remote controller manually or the trajectory is pre-programmed with specific algorithms.Consumer drones typically use mobile device together with remote controllers to realize flight control and video transmission.Implementing different functions on mobile devices can result in different behaviors of drones indirectly.With the development of deep learning in computer vision field,commercial drones equipped with camera can be much more intelligent and even realize autonomous flight.In the past,running deep learning based algorithms on mobile devices is highly computational intensive and time consuming.This paper utilizes a novel real-time object detection method and deploys the deep learning model on the modern mobile device to realize autonomous object detection and object tracking of drones.
基金funded by Anhui Provincial Natural Science Foundation(No.2208085ME128)the Anhui University-Level Special Project of Anhui University of Science and Technology(No.XCZX2021-01)+1 种基金the Research and the Development Fund of the Institute of Environmental Friendly Materials and Occupational Health,Anhui University of Science and Technology(No.ALW2022YF06)Anhui Province New Era Education Quality Project(Graduate Education)(No.2022xscx073).
文摘The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots.Real-time identification of strawberries in an unstructured envi-ronment is a challenging task.Current instance segmentation algorithms for strawberries suffer from issues such as poor real-time performance and low accuracy.To this end,the present study proposes an Efficient YOLACT(E-YOLACT)algorithm for strawberry detection and segmentation based on the YOLACT framework.The key enhancements of the E-YOLACT encompass the development of a lightweight attention mechanism,pyramid squeeze shuffle attention(PSSA),for efficient feature extraction.Additionally,an attention-guided context-feature pyramid network(AC-FPN)is employed instead of FPN to optimize the architecture’s performance.Furthermore,a feature-enhanced model(FEM)is introduced to enhance the prediction head’s capabilities,while efficient fast non-maximum suppression(EF-NMS)is devised to improve non-maximum suppression.The experimental results demonstrate that the E-YOLACT achieves a Box-mAP and Mask-mAP of 77.9 and 76.6,respectively,on the custom dataset.Moreover,it exhibits an impressive category accuracy of 93.5%.Notably,the E-YOLACT also demonstrates a remarkable real-time detection capability with a speed of 34.8 FPS.The method proposed in this article presents an efficient approach for the vision system of a strawberry-picking robot.
文摘Printed Circuit Boards(PCBs)are materials used to connect components to one another to form a working circuit.PCBs play a crucial role in modern electronics by connecting various components.The trend of integrating more components onto PCBs is becoming increasingly common,which presents significant challenges for quality control processes.Given the potential impact that even minute defects can have on signal traces,the surface inspection of PCB remains pivotal in ensuring the overall system integrity.To address the limitations associated with manual inspection,this research endeavors to automate the inspection process using the YOLOv8 deep learning algorithm for real-time fault detection in PCBs.Specifically,we explore the effectiveness of two variants of the YOLOv8 architecture:YOLOv8 Small and YOLOv8 Nano.Through rigorous experimentation and evaluation of our dataset which was acquired from Peking University’s Human-Robot Interaction Lab,we aim to assess the suitability of these models for improving fault detection accuracy within the PCB manufacturing process.Our results reveal the remarkable capabilities of YOLOv8 Small models in accurately identifying and classifying PCB faults.The model achieved a precision of 98.7%,a recall of 99%,an accuracy of 98.6%,and an F1 score of 0.98.These findings highlight the potential of the YOLOv8 Small model to significantly improve the quality control processes in PCB manufacturing by providing a reliable and efficient solution for fault detection.
文摘Real-time indoor camera localization is a significant problem in indoor robot navigation and surveillance systems.The scene can change during the image sequence and plays a vital role in the localization performance of robotic applications in terms of accuracy and speed.This research proposed a real-time indoor camera localization system based on a recurrent neural network that detects scene change during the image sequence.An annotated image dataset trains the proposed system and predicts the camera pose in real-time.The system mainly improved the localization performance of indoor cameras by more accurately predicting the camera pose.It also recognizes the scene changes during the sequence and evaluates the effects of these changes.This system achieved high accuracy and real-time performance.The scene change detection process was performed using visual rhythm and the proposed recurrent deep architecture,which performed camera pose prediction and scene change impact evaluation.Overall,this study proposed a novel real-time localization system for indoor cameras that detects scene changes and shows how they affect localization performance.
文摘Aiming at the problem of low accuracy of traditional target detection methods for target detection in endoscopes in substation environments, a CNN-based real-time detection method for masked targets is proposed. The method adopts the overall design of backbone network, detection network and algorithmic parameter optimisation method, completes the model training on the self-constructed occlusion target dataset, and adopts the multi-scale perception method for target detection. The HNM algorithm is used to screen positive and negative samples during the training process, and the NMS algorithm is used to post-process the prediction results during the detection process to improve the detection efficiency. After experimental validation, the obtained model has the multi-class average predicted value (mAP) of the dataset. It has general advantages over traditional target detection methods. The detection time of a single target on FDDB dataset is 39 ms, which can meet the need of real-time target detection. In addition, the project team has successfully deployed the method into substations and put it into use in many places in Beijing, which is important for achieving the anomaly of occlusion target detection.
基金National Natural Science Foundation of China(No.61302159,61227003,61301259)Natual Science Foundation of Shanxi Province(No.2012021011-2)+2 种基金Specialized Research Fund for the Doctoral Program of Higher Education,China(No.20121420110006)Top Science and Technology Innovation Teams of Higher Learning Institutions of Shanxi Province,ChinaProject Sponsored by Scientific Research for the Returned Overseas Chinese Scholars,Shanxi Province(No.2013-083)
文摘Real-time detection for object size has now become a hot topic in the testing field and image processing is the core algorithm. This paper focuses on the processing and display of the collected dynamic images to achieve a real-time image pro- cessing for the moving objects. Firstly, the median filtering, gain calibration, image segmentation, image binarization, cor- ner detection and edge fitting are employed to process the images of the moving objects to make the image close to the real object. Then, the processed images are simultaneously displayed on a real-time basis to make it easier to analyze, understand and identify them, and thus it reduces the computation complexity. Finally, human-computer interaction (HCI)-friendly in- terface based on VC ++ is designed to accomplish the digital logic transform, image processing and real-time display of the objects. The experiment shows that the proposed algorithm and software design have better real-time performance and accu- racy which can meet the industrial needs.
基金This project was supported by the foundation of the Visual and Auditory Information Processing Laboratory of BeijingUniversity of China (0306) and the National Science Foundation of China (60374031).
文摘Moving object detection is one of the challenging problems in video monitoring systems, especially when the illumination changes and shadow exists. Amethod for real-time moving object detection is described. Anew background model is proposed to handle the illumination varition problem. With optical flow technology and background subtraction, a moving object is extracted quickly and accurately. An effective shadow elimination algorithm based on color features is used to refine the moving obj ects. Experimental results demonstrate that the proposed method can update the background exactly and quickly along with the varition of illumination, and the shadow can be eliminated effectively. The proposed algorithm is a real-time one which the foundation for further object recognition and understanding of video mum'toting systems.
基金supported by Soongsil University Research Fund and BK 21 of Korea
文摘Foreground moving object detection is an important process in various computer vision applications such as intelligent visual surveillance, HCI, object-based video compression, etc. One of the most successful moving object detection algorithms is based on Adaptive Gaussian Mixture Model (AGMM). Although ACMM-hased object detection shows very good performance with respect to object detection accuracy, AGMM is very complex model requiring lots of floatingpoint arithmetic so that it should pay for expensive computational cost. Thus, direct implementation of the AGMM-based object detection for embedded DSPs without floating-point arithmetic HW support cannot satisfy the real-time processing requirement. This paper presents a novel rcal-time implementation of adaptive Gaussian mixture model-based moving object detection algorithm for fixed-point DSPs. In the proposed implementation, in addition to changes of data types into fixed-point ones, magnification of the Gaussian distribution technique is introduced so that the integer and fixed-point arithmetic can be easily and consistently utilized instead of real nmnher and floatingpoint arithmetic in processing of AGMM algorithm. Experimental results shows that the proposed implementation have a high potential in real-time applications.
文摘Unknown closed spaces are a big challenge for the navigation of robots since there are no global and pre-defined positioning options in the area.One of the simplest and most efficient algorithms,the artificial potential field algorithm(APF),may provide real-time navigation in those places but fall into local mini-mum in some cases.To overcome this problem and to present alternative escape routes for a robot,possible crossing points in buildings may be detected by using object detection and included in the path planning algorithm.This study utilized a proposed sensor fusion method and an improved object classification method for detecting windows,doors,and stairs in buildings and these objects were classified as valid or invalid for the path planning algorithm.The performance of the approach was evaluated in a simulated environment with a quadrotor that was equipped with camera and laser imaging detection and ranging(LIDAR)sensors to navigate through an unknown closed space and reach a desired goal point.Inclusion of crossing points allows the robot to escape from areas where it is con-gested.The navigation of the robot has been tested in different scenarios based on the proposed path planning algorithm and compared with other improved APF methods.The results showed that the improved APF methods and the methods rein-forced with other path planning algorithms were similar in performance with the proposed method for the same goals in the same room.For the goals outside the current room,traditional APF methods were quite unsuccessful in reaching the goals.Even though improved methods were able to reach some outside targets,the proposed method gave approximately 17%better results than the most success-ful example in achieving targets outside the current room.The proposed method can also work in real-time to discover a building and navigate between rooms.
基金Project (No. 10577017) supported by the National Natural Science Foundation of China
文摘Statistical and contextual information are typically used to detect moving regions in image sequences for a fixed camera.In this paper,we propose a fast and stable linear discriminant approach based on Gaussian Single Model(GSM)and Markov Random Field(MRF).The performance of GSM is analyzed first,and then two main improvements corresponding to the drawbacks of GSM are proposed:the latest filtered data based update scheme of the background model and the linear classification judgment rule based on spatial-temporal feature specified by MRF.Experimental results show that the proposed method runs more rapidly and accurately when compared with other methods.
文摘An approach to detection of moving objects in video sequences, with application to video surveillance is presented. The algorithm combines two kinds of change points, which are detected from the region-based frame difference and adjusted background subtraction. An adaptive threshold technique is employed to automatically choose the threshold value to segment the moving objects from the still background. And experiment results show that the algorithm is effective and efficient in practical situations. Furthermore, the algorithm is robust to the effects of the changing of lighting condition and can be applied for video surveillance system.
基金This research was funded by the Natural Science Foundation of Hebei Province(F2021506004).
文摘Transformer-based models have facilitated significant advances in object detection.However,their extensive computational consumption and suboptimal detection of dense small objects curtail their applicability in unmanned aerial vehicle(UAV)imagery.Addressing these limitations,we propose a hybrid transformer-based detector,H-DETR,and enhance it for dense small objects,leading to an accurate and efficient model.Firstly,we introduce a hybrid transformer encoder,which integrates a convolutional neural network-based cross-scale fusion module with the original encoder to handle multi-scale feature sequences more efficiently.Furthermore,we propose two novel strategies to enhance detection performance without incurring additional inference computation.Query filter is designed to cope with the dense clustering inherent in drone-captured images by counteracting similar queries with a training-aware non-maximum suppression.Adversarial denoising learning is a novel enhancement method inspired by adversarial learning,which improves the detection of numerous small targets by counteracting the effects of artificial spatial and semantic noise.Extensive experiments on the VisDrone and UAVDT datasets substantiate the effectiveness of our approach,achieving a significant improvement in accuracy with a reduction in computational complexity.Our method achieves 31.9%and 21.1%AP on the VisDrone and UAVDT datasets,respectively,and has a faster inference speed,making it a competitive model in UAV image object detection.
基金supported by the NationalNatural Science Foundation of China Nos.62302167,U23A20343Shanghai Sailing Program(23YF1410500)Chenguang Program of Shanghai Education Development Foundation and Shanghai Municipal Education Commission(23CGA34).
文摘Confusing object detection(COD),such as glass,mirrors,and camouflaged objects,represents a burgeoning visual detection task centered on pinpointing and distinguishing concealed targets within intricate backgrounds,leveraging deep learning methodologies.Despite garnering increasing attention in computer vision,the focus of most existing works leans toward formulating task-specific solutions rather than delving into in-depth analyses of methodological structures.As of now,there is a notable absence of a comprehensive systematic review that focuses on recently proposed deep learning-based models for these specific tasks.To fill this gap,our study presents a pioneering review that covers both themodels and the publicly available benchmark datasets,while also identifying potential directions for future research in this field.The current dataset primarily focuses on single confusing object detection at the image level,with some studies extending to video-level data.We conduct an in-depth analysis of deep learning architectures,revealing that the current state-of-the-art(SOTA)COD methods demonstrate promising performance in single object detection.We also compile and provide detailed descriptions ofwidely used datasets relevant to these detection tasks.Our endeavor extends to discussing the limitations observed in current methodologies,alongside proposed solutions aimed at enhancing detection accuracy.Additionally,we deliberate on relevant applications and outline future research trajectories,aiming to catalyze advancements in the field of glass,mirror,and camouflaged object detection.
基金the Scientific Research Fund of Hunan Provincial Education Department(23A0423).
文摘Remote sensing imagery,due to its high altitude,presents inherent challenges characterized by multiple scales,limited target areas,and intricate backgrounds.These inherent traits often lead to increased miss and false detection rates when applying object recognition algorithms tailored for remote sensing imagery.Additionally,these complexities contribute to inaccuracies in target localization and hinder precise target categorization.This paper addresses these challenges by proposing a solution:The YOLO-MFD model(YOLO-MFD:Remote Sensing Image Object Detection withMulti-scale Fusion Dynamic Head).Before presenting our method,we delve into the prevalent issues faced in remote sensing imagery analysis.Specifically,we emphasize the struggles of existing object recognition algorithms in comprehensively capturing critical image features amidst varying scales and complex backgrounds.To resolve these issues,we introduce a novel approach.First,we propose the implementation of a lightweight multi-scale module called CEF.This module significantly improves the model’s ability to comprehensively capture important image features by merging multi-scale feature information.It effectively addresses the issues of missed detection and mistaken alarms that are common in remote sensing imagery.Second,an additional layer of small target detection heads is added,and a residual link is established with the higher-level feature extraction module in the backbone section.This allows the model to incorporate shallower information,significantly improving the accuracy of target localization in remotely sensed images.Finally,a dynamic head attentionmechanism is introduced.This allows themodel to exhibit greater flexibility and accuracy in recognizing shapes and targets of different sizes.Consequently,the precision of object detection is significantly improved.The trial results show that the YOLO-MFD model shows improvements of 6.3%,3.5%,and 2.5%over the original YOLOv8 model in Precision,map@0.5 and map@0.5:0.95,separately.These results illustrate the clear advantages of the method.
基金a grant from the Basic Science Research Program through the National Research Foundation(NRF)(2021R1F1A1063634)funded by the Ministry of Science and ICT(MSIT)Republic of Korea.This research is supported and funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2024R410)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding program Grant Code(NU/RG/SERC/12/6).
文摘Advances in machine vision systems have revolutionized applications such as autonomous driving,robotic navigation,and augmented reality.Despite substantial progress,challenges persist,including dynamic backgrounds,occlusion,and limited labeled data.To address these challenges,we introduce a comprehensive methodology toenhance image classification and object detection accuracy.The proposed approach involves the integration ofmultiple methods in a complementary way.The process commences with the application of Gaussian filters tomitigate the impact of noise interference.These images are then processed for segmentation using Fuzzy C-Meanssegmentation in parallel with saliency mapping techniques to find the most prominent regions.The Binary RobustIndependent Elementary Features(BRIEF)characteristics are then extracted fromdata derived fromsaliency mapsand segmented images.For precise object separation,Oriented FAST and Rotated BRIEF(ORB)algorithms areemployed.Genetic Algorithms(GAs)are used to optimize Random Forest classifier parameters which lead toimproved performance.Our method stands out due to its comprehensive approach,adeptly addressing challengessuch as changing backdrops,occlusion,and limited labeled data concurrently.A significant enhancement hasbeen achieved by integrating Genetic Algorithms(GAs)to precisely optimize parameters.This minor adjustmentnot only boosts the uniqueness of our system but also amplifies its overall efficacy.The proposed methodologyhas demonstrated notable classification accuracies of 90.9%and 89.0%on the challenging Corel-1k and MSRCdatasets,respectively.Furthermore,detection accuracies of 87.2%and 86.6%have been attained.Although ourmethod performed well in both datasets it may face difficulties in real-world data especially where datasets havehighly complex backgrounds.Despite these limitations,GAintegration for parameter optimization shows a notablestrength in enhancing the overall adaptability and performance of our system.
Funding: Supported by a grant from the National Key Research and Development Project (2023YFB4302100), the Key Research and Development Project of Jiangxi Province (No. 20232ACE01011), and the Independent Deployment Project of the Ganjiang Innovation Research Institute, Chinese Academy of Sciences (E255J001).
Abstract: Aiming at the limitations of existing railway foreign object detection methods based on two-dimensional (2D) images, such as short detection distance, strong environmental influence, and lack of distance information, we propose Rail-PillarNet, a three-dimensional (3D) LIDAR (Light Detection and Ranging) railway foreign object detection method based on an improved PointPillars. First, a parallel attention pillar encoder (PAPE) is designed to fully extract pillar features and alleviate the loss of local fine-grained information in the original PointPillars pillar encoder. Second, a fine backbone network is designed to improve the feature extraction capability of the network by combining the coding characteristics of LIDAR point cloud features with residual structures. Finally, the initial weight parameters of the model are optimized by transfer learning to further improve accuracy. Experimental results on the OSDaR23 dataset show that the average accuracy of Rail-PillarNet reaches 58.51%, higher than most mainstream models, with 5.49 M parameters. Compared with PointPillars, the accuracy for each target class is improved by 10.94%, 3.53%, 16.96%, and 19.90%, respectively, while the number of parameters increases by only 0.64 M, achieving a balance between parameter count and accuracy.
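For readers unfamiliar with PointPillars, the sketch below illustrates only the generic pillarization step that Rail-PillarNet builds on: LIDAR points are binned into vertical pillars on the ground plane before a learned encoder aggregates per-pillar features. The grid size, per-pillar point cap, and synthetic point cloud are arbitrary; the paper's PAPE encoder and backbone are not reproduced here.

```python
# Conceptual sketch of PointPillars-style "pillarization" only; grid sizes are arbitrary.
import numpy as np

def pillarize(points: np.ndarray, grid: float = 0.16, max_pts: int = 32):
    """points: (N, 4) array of x, y, z, intensity.
    Returns a dict mapping pillar index -> (<=max_pts, 4) point array."""
    keys = np.floor(points[:, :2] / grid).astype(np.int64)      # x-y pillar indices
    pillars = {}
    for key, pt in zip(map(tuple, keys), points):
        bucket = pillars.setdefault(key, [])
        if len(bucket) < max_pts:                                # cap points per pillar
            bucket.append(pt)
    return {k: np.stack(v) for k, v in pillars.items()}

cloud = np.random.rand(1000, 4) * [40.0, 20.0, 3.0, 1.0]         # synthetic sweep
pillars = pillarize(cloud)
print(len(pillars), "non-empty pillars; example shape:",
      next(iter(pillars.values())).shape)
```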
Abstract: The analysis of data from blasting sites has long been a goal of researchers in the field, and the rise of mobile blasting robots has drawn interest in machine learning methods for target detection in blasting scenarios. Serverless computing can provide a variety of computing services to users without dedicated hardware or extensive software development experience, which has prompted interest in applying it to machine learning. In this paper, we design a distributed machine learning training application based on the AWS Lambda platform. Building on data parallelism, it effectively realizes data aggregation and training synchronization in a Function-as-a-Service (FaaS) setting. It also encrypts the dataset, effectively reducing the risk of data leakage. We rent a cloud server and AWS Lambda resources and conduct experiments to evaluate our application. Our results indicate the effectiveness, speed, and economy of distributed training on FaaS.
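The core data-parallel idea can be sketched as follows: each worker, which in the paper would be an AWS Lambda invocation, computes a gradient on its own data shard, and a coordinator averages the gradients at a synchronization point. The toy least-squares model, shard count, and the commented-out Lambda function name are hypothetical; the paper's aggregation and encryption scheme is not reproduced.

```python
# Hedged sketch of synchronous data-parallel training; a local loop stands in for
# the Lambda workers, and the commented FunctionName is hypothetical.
import numpy as np
# import boto3                                   # needed for real Lambda invocations

def worker_gradient(shard_x: np.ndarray, shard_y: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Least-squares gradient on one shard; stands in for one worker's payload."""
    return 2.0 * shard_x.T @ (shard_x @ w - shard_y) / len(shard_y)

def aggregate(gradients):
    return np.mean(gradients, axis=0)            # synchronization point across workers

rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(400, 8)), rng.normal(size=400), np.zeros(8)
shards = np.array_split(np.arange(400), 4)       # 4 "Lambda" workers
for _ in range(50):                              # synchronous training rounds
    grads = [worker_gradient(X[idx], y[idx], w) for idx in shards]
    w -= 0.01 * aggregate(grads)
# A real deployment would replace the list comprehension with invocations such as:
# boto3.client("lambda").invoke(FunctionName="train-worker",   # hypothetical name
#                               Payload=json.dumps({"shard": 0}))
print(np.round(w[:3], 3))
```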
Abstract: Effective small object detection is crucial in applications including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they carry little information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to the varying resolutions of different layers, leading to suboptimal small object detection. To address this problem, we propose the Two-layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses attention to focus the network on the semantic information of the object and fuses it into the lower layers so that each layer carries similar semantics, alleviating the loss of small-object information caused by semantic gaps between layers. SODEM strengthens the local features of the object, suppresses background noise, enhances small-object detail, and fuses the enhanced features into the other feature layers, ensuring that every layer is rich in small-object information and improving detection accuracy. Extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MS COCO) and Pattern Analysis, Statistical Modelling and Computational Learning, Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared with state-of-the-art detectors.
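A toy illustration of attention-guided top-down fusion, assuming PyTorch, is shown below: a squeeze-and-excitation-style gate reweights the semantically rich high-level map before it is upsampled and added to a lower level. This generic block only echoes the idea behind TAM and SODEM; it is not the paper's architecture.

```python
# Illustrative only: a generic channel-attention gate guiding top-down FPN fusion.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionGuidedFusion(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(                 # channel attention on the deep map
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Reweight the semantically rich high-level map, upsample it to the
        # low-level resolution, and add it so small-object detail keeps semantics.
        high = high * self.gate(high)
        high = F.interpolate(high, size=low.shape[-2:], mode="nearest")
        return low + high

p3, p5 = torch.randn(1, 256, 80, 80), torch.randn(1, 256, 20, 20)
print(AttentionGuidedFusion(256)(p3, p5).shape)    # torch.Size([1, 256, 80, 80])
```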
Funding: This work was partially supported by the National Natural Science Foundation of China (Grant Nos. 61906168, U20A20171), the Zhejiang Provincial Natural Science Foundation of China (Grant Nos. LY23F020023, LY21F020027), and the Construction of Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects (Grant No. 2022SDSJ01).
Abstract: In clinical practice, the microscopic examination of urine sediment is considered an important in vitro examination with many broad applications. Measuring the amount of each type of urine sediment allows for screening, diagnosis, and evaluation of kidney and urinary tract disease, providing insight into the specific type and severity. However, manual urine sediment examination is labor-intensive, time-consuming, and subjective. Traditional machine learning based object detection methods require hand-crafted features for localization and classification, which have poor generalization capabilities and make it difficult to quickly and accurately detect the number of urine sediments. Deep learning based object detection methods have the potential to address these challenges, but they require access to large urine sediment image datasets; unfortunately, only a limited number of publicly available urine sediment datasets currently exist. To alleviate the lack of urine sediment datasets in medical image analysis, we propose a new dataset named UriSed2K, which contains 2465 high-quality images annotated under expert guidance. Two main challenges are associated with our dataset: a large number of small objects and occlusion between these small objects. Our manuscript focuses on applying deep learning object detection methods to the urine sediment dataset and addressing the challenges it presents. Specifically, our goal is to improve the accuracy and efficiency of the detection algorithm and, in doing so, provide medical professionals with an automatic detector that saves time and effort. We propose an improved lightweight one-stage object detection algorithm called Discriminatory-YOLO. The proposed algorithm comprises a local context attention module and a global background suppression module, which help the detector distinguish urine sediment features in the image. The local context attention module captures context information beyond the object region, while the global background suppression module emphasizes objects against uninformative backgrounds. We comprehensively evaluate our method on the UriSed2K dataset, which includes seven categories of urine sediments, namely erythrocytes (red blood cells), leukocytes (white blood cells), epithelial cells, crystals, mycetes, broken erythrocytes, and broken leukocytes, achieving a best average precision (AP) of 95.3% while taking only 10 ms per image. The source code and dataset are available at https://github.com/binghuiwu98/discriminatoryyolov5.
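To make the two named modules concrete, the hedged PyTorch sketch below pairs a dilated convolution, which gathers context beyond the object region, with a learned spatial mask that down-weights uninformative background. It is a rough illustration of the two ideas only; the authors' actual implementation is available in the linked repository.

```python
# Rough sketch of "local context" plus "background suppression"; not the released
# Discriminatory-YOLO code. Channel counts are arbitrary.
import torch
import torch.nn as nn

class ContextAndSuppression(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # dilation enlarges the receptive field to capture surrounding context
        self.context = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        # a 1-channel spatial mask acts as a soft background suppressor
        self.mask = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ctx = torch.relu(self.context(x))
        return x + ctx * self.mask(ctx)        # emphasize foreground, keep a residual path

feat = torch.randn(1, 64, 52, 52)
print(ContextAndSuppression(64)(feat).shape)   # torch.Size([1, 64, 52, 52])
```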