Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2018-0-01426), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation); by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by the Deanship of Scientific Research at Najran University under the Research Group Funding Program Grant Code (NU/RG/SERC/12/6).
Abstract: Object segmentation and recognition is an important area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines their classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to partition the relevant parts of the images into different regions and label them based on their characteristics. Then, two distinct kinds of features are obtained from the segmented images to help identify the objects of interest. An Artificial Neural Network (ANN) is then used to recognize the objects based on their features. Experiments were carried out on three standard datasets, MSRC, MS COCO, and Caltech 101, which are extensively used in object recognition research, to measure the effectiveness of the proposed approach. The experimental findings support the system's validity: it achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
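The pipeline described above can be illustrated with a minimal sketch: k-means clustering of pixel colours produces the segmented regions, and a multilayer perceptron stands in for the ANN recognizer. The colour-histogram region feature and the training arrays X_train/y_train are assumptions for illustration only; the abstract does not name the two feature types actually used.

```python
# Minimal sketch of a k-means segmentation + ANN recognition pipeline (assumed details).
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

def segment_kmeans(image_rgb, k=4):
    """Cluster pixels by colour to split the image into k labelled regions."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(image_rgb.shape[:2])

def region_features(image_rgb, region_mask):
    """Describe one segmented region with a colour histogram (illustrative stand-in)."""
    hist = cv2.calcHist([image_rgb], [0, 1, 2], region_mask.astype(np.uint8),
                        [8, 8, 8], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

# Recognition: an MLP (the "ANN") trained on per-region feature vectors.
# X_train / y_train would come from the labelled, segmented training images (hypothetical).
# clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=500).fit(X_train, y_train)
# predictions = clf.predict(X_test)
```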
Funding: Supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry, and Energy, Republic of Korea (No. 20204010600090). Additional funding was provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The demand for non-contact biometric approaches to candidate identification has grown over the past ten years. As one of the most important biometric applications, human gait analysis is a significant research topic in computer vision. Researchers have paid considerable attention to gait recognition, specifically the identification of people based on their walking patterns, due to its potential to correctly identify people at a distance. Gait recognition systems have been used in a variety of applications, including security, medical examinations, identity management, and access control. These systems require a complex combination of technical, operational, and definitional considerations. The employment of gait recognition techniques and technologies has produced a number of beneficial and popular applications. This work proposes a novel deep learning-based framework for human gait classification in video sequences. The framework's main challenge is improving the accuracy of gait classification under varying conditions, such as carrying a bag or changing clothes. The proposed method's first step is selecting two pre-trained deep learning models and training them from scratch using deep transfer learning. Next, the deep models are trained using static hyperparameters; however, the learning rate is calculated using the particle swarm optimization (PSO) algorithm. Then, the best features are selected from both trained models using the Harris Hawks controlled Sine-Cosine optimization algorithm; the selected features are combined using a novel correlation-based fusion technique. Finally, the fused best features are categorized using medium, bi-layer, and tri-layered neural networks. The experimental evaluation of the suggested technique was carried out on the publicly available CASIA-B dataset, and an improved accuracy of 94.14% was achieved. The achieved accuracy improves on recent state-of-the-art techniques, which shows the significance of this work.
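A minimal sketch of the PSO step mentioned above, searching for a learning rate that minimizes a validation loss, is given below. The fitness function, bounds, and swarm settings are placeholders; in the described framework the fitness would correspond to a short training run of the deep model.

```python
# Minimal particle swarm optimization (PSO) over a scalar learning rate (assumed setup).
import numpy as np

def pso_learning_rate(validation_loss, bounds=(1e-5, 1e-1),
                      n_particles=10, n_iters=20, w=0.7, c1=1.5, c2=1.5):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, n_particles)          # particle positions = candidate learning rates
    vel = np.zeros(n_particles)
    pbest = pos.copy()
    pbest_val = np.array([validation_loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(n_iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([validation_loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest

# Example with a toy fitness surface (stand-in for a short training run):
# best_lr = pso_learning_rate(lambda lr: (np.log10(lr) + 3) ** 2)
```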
Funding: This research was supported by the Deanship of Scientific Research at Najran University under the Research Group Funding Program Grant Code (NU/RG/SERC/12/30); by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by Prince Sattam bin Abdulaziz University Project Number (PSAU/2024/R/1445).
Abstract: Intelligent vehicle tracking and detection are crucial tasks in the realm of highway management. However, vehicles come in a range of sizes, which makes them challenging to detect and affects the traffic monitoring system's overall accuracy. Deep learning is considered an efficient method for object detection in vision-based systems. In this paper, we propose a vision-based vehicle detection and tracking system based on a You Only Look Once version 5 (YOLOv5) detector combined with a segmentation technique. The model consists of six steps. In the first step, all the extracted traffic sequence images are subjected to pre-processing to remove noise and enhance the contrast level of the images. These pre-processed images are segmented by labelling each pixel to extract the uniform regions that aid the detection phase. A single-stage YOLOv5 detector is used to detect and locate vehicles in the images. Each detection is then subjected to Speeded Up Robust Features (SURF) extraction to track multiple vehicles. Based on this, a unique number is assigned to each vehicle so that it can be located in succeeding image frames through feature matching. Further, we implement a Kalman filter to track multiple vehicles. Finally, the vehicle path is estimated using the centroid points of the rectangular bounding box predicted by the tracking algorithm. The experimental results and comparison reveal that our proposed vehicle detection and tracking system outperforms other state-of-the-art systems, providing detection precisions of 94.1% on the Roundabout dataset and 96.1% on the Vehicle Aerial Imaging from Drone (VAID) dataset.
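The Kalman-filter tracking stage can be sketched as follows, assuming a constant-velocity model over bounding-box centroids. The YOLOv5 detections and the SURF-based matching that associates each box with an existing track ID are assumed to happen outside this snippet.

```python
# Minimal per-vehicle Kalman filter over bounding-box centroids (assumed constant-velocity model).
import cv2
import numpy as np

def make_centroid_kalman(dt=1.0):
    """Kalman filter with state [x, y, vx, vy] and measurement [x, y]."""
    kf = cv2.KalmanFilter(4, 2)
    kf.transitionMatrix = np.array([[1, 0, dt, 0],
                                    [0, 1, 0, dt],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1
    return kf

# Per frame: predict, then correct with the centroid of the matched detection box.
# `matched_boxes_over_time` is a hypothetical sequence of (x1, y1, x2, y2) detections.
# kf = make_centroid_kalman()
# for (x1, y1, x2, y2) in matched_boxes_over_time:
#     predicted = kf.predict()
#     cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
#     kf.correct(np.array([[cx], [cy]], np.float32))
```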
Funding: Supported by a grant from the Basic Science Research Program through the National Research Foundation (NRF) (2021R1F1A1063634), funded by the Ministry of Science and ICT (MSIT), Republic of Korea; by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by the Deanship of Scientific Research at Najran University under the Research Group Funding Program Grant Code (NU/RG/SERC/12/6).
Abstract: Advances in machine vision systems have revolutionized applications such as autonomous driving, robotic navigation, and augmented reality. Despite substantial progress, challenges persist, including dynamic backgrounds, occlusion, and limited labeled data. To address these challenges, we introduce a comprehensive methodology to enhance image classification and object detection accuracy. The proposed approach integrates multiple methods in a complementary way. The process commences with the application of Gaussian filters to mitigate the impact of noise interference. These images are then processed with Fuzzy C-Means segmentation in parallel with saliency mapping techniques to find the most prominent regions. Binary Robust Independent Elementary Features (BRIEF) are then extracted from the data derived from the saliency maps and segmented images. For precise object separation, the Oriented FAST and Rotated BRIEF (ORB) algorithm is employed. Genetic Algorithms (GAs) are used to optimize the Random Forest classifier's parameters, which leads to improved performance. Our method stands out due to its comprehensive approach, adeptly addressing challenges such as changing backdrops, occlusion, and limited labeled data concurrently. A significant enhancement has been achieved by integrating Genetic Algorithms (GAs) to precisely optimize parameters; this adjustment not only adds to the uniqueness of our system but also amplifies its overall efficacy. The proposed methodology has demonstrated notable classification accuracies of 90.9% and 89.0% on the challenging Corel-1k and MSRC datasets, respectively. Furthermore, detection accuracies of 87.2% and 86.6% have been attained. Although our method performed well on both datasets, it may face difficulties with real-world data, especially where datasets have highly complex backgrounds. Despite these limitations, GA integration for parameter optimization is a notable strength that enhances the overall adaptability and performance of our system.
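The GA-based parameter optimization can be sketched as below. The two Random Forest hyperparameters searched here (n_estimators and max_depth), the population size, and the mutation scheme are assumptions for illustration, not the exact configuration used in the paper.

```python
# Minimal genetic algorithm tuning Random Forest hyperparameters by cross-validated accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def ga_tune_rf(X, y, pop_size=8, generations=5, seed=0):
    rng = np.random.default_rng(seed)
    # Each individual encodes (n_estimators, max_depth).
    pop = np.column_stack([rng.integers(50, 300, pop_size),
                           rng.integers(3, 20, pop_size)])

    def fitness(ind):
        clf = RandomForestClassifier(n_estimators=int(ind[0]),
                                     max_depth=int(ind[1]), random_state=seed)
        return cross_val_score(clf, X, y, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.array([a[0], b[1]])                         # one-point crossover
            if rng.random() < 0.3:                                 # small random mutation
                child += rng.integers(-20, 21, size=2)
            children.append(np.clip(child, [10, 2], [500, 40]))
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmax()]
```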
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ICAN (ICT Challenge and Advanced Network of HRD) Program (IITP-2024-RS-2022-00156326), supervised by the IITP (Institute of Information & Communications Technology Planning & Evaluation); by the Deanship of Scientific Research at Najran University under the Research Group Funding Program Grant Code (NU/GP/SERC/13/30); by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by the Deanship of Scientific Research at Northern Border University, Arar, KSA, through Project Number "NBU-FFR-2024-231-06".
Abstract: Recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, rely heavily on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with the acquisition and preprocessing of the image, followed by segmentation using the fine-tuned UNet architecture. Afterward, we use an annotation tool to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE features, energy-based edge detection, frequency-based descriptors, and blob characteristics. For the classification stage, a convolutional neural network (CNN) is employed. This comprehensive methodology provides a robust framework for achieving accurate and efficient recognition of multiple objects in images. Experimental results on complex object datasets, namely MSRC-v2 and PASCAL-VOC12, have been documented. Analysis of these results shows that the approach achieved an accuracy of 95% on the PASCAL-VOC12 dataset and 89% on the MSRC-v2 dataset. The evaluation on these diverse datasets highlights a notably impressive level of performance.
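A minimal sketch of extracting KAZE descriptors together with a crude frequency-based cue from a segmented object crop is shown below. The exact feature set and pooling used in the paper are not specified in the abstract, so this is illustrative only.

```python
# Minimal KAZE + frequency-based feature extraction for one segmented object crop (assumed pooling).
import cv2
import numpy as np

def object_features(crop_bgr):
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)

    # KAZE keypoints and descriptors (nonlinear scale-space features), mean-pooled per object.
    kaze = cv2.KAZE_create()
    _, desc = kaze.detectAndCompute(gray, None)
    kaze_vec = desc.mean(axis=0) if desc is not None else np.zeros(64, np.float32)

    # A crude frequency-based descriptor: summary statistics of the magnitude spectrum.
    spectrum = np.abs(np.fft.fft2(gray.astype(np.float32)))
    freq_vec = np.array([spectrum.mean(), spectrum.std()], np.float32)

    return np.concatenate([kaze_vec, freq_vec])
```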
Funding: Supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2023-2018-0-01426), supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation). Additional funding was provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Road congestion, air pollution, and accident rates have all increased as a result of rising traffic density and worldwide population growth. Over the past ten years, the total number of automobiles has increased significantly around the world. In this paper, a novel method for intelligent traffic surveillance is presented. The proposed model is based on multi-label semantic segmentation using a random forest classifier, which classifies the images into five classes. To improve the results, mean-shift clustering is applied to the segmented images. Afterward, the pixels labelled as vehicle are extracted, and blob detection is applied to mark each vehicle. For the validation of each detection, a vehicle verification method based on the structural similarity index is proposed. Vehicles are tracked across image frames using an identifier (ID) assignment technique and a particle filter. Vehicle counting in each frame and trajectory estimation are also performed for each object. During the experimental evaluation, the proposed system demonstrated a remarkable vehicle detection rate of 0.83 on the Vehicle Aerial Imaging from Drone (VAID) dataset, 0.86 on AU-AIR, and 0.75 on the Unmanned Aerial Vehicle Benchmark Object Detection and Tracking (UAVDT) dataset. The proposed system can be used for several purposes, such as vehicle identification in traffic, traffic density estimation at intersections, and traffic congestion sensing on a road.
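The blob-detection step can be sketched as follows: pixels carrying the vehicle label in the segmentation output are isolated, and connected-component analysis marks each vehicle. The vehicle label index, the minimum blob area, and the use of connected components as the blob detector are assumptions for illustration.

```python
# Minimal blob detection on the vehicle-labelled pixels of a semantic segmentation map (assumed details).
import cv2
import numpy as np

VEHICLE_LABEL = 2          # hypothetical index of the vehicle class in the label map
MIN_AREA = 150             # filter out small noise blobs (pixels)

def detect_vehicle_blobs(label_map):
    mask = (label_map == VEHICLE_LABEL).astype(np.uint8) * 255
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                       # component 0 is the background
        x, y, w, h, area = stats[i]
        if area >= MIN_AREA:
            boxes.append((x, y, w, h, tuple(centroids[i])))
    return boxes                                # one entry per detected vehicle
```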
Funding: Supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R410), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; and by MRC, UK (MC_PC_17171); Royal Society, UK (RP202G0230); BHF, UK (AA/18/3/34220); Hope Foundation for Cancer Research, UK (RM60G0680); GCRF, UK (P202PF11); Sino-UK Industrial Fund, UK (RP202G0289); LIAS, UK (P202ED10, P202RE969); Data Science Enhancement Fund, UK (P202RE237); Fight for Sight, UK (24NN201); Sino-UK Education Fund, UK (OP202006); and BBSRC, UK (RM32G0178B8).
Abstract: Currently, the improvement in AI is mainly related to deep learning techniques that are employed for the classification, identification, and quantification of patterns in clinical images. Deep learning models show more remarkable performance than traditional methods for medical image processing tasks, such as skin cancer, colorectal cancer, brain tumour, cardiac disease, breast cancer (BrC), and a few more. The manual diagnosis of medical issues always requires an expert and is also expensive. Therefore, developing computer-aided diagnosis techniques based on deep learning is essential. Breast cancer is the most frequently diagnosed cancer in females, with a rapidly growing incidence. It is estimated that the number of patients with BrC will rise by 70% in the next 20 years. If diagnosed at a later stage, the survival rate of patients with BrC is very low; hence, early detection is essential and can increase the survival rate to 50%. A new framework for BrC classification is presented that utilises deep learning and feature optimization. The significant steps of the presented framework include (i) hybrid contrast enhancement of acquired images, (ii) data augmentation to facilitate better learning of the Convolutional Neural Network (CNN) model, (iii) use of a pre-trained ResNet-101 model modified according to the selected dataset's classes, (iv) deep transfer learning-based model training for feature extraction, (v) fusion of features using the proposed highly corrected function-controlled canonical correlation analysis approach, and (vi) optimal feature selection using the modified Satin Bowerbird Optimization controlled Newton-Raphson algorithm, with the selected features finally classified using 10 machine learning classifiers. The experiments of the proposed framework were carried out on the publicly available CBIS-DDSM dataset and obtained the best accuracy of 94.5% along with improved computation time. The comparison shows that the presented method surpasses current state-of-the-art approaches.
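Steps (iii) and (iv) above, adapting a pre-trained ResNet-101 and using its penultimate activations as deep features, can be sketched as below with PyTorch/torchvision. The number of classes, the default ImageNet weights, and the 224x224 input size are assumptions for illustration.

```python
# Minimal ResNet-101 transfer-learning sketch: replace the head, then use penultimate
# activations as deep features (assumed class count and input size; recent torchvision API).
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2                                  # e.g., benign vs. malignant (assumed)

model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # adapt the classifier head

# After fine-tuning, deep features can be taken from the layer before `fc`.
feature_extractor = nn.Sequential(*list(model.children())[:-1])
feature_extractor.eval()

def extract_features(batch):                     # batch: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return feature_extractor(batch).flatten(1)   # (N, 2048) feature vectors
```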