Funding: Researchers Supporting Project Number (RSP2023R503), King Saud University, Riyadh, Saudi Arabia.
Abstract: Shadow extraction and elimination are essential for intelligent transportation systems (ITS) in vehicle tracking applications. Shadows are a source of error in vehicle detection, causing misclassification of vehicles and a high false alarm rate in vehicle counting, detection, tracking, and classification research. Most existing research addresses shadow extraction from moving vehicles under high-intensity illumination and on standard datasets, but extracting shadows from moving vehicles in low-light real scenes is difficult. To address this problem, we generated our own dataset of real vehicle scenes on the Vadodara–Mumbai highway during periods of poor illumination for shadow extraction of moving vehicles. This paper offers a robust method for extracting and eliminating the shadows of moving vehicles for vehicle tracking. The method is divided into two phases: in the first phase, we extract foreground regions using a mixture-of-Gaussians model; in the second phase, with the help of gamma correction, an intensity ratio, a negative transformation, and a combination of Gaussian filters, we locate and remove the shadow regions from the foreground areas. Compared with the outcomes of an existing method, the proposed method achieves an average true negative rate above 90%, and a shadow detection rate SDR(η%) and shadow discrimination rate SDR(ξ%) of 80%. Hence, the proposed method is more appropriate for moving-shadow detection in real scenes.
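The second phase described above can be sketched in miniature: gamma correction brightens low-light pixels, and a bounded foreground-to-background intensity ratio flags shadow candidates (a shadow darkens the background proportionally, while a vehicle changes it arbitrarily). The thresholds and the per-pixel formulation below are illustrative assumptions, not the paper's exact parameters.

```python
def gamma_correct(intensity, gamma=2.2, max_val=255):
    """Gamma-correct a single 8-bit intensity value to lift dark,
    low-light pixels before the shadow test (gamma=2.2 is a common
    default, not necessarily the paper's choice)."""
    return max_val * (intensity / max_val) ** (1.0 / gamma)

def is_shadow_pixel(fg, bg, low=0.4, high=0.9):
    """Flag a foreground pixel as a shadow candidate when it darkens
    the background by a bounded ratio. The band [low, high] is an
    illustrative placeholder threshold."""
    if bg == 0:
        return False
    ratio = fg / bg
    return low <= ratio <= high

# A pixel of 120 over a background of 200 darkens it by 0.6 -> shadow
print(is_shadow_pixel(120, 200))  # True
# An unchanged pixel is not a shadow candidate
print(is_shadow_pixel(200, 200))  # False
```

In a full pipeline this per-pixel test would run only inside the foreground mask produced by the mixture-of-Gaussians model, followed by smoothing with Gaussian filters to clean up the shadow mask.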
Funding: supported by the Healthcare AI Convergence R&D Program through the National IT Industry Promotion Agency of Korea (NIPA), funded by the Ministry of Science and ICT (No. S0102-23-1007), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2017R1A6A1A03015496).
Abstract: Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition from facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature-vector extraction method using the Euclidean distances between landmarks whose positions change with facial expressions, especially around the eyes, eyebrows, nose, and mouth. We then apply a new classifier based on an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was compared with conventional algorithms on public databases. The results indicate that the proposed method achieves higher accuracy than traditional facial-expression-based methods for emotion recognition. In particular, our experiments on the FER2013 database show that the proposed method is robust to lighting conditions and backgrounds, with an average of 25% higher performance than previous studies. Consequently, the proposed method is expected to recognize facial expressions, especially fear and anger, helping to prevent severe accidents by detecting security-related or dangerous actions in advance.
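The landmark-distance feature extraction described above can be sketched as follows. The landmark indices paired here are arbitrary placeholders; the paper selects specific pairs around the eyes, eyebrows, nose, and mouth, and feeds the resulting vector to an ensemble classifier.

```python
import math

def landmark_distances(landmarks, pairs):
    """Build a feature vector of Euclidean distances between selected
    landmark pairs. `landmarks` is a list of (x, y) points; `pairs` lists
    (i, j) index pairs chosen around expression-sensitive regions.
    Both are illustrative stand-ins for the paper's exact selection."""
    feats = []
    for i, j in pairs:
        (x1, y1), (x2, y2) = landmarks[i], landmarks[j]
        feats.append(math.hypot(x2 - x1, y2 - y1))
    return feats

# Toy example: three landmarks, two distance features
pts = [(0.0, 0.0), (3.0, 4.0), (6.0, 8.0)]
print(landmark_distances(pts, [(0, 1), (1, 2)]))  # [5.0, 5.0]
```

Because the features are relative distances between points on the same face, they are largely insensitive to global illumination and background, which is consistent with the robustness reported on FER2013.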
Funding: supported in part by the National Natural Science Foundation of China (61673402, 61273270, 60802069), the Natural Science Foundation of Guangdong Province (2017A030311029, 2016B010109002, 2015B090912001, 2016B010123005, 2017B090909005), the Science and Technology Program of Guangzhou of China (201704020180, 201604020024), and the Fundamental Research Funds for the Central Universities of China.
Abstract: The purpose of this paper is to solve the problem of robust face recognition (FR) with a single sample per person (SSPP). For FR with SSPP, we present a novel model, local robust sparse representation (LRSR), to tackle query images with various intra-class variations, e.g., expressions, illuminations, and occlusion. FR with SSPP is a very difficult challenge because a single sample provides too little information to predict the possible intra-class variations of the query images. The key idea of the proposed method is to combine a local sparse representation model with a patch-based generic variation dictionary learning model to predict the possible facial intra-class variation of the query images. Experimental results on the AR, Extended Yale B, CMU-PIE, and LFW databases show that the proposed method is robust to intra-class variations in FR with SSPP and outperforms state-of-the-art approaches.
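The decision rule common to sparse-representation classifiers, including LRSR, can be sketched at a high level: the query is assigned to the class whose sparse reconstruction leaves the smallest residual. The reconstructions below are stand-in vectors, not the output of the actual LRSR optimization, which additionally uses locality and a patch-based generic variation dictionary.

```python
import math

def classify_by_residual(query, class_recons):
    """Assign the query to the class whose (pre-computed) sparse
    reconstruction has the smallest L2 residual against the query.
    `class_recons` maps a class label to its reconstruction vector;
    the values here are illustrative placeholders."""
    best_label, best_res = None, math.inf
    for label, recon in class_recons.items():
        res = math.sqrt(sum((q - r) ** 2 for q, r in zip(query, recon)))
        if res < best_res:
            best_label, best_res = label, res
    return best_label

# Class "A"'s reconstruction is much closer to the query than "B"'s
query = [1.0, 0.0, 1.0]
recons = {"A": [0.9, 0.1, 1.0], "B": [0.0, 1.0, 0.0]}
print(classify_by_residual(query, recons))  # A
```

In the SSPP setting, the generic variation dictionary is what makes a good reconstruction possible at all: it supplies the expression, illumination, and occlusion components that the single gallery sample cannot.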