Funding: This research work is supported by the Deputyship of Research & Innovation, Ministry of Education in Saudi Arabia (Grant Number 758).
Abstract: Estimating the crowd count and density of the highly dense scenes witnessed in Muslim gatherings at religious sites in Makkah and Madinah is critical for developing control strategies and organizing such large gatherings. Moreover, since crowd images in this setting range from low density to high density, detection-based approaches are hard to apply to crowd counting. Recently, deep learning-based regression has become the prominent approach to crowd counting, where a density map is estimated and its integral is computed to obtain the final count. In this paper, we put forward a novel multi-scale network (named 2U-Net) for crowd counting in sparse and dense scenarios. The proposed framework, which employs the U-Net architecture, is straightforward to implement, computationally efficient, and trained in a single step. Unpooling layers are used to recover the information erased by the pooling layers and to learn a hierarchical, pixel-wise spatial representation. This helps in obtaining feature values, retaining spatial locations, and maximizing data integrity to avoid information loss. In addition, a modified attention unit is introduced and integrated into the proposed 2U-Net model to focus on specific crowd areas. The proposed model concentrates on balancing the number of model parameters, model size, computational cost, and counting accuracy, whereas other works may optimize one criterion at the expense of the other constraints. Experiments on five challenging density estimation and crowd counting datasets show that the proposed model is very effective and outperforms comparable mainstream models. Moreover, it counts well in both sparse and congested crowd scenes. The 2U-Net model achieves the lowest MAE on ShanghaiTech Part A, ShanghaiTech Part B, UCSD, and Mall (63.3, 7.4, 1.5, and 1.6, respectively). Furthermore, it obtains the lowest MSE on ShanghaiTech Part B, UCSD, and Mall (12.0, 1.9, and 2.1, respectively).
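The density-map regression idea described above can be illustrated with a minimal sketch (the map values below are synthetic toy data, not 2U-Net output): the network predicts a per-pixel density in which each person contributes a unit of mass, and the integral (sum) of the map gives the crowd estimate.

```python
# Minimal illustration of density-map-based crowd counting.
# The toy map below is synthetic; a real model would predict it.

def crowd_count(density_map):
    """Integrate a 2-D density map to obtain the predicted head count."""
    return sum(sum(row) for row in density_map)

# A toy 3x4 density map whose mass sums to two people.
toy_map = [
    [0.0, 0.1, 0.1, 0.0],
    [0.1, 0.5, 0.5, 0.1],
    [0.0, 0.3, 0.3, 0.0],
]

print(round(crowd_count(toy_map), 2))  # 2.0
```

The same integral is what the MAE and MSE figures above compare against the ground-truth counts.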
Funding: This project was funded by the Deanship of Scientific Research (DSR) at King Abdul Aziz University, Jeddah, under Grant No. KEP-10-611-42. The authors therefore acknowledge with thanks the technical and financial support of DSR.
Abstract: Face recognition is a challenging research problem, with difficulties such as misalignment, illumination changes, pose variations, occlusion, and expression changes. Providing a single solution that addresses all of these problems at once is a demanding task. We address these issues by introducing a face recognition model based on local tetra patterns and spatial pyramid matching. In the proposed technique, the input image is passed through an algorithm that extracts local features using spatial pyramid matching and max-pooling. Finally, the input image is recognized with a robust kernel representation method applied to the extracted features. Qualitative and quantitative analyses of the proposed method are carried out on benchmark image datasets. Experimental results show that the proposed method performs better than state-of-the-art methods in terms of standard performance evaluation parameters on the AR, ORL, LFW, and FERET face recognition datasets.
Funding: This project was supported under the Fundamental Research Grant Scheme (FRGS) FRGS/1/2019/ICT02/UKM/02/9, entitled "Convolution Neural Network Enhancement Based on Adaptive Convexity and Regularization Functions for Fake Video Analytics". The grant was received by Assist. Prof. Dr. S.N.H. Sheikh Abdullah, https://www.ukm.my/spifper/research_news/instrumentfunds.
Abstract: Text extraction from images using traditional techniques of image collection and machine learning-based pattern recognition is time-consuming because of the number of features extracted from the images. Deep neural networks offer effective solutions for extracting text features from images with a few techniques and the ability to train on large image datasets with significant results. This study proposes using dual max-pooling and concatenated Convolutional Neural Network (CNN) layers with the activation functions ReLU and the Optimized Leaky ReLU (OLRelu). The proposed method works by dividing the word image into slices that contain characters and passing them to deep learning layers that extract feature maps and reform the predicted words. Bidirectional Long Short-Term Memory (BiLSTM) layers extract more compelling features and link the time sequence in the forward and backward directions during the training phase. The Connectionist Temporal Classification (CTC) function calculates the training and validation loss rates, and also decodes the extracted features to reform the characters and link them according to their time sequence. The proposed model's performance is evaluated using training and validation loss errors on the Mjsynth and IAM datasets. On IAM, the average loss error was 2.09% with the proposed dual max-pooling and OLRelu. On the Mjsynth dataset, the best validation loss rate shrank to 2.2% by applying concatenated CNN layers and ReLU.
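The CTC decoding step described above, which reforms characters and links them according to their time sequence, can be sketched with a standard greedy decode: collapse consecutive repeated frame labels, then drop the blank symbol. The per-frame label sequence below is invented for illustration.

```python
# Greedy CTC decoding sketch: collapse consecutive repeats, then drop
# the blank symbol, turning per-frame predictions back into a word.
# The frame sequence below is invented for illustration.

BLANK = "-"

def ctc_greedy_decode(frames):
    """Collapse repeated frame labels and remove blank symbols."""
    decoded = []
    previous = None
    for label in frames:
        if label != previous and label != BLANK:
            decoded.append(label)
        previous = label
    return "".join(decoded)

# e.g. frame-wise argmax output across eleven time steps
print(ctc_greedy_decode(list("tt-ee-xx-tt")))  # text
```

The blank between two identical labels is what lets CTC represent genuine double letters, e.g. "aa--aa" decodes to "aa" rather than "a".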