Funding: This research was supported by the Deanship of Scientific Research, Islamic University of Madinah, Madinah (KSA), under Tammayuz program Grant Number 1442/505.
Abstract: This paper presents a large-gathering dataset of images extracted from publicly filmed videos by 24 cameras installed on the premises of Masjid Al-Nabvi, Madinah, Saudi Arabia. The dataset consists of raw and processed images reflecting a highly challenging and unconstrained environment. The methodology for building the dataset consists of four core phases: acquisition of videos, extraction of frames, localization of face regions, and cropping and resizing of detected face regions. The raw images consist of a total of 4613 frames obtained from video sequences. The processed images consist of the face regions of 250 persons extracted from the raw images to ensure the authenticity of the presented data. The dataset further contains 8 images for each of the 250 subjects (persons), for a total of 2000 images. It portrays a highly unconstrained and challenging environment with human faces of varying sizes and pixel quality (resolution). Since the face regions in the video sequences are severely degraded by various unavoidable factors, the dataset can serve as a benchmark for testing and evaluating face detection and recognition algorithms for research purposes. We have also gathered and displayed records of the presence of subjects who appear in the presented frames in a temporal context, so the dataset can also serve as a temporal benchmark for tracking, finding persons, activity monitoring, and crowd counting in large-crowd scenarios.
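The four-phase methodology above (video acquisition, frame extraction, face localization, cropping and resizing) can be sketched as a minimal pipeline. This is an illustrative sketch only: the function names, sampling step, and nearest-neighbour resize are assumptions for clarity, not the authors' actual implementation, and the face localizer itself is left abstract.

```python
# Hedged sketch of the dataset-building phases described in the abstract.
# Images are modelled as plain 2-D lists of pixel values so the sketch
# stays self-contained; a real pipeline would use a video/image library
# and an actual face detector for the localization phase.

def sample_frame_indices(total_frames, step):
    """Phase 2: choose every `step`-th frame index from a video."""
    return list(range(0, total_frames, step))

def crop_region(image, x, y, w, h):
    """Phase 4a: crop a detected face bounding box from a 2-D pixel grid."""
    return [row[x:x + w] for row in image[y:y + h]]

def resize_nearest(image, out_h, out_w):
    """Phase 4b: nearest-neighbour resize to a fixed face-chip size."""
    in_h, in_w = len(image), len(image[0])
    return [
        [image[r * in_h // out_h][c * in_w // out_w] for c in range(out_w)]
        for r in range(out_h)
    ]
```

For example, sampling a 100-frame clip with a step of 25 yields frame indices `[0, 25, 50, 75]`, and each detected face box is cropped and resized to a uniform chip size before being filed under its subject.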
Funding: The authors acknowledge the support from the Ministry of Education and the Deanship of Scientific Research, Najran University, Saudi Arabia, under code number NU/-/SERC/10/616.
Abstract: In the Smart Grid (SG) residential environment, consumers change their power-consumption routine according to the price and incentives announced by the utility, which causes prices to deviate from their initial pattern. Electricity demand and price forecasting therefore play a significant role and can help in terms of reliability and sustainability. Due to the massive amount of data, big data analytics for forecasting has become a hot topic in the SG domain. In this paper, the changing, non-linear, and complex consumer consumption pattern data is taken as input. To minimize computational cost and data complexity, the importance scores produced by three feature engineering approaches, Recursive Feature Elimination (RFE), Extreme Gradient Boosting (XGBoost), and Random Forest (RF), are averaged to extract the most relevant and significant features. We then propose an ensemble of the DenseNet-121 network and a Support Vector Machine (SVM) with the Aquila Optimizer (AO) to ensure adaptability and handle the complexity of the data in classification. The AO method tunes the parameters of DenseNet-121 and the SVM, which yields lower training loss and computational time, reduced overfitting, and higher training/test accuracy. Performance evaluation metrics and statistical analysis validate that the proposed model outperforms the benchmark schemes. Our method achieves minimal Mean Absolute Percentage Error (MAPE) rates of 8% with DenseNet-AO and 6% with SVM-AO, and maximum accuracy rates of 92% and 95%, respectively.
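Two computational pieces of the abstract, averaging the importance scores from several feature selectors and evaluating forecasts with MAPE, can be sketched as follows. The toy selector scores and the top-k choice are illustrative assumptions, not the authors' actual values or exact averaging scheme.

```python
# Hedged sketch: combine normalized feature-importance scores from
# multiple selectors (RFE, XGBoost, RF in the paper) into one ranking,
# then compute the Mean Absolute Percentage Error used as the metric.

def average_importance(score_maps):
    """Normalize each selector's scores to sum to 1, then average them."""
    averaged = {}
    for scores in score_maps:
        total = sum(scores.values())
        for name, s in scores.items():
            averaged[name] = averaged.get(name, 0.0) + s / total / len(score_maps)
    return averaged

def select_top_k(averaged, k):
    """Keep the k features with the highest averaged importance."""
    return sorted(averaged, key=averaged.get, reverse=True)[:k]

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(
        abs(a - p) / abs(a) for a, p in zip(actual, predicted)
    ) / len(actual)
```

Normalizing before averaging keeps one selector's raw scale (e.g. XGBoost gain vs. RF impurity decrease) from dominating the combined ranking; the averaged scores again sum to 1 across features.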