COVID-19 (Coronavirus Disease 2019) is caused by SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) and was first diagnosed in December 2019 in China. As of 25th Aug 2021, there were 165 million confirmed COVID-19 positive cases and 4.4 million deaths globally. Although there are approved COVID-19 vaccine candidates, only 4 billion doses have been administered so far. Until 100% of the population is safe, no one is safe. Even though these vaccines provide protection against getting seriously ill and dying from the disease, they do not provide 100% protection against getting infected and passing the virus on to others. The more the virus spreads, the more opportunity it has to mutate. It therefore remains essential to follow all precautions, such as maintaining social distance, wearing a mask, and washing hands frequently, irrespective of whether a person is vaccinated. To prevent the spread of the virus, contact tracing based on social distance is equally important. This work proposes a solution that can help with contact tracing/identification by establishing an infected person's recent travel history (even within the city) for a few days before testing positive. While the person would be able to name the known contacts with whom he/she has interacted, he/she will not be aware of who was in proximity if he/she had been in public places. The proposed solution is to obtain CCTV (Closed-Circuit Television) video clips from those public places for the specific date and time and identify the people who were in proximity, i.e., who did not keep a safe distance from the infected person. The approach uses YOLO v3 (You Only Look Once), which uses the Darknet framework, for people detection.
Once the infected person is located in the video frames, the distance from that person to the other people in the frame is computed to check whether the social-distance guideline was violated. If it was, the people violating the distance are extracted and identified using face detection and recognition algorithms. Two different solutions for face detection and recognition are implemented and their results compared: Dlib-based models and OpenCV (Open Source Computer Vision Library) based models. The solutions were evaluated on two different CCTV footages, and for the studied videos the Dlib-based models performed better than the OpenCV-based models.
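The proximity check described above can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes person detections arrive as (x, y, w, h) bounding boxes (e.g. from a YOLO detector) and approximates inter-person distance by the Euclidean distance between box centroids in pixels, with an assumed safe-distance threshold.

```python
# Hedged sketch: flag people whose bounding-box centroid lies closer to a
# target person than a safe pixel distance. Box format (x, y, w, h), the
# centroid proxy, and the threshold are illustrative assumptions.

def box_centroid(box):
    """Centroid of an (x, y, w, h) bounding box."""
    x, y, w, h = box
    return (x + w / 2.0, y + h / 2.0)

def proximity_violations(target_box, other_boxes, safe_dist_px=100.0):
    """Return indices of boxes whose centroid lies within safe_dist_px
    of the target person's centroid (Euclidean distance in pixels)."""
    tx, ty = box_centroid(target_box)
    violators = []
    for i, box in enumerate(other_boxes):
        cx, cy = box_centroid(box)
        if ((cx - tx) ** 2 + (cy - ty) ** 2) ** 0.5 < safe_dist_px:
            violators.append(i)
    return violators

# Example: the first person is close to the target, the second is far.
print(proximity_violations((0, 0, 50, 100),
                           [(30, 10, 50, 100), (500, 0, 50, 100)]))
# → [0]
```

A production system would instead calibrate pixel distances to real-world metres using the camera geometry, since a fixed pixel threshold is only valid for one camera viewpoint.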
Background: Machine learning has enabled the automatic detection of facial expressions, which is particularly beneficial in smart monitoring and in understanding the mental state of medical and psychological patients. Most algorithms that attain high emotion classification accuracy require extensive computational resources, which either require bulky and inefficient devices or require the sensor data to be processed on cloud servers. However, there is always the risk of privacy invasion, data misuse, and data manipulation when raw images are transferred to cloud servers for processing facial emotion recognition (FER) data. One possible solution to this problem is to minimize the movement of such private data. Methods: In this research, we propose an efficient implementation of a convolutional neural network (CNN) based algorithm for on-device FER on a low-power field-programmable gate array (FPGA) platform. This is done by encoding the CNN weights as approximated signed digits, which reduces the number of partial sums to be computed for multiply-accumulate (MAC) operations. This is advantageous for portable devices that lack full-fledged, resource-intensive multipliers. Results: We applied our approximation method to MobileNet-v2 and ResNet18 models, which were pretrained on the FER2013 dataset. Our implementations and simulations reduce the FPGA resource requirement by at least 22% compared to models with integer weights, with negligible loss in classification accuracy. Conclusions: The outcome of this research will help in the development of secure and low-power systems for FER and other biomedical applications. The approximation methods used in this research can also be extended to other image-based biomedical research fields.
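The weight-encoding idea can be illustrated with canonical signed-digit (CSD) recoding, the classic signed-digit scheme in which digits come from {-1, 0, +1} and no two adjacent digits are nonzero, so a shift-and-add multiplier computes fewer partial sums. The paper's exact approximation scheme may differ; this sketch only shows the underlying principle, and the 8-digit width is an illustrative assumption.

```python
# Hedged sketch: recode an integer weight into canonical signed digits
# (non-adjacent form). Fewer nonzero digits means fewer partial sums in
# a multiplier-free MAC unit on an FPGA.

def to_csd(value, n_digits=8):
    """Return the CSD digits of `value`, least-significant first."""
    digits = []
    v = value
    for _ in range(n_digits):
        if v % 2 == 0:
            d = 0
        else:
            # Pick +1 or -1 so that (v - d) is divisible by 4, which
            # forces the next digit to be 0 (non-adjacency property).
            d = 1 if v % 4 == 1 else -1
        digits.append(d)
        v = (v - d) // 2
    return digits

def from_digits(digits):
    """Reconstruct the integer value from signed digits."""
    return sum(d << i for i, d in enumerate(digits))

# 7 = 8 - 1 needs only two nonzero digits in CSD, versus three in
# plain binary (111).
d = to_csd(7)
print(d, from_digits(d))
# → [-1, 0, 0, 1, 0, 0, 0, 0] 7
```

Each nonzero digit corresponds to one shifted add or subtract of the multiplicand, which is why minimising nonzero digits directly reduces FPGA logic usage.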
Purpose - Humans are gifted with the ability to recognize others by their uniqueness, along with other demographic characteristics such as ethnicity (or race), gender and age. Over the decades, many researchers in the psychological, biological and cognitive sciences have explored how the human brain characterizes, perceives and memorizes faces, and certain computational advances have yielded several insights into this issue. Design/methodology/approach - This paper proposes a new race detection model using face shape features. The proposed model includes two key phases, namely (a) feature extraction and (b) detection. Feature extraction is the initial stage, where face color and shape based features are mined. Specifically, maximally stable extremal regions (MSER) and speeded-up robust features (SURF) are extracted as shape features, and dense color features are extracted as color features. Since the extracted features are of high dimensionality, they are reduced via principal component analysis (PCA), a strong approach to the "curse of dimensionality". The dimension-reduced features are then fed to a deep belief network (DBN), which performs the race detection. Further, to make the proposed framework more effective for prediction, the weights of the DBN are fine-tuned with a new hybrid algorithm referred to as the lion mutated and updated dragon algorithm (LMUDA), a conceptual hybridization of the lion algorithm (LA) and the dragonfly algorithm (DA). Findings - The performance of the proposed work is compared against other state-of-the-art models in terms of accuracy and error. LMUDA attains high accuracy at the 100th iteration with 90% training, which is 11.1%, 8.8%, 5.5% and 3.3% better than the performance when the learning percentage (LP) is 50%, 60%, 70% and 80%, respectively. In particular, the performance of the proposed DBN+LMUDA is 22.2%, 12.5% and 33.3% better than the traditional classifiers DCNN, DBN and LDA, respectively. Originality/value - This paper achieves the objective of detecting human races from faces. In particular, MSER and SURF features are extracted as shape features and dense color features as color features. As a novelty, to make the race detection more accurate, the weights of the DBN are fine-tuned with the new hybrid LMUDA algorithm, a conceptual hybridization of LA and DA.
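The PCA step in the pipeline above can be sketched as follows. This is a generic eigendecomposition-based PCA, not the paper's specific configuration: the feature dimensionality, sample count, and component count are illustrative assumptions standing in for the concatenated MSER/SURF/colour feature vectors.

```python
import numpy as np

# Hedged sketch: reduce high-dimensional feature vectors with PCA by
# projecting onto the top eigenvectors of the feature covariance matrix.

def pca_reduce(X, n_components):
    """Project rows of X (samples x features) onto the top
    n_components principal components."""
    Xc = X - X.mean(axis=0)                  # centre each feature
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    top = eigvecs[:, ::-1][:, :n_components] # keep largest-variance axes
    return Xc @ top

# Illustrative stand-in for extracted face features:
# 50 samples, 10-dimensional feature vectors, reduced to 3 dimensions.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
Z = pca_reduce(X, 3)
print(Z.shape)  # → (50, 3)
```

The reduced matrix Z would then be the input to the DBN classifier; the component count is normally chosen so the retained eigenvalues cover most of the total variance.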
Funding: This work was financially supported by the Ministry of Higher Education (MOHE) Malaysia through the Fundamental Research Grant Scheme (FRGS) (No. FRGS/1/2021/TK0/UKM/01/4) and the Research University Grant, Universiti Kebangsaan Malaysia (Nos. DIP-2020-004 and GUP-2021-019).