The demand for a non-contact biometric approach for candidate identification has grown over the past ten years.Based on the most important biometric application,human gait analysis is a significant research topic in c...The demand for a non-contact biometric approach for candidate identification has grown over the past ten years.Based on the most important biometric application,human gait analysis is a significant research topic in computer vision.Researchers have paid a lot of attention to gait recognition,specifically the identification of people based on their walking patterns,due to its potential to correctly identify people far away.Gait recognition systems have been used in a variety of applications,including security,medical examinations,identity management,and access control.These systems require a complex combination of technical,operational,and definitional considerations.The employment of gait recognition techniques and technologies has produced a number of beneficial and well-liked applications.Thiswork proposes a novel deep learning-based framework for human gait classification in video sequences.This framework’smain challenge is improving the accuracy of accuracy gait classification under varying conditions,such as carrying a bag and changing clothes.The proposed method’s first step is selecting two pre-trained deep learningmodels and training fromscratch using deep transfer learning.Next,deepmodels have been trained using static hyperparameters;however,the learning rate is calculated using the particle swarmoptimization(PSO)algorithm.Then,the best features are selected from both trained models using the Harris Hawks controlled Sine-Cosine optimization algorithm.This algorithm chooses the best features,combined in a novel correlation-based fusion technique.Finally,the fused best features are categorized using medium,bi-layer,and tri-layered neural networks.On the publicly accessible dataset known as the CASIA-B dataset,the experimental process of the suggested technique was carried out,and an improved accuracy of 94.14% was achieved.The achieved accuracy of the proposed method is improved by the recent state-of-the-art techniques that show the significance of this work.展开更多
Human gait recognition(HGR)is the process of identifying a sub-ject(human)based on their walking pattern.Each subject is a unique walking pattern and cannot be simulated by other subjects.But,gait recognition is not e...Human gait recognition(HGR)is the process of identifying a sub-ject(human)based on their walking pattern.Each subject is a unique walking pattern and cannot be simulated by other subjects.But,gait recognition is not easy and makes the system difficult if any object is carried by a subject,such as a bag or coat.This article proposes an automated architecture based on deep features optimization for HGR.To our knowledge,it is the first architecture in which features are fused using multiset canonical correlation analysis(MCCA).In the proposed method,original video frames are processed for all 11 selected angles of the CASIA B dataset and utilized to train two fine-tuned deep learning models such as Squeezenet and Efficientnet.Deep transfer learning was used to train both fine-tuned models on selected angles,yielding two new targeted models that were later used for feature engineering.Features are extracted from the deep layer of both fine-tuned models and fused into one vector using MCCA.An improved manta ray foraging optimization algorithm is also proposed to select the best features from the fused feature matrix and classified using a narrow neural network classifier.The experimental process was conducted on all 11 angles of the large multi-view gait dataset(CASIA B)dataset and obtained improved accuracy than the state-of-the-art techniques.Moreover,a detailed confidence interval based analysis also shows the effectiveness of the proposed architecture for HGR.展开更多
Gait recognition is an active research area that uses a walking theme to identify the subject correctly.Human Gait Recognition(HGR)is performed without any cooperation from the individual.However,in practice,it remain...Gait recognition is an active research area that uses a walking theme to identify the subject correctly.Human Gait Recognition(HGR)is performed without any cooperation from the individual.However,in practice,it remains a challenging task under diverse walking sequences due to the covariant factors such as normal walking and walking with wearing a coat.Researchers,over the years,have worked on successfully identifying subjects using different techniques,but there is still room for improvement in accuracy due to these covariant factors.This paper proposes an automated model-free framework for human gait recognition in this article.There are a few critical steps in the proposed method.Firstly,optical flow-based motion region esti-mation and dynamic coordinates-based cropping are performed.The second step involves training a fine-tuned pre-trained MobileNetV2 model on both original and optical flow cropped frames;the training has been conducted using static hyperparameters.The third step proposed a fusion technique known as normal distribution serially fusion.In the fourth step,a better optimization algorithm is applied to select the best features,which are then classified using a Bi-Layered neural network.Three publicly available datasets,CASIA A,CASIA B,and CASIA C,were used in the experimental process and obtained average accuracies of 99.6%,91.6%,and 95.02%,respectively.The proposed framework has achieved improved accuracy compared to the other methods.展开更多
A multiple classifier fusion approach based on evidence combination is proposed in this paper. The individual classifier is designed based on a refined Nearest Feature Line (NFL),which is called Center-based Nearest N...A multiple classifier fusion approach based on evidence combination is proposed in this paper. The individual classifier is designed based on a refined Nearest Feature Line (NFL),which is called Center-based Nearest Neighbor (CNN). CNN retains the advantages of NFL while it has relatively low computational cost. Different member classifiers are trained based on different feature spaces respectively. Corresponding mass functions can be generated based on proposed mass function determination approach. The classification decision can be made based on the combined evidence and better classification performance can be expected. Experimental results on face recognition provided verify that the new approach is rational and effective.展开更多
With the aim of extracting the features of face images in face recognition, a new method of face recognition by fusing global features and local features is presented. The global features are extracted using principal...With the aim of extracting the features of face images in face recognition, a new method of face recognition by fusing global features and local features is presented. The global features are extracted using principal component analysis (PCA). Active appearance model (AAM) locates 58 facial fiducial points, from which 17 points are characterized as local features using the Gabor wavelet transform (GWT). Normalized global match degree (local match degree) can be obtained by global features (local features) of the probe image and each gallery image. After the fusion of normalized global match degree and normalized local match degree, the recognition result is the class that included the gallery image corresponding to the largest fused match degree. The method is evaluated by the recognition rates over two face image databases (AR and SJTU-IPPR). The experimental results show that the method outperforms PCA and elastic bunch graph matching (EBGM). Moreover, it is effective and robust to expression, illumination and pose variation in some degree.展开更多
Background—Human Gait Recognition(HGR)is an approach based on biometric and is being widely used for surveillance.HGR is adopted by researchers for the past several decades.Several factors are there that affect the s...Background—Human Gait Recognition(HGR)is an approach based on biometric and is being widely used for surveillance.HGR is adopted by researchers for the past several decades.Several factors are there that affect the system performance such as the walking variation due to clothes,a person carrying some luggage,variations in the view angle.Proposed—In this work,a new method is introduced to overcome different problems of HGR.A hybrid method is proposed or efficient HGR using deep learning and selection of best features.Four major steps are involved in this work-preprocessing of the video frames,manipulation of the pre-trained CNN model VGG-16 for the computation of the features,removing redundant features extracted from the CNN model,and classification.In the reduction of irrelevant features Principal Score and Kurtosis based approach is proposed named PSbK.After that,the features of PSbK are fused in one materix.Finally,this fused vector is fed to the One against All Multi Support Vector Machine(OAMSVM)classifier for the final results.Results—The system is evaluated by utilizing the CASIA B database and six angles 00◦,18◦,36◦,54◦,72◦,and 90◦are used and attained the accuracy of 95.80%,96.0%,95.90%,96.20%,95.60%,and 95.50%,respectively.Conclusion—The comparison with recent methods show the proposed method work better.展开更多
Human gait is one of the unobtrusive behavioral biometrics that has been extensively studied for various commercial and government applications.Biometric security,medical rehabilitation,virtual reality,and autonomous ...Human gait is one of the unobtrusive behavioral biometrics that has been extensively studied for various commercial and government applications.Biometric security,medical rehabilitation,virtual reality,and autonomous driving cars are some of the fields of study that rely on accurate gait recognition.While majority of studies have been focused on achieving very high recognition performance on a specific dataset,different issues arise in the real-world applications of this technology.This research is one of the first to evaluate the effects of changing walking speeds and directions on gait recognition rates under various walking conditions.Dataset was collected using the KINECT sensor.To draw an overall conclusion about the effects of walking speed and di-rection to the sensor,we define distance features and angle features.Furthermore,we propose two feature fusion methods for person recognition.Results of the study provide insights into how walking speeds and walking di-rections to the KINECT sensor influence the accuracy of gait recognition.展开更多
因为它的 insensitivity, 3D 脸识别吸引越来越多的注意到照明和姿势的变化。有在这个话题要解决的许多关键问题,例如 3D 脸表示和有效多特征熔化。在这份报纸,一个新奇 3D 脸识别算法被建议,它的性能在 BJUT-3D 脸数据库上被表明...因为它的 insensitivity, 3D 脸识别吸引越来越多的注意到照明和姿势的变化。有在这个话题要解决的许多关键问题,例如 3D 脸表示和有效多特征熔化。在这份报纸,一个新奇 3D 脸识别算法被建议,它的性能在 BJUT-3D 脸数据库上被表明。这个算法选择脸表面性质和相对关系矩阵的原则部件为脸表示特征。为每个特征的类似公制被定义。特征熔化策略被建议。它基于菲希尔是线性加权的策略线性判别式分析。最后,介绍算法在 BJUT-3D 脸数据库上被测试。算法和熔化策略的表演是令人满意的,这被结束。展开更多
A novel face recognition method based on fusion of spatial and frequency features was presented to improve recognition accuracy. Dual-Tree Complex Wavelet Transform derives desirable facial features to cope with the v...A novel face recognition method based on fusion of spatial and frequency features was presented to improve recognition accuracy. Dual-Tree Complex Wavelet Transform derives desirable facial features to cope with the variation due to the illumination and facial expression changes. By adopting spectral regression and complex fusion technologies respectively, two improved neighborhood preserving discriminant analysis feature extraction methods were proposed to capture the face manifold structures and locality discriminatory information. Extensive experiments have been made to compare the recognition performance of the proposed method with some popular dimensionality reduction methods on ORL and Yale face databases. The results verify the effectiveness of the proposed method.展开更多
Improved local tangent space alignment (ILTSA) is a recent nonlinear dimensionality reduction method which can efficiently recover the geometrical structure of sparse or non-uniformly distributed data manifold. In thi...Improved local tangent space alignment (ILTSA) is a recent nonlinear dimensionality reduction method which can efficiently recover the geometrical structure of sparse or non-uniformly distributed data manifold. In this paper, based on combination of modified maximum margin criterion and ILTSA, a novel feature extraction method named orthogonal discriminant improved local tangent space alignment (ODILTSA) is proposed. ODILTSA can preserve local geometry structure and maximize the margin between different classes simultaneously. Based on ODILTSA, a novel face recognition method which combines augmented complex wavelet features and original image features is developed. Experimental results on Yale, AR and PIE face databases demonstrate the effectiveness of ODILTSA and the feature fusion method.展开更多
Biometric recognition refers to the process of recognizing a person’s identity using physiological or behavioral modalities,such as face,voice,fingerprint,gait,etc.Such biometric modalities are mostly used in recogni...Biometric recognition refers to the process of recognizing a person’s identity using physiological or behavioral modalities,such as face,voice,fingerprint,gait,etc.Such biometric modalities are mostly used in recognition tasks separately as in unimodal systems,or jointly with two or more as in multimodal systems.However,multimodal systems can usually enhance the recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels.Despite this enhancement,in real-life applications some factors degrade multimodal systems’performance,such as occlusion,face poses,and noise in voice data.In this paper,we propose two algorithms that effectively apply dynamic fusion at feature level based on the data quality of multimodal biometrics.The proposed algorithms attempt to minimize the negative influence of confusing and low-quality features by either exclusion or weight reduction to achieve better recognition performance.The proposed dynamic fusion was achieved using face and voice biometrics,where face features were extracted using principal component analysis(PCA),and Gabor filters separately,whilst voice features were extracted using Mel-Frequency Cepstral Coefficients(MFCCs).Here,the facial data quality assessment of face images is mainly based on the existence of occlusion,whereas the assessment of voice data quality is substantially based on the calculation of signal to noise ratio(SNR)as per the existence of noise.To evaluate the performance of the proposed algorithms,several experiments were conducted using two combinations of three different databases,AR database,and the extended Yale Face Database B for face images,in addition to VOiCES database for voice data.The obtained results show that both proposed dynamic fusion algorithms attain improved performance and offer more advantages in identification and verification over not only the standard unimodal algorithms but also the multimodal algorithms using standard fusion methods.展开更多
A 3D face recognition approach which uses principal axes registration(PAR)and three face representation features from the re-sampling depth image:Eigenfaces,Fisherfaces and Zernike moments is presented.The approach ad...A 3D face recognition approach which uses principal axes registration(PAR)and three face representation features from the re-sampling depth image:Eigenfaces,Fisherfaces and Zernike moments is presented.The approach addresses the issue of 3D face registration instantly achieved by PAR.Because each facial feature has its own advantages,limitations and scope of use,different features will complement each other.Thus the fusing features can learn more expressive characterizations than a single feature.The support vector machine(SVM)is applied for classification.In this method,based on the complementarity between different features,weighted decision-level fusion makes the recognition system have certain fault tolerance.Experimental results show that the proposed approach achieves superior performance with the rank-1 recognition rate of 98.36%for GavabDB database.展开更多
With the continuous progress of The Times and the development of technology,the rise of network social media has also brought the“explosive”growth of image data.As one of the main ways of People’s Daily communicati...With the continuous progress of The Times and the development of technology,the rise of network social media has also brought the“explosive”growth of image data.As one of the main ways of People’s Daily communication,image is widely used as a carrier of communication because of its rich content,intuitive and other advantages.Image recognition based on convolution neural network is the first application in the field of image recognition.A series of algorithm operations such as image eigenvalue extraction,recognition and convolution are used to identify and analyze different images.The rapid development of artificial intelligence makes machine learning more and more important in its research field.Use algorithms to learn each piece of data and predict the outcome.This has become an important key to open the door of artificial intelligence.In machine vision,image recognition is the foundation,but how to associate the low-level information in the image with the high-level image semantics becomes the key problem of image recognition.Predecessors have provided many model algorithms,which have laid a solid foundation for the development of artificial intelligence and image recognition.The multi-level information fusion model based on the VGG16 model is an improvement on the fully connected neural network.Different from full connection network,convolutional neural network does not use full connection method in each layer of neurons of neural network,but USES some nodes for connection.Although this method reduces the computation time,due to the fact that the convolutional neural network model will lose some useful feature information in the process of propagation and calculation,this paper improves the model to be a multi-level information fusion of the convolution calculation method,and further recovers the discarded feature information,so as to improve the recognition rate of the image.VGG divides the network into five groups(mimicking the five layers of AlexNet),yet it USES 3*3 filters and combines them as a convolution sequence.Network deeper DCNN,channel number is bigger.The recognition rate of the model was verified by 0RL Face Database,BioID Face Database and CASIA Face Image Database.展开更多
Background Several face detection and recogni tion methods have been proposed in the past decades that have excellent performance.The conventional face recognition pipeline comprises the following:(1)face detection,(2...Background Several face detection and recogni tion methods have been proposed in the past decades that have excellent performance.The conventional face recognition pipeline comprises the following:(1)face detection,(2)face alignment,(3)feature extraction,and(4)similarity,which are independent of each other.The separate facial analysis stages lead to redundant model calculations,and are difficult for use in end-to-end training.Methods In this paper,we propose a novel end-to-end trainable convolutional network framework for face detection and recognition,in which a geometric transformation matrix is directly learned to align the faces rather than predicting the facial landmarks.In the training stage,our single CNN model is supervised only by face bounding boxes and personal identities,which are publicly available from WIDER FACE and CASIA-WebFace datasets.Our model is tested on Face Detection Dataset and Benchmark(FDDB)and Labeled Face in the Wild(LFW)datasets.Results The results show 89.24%recall for face detection tasks and 98.63%accura cy for face recognition tasks.展开更多
Gait is a biological typical that defines the method by that people walk.Walking is the most significant performance which keeps our day-to-day life and physical condition.Surface electromyography(sEMG)is a weak bioel...Gait is a biological typical that defines the method by that people walk.Walking is the most significant performance which keeps our day-to-day life and physical condition.Surface electromyography(sEMG)is a weak bioelectric signal that portrays the functional state between the human muscles and nervous system to any extent.Gait classifiers dependent upon sEMG signals are extremely utilized in analysing muscle diseases and as a guide path for recovery treatment.Several approaches are established in the works for gait recognition utilizing conventional and deep learning(DL)approaches.This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification(EAAA-HDLGR)technique on sEMG signals.The EAAA-HDLGR technique extracts the time domain(TD)and frequency domain(FD)features from the sEMG signals and is fused.In addition,the EAAA-HDLGR technique exploits the hybrid deep learning(HDL)model for gait recognition.At last,an EAAA-based hyperparameter optimizer is applied for the HDL model,which is mainly derived from the quasi-oppositional based learning(QOBL)concept,showing the novelty of the work.A brief classifier outcome of the EAAA-HDLGR technique is examined under diverse aspects,and the results indicate improving the EAAA-HDLGR technique.The results imply that the EAAA-HDLGR technique accomplishes improved results with the inclusion of EAAA on gait recognition.展开更多
Face recognition has been a hot-topic in the field of pattern recognition where feature extraction and classification play an important role. However, convolutional neural network (CNN) and local binary pattern (LB...Face recognition has been a hot-topic in the field of pattern recognition where feature extraction and classification play an important role. However, convolutional neural network (CNN) and local binary pattern (LBP) can only extract single features of facial images, and fail to select the optimal classifier. To deal with the problem of classifier parameter optimization, two structures based on the support vector machine (SVM) optimized by artificial bee colony (ABC) algorithm are proposed to classify CNN and LBP features separately. In order to solve the single feature problem, a fusion system based on CNN and LBP features is proposed. The facial features can be better represented by extracting and fusing the global and local information of face images. We achieve the goal by fusing the outputs of feature classifiers. Explicit experimental results on Olivetti Research Laboratory (ORL) and face recognition technology (FERET) databases show the superiority of the proposed approaches.展开更多
To investigate the robustness of face recognition algorithms under the complicated variations of illumination, facial expression and posture, the advantages and disadvantages of seven typical algorithms on extracting ...To investigate the robustness of face recognition algorithms under the complicated variations of illumination, facial expression and posture, the advantages and disadvantages of seven typical algorithms on extracting global and local features are studied through the experiments respectively on the Olivetti Research Laboratory database and the other three databases (the three subsets of illumination, expression and posture that are constructed by selecting images from several existing face databases). By taking the above experimental results into consideration, two schemes of face recognition which are based on the decision fusion of the twodimensional linear discriminant analysis (2DLDA) and local binary pattern (LBP) are proposed in this paper to heighten the recognition rates. In addition, partitioning a face nonuniformly for its LBP histograms is conducted to improve the performance. Our experimental results have shown the complementarities of the two kinds of features, the 2DLDA and LBP, and have verified the effectiveness of the proposed fusion algorithms.展开更多
Identity-recognition technologies require assistive equipment,whereas they are poor in recognition accuracy and expensive.To overcome this deficiency,this paper proposes several gait feature identification algorithms....Identity-recognition technologies require assistive equipment,whereas they are poor in recognition accuracy and expensive.To overcome this deficiency,this paper proposes several gait feature identification algorithms.First,in combination with the collected gait information of individuals from triaxial accelerometers on smartphones,the collected information is preprocessed,and multimodal fusion is used with the existing standard datasets to yield a multimodal synthetic dataset;then,with the multimodal characteristics of the collected biological gait information,a Convolutional Neural Network based Gait Recognition(CNN-GR)model and the related scheme for the multimodal features are developed;at last,regarding the proposed CNN-GR model and scheme,a unimodal gait feature identity single-gait feature identification algorithm and a multimodal gait feature fusion identity multimodal gait information algorithm are proposed.Experimental results show that the proposed algorithms perform well in recognition accuracy,the confusion matrix,and the kappa statistic,and they have better recognition scores and robustness than the compared algorithms;thus,the proposed algorithm has prominent promise in practice.展开更多
基金supported by the“Human Resources Program in Energy Technol-ogy”of the Korea Institute of Energy Technology Evaluation and Planning(KETEP)and Granted Financial Resources from the Ministry of Trade,Industry,and Energy,Republic of Korea(No.20204010600090)The funding of this work was provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2023R410),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘The demand for a non-contact biometric approach for candidate identification has grown over the past ten years.Based on the most important biometric application,human gait analysis is a significant research topic in computer vision.Researchers have paid a lot of attention to gait recognition,specifically the identification of people based on their walking patterns,due to its potential to correctly identify people far away.Gait recognition systems have been used in a variety of applications,including security,medical examinations,identity management,and access control.These systems require a complex combination of technical,operational,and definitional considerations.The employment of gait recognition techniques and technologies has produced a number of beneficial and well-liked applications.Thiswork proposes a novel deep learning-based framework for human gait classification in video sequences.This framework’smain challenge is improving the accuracy of accuracy gait classification under varying conditions,such as carrying a bag and changing clothes.The proposed method’s first step is selecting two pre-trained deep learningmodels and training fromscratch using deep transfer learning.Next,deepmodels have been trained using static hyperparameters;however,the learning rate is calculated using the particle swarmoptimization(PSO)algorithm.Then,the best features are selected from both trained models using the Harris Hawks controlled Sine-Cosine optimization algorithm.This algorithm chooses the best features,combined in a novel correlation-based fusion technique.Finally,the fused best features are categorized using medium,bi-layer,and tri-layered neural networks.On the publicly accessible dataset known as the CASIA-B dataset,the experimental process of the suggested technique was carried out,and an improved accuracy of 94.14% was achieved.The achieved accuracy of the proposed method is improved by the recent state-of-the-art techniques that show the significance of this work.
基金supported by the MSIT(Ministry of Science and ICT),Korea,under the ICAN(ICT Challenge and Advanced Network of HRD)program(IITP-2022-2020-0-01832)supervised by the IITP(Institute of Information&Communications Technology Planning&Evaluation)and the Soonchunhyang University Research Fund.
文摘Human gait recognition(HGR)is the process of identifying a sub-ject(human)based on their walking pattern.Each subject is a unique walking pattern and cannot be simulated by other subjects.But,gait recognition is not easy and makes the system difficult if any object is carried by a subject,such as a bag or coat.This article proposes an automated architecture based on deep features optimization for HGR.To our knowledge,it is the first architecture in which features are fused using multiset canonical correlation analysis(MCCA).In the proposed method,original video frames are processed for all 11 selected angles of the CASIA B dataset and utilized to train two fine-tuned deep learning models such as Squeezenet and Efficientnet.Deep transfer learning was used to train both fine-tuned models on selected angles,yielding two new targeted models that were later used for feature engineering.Features are extracted from the deep layer of both fine-tuned models and fused into one vector using MCCA.An improved manta ray foraging optimization algorithm is also proposed to select the best features from the fused feature matrix and classified using a narrow neural network classifier.The experimental process was conducted on all 11 angles of the large multi-view gait dataset(CASIA B)dataset and obtained improved accuracy than the state-of-the-art techniques.Moreover,a detailed confidence interval based analysis also shows the effectiveness of the proposed architecture for HGR.
基金supported by“Human Resources Program in Energy Technology”of the Korea Institute of Energy Technology Evaluation and Planning(KETEP)granted financial resources from the Ministry of Trade,Industry&Energy,Republic of Korea.(No.20204010600090).
文摘Gait recognition is an active research area that uses a walking theme to identify the subject correctly.Human Gait Recognition(HGR)is performed without any cooperation from the individual.However,in practice,it remains a challenging task under diverse walking sequences due to the covariant factors such as normal walking and walking with wearing a coat.Researchers,over the years,have worked on successfully identifying subjects using different techniques,but there is still room for improvement in accuracy due to these covariant factors.This paper proposes an automated model-free framework for human gait recognition in this article.There are a few critical steps in the proposed method.Firstly,optical flow-based motion region esti-mation and dynamic coordinates-based cropping are performed.The second step involves training a fine-tuned pre-trained MobileNetV2 model on both original and optical flow cropped frames;the training has been conducted using static hyperparameters.The third step proposed a fusion technique known as normal distribution serially fusion.In the fourth step,a better optimization algorithm is applied to select the best features,which are then classified using a Bi-Layered neural network.Three publicly available datasets,CASIA A,CASIA B,and CASIA C,were used in the experimental process and obtained average accuracies of 99.6%,91.6%,and 95.02%,respectively.The proposed framework has achieved improved accuracy compared to the other methods.
基金Supported by Grant for State Key Program for Basic Research of China (973) (No. 2007CB311006)
文摘A multiple classifier fusion approach based on evidence combination is proposed in this paper. The individual classifier is designed based on a refined Nearest Feature Line (NFL),which is called Center-based Nearest Neighbor (CNN). CNN retains the advantages of NFL while it has relatively low computational cost. Different member classifiers are trained based on different feature spaces respectively. Corresponding mass functions can be generated based on proposed mass function determination approach. The classification decision can be made based on the combined evidence and better classification performance can be expected. Experimental results on face recognition provided verify that the new approach is rational and effective.
文摘With the aim of extracting the features of face images in face recognition, a new method of face recognition by fusing global features and local features is presented. The global features are extracted using principal component analysis (PCA). Active appearance model (AAM) locates 58 facial fiducial points, from which 17 points are characterized as local features using the Gabor wavelet transform (GWT). Normalized global match degree (local match degree) can be obtained by global features (local features) of the probe image and each gallery image. After the fusion of normalized global match degree and normalized local match degree, the recognition result is the class that included the gallery image corresponding to the largest fused match degree. The method is evaluated by the recognition rates over two face image databases (AR and SJTU-IPPR). The experimental results show that the method outperforms PCA and elastic bunch graph matching (EBGM). Moreover, it is effective and robust to expression, illumination and pose variation in some degree.
基金This study was supported by the grants of the Korea Health Technology R&D Project through the Korea Health Industry Development Institute(KHIDI),funded by the Ministry of Health&Welfare(HI18C1216)and the Soonchunhyang University Research Fund.
文摘Background—Human Gait Recognition(HGR)is an approach based on biometric and is being widely used for surveillance.HGR is adopted by researchers for the past several decades.Several factors are there that affect the system performance such as the walking variation due to clothes,a person carrying some luggage,variations in the view angle.Proposed—In this work,a new method is introduced to overcome different problems of HGR.A hybrid method is proposed or efficient HGR using deep learning and selection of best features.Four major steps are involved in this work-preprocessing of the video frames,manipulation of the pre-trained CNN model VGG-16 for the computation of the features,removing redundant features extracted from the CNN model,and classification.In the reduction of irrelevant features Principal Score and Kurtosis based approach is proposed named PSbK.After that,the features of PSbK are fused in one materix.Finally,this fused vector is fed to the One against All Multi Support Vector Machine(OAMSVM)classifier for the final results.Results—The system is evaluated by utilizing the CASIA B database and six angles 00◦,18◦,36◦,54◦,72◦,and 90◦are used and attained the accuracy of 95.80%,96.0%,95.90%,96.20%,95.60%,and 95.50%,respectively.Conclusion—The comparison with recent methods show the proposed method work better.
文摘Human gait is one of the unobtrusive behavioral biometrics that has been extensively studied for various commercial and government applications.Biometric security,medical rehabilitation,virtual reality,and autonomous driving cars are some of the fields of study that rely on accurate gait recognition.While majority of studies have been focused on achieving very high recognition performance on a specific dataset,different issues arise in the real-world applications of this technology.This research is one of the first to evaluate the effects of changing walking speeds and directions on gait recognition rates under various walking conditions.Dataset was collected using the KINECT sensor.To draw an overall conclusion about the effects of walking speed and di-rection to the sensor,we define distance features and angle features.Furthermore,we propose two feature fusion methods for person recognition.Results of the study provide insights into how walking speeds and walking di-rections to the KINECT sensor influence the accuracy of gait recognition.
基金Supported by National Natural Science Foundation of China (60533030) and Beijing Natural Science Foundation (4061001)
文摘因为它的 insensitivity, 3D 脸识别吸引越来越多的注意到照明和姿势的变化。有在这个话题要解决的许多关键问题,例如 3D 脸表示和有效多特征熔化。在这份报纸,一个新奇 3D 脸识别算法被建议,它的性能在 BJUT-3D 脸数据库上被表明。这个算法选择脸表面性质和相对关系矩阵的原则部件为脸表示特征。为每个特征的类似公制被定义。特征熔化策略被建议。它基于菲希尔是线性加权的策略线性判别式分析。最后,介绍算法在 BJUT-3D 脸数据库上被测试。算法和熔化策略的表演是令人满意的,这被结束。
基金National Natural Science Foundation of China(No.61004088)Key Basic Research Foundation of Shanghai Municipal Science and Technology Commission,China(No.09JC1408000)
文摘A novel face recognition method based on fusion of spatial and frequency features was presented to improve recognition accuracy. Dual-Tree Complex Wavelet Transform derives desirable facial features to cope with the variation due to the illumination and facial expression changes. By adopting spectral regression and complex fusion technologies respectively, two improved neighborhood preserving discriminant analysis feature extraction methods were proposed to capture the face manifold structures and locality discriminatory information. Extensive experiments have been made to compare the recognition performance of the proposed method with some popular dimensionality reduction methods on ORL and Yale face databases. The results verify the effectiveness of the proposed method.
基金the National Natural Science Foundation of China(No.61004088)the Key Basic Research Foundation of Shanghai Municipal Science and Technology Commission(No.09JC1408000)
文摘Improved local tangent space alignment (ILTSA) is a recent nonlinear dimensionality reduction method which can efficiently recover the geometrical structure of sparse or non-uniformly distributed data manifold. In this paper, based on combination of modified maximum margin criterion and ILTSA, a novel feature extraction method named orthogonal discriminant improved local tangent space alignment (ODILTSA) is proposed. ODILTSA can preserve local geometry structure and maximize the margin between different classes simultaneously. Based on ODILTSA, a novel face recognition method which combines augmented complex wavelet features and original image features is developed. Experimental results on Yale, AR and PIE face databases demonstrate the effectiveness of ODILTSA and the feature fusion method.
文摘Biometric recognition refers to the process of recognizing a person’s identity using physiological or behavioral modalities,such as face,voice,fingerprint,gait,etc.Such biometric modalities are mostly used in recognition tasks separately as in unimodal systems,or jointly with two or more as in multimodal systems.However,multimodal systems can usually enhance the recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels.Despite this enhancement,in real-life applications some factors degrade multimodal systems’performance,such as occlusion,face poses,and noise in voice data.In this paper,we propose two algorithms that effectively apply dynamic fusion at feature level based on the data quality of multimodal biometrics.The proposed algorithms attempt to minimize the negative influence of confusing and low-quality features by either exclusion or weight reduction to achieve better recognition performance.The proposed dynamic fusion was achieved using face and voice biometrics,where face features were extracted using principal component analysis(PCA),and Gabor filters separately,whilst voice features were extracted using Mel-Frequency Cepstral Coefficients(MFCCs).Here,the facial data quality assessment of face images is mainly based on the existence of occlusion,whereas the assessment of voice data quality is substantially based on the calculation of signal to noise ratio(SNR)as per the existence of noise.To evaluate the performance of the proposed algorithms,several experiments were conducted using two combinations of three different databases,AR database,and the extended Yale Face Database B for face images,in addition to VOiCES database for voice data.The obtained results show that both proposed dynamic fusion algorithms attain improved performance and offer more advantages in identification and verification over not only the standard unimodal algorithms but also the multimodal algorithms using standard fusion methods.
基金The authors would like to acknowledge the use of the GavabDB face database in this paper due to Moreno and Sanchez.This work was supported in part by the National Natural Science Foundation of China(Grant No.60872145)the National High Technology Research and Development Program of China(No.2009AA01Z315)the Cultivation Fund of the Key Scientific and Technical Innovation Project,Ministry of Education of China(No.708085).
文摘A 3D face recognition approach which uses principal axes registration(PAR)and three face representation features from the re-sampling depth image:Eigenfaces,Fisherfaces and Zernike moments is presented.The approach addresses the issue of 3D face registration instantly achieved by PAR.Because each facial feature has its own advantages,limitations and scope of use,different features will complement each other.Thus the fusing features can learn more expressive characterizations than a single feature.The support vector machine(SVM)is applied for classification.In this method,based on the complementarity between different features,weighted decision-level fusion makes the recognition system have certain fault tolerance.Experimental results show that the proposed approach achieves superior performance with the rank-1 recognition rate of 98.36%for GavabDB database.
文摘With the continuous progress of The Times and the development of technology,the rise of network social media has also brought the“explosive”growth of image data.As one of the main ways of People’s Daily communication,image is widely used as a carrier of communication because of its rich content,intuitive and other advantages.Image recognition based on convolution neural network is the first application in the field of image recognition.A series of algorithm operations such as image eigenvalue extraction,recognition and convolution are used to identify and analyze different images.The rapid development of artificial intelligence makes machine learning more and more important in its research field.Use algorithms to learn each piece of data and predict the outcome.This has become an important key to open the door of artificial intelligence.In machine vision,image recognition is the foundation,but how to associate the low-level information in the image with the high-level image semantics becomes the key problem of image recognition.Predecessors have provided many model algorithms,which have laid a solid foundation for the development of artificial intelligence and image recognition.The multi-level information fusion model based on the VGG16 model is an improvement on the fully connected neural network.Different from full connection network,convolutional neural network does not use full connection method in each layer of neurons of neural network,but USES some nodes for connection.Although this method reduces the computation time,due to the fact that the convolutional neural network model will lose some useful feature information in the process of propagation and calculation,this paper improves the model to be a multi-level information fusion of the convolution calculation method,and further recovers the discarded feature information,so as to improve the recognition rate of the image.VGG divides the network into five groups(mimicking the five layers of AlexNet),yet it USES 3*3 filters and combines them as a convolution sequence.Network deeper DCNN,channel number is bigger.The recognition rate of the model was verified by 0RL Face Database,BioID Face Database and CASIA Face Image Database.
文摘Background Several face detection and recogni tion methods have been proposed in the past decades that have excellent performance.The conventional face recognition pipeline comprises the following:(1)face detection,(2)face alignment,(3)feature extraction,and(4)similarity,which are independent of each other.The separate facial analysis stages lead to redundant model calculations,and are difficult for use in end-to-end training.Methods In this paper,we propose a novel end-to-end trainable convolutional network framework for face detection and recognition,in which a geometric transformation matrix is directly learned to align the faces rather than predicting the facial landmarks.In the training stage,our single CNN model is supervised only by face bounding boxes and personal identities,which are publicly available from WIDER FACE and CASIA-WebFace datasets.Our model is tested on Face Detection Dataset and Benchmark(FDDB)and Labeled Face in the Wild(LFW)datasets.Results The results show 89.24%recall for face detection tasks and 98.63%accura cy for face recognition tasks.
基金supported by a grant from the Korea Health Technology R&D Project through the KoreaHealth Industry Development Institute (KHIDI)funded by the Ministry of Health&Welfare,Republic of Korea (grant number:HI21C1831)the Soonchunhyang University Research Fund.
文摘Gait is a biological typical that defines the method by that people walk.Walking is the most significant performance which keeps our day-to-day life and physical condition.Surface electromyography(sEMG)is a weak bioelectric signal that portrays the functional state between the human muscles and nervous system to any extent.Gait classifiers dependent upon sEMG signals are extremely utilized in analysing muscle diseases and as a guide path for recovery treatment.Several approaches are established in the works for gait recognition utilizing conventional and deep learning(DL)approaches.This study designs an Enhanced Artificial Algae Algorithm with Hybrid Deep Learning based Human Gait Classification(EAAA-HDLGR)technique on sEMG signals.The EAAA-HDLGR technique extracts the time domain(TD)and frequency domain(FD)features from the sEMG signals and is fused.In addition,the EAAA-HDLGR technique exploits the hybrid deep learning(HDL)model for gait recognition.At last,an EAAA-based hyperparameter optimizer is applied for the HDL model,which is mainly derived from the quasi-oppositional based learning(QOBL)concept,showing the novelty of the work.A brief classifier outcome of the EAAA-HDLGR technique is examined under diverse aspects,and the results indicate improving the EAAA-HDLGR technique.The results imply that the EAAA-HDLGR technique accomplishes improved results with the inclusion of EAAA on gait recognition.
基金supported by the Natural Science Foundation of Shandong Province ( ZR2014FM039)the National Natural Science Foundation of China ( 61771293)
文摘Face recognition has been a hot-topic in the field of pattern recognition where feature extraction and classification play an important role. However, convolutional neural network (CNN) and local binary pattern (LBP) can only extract single features of facial images, and fail to select the optimal classifier. To deal with the problem of classifier parameter optimization, two structures based on the support vector machine (SVM) optimized by artificial bee colony (ABC) algorithm are proposed to classify CNN and LBP features separately. In order to solve the single feature problem, a fusion system based on CNN and LBP features is proposed. The facial features can be better represented by extracting and fusing the global and local information of face images. We achieve the goal by fusing the outputs of feature classifiers. Explicit experimental results on Olivetti Research Laboratory (ORL) and face recognition technology (FERET) databases show the superiority of the proposed approaches.
文摘To investigate the robustness of face recognition algorithms under the complicated variations of illumination, facial expression and posture, the advantages and disadvantages of seven typical algorithms on extracting global and local features are studied through the experiments respectively on the Olivetti Research Laboratory database and the other three databases (the three subsets of illumination, expression and posture that are constructed by selecting images from several existing face databases). By taking the above experimental results into consideration, two schemes of face recognition which are based on the decision fusion of the twodimensional linear discriminant analysis (2DLDA) and local binary pattern (LBP) are proposed in this paper to heighten the recognition rates. In addition, partitioning a face nonuniformly for its LBP histograms is conducted to improve the performance. Our experimental results have shown the complementarities of the two kinds of features, the 2DLDA and LBP, and have verified the effectiveness of the proposed fusion algorithms.
基金supported by the Smart Manufacturing New Model Application Project Ministry of Industry and Information Technology(No.ZH-XZ-18004)Future Research Projects Funds for Science and Technology Department of Jiangsu Province(No.BY2013015-23)+2 种基金the Fundamental Research Funds for the Ministry of Education(No.JUSRP211A 41)the Fundamental Research Funds for the Central Universities(No.JUSRP42003)the 111 Project(No.B2018)。
文摘Identity-recognition technologies require assistive equipment,whereas they are poor in recognition accuracy and expensive.To overcome this deficiency,this paper proposes several gait feature identification algorithms.First,in combination with the collected gait information of individuals from triaxial accelerometers on smartphones,the collected information is preprocessed,and multimodal fusion is used with the existing standard datasets to yield a multimodal synthetic dataset;then,with the multimodal characteristics of the collected biological gait information,a Convolutional Neural Network based Gait Recognition(CNN-GR)model and the related scheme for the multimodal features are developed;at last,regarding the proposed CNN-GR model and scheme,a unimodal gait feature identity single-gait feature identification algorithm and a multimodal gait feature fusion identity multimodal gait information algorithm are proposed.Experimental results show that the proposed algorithms perform well in recognition accuracy,the confusion matrix,and the kappa statistic,and they have better recognition scores and robustness than the compared algorithms;thus,the proposed algorithm has prominent promise in practice.