With the rapid spread of the coronavirus epidemic all over the world, educational and other institutions are heading towards digitization. In the era of digitization, identifying educational e-platform users using ear and iris based multimodal biometric systems constitutes an urgent and interesting research topic for preserving enterprise security, particularly when face masks are worn as a precaution against the new coronavirus. This study proposes a multimodal system based on ear and iris biometrics at the feature fusion level to identify students in electronic examinations (E-exams) during the COVID-19 pandemic. The proposed system comprises four steps. The first step is image preprocessing, which includes enhancing, segmenting, and extracting the regions of interest. The second step is feature extraction, where the Haralick texture and shape methods are used to extract the features of ear images, whereas Tamura texture and color histogram methods are used to extract the features of iris images. The third step is feature fusion, where the extracted features of the ear and iris images are combined into one sequential fused vector. The fourth step is matching, which is executed using the City Block Distance (CTB) for student identification. The findings of the study indicate that the system's recognition accuracy is 97%, with a 2% False Acceptance Rate (FAR), a 4% False Rejection Rate (FRR), a 94% Correct Recognition Rate (CRR), and a 96% Genuine Acceptance Rate (GAR). In addition, the proposed recognition system achieved higher accuracy than other related systems.
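As a rough illustration of the fusion and matching steps described in this abstract, the sketch below concatenates ear and iris feature vectors into one fused vector and identifies the closest enrolled template with the City Block (Manhattan, L1) distance. Function names, vector sizes, and the toy gallery are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: feature-level fusion by concatenation + City Block distance matching.
import numpy as np

def fuse_features(ear_features: np.ndarray, iris_features: np.ndarray) -> np.ndarray:
    """Concatenate ear and iris feature vectors into a single fused vector."""
    return np.concatenate([ear_features.ravel(), iris_features.ravel()])

def city_block_distance(a: np.ndarray, b: np.ndarray) -> float:
    """City Block (L1) distance: sum of absolute element-wise differences."""
    return float(np.sum(np.abs(a - b)))

def identify(probe: np.ndarray, gallery: dict) -> str:
    """Return the identity whose enrolled fused template is closest to the probe."""
    return min(gallery, key=lambda identity: city_block_distance(probe, gallery[identity]))

# Toy usage: random vectors stand in for Haralick/shape (ear) and
# Tamura/colour-histogram (iris) descriptors.
rng = np.random.default_rng(0)
gallery = {f"student_{i}": rng.random(64) for i in range(3)}
probe = gallery["student_1"] + rng.normal(0, 0.01, 64)   # slightly perturbed template
print(identify(probe, gallery))                          # -> "student_1"
```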
As multimedia data sharing increases, data security in mobile devices and its mechanisms become critical. Biometrics combines the physiological and behavioral qualities of an individual to validate their identity in real time. Physiological attributes include fingerprint, face, iris, palm print, finger knuckle print, and Deoxyribonucleic Acid (DNA), while behavioral qualities include gait, voice, signature, and keystroke. The main goal of this paper is to design a robust framework for automatic face recognition. Scale Invariant Feature Transform (SIFT) and Speeded-up Robust Features (SURF) are employed for face recognition. We also propose a modified Gabor Wavelet Transform for SIFT/SURF (GWT-SIFT/GWT-SURF) to increase the recognition accuracy of human faces. The proposed scheme is composed of three steps. First, the entropy of the image is removed using the Discrete Wavelet Transform (DWT). Second, the computational complexity of SIFT/SURF is reduced. Third, authentication accuracy is increased by the proposed GWT-SIFT/GWT-SURF algorithm. A comparative analysis of the proposed scheme is performed on the real-time Olivetti Research Laboratory (ORL) and Poznan University of Technology (PUT) databases. Compared to the traditional SIFT/SURF methods, GWT-SIFT achieves the better accuracy of 99.32%, while GWT-SURF is the faster approach, with a run time of 3.4 seconds for 100 images compared to 4.9 seconds for GWT-SIFT.
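The sketch below shows only a baseline SIFT face-matching step with OpenCV, not the paper's GWT-modified variant or its DWT preprocessing: keypoints are detected, descriptors matched, and good matches counted via Lowe's ratio test. It assumes OpenCV >= 4.4 (where SIFT is in the main module) and placeholder image paths.

```python
# Baseline SIFT keypoint matching between two face images (illustrative only).
import cv2

def sift_match_score(path_a: str, path_b: str, ratio: float = 0.75) -> int:
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test: keep matches clearly better than the second-best candidate.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

# Placeholder file names; a higher count of good matches suggests the same subject.
print(sift_match_score("subject1_a.jpg", "subject1_b.jpg"))
```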
A multimodal biometric system is applied to recognize individuals for authentication using neural networks. In this paper, a multimodal biometric algorithm is designed by integrating iris, finger vein, palm print, and face biometric traits. A normalized score-level fusion approach is applied, optimized, and encoded for the matching decision. It is a multilevel wavelet, phase-based fusion algorithm. This robust multimodal biometric algorithm increases the security level and accuracy, reduces memory size and equal error rate, and eliminates the vulnerabilities of unimodal biometric algorithms.
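A minimal sketch of generic normalized score-level fusion is given below: per-modality match scores are min-max normalized and combined with a weighted sum. The weights, score values, and modality names are illustrative assumptions; the paper's wavelet/phase-based optimization is not reproduced here.

```python
# Generic normalized score-level fusion: min-max normalization + weighted sum.
import numpy as np

def min_max_normalize(scores: np.ndarray) -> np.ndarray:
    """Map raw matcher scores to [0, 1] so different modalities are comparable."""
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(modality_scores: list, weights: list) -> np.ndarray:
    """Weighted-sum fusion of normalized per-modality score vectors."""
    normalized = [min_max_normalize(np.asarray(s, dtype=float)) for s in modality_scores]
    return sum(w * s for w, s in zip(weights, normalized))

# Scores of one probe against four enrolled identities, per modality (toy values).
iris_scores        = [0.91, 0.40, 0.35, 0.20]
finger_vein_scores = [0.85, 0.55, 0.30, 0.25]
palm_print_scores  = [0.80, 0.45, 0.50, 0.10]
face_scores        = [0.88, 0.35, 0.40, 0.30]

fused = fuse_scores([iris_scores, finger_vein_scores, palm_print_scores, face_scores],
                    weights=[0.25, 0.25, 0.25, 0.25])
print("Accepted identity index:", int(np.argmax(fused)))
```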
Biometric recognition refers to the process of recognizing a person's identity using physiological or behavioral modalities, such as face, voice, fingerprint, gait, etc. Such biometric modalities are mostly used in recognition tasks separately, as in unimodal systems, or jointly with two or more, as in multimodal systems. However, multimodal systems can usually enhance the recognition performance over unimodal systems by integrating the biometric data of multiple modalities at different fusion levels. Despite this enhancement, in real-life applications some factors degrade multimodal systems' performance, such as occlusion, face poses, and noise in voice data. In this paper, we propose two algorithms that effectively apply dynamic fusion at the feature level based on the data quality of multimodal biometrics. The proposed algorithms attempt to minimize the negative influence of confusing and low-quality features by either exclusion or weight reduction to achieve better recognition performance. The proposed dynamic fusion was achieved using face and voice biometrics, where face features were extracted using principal component analysis (PCA) and Gabor filters separately, whilst voice features were extracted using Mel-Frequency Cepstral Coefficients (MFCCs). Here, the quality assessment of face images is mainly based on the existence of occlusion, whereas the assessment of voice data quality is substantially based on the calculation of the signal-to-noise ratio (SNR) in the presence of noise. To evaluate the performance of the proposed algorithms, several experiments were conducted using two combinations of three different databases: the AR database and the extended Yale Face Database B for face images, in addition to the VOiCES database for voice data. The obtained results show that both proposed dynamic fusion algorithms attain improved performance and offer more advantages in identification and verification over not only the standard unimodal algorithms but also multimodal algorithms using standard fusion methods.
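The sketch below illustrates the general idea of quality-driven (dynamic) feature-level fusion: the voice feature vector is down-weighted when its estimated SNR is low, so a noisy modality contributes less to the fused representation. The SNR estimate, the weighting curve, and the feature dimensions are simplified assumptions and not the paper's exact procedure.

```python
# Sketch of SNR-driven dynamic feature-level fusion of face and voice features.
import numpy as np

def estimate_snr_db(signal: np.ndarray, noise: np.ndarray) -> float:
    """SNR in dB computed from separate signal and noise segments."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def quality_weight(snr_db: float, low: float = 5.0, high: float = 30.0) -> float:
    """Map SNR to a [0, 1] weight: 0 at/below `low` dB, 1 at/above `high` dB."""
    return float(np.clip((snr_db - low) / (high - low), 0.0, 1.0))

def dynamic_fuse(face_features: np.ndarray, voice_features: np.ndarray,
                 voice_snr_db: float) -> np.ndarray:
    """Concatenate face features with quality-weighted voice features."""
    return np.concatenate([face_features, quality_weight(voice_snr_db) * voice_features])

rng = np.random.default_rng(1)
face_feat  = rng.random(100)                 # stand-in for a PCA/Gabor face descriptor
voice_feat = rng.random(39)                  # stand-in for stacked MFCC statistics
clean_segment = rng.normal(0, 1.0, 16000)    # stand-in for a speech frame
noise_segment = rng.normal(0, 0.25, 16000)   # stand-in for a noise-only frame

snr = estimate_snr_db(clean_segment, noise_segment)
fused = dynamic_fuse(face_feat, voice_feat, voice_snr_db=snr)
print(round(snr, 1), fused.shape)            # roughly 12 dB, shape (139,)
```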
Multimodal biometric fusion is gaining more attention among researchers in recent days. As a multimodal biometric system consolidates the information from multiple biometric sources, the effective fusion of information obtained at the score level is a challenging task. In this paper, we propose a framework for optimal fusion of match scores based on a Gaussian Mixture Model (GMM) and Monte Carlo sampling based hypothesis testing. The proposed fusion approach has the ability to handle: 1) small sets of match scores, as are commonly encountered in biometric fusion, and 2) arbitrary distributions of match scores, which are more pronounced when discrete scores and multimodal features are present. The proposed fusion scheme is compared with well-established schemes such as the Likelihood Ratio (LR) method and the weighted SUM rule. Extensive experiments carried out on five different multimodal biometric databases indicate that the proposed fusion scheme achieves higher performance compared with other contemporary state-of-the-art fusion techniques.
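A minimal sketch of the GMM side of such a framework is shown below: separate Gaussian mixtures are fitted to genuine and impostor score vectors and fused via the log-likelihood ratio. The Monte Carlo hypothesis-testing step of the paper is omitted, and the score data are synthetic placeholders.

```python
# GMM-based likelihood-ratio fusion of two-modality match scores (toy data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
# Synthetic two-modality scores: genuine pairs score high, impostor pairs low.
genuine  = rng.normal(loc=[0.8, 0.7], scale=0.08, size=(300, 2))
impostor = rng.normal(loc=[0.3, 0.4], scale=0.10, size=(300, 2))

gmm_genuine  = GaussianMixture(n_components=2, random_state=0).fit(genuine)
gmm_impostor = GaussianMixture(n_components=2, random_state=0).fit(impostor)

def fused_llr(scores: np.ndarray) -> np.ndarray:
    """Log-likelihood ratio of genuine vs impostor models for each score pair."""
    scores = np.atleast_2d(scores)
    return gmm_genuine.score_samples(scores) - gmm_impostor.score_samples(scores)

probe = np.array([[0.78, 0.69], [0.35, 0.42]])
print(fused_llr(probe))   # positive -> accept as genuine, negative -> reject
```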
Information fusion is a key step in multimodal biometric systems. Feature-level fusion is more effective than score-level and decision-level methods owing to the fact that the original feature set contains richer information about the biometric data. In this paper, we present a multiset generalized canonical discriminant projection (MGCDP) method for feature-level multimodal biometric information fusion, which maximizes the correlation of intra-class features while minimizing the correlation of between-class features. In addition, serial MGCDP (S-MGCDP) and parallel MGCDP (P-MGCDP) strategies are also proposed, which can fuse more than two kinds of biometric information so as to achieve a better identification effect. Experiments performed on various biometric databases show that the MGCDP method outperforms other state-of-the-art feature-level information fusion approaches.
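As a simplified stand-in for this family of correlation-based projections (not MGCDP itself), the sketch below uses scikit-learn's canonical correlation analysis to project two modality feature sets into a shared correlated subspace, then combines the projections in a serial (concatenation) and a parallel (element-wise) manner. Data, dimensions, and the component count are illustrative assumptions.

```python
# CCA-based feature-level fusion sketch (illustrative stand-in, not MGCDP).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_samples = 200
modality_a = rng.random((n_samples, 60))   # e.g. face features (placeholder)
modality_b = rng.random((n_samples, 40))   # e.g. palm-print features (placeholder)

# Project both modalities into a 10-dimensional maximally correlated subspace.
cca = CCA(n_components=10)
proj_a, proj_b = cca.fit_transform(modality_a, modality_b)

# Serial-style fusion: concatenate the projected feature sets.
fused_serial = np.hstack([proj_a, proj_b])   # shape (200, 20)
# Parallel-style fusion: combine the projections element-wise.
fused_parallel = proj_a + proj_b             # shape (200, 10)
print(fused_serial.shape, fused_parallel.shape)
```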
A Tsinghua-developed biometric recognition system, designed to bolster traditional public security identification measures, was highly commended in an appraisal by the Ministry of Education on June 22, 2005.