Funding: The authors are grateful to the Taif University Researchers Supporting Project Number (TURSP-2020/36), Taif University, Taif, Saudi Arabia.
Abstract: Automatic Speaker Identification (ASI) is the task of attributing an audio stream to one of many enrolled speakers from their utterances. Several common factors, such as differences in recording frameworks, overlapping sound events, and the presence of multiple sound sources during recording, make the ASI task considerably more complicated. This research proposes a deep learning model that improves the accuracy of the ASI system and reduces model training time under limited computational resources. Specifically, the performance of the transformer model is investigated. Seven audio features, chromagram, Mel-spectrogram, tonnetz, Mel-Frequency Cepstral Coefficients (MFCCs), delta MFCCs, delta-delta MFCCs, and spectral contrast, are extracted from the ELSDSR, CSTR VCTK, and Ar-DAD datasets. The evaluation of various experiments demonstrates that the best performance is achieved by the proposed transformer model using all seven audio features on all datasets. For ELSDSR, CSTR VCTK, and Ar-DAD, the highest attained accuracies are 0.99, 0.97, and 0.99, respectively. The experimental results reveal that the proposed technique achieves the best performance for ASI problems.
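The abstract does not give extraction parameters, but a minimal sketch of how such a seven-feature front end might be assembled with librosa (the library choice, frame settings, and stacking strategy are assumptions, not details from the paper) could look like this:

```python
import numpy as np
import librosa

def extract_features(path, sr=16000, n_mfcc=13):
    """Assumed front end: compute the seven feature types named in the
    abstract and stack their frame-wise values into one matrix."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    feats = [
        librosa.feature.chroma_stft(y=y, sr=sr),                          # chromagram
        librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr)),  # Mel-spectrogram
        librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr),    # tonnetz
        mfcc,                                                             # MFCCs
        librosa.feature.delta(mfcc),                                      # delta MFCCs
        librosa.feature.delta(mfcc, order=2),                             # delta-delta MFCCs
        librosa.feature.spectral_contrast(y=y, sr=sr),                    # spectral contrast
    ]
    n = min(f.shape[1] for f in feats)           # align frame counts defensively
    return np.vstack([f[:, :n] for f in feats])

# Example: one utterance -> a (n_features, n_frames) matrix fed to the model.
# feats = extract_features("speaker01_utt01.wav")
```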
Abstract: Most current security and authentication systems are based on personal biometrics. Security is a major issue in biometric systems because the original biometric templates are stored in databases; if these databases are attacked, the biometrics are lost forever. Protecting privacy is the central goal of cancelable biometrics. To protect privacy, cancelable biometrics should therefore be non-invertible, so that no information can be recovered from the cancelable biometric templates stored in personal identification/verification databases. One methodology for achieving non-invertibility is the use of non-invertible transforms. This work proposes an encryption process for cancelable speaker identification using a hybrid encryption system that combines the 3D Jigsaw transform and the Fractional Fourier Transform (FrFT). The proposed scheme is compared with the optical Double Random Phase Encoding (DRPE) encryption process. Simulation results show that the proposed algorithm is secure, reliable, and feasible, with good encryption and cancelability performance, and that it provides the security and robustness levels recommended for efficient cancelable biometric systems.
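The paper's hybrid 3D Jigsaw + FrFT system is not reproduced here (there is no FrFT in standard NumPy/SciPy); the sketch below only illustrates the DRPE baseline it is compared against, applied to a stand-in voice-feature "image", to show why such a template is revocable:

```python
import numpy as np

def drpe_encrypt(template, rng):
    """Double Random Phase Encoding (the optical baseline the paper compares
    against): multiply by a random phase mask in the input plane and a second
    one in the Fourier plane."""
    p1 = np.exp(2j * np.pi * rng.random(template.shape))  # input-plane mask
    p2 = np.exp(2j * np.pi * rng.random(template.shape))  # Fourier-plane mask
    return np.fft.ifft2(np.fft.fft2(template * p1) * p2)

# A cancelable template: the stored complex image cannot be inverted without
# the two phase masks, and issuing new masks revokes the old template.
rng = np.random.default_rng(seed=7)              # seed plays the role of a user key
spectrogram = np.abs(np.random.randn(128, 128))  # stand-in for a real voice map
protected = drpe_encrypt(spectrogram, rng)
```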
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60903186) and the Shanghai Leading Academic Discipline Project (Grant No. J50104).
Abstract: This paper proposes a new phase feature derived from formant instantaneous characteristics for speech recognition (SR) and speaker identification (SI) systems. Using the Hilbert transform (HT), the formant characteristics can be represented by instantaneous frequency (IF) and instantaneous bandwidth, collectively termed formant instantaneous characteristics (FIC). To explore the importance of FIC in both SR and SI, this paper derives different features from FIC for SR and SI systems. When these new features are combined with conventional parameters, a higher identification rate is achieved than when Mel-frequency cepstral coefficient (MFCC) parameters are used alone. The experimental results show that the new features are effective characteristic parameters and can serve as a complement to conventional parameters for SR and SI.
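The paper's exact FIC formulas are not given in the abstract, but the instantaneous-frequency half of the idea follows directly from the analytic signal; a minimal sketch (the splitting of speech into formant bands is omitted and assumed to happen upstream):

```python
import numpy as np
from scipy.signal import hilbert

def instantaneous_frequency(band, fs):
    """Instantaneous frequency of a band-limited signal (e.g. one formant band)
    from the phase derivative of its analytic signal."""
    analytic = hilbert(band)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2.0 * np.pi)   # Hz, one sample shorter than input

# Example: a 1 kHz tone should yield an IF close to 1000 Hz everywhere.
fs = 16000
t = np.arange(0, 0.1, 1.0 / fs)
tone = np.cos(2 * np.pi * 1000 * t)
print(instantaneous_frequency(tone, fs).mean())   # ~1000
```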
Abstract: This paper discusses the application of fractal dimensions to speech processing. Generalized dimensions of arbitrary order and associated fractal parameters are used for speaker identification. A characteristic vector based on these parameters is formed, and a recognition criterion is defined in order to identify individual speakers. Experimental results show the usefulness of fractal dimensions in characterizing speaker identity.
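The abstract does not specify how the dimensions are estimated; a crude box-counting sketch of the order-q (Rényi) generalized dimension for a 1-D sample set (the delay embedding and the paper's actual estimator are not reproduced) is:

```python
import numpy as np

def generalized_dimension(x, q, epsilons):
    """Crude box-counting estimate of the order-q generalized dimension of a
    1-D sample set: the slope of log(sum p_i^q)/(q-1) versus log(eps)."""
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)   # map samples into [0, 1]
    ys = []
    for eps in epsilons:
        counts = np.bincount((x / eps).astype(int))    # points per box of size eps
        p = counts[counts > 0] / len(x)                # box occupation probabilities
        ys.append(np.log(np.sum(p ** q)) / (q - 1.0))
    slope, _ = np.polyfit(np.log(epsilons), ys, 1)     # D_q is the fitted slope
    return slope

# Example on a speech-like amplitude sequence (noise used here as a stand-in):
signal = np.random.randn(16000)
print(generalized_dimension(signal, q=2.0, epsilons=[0.02, 0.01, 0.005, 0.0025]))
```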
Abstract: A novel text-independent speaker identification system is proposed. In the proposed system, the 12th-order perceptual linear predictive (PLP) cepstrum and its delta coefficients over a span of five frames are extracted from the segmented speech using pitch-synchronous analysis. The Fisher ratios of the original coefficients are then calculated, and the coefficients with the largest Fisher ratios are selected to form 13-dimensional speaker feature vectors. A Gaussian mixture model is used to model each speaker. The experimental results show that the identification accuracy of the proposed system is clearly better than that of systems based on other conventional coefficients such as linear predictive cepstral coefficients and Mel-frequency cepstral coefficients.
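The abstract leaves out the selection and scoring details; a minimal sketch of the Fisher-ratio selection plus per-speaker GMM scoring it describes (the input feature matrices, the 16-mixture size, and diagonal covariances are assumptions) might look like:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_ratios(features, labels):
    """Per-dimension Fisher ratio: variance of the per-speaker means divided
    by the mean within-speaker variance."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    within = np.stack([features[labels == c].var(axis=0) for c in classes]).mean(axis=0)
    return means.var(axis=0) / (within + 1e-12)

def train_speaker_models(features, labels, keep=13, n_mix=16):
    """Keep the `keep` dimensions with the largest Fisher ratios and fit one
    GMM per speaker on the reduced vectors."""
    idx = np.argsort(fisher_ratios(features, labels))[-keep:]
    models = {c: GaussianMixture(n_components=n_mix, covariance_type="diag")
                 .fit(features[labels == c][:, idx])
              for c in np.unique(labels)}
    return idx, models

def identify(frame_feats, idx, models):
    """Pick the speaker whose GMM gives the highest average log-likelihood."""
    scores = {c: m.score(frame_feats[:, idx]) for c, m in models.items()}
    return max(scores, key=scores.get)
```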
Abstract: This letter proposes an effective and robust speech feature extraction method based on statistical analysis of Pitch Frequency Distributions (PFD) for speaker identification. Compared with the conventional cepstrum, PFD is relatively insensitive to Additive White Gaussian Noise (AWGN), but on its own it does not perform well for speaker identification, even in clean environments. To compensate for this shortcoming, PFD and the conventional cepstrum are combined to make the final decision, instead of relying on a single kind of feature. Experimental results indicate that the hybrid approach gives a substantial improvement for text-independent speaker identification in noisy environments corrupted by AWGN.
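The letter does not define the PFD statistic precisely; one plausible sketch, using librosa's pyin pitch tracker as an assumed front end, represents each utterance by a normalized histogram of its voiced F0 estimates, to be fused later with cepstral scores:

```python
import numpy as np
import librosa

def pitch_frequency_distribution(y, sr, n_bins=40, fmin=65.0, fmax=400.0):
    """A simple stand-in for a PFD feature: a normalized histogram of the
    voiced F0 estimates of one utterance."""
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[voiced & ~np.isnan(f0)]                   # keep voiced frames only
    hist, _ = np.histogram(f0, bins=n_bins, range=(fmin, fmax), density=True)
    return hist

# y, sr = librosa.load("utterance.wav", sr=16000)
# pfd = pitch_frequency_distribution(y, sr)   # combined with cepstral evidence
```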
Funding: Supported by the National Natural Science Foundation of China (No. 6007201).
Abstract: The Mel-frequency cepstral coefficient (MFCC) is the most widely used feature in speech and speaker recognition. However, MFCC is very sensitive to noise interference, which tends to drastically degrade the performance of recognition systems because of the mismatch between training and testing conditions. In this paper, the logarithmic transformation in the standard MFCC analysis is replaced by a combined function to reduce this noise sensitivity. The proposed feature extraction process is also combined with speech enhancement methods, such as spectral subtraction and median filtering, to further suppress the noise. Experiments show that the proposed robust MFCC-based feature significantly reduces the recognition error rate over a wide signal-to-noise-ratio range.
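The abstract does not reproduce the paper's combined function; the sketch below uses a hypothetical log/cube-root blend purely as a placeholder to show where such a function sits in an otherwise standard MFCC pipeline (frame sizes and filterbank settings are also assumptions):

```python
import numpy as np
import librosa
from scipy.fft import dct

def combined_compression(mel_energy, alpha=0.5):
    """Hypothetical placeholder for the paper's 'combined function': a blend of
    logarithmic and cube-root compression (the actual function is not given
    in the abstract)."""
    return alpha * np.log(mel_energy + 1e-10) + (1 - alpha) * np.cbrt(mel_energy)

def robust_mfcc(y, sr, n_mels=26, n_ceps=13, n_fft=512, hop=160):
    """Standard MFCC pipeline with the log step swapped for the compression
    function above."""
    power = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop)) ** 2
    mel_fb = librosa.filters.mel(sr=sr, n_fft=n_fft, n_mels=n_mels)
    mel_energy = mel_fb @ power
    compressed = combined_compression(mel_energy)
    return dct(compressed, type=2, axis=0, norm="ortho")[:n_ceps]
```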
Funding: Supported by the Science Council of Taiwan, China (No. NSC95-2221-E-027-102).
Abstract: Unseen handset mismatch is the major source of performance degradation in speaker identification in telecommunication environments. To alleviate this problem, a maximum likelihood a priori knowledge interpolation (ML-AKI)-based handset mismatch compensation approach is proposed. It first collects a set of characteristics from seen handsets to serve as a priori knowledge representing the handset space. During evaluation, the characteristics of an unknown test handset are optimally estimated by interpolation over this a priori knowledge. Experimental results on the HTIMIT database show that the ML-AKI method improves the average speaker identification rate from 60.0% to 74.6% compared with conventional maximum a posteriori-adapted Gaussian mixture models. The proposed ML-AKI method is therefore a promising method for robust speaker identification.
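The abstract names the method but not its equations; a hedged sketch of the interpolation idea (modeling the unknown handset as a cepstral bias vector, the softmax parameterization, and the Nelder-Mead search are all assumptions standing in for the paper's ML-AKI derivation) is:

```python
import numpy as np
from scipy.optimize import minimize

def estimate_handset_bias(test_feats, seen_biases, clean_gmm):
    """Model the unknown handset's cepstral bias as a convex combination of
    seen-handset biases and pick the weights that maximize the clean-speech
    GMM likelihood of the compensated features. (The paper's exact ML-AKI
    formulation is not reproduced here.)"""
    k = len(seen_biases)

    def neg_loglik(logits):
        w = np.exp(logits) / np.exp(logits).sum()   # weights on the simplex
        bias = w @ seen_biases                      # interpolated handset bias
        return -clean_gmm.score(test_feats - bias)  # mean log-likelihood

    res = minimize(neg_loglik, np.zeros(k), method="Nelder-Mead")
    w = np.exp(res.x) / np.exp(res.x).sum()
    return w @ seen_biases

# seen_biases: (n_handsets, dim) mean cepstral offsets collected beforehand;
# clean_gmm:   sklearn GaussianMixture fitted on handset-free training features.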