Funding: The authors would like to acknowledge the Ministry of Electronics and Information Technology (MeitY), Government of India, for financial support through a scholarship for Palli Padmini during the research work, under the Visvesvaraya Ph.D. Scheme for Electronics and IT.
Abstract: The present work presents a statistical method to translate human voices across age groups, based on commonalities in the voices of blood relations. The age-translated voices were naturalized by extracting blood-relation features, e.g., pitch, duration, and energy, using Mel Frequency Cepstrum Coefficients (MFCC), for the social compatibility of the voice-impaired. The system was demonstrated using standard English and an Indian language. The voice samples for resynthesis were derived from 12 families, with member ages ranging from 8 to 80 years. The voice-age translation, performed using the Pitch Synchronous Overlap and Add (PSOLA) approach by modulating the extracted voice features, was validated by a perception test. The translated and resynthesized voices were correlated using the Linde, Buzo, Gray (LBG) and Kekre's Fast Codebook Generation (KFCG) algorithms. For translated voice targets, a strong correlation (θ > ~93% and θ > ~96%) was found with blood relatives, whereas a weak correlation range (θ < ~78% and θ < ~80%) was found between different families and between different genders within the same families. The study further subcategorized the sampling and synthesis of the voices into similar or dissimilar gender groups, using a support vector machine (SVM) to choose between the available voice samples. Finally, accuracies of ~96%, ~93%, and ~94% were obtained in identifying the gender of the voice sample, the age group of the samples, and the correlation between the original and converted voice samples, respectively. The results obtained were close to the natural voice sample features and are envisaged to readily provide a near-natural voice for the speech-impaired.
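As an illustration of the MFCC front end this abstract refers to, the following is a minimal NumPy sketch; the frame size, hop, filterbank size, and coefficient count below are generic assumptions, not the parameters used in the paper:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    # Frame the signal and apply a Hamming window to each frame.
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frames.append(signal[start:start + n_fft] * np.hamming(n_fft))
    frames = np.array(frames)
    # Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Build a triangular mel filterbank between 0 Hz and Nyquist.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        lo, ctr, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, ctr):
            fbank[i - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):
            fbank[i - 1, k] = (hi - k) / max(hi - ctr, 1)
    # Log mel energies, then a DCT-II to decorrelate them into cepstral coefficients.
    log_mel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return log_mel @ dct.T
```

Each row of the result is one frame's MFCC vector; features such as pitch and energy, also used in the paper, would be extracted by separate analyses.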
Funding: Supported by the National High Technology Research and Development Program of China (863 Program, No. 2006AA010102).
Abstract: Voice conversion algorithms aim to provide a high level of similarity to the target voice with an acceptable level of quality. The main objective of this paper was to build a nonlinear relationship between the acoustical feature parameters of the source and target speakers using Non-Linear Canonical Correlation Analysis (NLCCA) based on a joint Gaussian mixture model. Speaker individuality transformation was achieved mainly by altering the vocal tract characteristics represented by Line Spectral Frequencies (LSF). To make the transformed speech sound more like the target voice, prosody modification was applied through residual prediction. Both objective and subjective evaluations were conducted. The experimental results demonstrated that the proposed algorithm was effective and outperformed the conventional conversion method based on Minimum Mean Square Error (MMSE) estimation.
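The conventional MMSE baseline this abstract compares against (a joint GMM over stacked source/target features, with conversion by the responsibility-weighted conditional mean E[y|x]) can be sketched as follows; the feature dimensions, component count, and training data here are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(X_src, Y_tgt, n_components=4, seed=0):
    # Stack time-aligned source/target feature vectors into joint vectors z = [x; y].
    Z = np.hstack([X_src, Y_tgt])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(Z)

def convert(gmm, X_src):
    d = X_src.shape[1]
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    M = len(weights)
    # Responsibilities p(m|x) from the marginal GMM over the source part.
    resp = np.zeros((len(X_src), M))
    for m in range(M):
        mu_x, S_xx = means[m, :d], covs[m, :d, :d]
        diff = X_src - mu_x
        expo = -0.5 * np.sum(diff @ np.linalg.inv(S_xx) * diff, axis=1)
        norm = 1.0 / np.sqrt(((2 * np.pi) ** d) * np.linalg.det(S_xx))
        resp[:, m] = weights[m] * norm * np.exp(expo)
    resp /= resp.sum(axis=1, keepdims=True)
    # MMSE estimate: sum_m p(m|x) * (mu_y + S_yx S_xx^-1 (x - mu_x)).
    Y = np.zeros_like(X_src)
    for m in range(M):
        mu_x, mu_y = means[m, :d], means[m, d:]
        S_xx, S_yx = covs[m, :d, :d], covs[m, d:, :d]
        cond = mu_y + (X_src - mu_x) @ np.linalg.solve(S_xx, S_yx.T)
        Y += resp[:, [m]] * cond
    return Y
```

The paper's NLCCA approach replaces this per-component linear regression with a nonlinear mapping; the joint-GMM structure above is what it builds on.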
Funding: Supported by the National Natural Science Foundation of China (No. 60872105), the Program for Science & Technology Innovative Research Team of the Qing Lan Project in Higher Educational Institutions of Jiangsu, and the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD).
Abstract: This paper presents an improved voice conversion method, based on Gaussian Mixture Models (GMM), that also changes the time-scale of the speech. The Speech Transformation and Representation using Adaptive Interpolation of weiGHTed spectrum (STRAIGHT) model is adopted to extract the spectral features, and GMMs are trained to generate the conversion function. The spectral features of a source speech are converted by this conversion function, and the time-scale of the speech is changed on the converted features before they are added back to the spectrum. The converted voice was evaluated by subjective and objective measurements. The results confirm that the transformed speech not only approximates the characteristics of the target speaker, but is also more natural and more intelligible.
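As a rough illustration of changing the time-scale of a sequence of spectral frames, the sketch below linearly interpolates along the frame axis; this is an assumed simplification for illustration only, since STRAIGHT's actual spectrum/F0/aperiodicity analysis-synthesis is far more involved:

```python
import numpy as np

def time_scale_frames(frames, rate):
    # Resample the frame sequence along time: rate < 1 slows the speech down
    # (more frames), rate > 1 speeds it up (fewer frames). Each output frame
    # is a linear interpolation between its two neighbouring input frames.
    frames = np.asarray(frames)
    n_in = len(frames)
    n_out = int(round(n_in / rate))
    src_pos = np.linspace(0, n_in - 1, n_out)   # fractional source positions
    lo = np.floor(src_pos).astype(int)
    hi = np.minimum(lo + 1, n_in - 1)
    frac = (src_pos - lo)[:, None]
    return (1.0 - frac) * frames[lo] + frac * frames[hi]
```

In a full pipeline the stretched frames would be handed back to the STRAIGHT synthesizer (together with a correspondingly stretched F0 contour) to produce the time-scaled waveform.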