Journal Articles
Found 3 articles
1. Seven Qualitative-Soft Communicative Characteristics of Human Voice
Journalism and Mass Communication, 2013, No. 8, pp. 524-527 (4 pages)
The study takes as its theme the communicative characteristics of the human voice, which have utility in the journalistic interview. It finds, first, that there is no convergence in the number of characteristics attributed to the human voice: M. L. Knapp talks about three characteristics, T. O. Meservy and J. K. Burgoon about three, F. Poyatos about eight, and P. Glenn about seven features. Our thesis argues that the human voice has nine identity-communicative features: two quantitative-hard communicative characteristics (fundamental frequency F0 and vocal register) and seven qualitative-soft characteristics: tone height, intonation, volume, accent, diction, timbre of phonation, and average verbal flow.
Keywords: communication; human voice; journalistic interview
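To make the quantitative side of the taxonomy concrete, the following is a minimal sketch of estimating fundamental frequency (F0) from one voiced frame via autocorrelation; the sampling rate, frame length, and pitch search band are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: estimate F0 of a single voiced frame by autocorrelation.
# All numeric settings below are assumptions for illustration only.
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=400.0):
    """Return a rough F0 estimate (Hz) for one voiced frame, or None."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # non-negative lags
    lag_min, lag_max = int(sr / fmax), int(sr / fmin)              # pitch search band
    if lag_max >= len(ac):
        return None
    lag = lag_min + np.argmax(ac[lag_min:lag_max])                 # strongest periodicity
    return sr / lag

if __name__ == "__main__":
    sr = 16000
    t = np.arange(int(0.04 * sr)) / sr               # a 40 ms frame
    frame = np.sin(2 * np.pi * 120.0 * t)            # synthetic 120 Hz "voice"
    print(f"Estimated F0: {estimate_f0(frame, sr):.1f} Hz")
```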
2. Finite Element Dynamics of Human Ear System Comprising Middle Ear and Cochlea in Inner Ear
Authors: Hidayat, Shingo Okamoto, Jae Hoon Lee, Naohito Hato, Hiroyuki Yamada, Daiki Takagi
Journal of Biomedical Science and Engineering, 2016, No. 13, pp. 597-610 (14 pages)
A human middle ear consists of an eardrum and three ossicles that are linked to one another and connect the eardrum with the inner ear. The inner ear consists of a cochlea and a vestibular system. An abnormality of the human middle ear, such as ossicular dislocation, may cause conductive hearing loss, which is generally treated by surgery using artificial ossicles. Such treatments require a better understanding of the characteristics and dynamic behavior of the human middle ear as sound is transmitted from the outer ear to the inner ear. The purpose of this research is to simulate the dynamic behavior of a human ear system comprising the middle ear and the cochlea in the inner ear using the finite element method (FEM). Firstly, an eigenvalue analysis was performed to obtain the natural frequencies and vibration modes of the total ear system. Secondly, a frequency response analysis was carried out. Thirdly, time-history response analyses were performed using human voices as the external forces: the input sound pressures were vowels such as "i", "u", and "e", created from sample human-voice recordings downloaded as WAV files from a website. The analyses clarified that the high-frequency components of sounds are reduced by the middle ear system.
Keywords: eardrum; middle ear dynamics; human voice; finite element method
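The eigenvalue (modal-analysis) step described in the abstract can be illustrated on a drastically simplified model. The sketch below solves the generalized eigenvalue problem K·φ = ω²·M·φ for a toy lumped mass-spring chain standing in for the eardrum and three ossicles; all mass and stiffness values are illustrative placeholders, not parameters of the paper's finite element model.

```python
# Minimal sketch: modal analysis of a toy 4-DOF lumped model of the ossicular chain.
# Masses and stiffnesses are placeholders (kg, N/m), not the paper's FE values.
import numpy as np
from scipy.linalg import eigh

m = np.diag([1e-6, 2.3e-5, 2.7e-5, 3e-6])          # eardrum, malleus, incus, stapes
k_links = [1e3, 8e2, 6e2, 4e2]                      # coupling / grounding stiffnesses
K = np.zeros((4, 4))
for i, k in enumerate(k_links[:-1]):                # springs between adjacent bodies
    K[i, i] += k; K[i + 1, i + 1] += k
    K[i, i + 1] -= k; K[i + 1, i] -= k
K[3, 3] += k_links[-1]                              # stapes grounded toward the cochlea

# Generalized eigenvalue problem: K * phi = w^2 * M * phi
w2, modes = eigh(K, m)
freqs_hz = np.sqrt(np.abs(w2)) / (2 * np.pi)
print("Natural frequencies (Hz):", np.round(freqs_hz, 1))
```

In a real FE model the same eigenvalue problem is solved with the assembled mass and stiffness matrices of the meshed eardrum, ossicles, and cochlear fluid, which is what yields the natural frequencies and vibration modes the abstract refers to.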
3. Automatic Speaker Recognition Using Mel-Frequency Cepstral Coefficients Through Machine Learning (Cited by: 1)
Authors: Uğur Ayvaz, Hüseyin Gürüler, Faheem Khan, Naveed Ahmed, Taegkeun Whangbo, Abdusalomov Akmalbek Bobomirzaevich
Computers, Materials & Continua (SCIE, EI), 2022, No. 6, pp. 5511-5521 (11 pages)
Automatic speaker recognition (ASR) systems belong to the field of human-machine interaction, and scientists have been using feature extraction and feature matching methods to analyze and synthesize voice signals. One of the most commonly used methods for feature extraction is Mel-Frequency Cepstral Coefficients (MFCCs). Recent research shows that MFCCs are successful in processing the voice signal with high accuracy. MFCCs represent a sequence of voice-signal-specific features. This experimental analysis is proposed to distinguish Turkish speakers by extracting MFCCs from speech recordings. Since human perception of sound is not linear, after the filterbank step in the MFCC method we converted the obtained log filterbanks into decibel (dB) feature-based spectrograms without applying the Discrete Cosine Transform (DCT). A new dataset was created by converting the spectrograms into 2-D arrays. Several learning algorithms were implemented with a 10-fold cross-validation method to detect the speaker. The highest accuracy of 90.2% was achieved using a Multi-layer Perceptron (MLP) with a tanh activation function. The most important output of this study is the inclusion of the human voice as a new feature set.
Keywords: automatic speaker recognition; human voice recognition; spatial pattern recognition; MFCCs; spectrogram; machine learning; artificial intelligence
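Below is a minimal sketch of the pipeline the abstract outlines: mel filterbank energies converted to a decibel (dB) scale without the DCT step, then an MLP with a tanh activation evaluated by 10-fold cross-validation. The synthetic tones merely stand in for real speaker recordings, and the hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
# Minimal sketch: dB mel-filterbank spectrograms (no DCT) + MLP with tanh, 10-fold CV.
# Synthetic tones stand in for speaker recordings; all settings are assumptions.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

SR = 16000

def db_filterbank_features(y, sr=SR, n_mels=40):
    """Mel filterbank energies on a dB scale, flattened; the DCT step is omitted."""
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return librosa.power_to_db(mel, ref=np.max).flatten()

# Synthetic stand-ins for two "speakers": tones at different pitches plus noise.
rng = np.random.default_rng(0)
t = np.arange(SR) / SR                                  # 1-second clips
X, labels = [], []
for speaker, f0 in enumerate([110.0, 220.0]):
    for _ in range(10):                                 # 10 clips per "speaker"
        clip = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(len(t))
        X.append(db_filterbank_features(clip))
        labels.append(speaker)
X, labels = np.array(X), np.array(labels)

clf = MLPClassifier(hidden_layer_sizes=(128,), activation="tanh", max_iter=500)
scores = cross_val_score(clf, X, labels, cv=10)         # 10-fold cross-validation
print(f"Mean cross-validation accuracy: {scores.mean():.3f}")
```

For real data, the synthetic clips would be replaced by labelled speaker recordings with enough samples per speaker to support the 10 folds.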