This paper analyzes the E-C interpreting scripts of the Inaugural Address, the Remarks on Winning the Nobel Prize, and the Shanghai Speech delivered by Barack Obama, the 44th president of the United States, using a comparative method based on collected data. The analysis is carried out at the lexical, syntactic, and rhetorical levels, and the features of E-C public speech interpreting are derived accordingly. These features may serve as a reference for interpreters in their practice and help improve the effect of interpretation.
According to Reiss's Text Type theory, a key part of the functionalist approach in translation studies, the source text can be assigned to a text type and to a genre. In making this assignment, the translator can decide on the hierarchy of postulates that has to be observed during target-text production (Mona, 2005). This essay conducts a linguistic and stylistic analysis of the Chinese translation of Obama's speech to explore the translator's general approach (if there is one), comparing the respective results of the two analyses from the perspective of Katharina Reiss's Text Type theory. In doing so, critical judgments are made as to whether such an approach is justifiable.
This paper analyzes the stylistic features of Obama's speech in Shanghai at the phonetic, lexical, syntactic, and rhetorical levels, in order to arrive at a better appreciation of Obama's art of public speaking.
Delivering a public speech is to produce a work of verbal art. As with other kinds of art, the intelligent use of techniques, along with rich and true feelings, makes a public speech attractive. On September 4th, Michelle Obama delivered a speech at the Democratic National Convention for her husband Barack Obama's re-election. The speech employs many stylistic techniques that deserve a specific analysis to show what they are and how they help the speech win an active response from the audience. The analysis is carried out from four perspectives: phonological, lexical, syntactic, and semantic.
This paper attempts a comparative study of Obama's second inaugural address and his 2013 commencement speech at The Ohio State University, based on Halliday's theory of Interpersonal Function. The author finds that mood, modality, and personal pronouns are the major language means used to realize the interpersonal function; in the two speeches, Obama makes different choices among those resources, and the different choices indicate different attitudes and intentions.
This paper analyzes the speech made by Michelle Obama based on Kenneth Burke's theory of the New Rhetoric. Using the Dramatistic Pentad and Identification, examples are extracted from the text and analyzed respectively. Through the study, the underlying motives and the reasons why the speech is so persuasive are revealed.
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
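The filter stage described above ranks features before the wrapper search. As an illustration, the Fisher Score half of that ranking can be sketched as follows (a minimal sketch on toy data; the paper's exact scoring and multi-objective combination are not specified in the abstract):

```python
import numpy as np

def fisher_score(X, y):
    """Fisher Score per feature: between-class scatter over within-class scatter."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        n_c = Xc.shape[0]
        between += n_c * (Xc.mean(axis=0) - overall_mean) ** 2
        within += n_c * Xc.var(axis=0)
    return between / np.maximum(within, 1e-12)

# toy data: feature 0 separates the classes, feature 1 is pure noise
rng = np.random.default_rng(0)
X = np.vstack([np.column_stack([rng.normal(0, 1, 50), rng.normal(0, 1, 50)]),
               np.column_stack([rng.normal(5, 1, 50), rng.normal(0, 1, 50)])])
y = np.array([0] * 50 + [1] * 50)

scores = fisher_score(X, y)
ranking = np.argsort(scores)[::-1]  # highest-scoring feature first
```

Discriminative features receive large scores, so sorting by score puts them at the head of the ranking, which is what gives them the "large probability of being selected" in the wrapper stage.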
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of the language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During training, the WOA is employed to explore and determine the optimal set of weights, and the PSO algorithm then adjusts the weights as a fine-tuning step to optimize the performance of the MLPs. In this approach, two separate MLP models are employed: one MLP is dedicated to predicting degrees of truth membership, while the other focuses on predicting degrees of falsity membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
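The dual-MLP output scheme can be illustrated with a small sketch of how truth and falsity memberships might be combined into a label plus an indeterminacy estimate. The combination rule below is an assumption for illustration only; the abstract does not give the paper's exact formulation:

```python
import numpy as np

def neutrosophic_decision(truth, falsity):
    """Combine truth/falsity memberships produced by two separate predictors.

    Indeterminacy is taken here as the gap from the ideal T + F = 1
    (an illustrative choice, not necessarily the paper's definition).
    """
    truth = np.asarray(truth, dtype=float)
    falsity = np.asarray(falsity, dtype=float)
    indeterminacy = np.abs(1.0 - (truth + falsity))
    labels = (truth > falsity).astype(int)  # 1 = hateful, 0 = not hateful
    return labels, indeterminacy

# three hypothetical samples: confident-hateful, confident-not, borderline
labels, ind = neutrosophic_decision([0.9, 0.2, 0.55], [0.1, 0.8, 0.50])
```

The borderline sample gets a nonzero indeterminacy, which is the kind of signal that lets a forensics pipeline route ambiguous posts to human review instead of forcing a hard decision.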
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle the aforementioned issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. The input speech is preprocessed into spectrograms, and a 2D Convolutional Neural Network (CNN) applies convolutional operations to them, since spectrograms afford a visual representation of the way the frequency content of the audio signal changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then five auditory features (MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz) are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for multiple-audio-cue feature selection. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and The Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
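The SFS step named above is a greedy wrapper loop: repeatedly add the feature whose inclusion most improves a classifier score. A compact sketch follows, using a simple nearest-centroid resubstitution accuracy as the wrapper criterion (a stand-in assumption; the paper pairs SFS/SBS with its full Bi-LSTM pipeline):

```python
import numpy as np

def centroid_accuracy(X, y):
    # resubstitution accuracy of a nearest-class-centroid classifier
    centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
    preds = np.array([min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                      for x in X])
    return float(np.mean(preds == y))

def sfs(X, y, k, score_fn):
    """Greedy Sequential Forward Selection: grow the selected set one
    feature at a time, always taking the feature with the best score."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(k):
        best = max(remaining, key=lambda f: score_fn(X[:, selected + [f]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

# toy data: feature 0 is discriminative, feature 1 is pure noise
rng = np.random.default_rng(1)
X = np.vstack([np.column_stack([rng.normal(0, 1, 40), rng.normal(0, 1, 40)]),
               np.column_stack([rng.normal(6, 1, 40), rng.normal(0, 1, 40)])])
y = np.array([0] * 40 + [1] * 40)
chosen = sfs(X, y, k=1, score_fn=centroid_accuracy)
```

SBS is the mirror image: start from all features and greedily drop the one whose removal hurts the score least.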
In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a p...In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances due to the longer speech sequences than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and strengthen its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. 
It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
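The character error rate quoted above is conventionally computed as the Levenshtein edit distance between the hypothesis and the reference, divided by the reference length; a compact sketch:

```python
def cer(ref, hyp):
    """Character error rate: edit distance (substitutions, insertions,
    deletions) divided by reference length, via a rolling DP row."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))           # distances against the empty reference
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i        # prev holds dp[i-1][j-1]
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                        # deletion
                        dp[j - 1] + 1,                    # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution/match
            prev = cur
    return dp[n] / max(m, 1)

rate = cer("clear to land", "clear to lend")  # one substituted character
```

On this metric a "performance gain" like 28.76% is a relative reduction of the baseline's CER, not an absolute percentage-point drop.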
The teaching of English public speaking in universities aims to enhance oral communication ability, improve English communication skills, and expand English knowledge, and it occupies a core position in university English teaching. Against the background of second language acquisition theory, this article analyzes the important role and value of the theory in university English speech teaching and explores how to apply it in practice. It aims to strengthen the cultivation of skilled English talents and to provide a brief reference for improving English speech teaching in universities.
This thesis analyzes the language features of Barack Obama's two inaugural speeches, delivered after his 2008 and 2012 election victories, from linguistic aspects including sentence types (such as imperative sentences) and figures of speech such as parallelism, rhetorical questions, alliteration, hyperbole, simile, and metaphor.
Based on the three appeals of logos, pathos, and ethos in classical rhetorical theory, this article analyzes the modes of persuasion applied in Barack Obama's first inaugural speech. It is found that the combination of these three appeals endows the speech with rhetorical charm, passion, and art of language, which effectively helped Obama secure political success in the election.
Public speech is an art. It presents the features of formal written language while exhibiting characteristics of the spoken language. Barack Obama, an excellent speaker, delivered his victory speech on November 5, 2008 in Chicago. This speech, which is very convincing, is considered a classic. This paper analyzes the speech from four aspects: content, grammatical features, lexical features, and semantic features.
Biometric-based systems play an increasingly vital role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions from voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals, and it also keeps track of places such as households, hospitals, and remote areas. The key element in the implementation of speech emotion recognition is voice processing; once an emotion is recognized, the machine interface automatically triggers the buzzer and LED actions. The proposed approach is able to detect seven emotions: worry, surprise, neutral, sadness, happiness, hate, and love. The proposed system is trained and tested on various benchmark datasets, i.e., the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Acoustic-Phonetic Continuous Speech Corpus (TIMIT), and the Emotional Speech Database (Emo-DB), and evaluated on various parameters, i.e., accuracy, error rate, and time. Compared with existing technologies, the proposed algorithm gives a better error rate and less time: the error rate and time are decreased by 19.79% and 5.13 s for the RAVDESS dataset, 15.77% and 0.01 s for the Emo-DB dataset, and 14.88% and 3.62 s for the TIMIT dataset. The proposed model shows better accuracy, 81.02% for the RAVDESS dataset, 84.23% for the TIMIT dataset, and 85.12% for the Emo-DB dataset, compared to Gaussian Mixture Model (GMM) and Support Vector Machine (SVM) models.
Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, owing to its inclusion of the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and to capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, giving a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-Term Memory (Bi-LSTM) network with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of the proposed model, comprehensive experiments are conducted on three widely used benchmark datasets: IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
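The cross-modal interaction at the heart of such Transformer blocks is scaled dot-product attention in which one modality supplies the queries and the other supplies the keys and values. A single-head numpy sketch (projection matrices, residual connections, and normalization omitted; shapes are illustrative):

```python
import numpy as np

def softmax(x):
    # numerically stable row-wise softmax
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_modal_attention(queries, keys, values):
    """One modality (e.g., audio frames) attends over another (e.g., text tokens)."""
    d = queries.shape[-1]
    weights = softmax(queries @ keys.T / np.sqrt(d))  # (n_audio, n_text)
    return weights @ values  # audio positions enriched with text information

rng = np.random.default_rng(0)
audio = rng.normal(size=(6, 8))  # 6 audio frames, dimension 8
text = rng.normal(size=(4, 8))   # 4 text tokens, dimension 8
out = cross_modal_attention(audio, text, text)
```

Each output row is a convex combination of the text token vectors, weighted by audio-text similarity; a "two-stream" block runs this in both directions (audio queries text, and text queries audio).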
Patients with age-related hearing loss face hearing difficulties in daily life. The causes of age-related hearing loss are complex and include changes in peripheral hearing, central processing, and cognitive-related abilities. Furthermore, the factors by which aging relates to hearing loss via changes in auditory processing ability are still unclear. In this cross-sectional study, we evaluated 27 older adults (over 60 years old) with age-related hearing loss, 21 older adults (over 60 years old) with normal hearing, and 30 younger subjects (18-30 years old) with normal hearing. We used the outcome of the upper-threshold test, including the time-compressed threshold and the speech recognition threshold in noisy conditions, as a behavioral indicator of auditory processing ability. We also used electroencephalography to identify presbycusis-related abnormalities in the brain while the participants were in a spontaneous resting state. The time-compressed threshold and speech recognition threshold data indicated significant differences among the groups. In patients with age-related hearing loss, information masking (babble noise) had a greater effect than energy masking (speech-shaped noise) on processing difficulties. In terms of resting-state electroencephalography signals, we observed enhanced frontal lobe (Brodmann's area, BA11) activation in the older adults with normal hearing compared with the younger participants with normal hearing, and greater activation in the parietal (BA7) and occipital (BA19) lobes in the individuals with age-related hearing loss compared with the younger adults. Our functional connection analysis suggested that, compared with younger people, the older adults with normal hearing exhibited enhanced connections among networks, including the default mode network, sensorimotor network, cingulo-opercular network, occipital network, and frontoparietal network. These results suggest that both normal aging and the development of age-related hearing loss have a negative effect on advanced auditory processing capabilities, and that hearing loss accelerates the decline in speech comprehension, especially in speech competition situations. Older adults with normal hearing may have increased compensatory attentional resource recruitment represented by the top-down active listening mechanism, while those with age-related hearing loss exhibit decompensation of network connections involving multisensory integration.
Aristotle's famous theory of rhetoric (Logos, Pathos, Ethos) is a cornerstone of public speaking. This paper focuses on Ethos and its implementation in rhetorical devices in public speech. It analyzes Michelle Obama's farewell speech at the White House in terms of its rhetorical applications in lexis, syntax, phonetics, and gestures. From the paper, readers can gain experience in appreciating public speeches, and teachers can find more effective ways to teach the composition of public speeches.
In a speech recognition system, the acoustic model is an important underlying model, and its accuracy directly affects the performance of the entire system. This paper introduces the construction and training process of the acoustic model in detail, studies the Connectionist Temporal Classification (CTC) algorithm, which plays an important role in the end-to-end framework, and establishes a convolutional neural network (CNN) combined with a CTC acoustic model to improve the accuracy of speech recognition. This study uses a sound sensor, the ReSpeaker Mic Array v2.0.1, to convert the collected speech signals into text or corresponding speech signals so as to improve communication and reduce noise and hardware interference. The baseline acoustic model in this study faces challenges such as long training time, a high error rate, and a certain degree of overfitting. The model is trained through continuous design and improvement of the relevant parameters of the acoustic model, and finally an excellent model is selected according to the evaluation indices; it reduces the error rate to about 18%, thus improving the accuracy rate. Finally, comparative verification was carried out with respect to the selection of acoustic feature parameters, the selection of modeling units, and the speaker's speech rate, which further verified the excellent performance of the CTC-CNN_5+BN+Residual model structure. In the experiments, to train and verify the CTC-CNN baseline acoustic model, this study uses the THCHS-30 and ST-CMDS speech datasets as training data; after 54 epochs of training, the word error rate on the acoustic model training set is 31%, and the word error rate on the test set is stable at about 43%. The experiment also considers the surrounding environmental noise. At a noise level of 80-90 dB, the accuracy rate is 88.18%, the worst performance among all levels; in contrast, at 40-60 dB, the accuracy is as high as 97.33% due to less noise pollution.
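The CTC criterion mentioned above maps frame-level network outputs to label sequences by collapsing repeated symbols and then removing blanks. Greedy CTC decoding can be sketched in a few lines (the label indices here are illustrative):

```python
def ctc_greedy_decode(frame_ids, blank=0):
    """Apply the CTC collapsing rule to per-frame argmax labels:
    merge consecutive repeats, then drop blank symbols."""
    out = []
    prev = None
    for s in frame_ids:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out

# frames: 1 1 _ 1 2 2 _ _ 3  ->  1 1 2 3  (blank separates the repeated 1s)
decoded = ctc_greedy_decode([1, 1, 0, 1, 2, 2, 0, 0, 3])
```

Note the role of the blank: two genuine consecutive occurrences of the same label survive only when a blank frame separates them, which is why CTC can emit doubled characters.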
Purpose: Our study aims to compare speech understanding in noise and spectral-temporal resolution skills with regard to the degree of hearing loss, age, hearing aid use experience, and gender of hearing aid users. Methods: Our study included sixty-eight hearing aid users aged between 40 and 70 years, with bilateral mild or moderate symmetrical sensorineural hearing loss. The random gap detection test, the Turkish matrix test, and the spectral-temporally modulated ripple test were administered to the participants with bilateral hearing aids. The test results were compared statistically according to the different variables, and the correlations were examined. Results: No statistically significant differences were observed in speech-in-noise recognition or spectral-temporal resolution between older and younger hearing aid users (p>0.05). No statistically significant difference was found among test outcomes with regard to different degrees of hearing loss (p>0.05). Higher temporal resolution performance was obtained in male participants and in participants with more hearing aid use experience (p<0.05). Significant correlations were obtained between the results of the speech-in-noise recognition, temporal resolution, and spectral resolution tests performed with hearing aids (p<0.05). Conclusion: Our findings emphasize the importance of regular hearing aid use and show that some auditory skills can be improved with hearing aids. The observed correlations among the speech-in-noise recognition, temporal resolution, and spectral resolution tests reveal that these skills should be evaluated as a whole to maximize the patient's communication abilities.
文摘This paper is trying to analyze the E-C interpreting scripts of Inaugural Address, Remarks on Winning the Nobel Prize and Shanghai Speech by the 44th president of United States Barack Obama with a comparative method based on data collected. The analysis will be employed on the lexical, syntactic as well as rhetorical level and the features of E-C public speech interpreting will be achieved accordingly. The features may serve as reference for the interpreters in their interpretation practice in order to improve the interpretation effects.
文摘According to Reiss’s Text Type theory,a key part of the functionalist approach in translation studies,the source text can be assigned to a text type and to a genre.In making this assignment,the translator can decide on the hierarchy of postulates which has to be observed during target-text production(Mona,2005).This essay intends to conduct a linguistic and stylistic analysis of the Chinese translation of Obama’s speech to explore the general approach of the translator(if there is one),by comparing the respective results of the two analyses from the perspective of Katharina Reiss’s Text Type theory.In doing so,critical judgments will accordingly be made as to whether such an approach is justifiable or not.
文摘This paper analyzes the stylistic features of Obama’s speech in Shanghai,from the aspects of Phonetic,lexical,syntax,rhetorical devices,etc.in order to have a better appreciation of Obama’s art of public speaking.
文摘Delivering a public speech is to produce a work of verbal art.As other kinds of art,the intelligent use of techniques along with rich and true feelings makes public speech attractive.On September 4th,Michelle Obama addressed a speech at the Democratic National Party Convention for his husband-Barack Obama's re-election.In the speech,many stylistic techniques are employed that deserve a specific analysis to exhibit what they are,and how they help the speech win an active reaction from the audience.The analysis will be carried out from four perspectives,namely,phonological,lexical,syntactical and semantic analy ses.
文摘This paper attempts to conduct a comparative study on Obama’s two speeches—his second inaugural address and his commencement speech in The Ohio State University in 2013 based on Halliday’s theory of Interpersonal Function.The author finds that mood,modality and personal pronouns are the major language means used to realize the interpersonal function;in his two speeches,Obama makes different choices of those resources,and different choices indicate different attitudes and intentions.
文摘This paper intended to analyze the speech made by Michelle Obama based on the theory of Kenneth Burke's New Rhetoric. With Dramatistic Pentad and Identification, used in the paper, examples are extracted from the text and analyzed respectively. Through the study, the underneath motives and reason why the speech is so persuasive are revealed.
文摘Speech emotion recognition(SER)uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions.The number of features acquired with acoustic analysis is extremely high,so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system.The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy.First,we use the information gain and Fisher Score to sort the features extracted from signals.Then,we employ a multi-objective ranking method to evaluate these features and assign different importance to them.Features with high rankings have a large probability of being selected.Finally,we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection,which can improve the diversity of solutions and avoid falling into local traps.Using random forest and K-nearest neighbor classifiers,four English speech emotion datasets are employed to test the proposed algorithm(MBEO)as well as other multi-objective emotion identification techniques.The results illustrate that it performs well in inverted generational distance,hypervolume,Pareto solutions,and execution time,and MBEO is appropriate for high-dimensional English SER.
文摘Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due tothe complex nature of language used in such platforms. Currently, several methods exist for classifying hatespeech, but they still suffer from ambiguity when differentiating between hateful and offensive content and theyalso lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm(WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptron (MLPs)for neutrosophic sets classification. During the training process of the MLP, the WOA is employed to exploreand determine the optimal set of weights. The PSO algorithm adjusts the weights to optimize the performanceof the MLP as fine-tuning. Additionally, in this approach, two separate MLP models are employed. One MLPis dedicated to predicting degrees of truth membership, while the other MLP focuses on predicting degrees offalse membership. The difference between these memberships quantifies uncertainty, indicating the degree ofindeterminacy in predictions. The experimental results indicate the superior performance of our model comparedto previous work when evaluated on the Davidson dataset.
Abstract: Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle that issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. Preprocessing of the input speech is done using a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of how the frequency content of the audio signal changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then five auditory features, MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz, are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for selecting among the multiple audio-cue features. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity by achieving more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it with state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin Database of Emotional Speech (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
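The SFS step described above is a standard greedy wrapper method. Below is a minimal pure-Python sketch of it; the scoring function, its utility numbers, and the redundancy penalty are hypothetical stand-ins for a real classifier's validation accuracy, used only to make the greedy loop observable.

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: grow the selected set one feature at a time,
    keeping whichever addition maximizes the score function."""
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy stand-in for classifier accuracy: each feature family has a fixed
# utility (hypothetical numbers), and one overlapping pair is penalized
# to mimic redundant spectral information.
utility = {"mfcc": 0.50, "mel": 0.20, "chroma": 0.15,
           "contrast": 0.10, "tonnetz": 0.05}
redundant = {frozenset(("mfcc", "mel"))}

def score(subset):
    s = sum(utility[f] for f in subset)
    for pair in redundant:
        if pair <= set(subset):
            s -= 0.15  # redundant pair contributes less than the sum of parts
    return s

print(sequential_forward_selection(utility, score, 3))
```

Note how the redundancy penalty steers the greedy search away from the mel-spectrogram even though it has the second-highest standalone utility; SBS would run the same idea in reverse, starting from the full set and dropping the least useful feature each round.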
Funding: This research was funded by the Shenzhen Science and Technology Program (Grant No. RCBS20221008093121051), the General Higher Education Project of the Guangdong Provincial Education Department (Grant No. 2020ZDZX3085), the China Postdoctoral Science Foundation (Grant No. 2021M703371), and the Post-Doctoral Foundation Project of Shenzhen Polytechnic (Grant No. 6021330002K).
Abstract: In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between the speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and to strengthen its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and to boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics.
The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
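The character error rate (CER) reported above is the standard Levenshtein edit distance between hypothesis and reference, normalized by the reference length. A minimal self-contained implementation (the example transcripts are illustrative, not drawn from the ATCC corpus):

```python
def character_error_rate(ref, hyp):
    """CER = Levenshtein edit distance / reference length."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                     # delete all of ref[:i]
    for j in range(n + 1):
        d[0][j] = j                     # insert all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)

# Similar pronunciation, distinct semantics: "to" vs. "two" differs by
# a single inserted character over a 21-character reference.
print(character_error_rate("climb to flight level", "climb two flight level"))
```

The same dynamic-programming table computed over words instead of characters yields the word error rate, the other metric commonly reported for ASR systems.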
Abstract: The teaching of English speech in universities aims to enhance oral communication ability, improve English communication skills, and expand English knowledge, and it occupies a core position in university English teaching. Against the background of second language acquisition theory, this article analyzes the important role and value of that theory in university English speech teaching and explores how to apply it in practice. It aims to strengthen the cultivation of skilled English talent and to provide a brief reference for improving English speech teaching in universities.
Abstract: This thesis analyzes the language features of Barack Obama's two inaugural speeches of 2008 and 2012 from linguistic aspects, covering sentence types as well as figures of speech, including imperative sentences, parallelism, rhetorical questions, alliteration, hyperbole, simile, and metaphor.
Abstract: Based on the three aesthetic criteria of logos, pathos, and ethos in classical rhetorical theory, this article analyzes the modes of persuasion applied in Barack Obama's first inaugural speech. It finds that the skillful combination of these three appeals endows the speech with rhetorical charm, passion, and artful language, which efficiently helped Obama secure political success in the election.
Abstract: Public speech is an art. It presents the features of formal written language while exhibiting characteristics of the spoken language. Barack Obama, an excellent speaker, delivered his victory speech on November 5, 2008 in Chicago. The speech, which is very convincing, is considered a classic. This paper analyzes it from four aspects: content, grammatical features, lexical features, and semantic features.
Funding: Deanship of Scientific Research at Majmaah University, for supporting this work under Project No. R-2022-166.
Abstract: Day by day, biometric-based systems play an ever greater role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions via voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals, and it also keeps watch over places such as households, hospitals, and remote areas. The proposed approach is able to detect seven emotions: worry, surprise, neutral, sadness, happiness, hate, and love. The key element in implementing speech emotion recognition is voice processing; once the emotion is recognized, the machine interface automatically triggers the alert actions via buzzer and LED. The proposed system is trained and tested on various benchmark datasets, i.e., the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Acoustic-Phonetic Continuous Speech Corpus (TIMIT), and the Emotional Speech Database (Emo-DB), and is evaluated on various parameters, i.e., accuracy, error rate, and time. Compared with existing technologies, the proposed algorithm gave a better error rate and less time: error rate and time decreased by 19.79% and 5.13 s for the RAVDESS dataset, 15.77% and 0.01 s for the Emo-DB dataset, and 14.88% and 3.62 s for the TIMIT dataset. The proposed model also shows better accuracy, 81.02% for the RAVDESS dataset, 84.23% for the TIMIT dataset, and 85.12% for the Emo-DB dataset, compared with Gaussian Mixture Model (GMM) and Support Vector Machine (SVM) models.
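The abstract names the emotions and the buzzer/LED outputs but not the actual emotion-to-action mapping, so the table below is entirely hypothetical: a minimal dictionary-dispatch sketch of how a recognized label could be routed to alert hardware (on a real device the lookup result would drive GPIO pins rather than return a string).

```python
# Hypothetical alert profiles per emotion; colors and buzzer choices are
# illustrative assumptions, not taken from the paper.
ALERTS = {
    "worry":     {"buzzer": True,  "led": "red"},
    "hate":      {"buzzer": True,  "led": "red"},
    "sadness":   {"buzzer": False, "led": "blue"},
    "surprise":  {"buzzer": False, "led": "yellow"},
    "happiness": {"buzzer": False, "led": "green"},
    "love":      {"buzzer": False, "led": "green"},
    "neutral":   {"buzzer": False, "led": None},
}

def trigger_alert(emotion):
    """Look up the alert profile for a recognized emotion label;
    unknown labels fall back to the neutral (no-alert) profile."""
    action = ALERTS.get(emotion, ALERTS["neutral"])
    return f"buzzer={'on' if action['buzzer'] else 'off'}, led={action['led']}"

print(trigger_alert("worry"))    # buzzer=on, led=red
print(trigger_alert("neutral"))  # buzzer=off, led=None
```

Keeping the mapping in a plain dictionary makes the alert policy easy to audit and reconfigure per deployment site (household vs. hospital) without touching the recognition pipeline.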
Funding: the National Natural Science Foundation of China (No. 61872231), the National Key Research and Development Program of China (No. 2021YFC2801000), and the Major Research Plan of the National Social Science Foundation of China (No. 2000&ZD130).
Abstract: Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, owing to its inclusion of the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and to capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a more powerful representation of the original data than those using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of the proposed model, comprehensive experiments are conducted on three widely used benchmark datasets, including IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
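The abstract does not spell out the circulant interaction mechanism, so the following is only a minimal sketch of the underlying linear-algebra idea: building a circulant matrix from one modality's feature vector and applying it to the other's, so every audio component interacts with every text component while only one vector of parameters is involved. The tiny three-dimensional vectors are toy values.

```python
def circulant(v):
    """Circulant matrix whose i-th row is v rotated right by i positions."""
    n = len(v)
    return [[v[(j - i) % n] for j in range(n)] for i in range(n)]

def circulant_interact(audio, text):
    """Mix the text feature through a circulant matrix built from the
    audio feature: a dense all-pairs interaction parameterized by
    a single vector rather than a full n-by-n weight matrix."""
    C = circulant(audio)
    return [sum(C[i][j] * text[j] for j in range(len(text))) for i in range(len(C))]

a = [1.0, 0.5, -0.25]   # toy low-level audio feature
t = [0.2, -0.1, 0.4]    # toy low-level text feature
mixed = circulant_interact(a, t)
print(mixed)
```

The appeal of circulant structure is parameter efficiency: an n-dimensional cross-modal mixing step costs n stored values instead of n squared, which matters when the interaction is applied at every Bi-LSTM time step.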
Funding: supported by the National Natural Science Foundation of China, Nos. 82171138 (to YQZ) and 82071062 (to YXC); the Natural Science Foundation of Guangdong Province, No. 2021A1515012038 (to YXC); the Fundamental Research Funds for the Central Universities, No. 20ykpy91 (to YXC); and the Sun Yat-Sen Clinical Research Cultivating Program, No. SYS-Q-201903 (to YXC).
Abstract: Patients with age-related hearing loss face hearing difficulties in daily life. The causes of age-related hearing loss are complex and include changes in peripheral hearing, central processing, and cognitive-related abilities. Furthermore, the factors by which aging relates to hearing loss via changes in auditory processing ability are still unclear. In this cross-sectional study, we evaluated 27 older adults (over 60 years old) with age-related hearing loss, 21 older adults (over 60 years old) with normal hearing, and 30 younger subjects (18-30 years old) with normal hearing. We used the outcomes of the upper-threshold tests, including the time-compressed threshold and the speech recognition threshold in noisy conditions, as behavioral indicators of auditory processing ability. We also used electroencephalography to identify presbycusis-related abnormalities in the brain while the participants were in a spontaneous resting state. The time-compressed threshold and speech recognition threshold data indicated significant differences among the groups. In patients with age-related hearing loss, information masking (babble noise) had a greater effect than energy masking (speech-shaped noise) on processing difficulties. In terms of resting-state electroencephalography signals, we observed enhanced frontal lobe (Brodmann's area, BA11) activation in the older adults with normal hearing compared with the younger participants with normal hearing, and greater activation in the parietal (BA7) and occipital (BA19) lobes in the individuals with age-related hearing loss compared with the younger adults. Our functional connectivity analysis suggested that, compared with younger people, the older adults with normal hearing exhibited enhanced connections among networks, including the default mode network, sensorimotor network, cingulo-opercular network, occipital network, and frontoparietal network. These results suggest that both normal aging and the development of age-related hearing loss have a negative effect on advanced auditory processing capabilities and that hearing loss accelerates the decline in speech comprehension, especially in speech-competition situations. Older adults with normal hearing may show increased compensatory recruitment of attentional resources, represented by a top-down active listening mechanism, while those with age-related hearing loss exhibit decompensation of network connections involving multisensory integration.
Funding: the Humanities and Social Sciences Research Youth Fund Project of the Ministry of Education, 2017, "Structure Event Schema and Verb Potential Semantic Two-Level Interaction Research" (Approval No. 17YJC740130).
Abstract: Aristotle's famous theory of rhetoric (logos, pathos, ethos) is a cornerstone of public speech. This paper focuses on ethos and its implementation in the rhetorical devices of public speech. It analyzes Michelle Obama's farewell speech at the White House for its rhetorical applications in lexis, syntax, phonetics, and gestures. From the paper, readers can gain experience in appreciating public speeches, and teachers can find more effective approaches to teaching the composition of public speeches.
Funding: Supported by the Department of Electrical Engineering at National Chin-Yi University of Technology; the authors thank National Chin-Yi University of Technology and Takming University of Science and Technology, Taiwan, for supporting this research.
Abstract: In a speech recognition system, the acoustic model is an important underlying model, and its accuracy directly affects the performance of the entire system. This paper introduces the construction and training process of the acoustic model in detail, studies the Connectionist Temporal Classification (CTC) algorithm, which plays an important role in the end-to-end framework, and establishes a convolutional neural network (CNN) combined with a CTC acoustic model to improve the accuracy of speech recognition. This study uses a sound sensor, the ReSpeaker Mic Array v2.0.1, to convert the collected speech signals into text or corresponding speech signals, in order to improve communication and reduce noise and hardware interference. The baseline acoustic model in this study faces challenges such as long training time, high error rate, and a certain degree of overfitting. The model is trained through continuous design and improvement of the relevant parameters of the acoustic model, and the best-performing model is finally selected according to the evaluation indices, reducing the error rate to about 18% and thus improving the accuracy. Finally, comparative verification was carried out on the selection of acoustic feature parameters, the selection of modeling units, and the speaker's speech rate, which further verified the excellent performance of the CTCCNN_5+BN+Residual model structure. For the experiments, to train and verify the CTC-CNN baseline acoustic model, this study uses the THCHS-30 and ST-CMDS speech datasets as training data; after 54 epochs of training, the word error rate on the acoustic model's training set is 31%, and the word error rate on the test set is stable at about 43%. The experiments also consider surrounding environmental noise: at a noise level of 80-90 dB, the accuracy is 88.18%, the worst performance among all levels, while at 40-60 dB the accuracy is as high as 97.33% due to less noise pollution.
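The decoding side of a CTC acoustic model like the one above is commonly illustrated by greedy best-path decoding: take the per-frame argmax labels, merge consecutive repeats, then drop blanks. The sketch below shows that collapse rule on a toy frame sequence (the vocabulary and frames are illustrative, not taken from the paper's THCHS-30/ST-CMDS setup).

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best-path label sequence CTC-style:
    merge repeated labels, then drop blank symbols."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out

# Per-frame argmax labels over a toy vocabulary
# {0: blank, 1: 'n', 2: 'i', 3: 'h', 4: 'a', 5: 'o'}.
frames = [1, 1, 0, 2, 2, 0, 3, 0, 4, 4, 0, 5]
ids = ctc_greedy_decode(frames)
vocab = {1: "n", 2: "i", 3: "h", 4: "a", 5: "o"}
text = "".join(vocab[i] for i in ids)
print(text)  # nihao
```

The blank symbol is what lets CTC emit genuinely repeated characters: a blank between two identical labels prevents them from being merged, which is why the collapse rule checks against the previous frame rather than the previous emitted label.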
Abstract: Purpose: Our study aims to compare speech understanding in noise and spectral-temporal resolution skills with regard to the degree of hearing loss, age, hearing aid use experience, and gender of hearing aid users. Methods: The study included sixty-eight hearing aid users aged between 40 and 70 years, with bilateral mild or moderate symmetrical sensorineural hearing loss. The random gap detection test, the Turkish matrix test, and the spectral-temporally modulated ripple test were administered to the participants with their bilateral hearing aids. The results were compared statistically according to the different variables, and the correlations were examined. Results: No statistically significant differences were observed in speech-in-noise recognition or spectral-temporal resolution between older and younger adult hearing aid users (p>0.05). Nor was a statistically significant difference found among test outcomes with regard to different degrees of hearing loss (p>0.05). Higher temporal-resolution performance was obtained in male participants and in participants with more hearing aid use experience (p<0.05). Significant correlations were found between the results of the speech-in-noise recognition, temporal resolution, and spectral resolution tests performed with hearing aids (p<0.05). Conclusion: Our findings emphasize the importance of regular hearing aid use and show that some auditory skills can be improved with hearing aids. The observed correlations among the speech-in-noise recognition, temporal resolution, and spectral resolution tests reveal that these skills should be evaluated as a whole to maximize the patient's communication abilities.