To address the contradiction between the explosive growth of wireless data and the limited spectrum resources, semantic communication has emerged as a promising communication paradigm. In this paper, we thus design a speech semantic coded communication system, referred to as Deep-STS (i.e., Deep-learning based Speech To Speech), for low-bandwidth speech communication. Specifically, we first deeply compress the speech data by extracting the textual information from the speech with a conformer encoder and a connectionist temporal classification (CTC) decoder at the transmitter side of the Deep-STS system. To facilitate the final recovery of the speech timbre, we also extract a short-term timbre feature, from only the first 2 s of the speech signal, using a long short-term memory (LSTM) network. Then, Reed-Solomon coding and a hybrid automatic repeat request (HARQ) protocol are applied to improve the reliability of transmitting the extracted text and timbre feature over the wireless channel. Third, we reconstruct the speech signal with a mel-spectrogram prediction network and a vocoder once the extracted text is received along with the timbre feature at the receiver of the Deep-STS system. Finally, we develop a demo system based on USRP and GNU Radio for the performance evaluation of Deep-STS. Numerical results show that the accuracy of text extraction approaches 95%, and the mel cepstral distortion between the recovered speech signal and the original one in the spectral domain is less than 10. Furthermore, the experimental results show that the proposed Deep-STS system reduces the total delay of speech communication by 85% on average compared to G.723 coding at a transmission rate of 5.4 kbps. More importantly, the coding rate of the proposed Deep-STS system is extremely low: only 0.2 kbps for continuous speech communication. It is worth noting that Deep-STS, with its low coding rate, can support low/zero-power speech communication, unveiling a new era in ultra-efficient coded communications.
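As a concrete illustration of the channel-protection step, the sketch below wraps a serialized text-plus-timbre payload in a Reed-Solomon code using the `reedsolo` Python package; a decode failure would trigger a HARQ retransmission. The parity length, serialization format, and function names are illustrative assumptions, not details of the Deep-STS implementation.

```python
# Minimal sketch of the transmit-side payload protection described above:
# the extracted text and timbre feature are serialized and wrapped in a
# Reed-Solomon code so the receiver can correct symbol errors before
# falling back to a HARQ retransmission. Parameters are illustrative.
from typing import Optional
from reedsolo import RSCodec, ReedSolomonError

NSYM = 16                 # parity bytes per block -> corrects up to 8 byte errors
rsc = RSCodec(NSYM)

def protect_payload(text: str, timbre: bytes) -> bytes:
    """Serialize text + timbre feature and append Reed-Solomon parity."""
    body = len(text).to_bytes(2, "big") + text.encode("utf-8") + timbre
    return bytes(rsc.encode(body))

def try_decode(frame: bytes) -> Optional[bytes]:
    """Return the corrected payload, or None to signal a HARQ NACK."""
    try:
        return bytes(rsc.decode(frame)[0])   # recent reedsolo returns (msg, msg+ecc, errata)
    except ReedSolomonError:                 # too many symbol errors: request retransmission
        return None
```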
This paper is a continuation of recent work by Guo-Xiang-Zheng [10]. We deduce the sharp Morrey regularity theory for weak solutions to the fourth order nonhomogeneous Lamm-Rivière equation Δ²u = Δ(V·∇u) + div(w∇u) + (∇ω + F)·∇u + f in B⁴, under the smallest regularity assumptions on V, w, ω, F, where f belongs to some Morrey spaces. This work was motivated by many geometrical problems such as the flow of biharmonic mappings. Our results deepen the Lp type regularity theory of [10], and generalize the work of Du, Kang and Wang [4] on a second order problem to our fourth order problem.
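For readability, the equation can be typeset as follows (a LaTeX rendering of the inline formula above; B⁴ denotes the unit ball in ℝ⁴, the usual setting for this equation):

```latex
% LaTeX rendering of the fourth order nonhomogeneous Lamm-Riviere equation
% discussed above; B^4 is the unit ball in R^4.
\[
  \Delta^{2} u
    = \Delta\bigl(V \cdot \nabla u\bigr)
    + \operatorname{div}\bigl(w\,\nabla u\bigr)
    + \bigl(\nabla \omega + F\bigr)\cdot \nabla u
    + f
  \qquad \text{in } B^{4}.
\]
```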
Your Excellency President Susilo Bambang Yudhoyono, Dear Chairman, ladies and gentlemen, I am very glad to have this opportunity to attend the fourth Bali Democracy Forum. Today, state leaders and experts from different countries gather together in Bali, the beautiful "Flower Island," to share experience and exchange views on developing democracy, and to explore ways to promote the democratic development of our own countries.
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of their relevant capabilities is contextual learning: the ability to take instructions in natural language, or task demonstrations, and generate the expected outputs for test instances without any additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under settings ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the encoder-decoder model Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, the evaluated models are confirmed to perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language, while the fine-tuned approach tends to produce many false positives.
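A minimal sketch of the few-shot setup this abstract describes: labeled demonstrations are placed in the prompt and the model continues with a label for the test instance. The model checkpoint, demonstrations, and label format below are illustrative assumptions, not the paper's actual prompts.

```python
# Few-shot text classification via prompting, in the spirit described above.
# The checkpoint and demonstrations are illustrative, not the paper's setup.
from transformers import pipeline

generator = pipeline("text-generation", model="HuggingFaceH4/zephyr-7b-beta")

FEW_SHOT = (
    "Classify each message as 'sexist' or 'not sexist'.\n"
    "Message: Women belong in the kitchen. -> sexist\n"
    "Message: The match starts at nine tonight. -> not sexist\n"
)

def classify(text: str) -> str:
    prompt = FEW_SHOT + f"Message: {text} ->"
    out = generator(prompt, max_new_tokens=3, do_sample=False)[0]["generated_text"]
    return out[len(prompt):].strip().split()[0]  # first generated word as the label
```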
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines the variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in terms of inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
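The filter stage can be pictured with a short sketch: features are scored by information gain (approximated below with scikit-learn's mutual information) and by Fisher score, and the two rankings are combined. The rank-averaging rule and function names are illustrative assumptions; the paper's multi-objective ranking is more elaborate.

```python
# Filter-stage sketch: rank features by information gain and Fisher score.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-feature Fisher score: between-class variance / within-class variance."""
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def combined_rank(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Average the two rank positions; lower combined rank = more important."""
    ig = mutual_info_classif(X, y)                 # information-gain proxy
    fs = fisher_score(X, y)
    rank_ig = np.argsort(np.argsort(-ig))          # 0 = best by info gain
    rank_fs = np.argsort(np.argsort(-fs))          # 0 = best by Fisher score
    return (rank_ig + rank_fs) / 2.0
```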
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of the language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During the training process of each MLP, the WOA is employed to explore and determine the optimal set of weights, and the PSO algorithm then adjusts the weights as a fine-tuning step to optimize the performance of the MLP. In this approach, two separate MLP models are employed: one MLP is dedicated to predicting degrees of truth membership, while the other focuses on predicting degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in the predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
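A minimal sketch of the dual-MLP neutrosophic idea: one network estimates the truth membership, a second the falsity membership, and, following the text above, their difference is read as the indeterminacy. Plain scikit-learn training stands in here for the WOA/PSO weight optimization, and all names are illustrative.

```python
# Two MLPs estimate truth and falsity memberships; their gap is the
# indeterminacy, as described above. Default gradient training replaces
# the paper's WOA exploration + PSO fine-tuning of the weights.
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_neutrosophic(X, t_membership, f_membership):
    mlp_t = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, t_membership)
    mlp_f = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, f_membership)
    return mlp_t, mlp_f

def neutrosophic_predict(mlp_t, mlp_f, X):
    t = np.clip(mlp_t.predict(X), 0.0, 1.0)        # truth membership T(x)
    f = np.clip(mlp_f.predict(X), 0.0, 1.0)        # falsity membership F(x)
    return t, f, np.abs(t - f)                     # difference taken as indeterminacy
```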
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle that issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. Preprocessing of the input speech is done with a 2D Convolutional Neural Network (CNN) that applies convolutional operations to spectrograms, as spectrograms afford a visual representation of the way the frequency content of the audio signal changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then five auditory features, MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz, are extracted from the spectrogram sequentially. The aim of feature selection is to retain only the dominant features while excluding the irrelevant ones; in this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed to select among the multiple audio cues. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM at capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and The Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
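The five-feature extraction step can be sketched with librosa as below; mean-pooling the frame-level features into one fixed-length utterance vector is an illustrative choice for this sketch, not necessarily the paper's exact procedure.

```python
# Extract the five auditory features named above (MFCCs, Chroma,
# Mel-Spectrogram, Contrast, Tonnetz) with librosa, then mean-pool each
# over time and concatenate into a single utterance-level vector.
import numpy as np
import librosa

def extract_features(path: str) -> np.ndarray:
    y, sr = librosa.load(path, sr=None)
    feats = [
        librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40),
        librosa.feature.chroma_stft(y=y, sr=sr),
        librosa.feature.melspectrogram(y=y, sr=sr),
        librosa.feature.spectral_contrast(y=y, sr=sr),
        librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr),
    ]
    # mean over frames, then concatenate: MFCC | Chroma | Mel | Contrast | Tonnetz
    return np.concatenate([f.mean(axis=1) for f in feats])
```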
In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between the speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and to strengthen its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and to boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
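A minimal sketch of the dual-tower encoding with cross-modal interaction: separate towers encode speech frames and text tokens, and a cross-attention layer lets acoustic frames attend to text tokens to align semantics during encoding. Dimensions, depths, the vocabulary size, and the residual combination are assumptions for illustration, not the paper's architecture.

```python
# Dual-tower speech-text encoder with cross-modal attention (illustrative).
import torch
import torch.nn as nn

class DualTowerEncoder(nn.Module):
    def __init__(self, d_model=256, n_heads=4, vocab=5000):
        super().__init__()
        speech_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.speech_tower = nn.TransformerEncoder(speech_layer, num_layers=4)
        self.text_embed = nn.Embedding(vocab, d_model)
        text_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.text_tower = nn.TransformerEncoder(text_layer, num_layers=2)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, speech_feats, text_ids):
        # speech_feats: (B, T_speech, d_model) acoustic frames already projected
        s = self.speech_tower(speech_feats)
        t = self.text_tower(self.text_embed(text_ids))   # (B, T_text, d_model)
        # speech queries attend to text keys/values: cross-modal semantic alignment
        aligned, _ = self.cross_attn(query=s, key=t, value=t)
        return s + aligned                               # semantics-aware acoustic repr.
```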
Reporting is essential in language use, and includes the re-expression of other people's or one's own words, opinions, psychological activities, and so on. Grasping the methods for translating reported speech in German academic papers is very important for improving the accuracy of academic paper translation. This study takes the translation of "Internationalization of German Universities" (Die Internationalisierung der deutschen Hochschulen), an academic paper on higher education, as an example to explore methods for translating reported speech in German academic papers. It is found that the use of word order conversion, part-of-speech conversion and split translation can make the translation more accurate and fluent. This paper helps to grasp the rules and characteristics of the translation of reported speech in German academic papers, and also provides a reference for improving the quality of German-Chinese translation.
In recent years, the usage of social networking sites has considerably increased in the Arab world. It has empowered individuals to express their opinions, especially in politics. Furthermore, various organizations that operate in the Arab countries have embraced social media in their day-to-day business activities at different scales. This is attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is complicated to process because there are nearly 10,000 roots and more than 900 patterns that act as the basis for verbs and nouns. Hate speech over online social networking sites has turned out to be a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as either hate speech or normal. To attain this, the CEHOML-HSD model follows several sub-processes. At the initial stage, the CEHOML-HSD model undergoes data pre-processing with the help of a TF-IDF vectorizer. Secondly, a Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts written in Arabic. Lastly, the CEHO approach is employed to fine-tune the parameters of the SVM. This CEHO approach is developed by combining chaotic functions with the classical EHO algorithm, and the design of the CEHO algorithm for parameter tuning constitutes the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study outcomes established its supremacy over other approaches.
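A minimal sketch of the TF-IDF-plus-SVM detection pipeline follows. Note that the chaotic EHO tuner is replaced here by a plain grid search, purely to mark where SVM hyperparameter tuning plugs in; it is not the paper's CEHO method, and the parameter grid is an illustrative assumption.

```python
# TF-IDF vectorization + SVM classification, with a placeholder tuner
# standing in for the paper's chaotic EHO parameter optimization.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

pipe = Pipeline([
    ("tfidf", TfidfVectorizer()),      # Arabic text -> TF-IDF features
    ("svm", SVC(kernel="rbf")),        # hate speech vs. normal
])
search = GridSearchCV(
    pipe,
    {"svm__C": [0.1, 1, 10], "svm__gamma": ["scale", 0.01]},
    cv=5,
)
# search.fit(texts, labels)   # texts: list[str]; labels: 0 = normal, 1 = hate speech
```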
On January 8th, the China Chamber of Commerce for Import and Export of Textiles released data showing that in the fourth quarter, China's textile and clothing exports gradually stabilised. The proportion of intermediate goods in textile and clothing exports continued to rise, providing new impetus for the substantial expansion of exports to traditional markets and for international supply chain cooperation.
This article explores how the Fourth Industrial Revolution (4IR) can transform and enhance sustainable health systems in Africa, supporting the continent's growth and development goals. The 4IR integrates digital technologies such as Artificial Intelligence (AI), robotics, automation, big data, machine learning, the Internet of Things (IoT), algorithms, and data-driven innovations, offering unique opportunities to address persistent healthcare challenges in Africa. This paper, however, focuses specifically on AI systems within the healthcare sector. It investigates how to address AI-related concerns, including privacy rights violations, data exploitation, and the lack of education and training needed to adopt these advanced technologies in hospitals. The study aims to find a balance between protecting patients' data and privacy through cybersecurity and effectively utilizing AI systems in healthcare. This balance can be achieved by adopting a robust ethical and legal framework that establishes precise data and privacy protection policies, ensuring that AI systems are used responsibly and in compliance with established standards.
The teaching of English public speaking in universities aims to enhance oral communication ability, improve English communication skills, and expand English knowledge, and it occupies a core position in university English teaching. Taking the theory of second language acquisition as its background, this article analyzes the important role and value of this theory in teaching English public speaking in universities, and explores how to apply the theory in such teaching. It aims to strengthen the cultivation of skilled English talents and to provide a brief reference for improving the teaching of English public speaking in universities.
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English and organized the work into three main parts: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using mel-frequency cepstral coefficients, and classification was performed using a one-dimensional convolutional neural network. Visual feature extraction uses Dlib, and visual speech is then classified using a long short-term memory (LSTM) recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with training and testing accuracies of 93.67% and 91.53%, respectively, after 200 epochs. The training accuracy for visual speech recognition on the Indian English dataset was 77.48% and the test accuracy was 76.19% after 60 epochs. After integration, the training and testing accuracies of audiovisual speech recognition on the Indian English dataset were 94.67% and 91.75%, respectively.
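The audio branch can be sketched as an MFCC-fed one-dimensional CNN, as below; the shapes, layer sizes, and class count are illustrative assumptions, not the study's exact architecture.

```python
# Audio branch sketch: MFCC frames classified by a 1D CNN over time.
import torch
import torch.nn as nn

class AudioBranch(nn.Module):
    def __init__(self, n_mfcc=40, n_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mfcc, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),           # pool over the time frames
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, mfcc):                   # mfcc: (batch, n_mfcc, frames)
        return self.head(self.net(mfcc).squeeze(-1))
```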
This paper is concerned with a fourth-order three-point boundary value problem; we discuss the existence of positive solutions to this problem by applying fixed point theory in cones together with an iterative technique.
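The paper's exact problem statement and conditions did not survive in this record. Purely to indicate the class of problems meant, a typical fourth-order three-point boundary value problem has the form below; it is an illustrative example, not the problem studied in the paper.

```latex
% A representative fourth-order three-point boundary value problem
% (illustrative only; the paper's actual statement is missing here):
\[
  u^{(4)}(t) = f\bigl(t, u(t)\bigr), \qquad t \in (0,1),
\]
\[
  u(0) = u'(0) = u''(0) = 0, \qquad u(1) = \alpha\, u(\eta),
\]
% with 0 < \eta < 1 and suitable restrictions on \alpha and f, so that
% positive solutions can be produced by fixed point theory in cones.
```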
Objective: To compare the short-term effects of the Shokz rear-hung bone-conduction hearing aid as a hearing intervention for patients with different types of hearing loss, and to explore its prospects for clinical application. Methods: Fifty-five patients with hearing loss (aged 18-82 years; conductive hearing loss in 9 cases, sensorineural hearing loss in 15 cases, mixed hearing loss in 31 cases; bone-conduction pure-tone thresholds at 0.5, 1, 2 and 4 kHz ≤ 60 dB HL in both ears) wore the Shokz rear-hung bone-conduction hearing aid. Sound-field overall hearing thresholds, monosyllabic word recognition scores, and sentence recognition thresholds in quiet were measured before fitting and on day 14 ± 2 of wear, and the results before and after fitting were compared. On day 14 ± 2, the IOI-HA questionnaire was used to assess the effectiveness of hearing aid use. Results: After wearing the rear-hung bone-conduction hearing aid, the patients' mean sound-field threshold across the four frequencies improved significantly, from 56.5 ± 8.2 dB HL before fitting to 39.3 ± 4.9 dB HL (P < 0.001). The monosyllabic word recognition score (presentation level: the patient's unaided disyllabic speech recognition threshold minus 5 dB) rose significantly from 29.8% ± 11.4% unaided to 72.4% ± 14.4% on day 14 ± 2 (P < 0.001). The sentence recognition threshold fell from 48.6 ± 9.7 dB HL before fitting to 34.3 ± 5.6 dB HL (P < 0.001). The mean total IOI-HA score at day 14 ± 2 was 29.0 ± 3.8. Conclusion: The rear-hung bone-conduction hearing aid can significantly improve hearing and speech recognition in patients with conductive hearing loss, and in patients with mixed or sensorineural hearing loss whose bone-conduction pure-tone thresholds at 0.5-4 kHz do not exceed 60 dB HL.
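The pre/post threshold comparison reported above amounts to a paired analysis over 55 patients. The sketch below reproduces the shape of that analysis as a paired t-test on simulated data matching the reported means and standard deviations; the test choice and the per-patient gain spread are assumptions, and the raw study data are not available here.

```python
# Simulated paired pre/post comparison of sound-field thresholds.
# Data are SIMULATED to match the reported summary statistics only.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
unaided = rng.normal(56.5, 8.2, size=55)              # dB HL, as reported pre-fitting
improvement = rng.normal(56.5 - 39.3, 5.0, size=55)   # assumed per-patient gain spread
aided = unaided - improvement

t_stat, p_value = ttest_rel(unaided, aided)           # paired t-test (assumption)
print(f"unaided {unaided.mean():.1f} dB HL -> aided {aided.mean():.1f} dB HL, p = {p_value:.1e}")
```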