To address the contradiction between the explosive growth of wireless data and limited spectrum resources, semantic communication has emerged as a promising communication paradigm. In this paper, we design a speech semantic coded communication system, referred to as Deep-STS (Deep-learning based Speech To Speech), for low-bandwidth speech communication. Specifically, we first deeply compress the speech data by extracting the textual information from the speech with a conformer encoder and a connectionist temporal classification decoder at the transmitter side of the Deep-STS system. To facilitate the final recovery of the speech timbre, we also extract a short-term timbre feature of the speech signal, using only the first 2 s of speech, with a long short-term memory network. Then, Reed-Solomon coding and a hybrid automatic repeat request protocol are applied to improve the reliability of transmitting the extracted text and timbre feature over the wireless channel. Third, we reconstruct the speech signal with a mel spectrogram prediction network and a vocoder once the extracted text and the timbre feature are received at the receiver of the Deep-STS system. Finally, we develop a demo system based on USRP and GNU Radio for the performance evaluation of Deep-STS. Numerical results show that the accuracy of text extraction approaches 95%, and the mel cepstral distortion between the recovered speech signal and the original one in the spectral domain is less than 10. Furthermore, the experimental results show that the proposed Deep-STS system reduces the total delay of speech communication by 85% on average compared to G.723 coding at a transmission rate of 5.4 kbps. More importantly, the coding rate of the proposed Deep-STS system is extremely low: only 0.2 kbps for continuous speech communication. It is worth noting that Deep-STS, with its lower coding rate, can support low/zero-power speech communication, unveiling a new era in ultra-efficient coded communication. (Received: Jan. 17, 2024; Revised: Jun. 12, 2024; Editor: Niu Kai.)
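The 0.2 kbps figure reported above is plausible from first principles: when only text is transmitted, the bit rate is set by the speaking rate rather than by waveform fidelity. The sketch below is a back-of-envelope check; the parameter values (speaking rate, characters per word, bits per character) are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope estimate of a text-only speech coding rate.
# All parameter values below are assumptions for illustration.
words_per_min = 150      # typical conversational speaking rate
chars_per_word = 6       # average word length incl. separator
bits_per_char = 8        # plain 8-bit characters, no compression

bits_per_second = words_per_min * chars_per_word * bits_per_char / 60
print(f"text-only rate: {bits_per_second / 1000:.2f} kbps")   # 0.12 kbps

# Relative saving against a conventional parametric codec:
g723_rate_bps = 5400
saving = 1 - bits_per_second / g723_rate_bps
print(f"rate reduction vs. G.723: {saving:.1%}")
```

Under these assumptions the text stream alone costs about 0.12 kbps, consistent in order of magnitude with the 0.2 kbps reported once the timbre feature and channel-coding overhead are included.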
To enhance speech quality degraded by environmental noise, an algorithm was proposed to reduce the noise and reinforce the speech. The minima controlled recursive averaging (MCRA) algorithm was used to estimate the noise spectrum, and the partial masking effect, one of the psychoacoustic properties, was introduced to reinforce the speech. The performance was evaluated by comparing the PESQ (perceptual evaluation of speech quality) and segSNR (segmental signal-to-noise ratio) of the proposed algorithm with those of the conventional algorithm. As a result, the average PESQ of the proposed algorithm was higher than that of the conventional noise reduction algorithm, and its segSNR was higher by 3.2 dB on average.
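The MCRA idea mentioned above can be sketched for a single frequency bin: smooth the observed power, track its running minimum, and update the noise estimate only when the smoothed power is close to that minimum (i.e., in speech pauses). The smoothing factors and threshold below are illustrative, not the algorithm's published settings.

```python
# Minimal sketch of minima-controlled recursive averaging (MCRA)-style
# noise-power tracking for one frequency bin. Parameter values are
# illustrative assumptions, not the original algorithm's settings.
ALPHA_S = 0.8    # smoothing factor for the power estimate
ALPHA_D = 0.95   # smoothing factor for the noise estimate
DELTA = 2.0      # smoothed-power / minimum ratio that flags speech

def mcra_track(power_frames):
    """Return per-frame noise-power estimates for one frequency bin."""
    smoothed = power_frames[0]
    minimum = smoothed
    noise = smoothed
    estimates = []
    for p in power_frames:
        smoothed = ALPHA_S * smoothed + (1 - ALPHA_S) * p
        minimum = min(minimum, smoothed)        # running minimum tracking
        speech_present = smoothed / minimum > DELTA
        if not speech_present:                  # update noise only in pauses
            noise = ALPHA_D * noise + (1 - ALPHA_D) * p
        estimates.append(noise)
    return estimates

# Noise floor around 1.0 with a loud speech burst in the middle:
frames = [1.0] * 10 + [50.0] * 5 + [1.0] * 10
est = mcra_track(frames)
```

Because the noise estimate is frozen while speech dominates, the burst does not inflate it, which is exactly the property that makes minima-controlled averaging robust.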
Speech recognition, or speech to text, includes capturing and digitizing the sound waves, transforming them into basic linguistic units or phonemes, constructing words from phonemes, and contextually analyzing the words to ensure the correct spelling of words that sound the same. Approach: We study the feasibility of designing a software system using neural networks, one of the techniques of artificial intelligence, such that the system is able to distinguish the sound signals of irregular users. Fixed weights are first trained on those forms, and the system then outputs a match for each of these formats at high speed. The proposed neural network study is based on solutions to speech recognition tasks, detecting signals using angular modulation, and detection of modulation techniques.
Structural and statistical characteristics of signals can improve the performance of Compressed Sensing (CS). Two features of the Discrete Cosine Transform (DCT) coefficients of voiced speech signals are discussed in this paper. The first is the block sparsity of the DCT coefficients of voiced speech, formulated from two different aspects: the distribution of the DCT coefficients of voiced speech, and a comparison of reconstruction performance between the mixed program and Basis Pursuit (BP). This block sparsity means that block-sparse CS algorithms can be used to improve the recovery performance of speech signals, which is confirmed by simulation results for the improved mixed program. The second is the well-known concentration of the large DCT coefficients of voiced speech at low frequencies. In line with this feature, a special Gaussian and Partial Identity Joint (GPIJ) matrix is constructed as the sensing matrix for voiced speech signals. Simulation results show that the GPIJ matrix outperforms the classical Gaussian matrix for speech signals of male and female adults.
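A GPIJ-style sensing matrix can be sketched as a partial identity block that measures the low-frequency DCT coefficients directly, stacked on an i.i.d. Gaussian block for the rest. Normalization and the exact row split in the paper may differ; treat this as an illustrative construction, not the authors' exact one.

```python
import random

def gpij_matrix(m, n, k):
    """Sketch of a Gaussian and Partial Identity Joint (GPIJ) sensing
    matrix: the first k rows pick off the k lowest-frequency DCT
    coefficients directly (partial identity); the remaining m - k rows
    are i.i.d. Gaussian. Column normalization is omitted for clarity.
    """
    rows = []
    for i in range(k):                      # partial identity block
        row = [0.0] * n
        row[i] = 1.0
        rows.append(row)
    for _ in range(m - k):                  # Gaussian block
        rows.append([random.gauss(0.0, 1.0 / m ** 0.5) for _ in range(n)])
    return rows

# 64 measurements of a length-256 DCT vector, 16 low-frequency
# coefficients measured exactly:
A = gpij_matrix(m=64, n=256, k=16)
```

The identity rows guarantee that the perceptually dominant low-frequency coefficients survive sensing exactly, while the Gaussian rows capture the residual energy in compressed form.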
Day by day, biometric-based systems play a growing role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions from voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals, and it also keeps track of places such as households, hospitals, and remote areas. The proposed approach is able to detect seven emotions: worry, surprise, neutral, sadness, happiness, hate, and love. The key element of the implementation of speech emotion recognition is voice processing; once the emotion is recognized, the machine interface automatically triggers the buzzer and LED actions. The proposed system is trained and tested on several benchmark datasets, i.e., the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Acoustic-Phonetic Continuous Speech Corpus (TIMIT), and the Emotional Speech Database (Emo-DB), and evaluated on several parameters, i.e., accuracy, error rate, and time. Compared with existing technologies, the proposed algorithm gives a better error rate and less time: error rate and time are decreased by 19.79% and 5.13 s for the RAVDESS dataset, 15.77% and 0.01 s for the Emo-DB dataset, and 14.88% and 3.62 s for the TIMIT dataset. The proposed model shows better accuracy, 81.02% for the RAVDESS dataset, 84.23% for the TIMIT dataset, and 85.12% for the Emo-DB dataset, compared to the Gaussian Mixture Model (GMM) and Support Vector Machine (SVM) models.
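The recognition-to-peripheral step described above is essentially a lookup from the recognized emotion label to an alert action. The sketch below uses the seven emotion classes listed in the abstract, but the concrete buzzer/LED assignments are illustrative assumptions, not the paper's design.

```python
# Hypothetical mapping from a recognized emotion label to the buzzer/LED
# alert actions. The seven labels come from the abstract; which emotions
# trigger which peripheral is an assumption for illustration.
ALERT_ACTIONS = {
    "worry":     {"buzzer": True,  "led": "red"},
    "sadness":   {"buzzer": True,  "led": "blue"},
    "hate":      {"buzzer": True,  "led": "red"},
    "surprise":  {"buzzer": False, "led": "yellow"},
    "neutral":   {"buzzer": False, "led": "green"},
    "happiness": {"buzzer": False, "led": "green"},
    "love":      {"buzzer": False, "led": "green"},
}

def dispatch_alert(emotion):
    """Return the peripheral actions for a recognized emotion label."""
    return ALERT_ACTIONS.get(emotion, {"buzzer": False, "led": "off"})
```

Keeping the policy in a table rather than in code makes it trivial to retune which emotions raise an audible alarm for a given deployment (household vs. hospital).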
Parts-of-speech conversion refers to transforming certain words of the source language into words of another category in the target language, and is one of the common methods and techniques used in translation between English and Chinese. This paper starts from a relatively significant difference between English and Chinese, the tendency toward static words in English in contrast to dynamic words in Chinese, to explore the theoretical basis of parts-of-speech transference in English-Chinese translation. Combining this with a large number of examples, the author puts forward some skills for the transformation of parts of speech in English-Chinese translation to guide translation practice. The study found that the theoretical basis of the conversion between English and Chinese mostly includes: 1) there is boundary ambiguity between lexical categories; 2) the rigorous English SV/SVO structure leads to expressing dynamic meaning by means of other lexical categories; 3) the development of social culture not only makes new words increase rapidly but also gives many existing words new meanings; 4) it is acknowledged in lexical morphology that a large number of English words come from derivation, and derivation can make the word class of the derivative either the same as or different from that of the original word. The dynamic and specific features of Chinese lead to greater use of verbs in language use. Thus, in the process of translation, appropriate parts-of-speech conversion can make the translation more in line with the target language's own habits of expression.
Neural Machine Translation (NMT) is an important technology for translation applications. However, there is plenty of room for the improvement of NMT. In the process of NMT, traditional word vectors cannot distinguish the same word under different parts of speech (POS). To alleviate this problem, this paper proposes a new word-vector training method based on POS features. It can efficiently improve the quality of translation by adding POS features to the training process of word vectors. We conducted extensive experiments to evaluate our method. The experimental results show that the proposed method is beneficial to improving the quality of translation from English into Chinese.
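The core preprocessing idea above can be sketched in a few lines: give homographs separate embeddings by appending the POS tag to each token before word-vector training, so that "book" the verb and "book" the noun become distinct vocabulary items. The tagger output here is supplied by hand; a real pipeline would run a trained POS tagger first, and the exact tag format in the paper may differ.

```python
# Sketch: append POS tags to tokens so that homographs get separate
# word vectors during training ("book_VERB" vs. "book_NOUN").
def pos_augment(tagged_tokens):
    """Turn (token, POS) pairs into POS-suffixed vocabulary items."""
    return [f"{tok}_{tag}" for tok, tag in tagged_tokens]

sentence = [("I", "PRON"), ("book", "VERB"), ("a", "DET"), ("room", "NOUN")]
corpus_line = pos_augment(sentence)
print(corpus_line)  # ['I_PRON', 'book_VERB', 'a_DET', 'room_NOUN']
```

The augmented corpus is then fed to any standard embedding trainer; the only change is the enlarged vocabulary in which each surface form is split by POS.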
Co-articulation is one of the main reasons that speech recognition is difficult. However, traditional Hidden Markov Models (HMM) cannot model co-articulation, because they depend on the first-order assumption. In this paper, to model co-articulation, an HMM improving on the traditional first-order HMM is proposed on the basis of the authors' previous works (1997, 1998), and a method is given in which this HMM is used in continuous speech recognition by means of multilayer perceptrons (MLP), i.e., a hybrid HMM/MLP method with a triple MLP structure. The experimental results show that this new hybrid HMM/MLP method decreases the error rate in comparison with the authors' previous works.
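One standard way to relax the first-order assumption, shown here as an illustration rather than the authors' actual construction, is to fold a second-order HMM into an equivalent first-order HMM over pairs of states, so that the transition from pair (a, b) to pair (b, c) carries the second-order probability P(c | a, b):

```python
from itertools import product

def second_to_first_order(states, trans2):
    """Fold a second-order transition table P(s_t | s_{t-2}, s_{t-1})
    into a first-order one over paired states (s_{t-1}, s_t).
    trans2 maps (a, b, c) -> P(c | a, b)."""
    pair_states = list(product(states, states))
    trans1 = {}
    for (a, b) in pair_states:
        for c in states:
            p = trans2.get((a, b, c), 0.0)
            if p > 0.0:
                # pair (a, b) can only move to a pair starting with b
                trans1[((a, b), (b, c))] = p
    return pair_states, trans1

states = ["x", "y"]
trans2 = {("x", "x", "y"): 1.0, ("x", "y", "x"): 0.7, ("x", "y", "y"): 0.3,
          ("y", "x", "x"): 0.4, ("y", "x", "y"): 0.6, ("y", "y", "x"): 1.0}
pairs, t1 = second_to_first_order(states, trans2)
```

The cost is a quadratic blow-up in the state space, which is why such models are usually combined with discriminative scorers (here, the MLPs) rather than trained as plain HMMs.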
Automatic speech recognition, often incorrectly called voice recognition, is a computer-based software technique that analyzes audio signals captured by a microphone and translates them into machine-interpreted text. Speech processing is based on techniques that need a local CPU or cloud computing with an Internet link. An activation word starts the uplink ("OK Google", "Alexa", ...), and voice analysis is not usually suitable for an autonomous, CPU-limited system (16-bit microcontroller) with low energy. To achieve such a realization, this paper presents specific techniques and details an efficient voice-command method compatible with an embedded, low-power IoT device.
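A common low-power front end for such a device, sketched here as an assumption since the abstract does not detail the method, is frame-energy gating in integer arithmetic: the microcontroller stays in a cheap loop computing mean absolute amplitude per frame and wakes the full recognizer only after sustained energy. All thresholds below are illustrative.

```python
# Minimal sketch of an energy-gated wake trigger as a 16-bit MCU might
# run it: integer arithmetic only, no FFT. Thresholds are illustrative.
FRAME = 160            # samples per 10 ms frame at 16 kHz
THRESHOLD = 500        # mean absolute amplitude that counts as "voice"
MIN_ACTIVE_FRAMES = 3  # debounce: require sustained energy

def voice_triggered(samples):
    """Return True once sustained voice-level energy is observed."""
    active = 0
    for i in range(0, len(samples) - FRAME + 1, FRAME):
        frame = samples[i:i + FRAME]
        energy = sum(abs(s) for s in frame) // FRAME   # integer mean |x|
        active = active + 1 if energy > THRESHOLD else 0
        if active >= MIN_ACTIVE_FRAMES:
            return True        # wake the full recognizer only now
    return False

silence = [10, -12, 8, -9] * 400                 # low-energy background
speech = silence[:800] + [4000, -3900] * 400     # loud sustained segment
```

Gating this way keeps the expensive recognition path powered down for the vast majority of frames, which is the whole point on a battery-powered 16-bit target.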
BACKGROUND: Anti-N-methyl-D-aspartate receptor (anti-NMDAR) encephalitis is a treatable but frequently misdiagnosed autoimmune disease. Speech dysfunction, one of the common manifestations of anti-NMDAR encephalitis, is usually reported as a symptom secondary to psychiatric symptoms or seizures rather than as the initial symptom in a paroxysmal form. We report a case of anti-NMDAR encephalitis with a paroxysmal speech disorder as a rare initial manifestation, and hope that it will contribute to the literature. CASE SUMMARY: A 39-year-old man with anti-NMDAR encephalitis initially presented with paroxysmal nonfluent aphasia and was misdiagnosed successively with a transient ischemic attack and cerebral infarction. The patient subsequently presented with seizures, but no abnormalities were found on brain magnetic resonance imaging or electroencephalogram. Cerebrospinal fluid (CSF) analysis revealed mild pleocytosis and increased protein levels. Anti-NMDAR antibodies in serum and CSF were detected for a conclusive diagnosis. After immunotherapy, the patient made a full recovery. CONCLUSION: This case suggests that a paroxysmal speech disorder may be the presenting symptom of anti-NMDAR encephalitis in a young patient.
Purpose: Based on the difficulties faced by the digitization of the Zhuang Brocade intangible cultural heritage in Guangxi, and on an analysis of the advantages of artificial intelligence art in the digitization and innovation of intangible cultural heritage, this study explores an application path for the digital inheritance and dissemination of Zhuang Brocade in Guangxi, relying on current theory and practice of artificial intelligence art, and provides a reference for the inheritance and dissemination of intangible cultural heritage through artificial intelligence art. Method: Through in-depth analysis of the types, characteristics, and cultural connotations of Zhuang Brocade patterns in Guangxi, machine learning is performed using StyleGAN's adversarial network, and digital artworks are generated by applying Clip-style. The feasibility of developing digital resources for the Zhuang Brocade intangible cultural heritage is explored through artistic practice, and an application process and implementation strategy for digital art innovation are proposed. Result: It is feasible to create NFT digital collections through artificial intelligence art to realize application scenarios for the digital inheritance, innovation, cross-regional dissemination, and even industrialization of Zhuang Brocade in Guangxi. Conclusion: Artificial intelligence art creation can provide new opportunities for the digital dissemination and inheritance of Zhuang Brocade culture while reflecting its cultural connotations and characteristics, and can ensure traceable development while protecting intellectual property rights. It realizes the continuation and revival of the value of Zhuang Brocade in Guangxi, and provides a reference for the inheritance and development of other intangible cultural heritage in the current context of rapid media updates and iteration.
The study in this paper belongs to the field of the grammar of metaphor. Six types of animal words were investigated in order to find the connection between their parts of speech and metaphorical relationships, using the British National Corpus in combination with Collins Dictionary. It is found that not all verbs formed via conversion from nominal animal words carry the conceptual features of the animal entities; in such cases, the resultant verbs do not form metaphors. Where the verbs carry over the conceptual features of the nominal animal words, metaphors occur. There are, of course, a few exceptions, like fish in "fish for", which forms a metaphor even though fish as a verb here does not carry over the conceptual features of the nominal fish. As to adjectives and adverbs derived from nominal animal words, they form metaphors in most cases by carrying over the conceptual features of the relevant animals. The study can be broadened to other entity nouns in order to find regularities about when their derivatives acquire metaphoricity.
Judging from the references, Chinese scholars have produced dozens of terminology translations of the British analytic philosopher Austin's locutionary act, illocutionary act, and perlocutionary act. There is no unified translation, and scholars have not paid enough attention to this. Thus teachers use the terms inconsistently in the classroom, resulting in doctoral students, graduates, and undergraduates writing dissertations in confusion and chaos. This paper reviews and sorts 25 frequently used terminology translations of the Speech Act Tripartite Model since 1955. Furthermore, some suggestions on its Chinese terminology translations are put forward as well.
This paper analyzes the father image of the hotel keeper in Hemingway's short story "Cat in the Rain" from the perspective of Speech Act Theory. Firstly, it presents the framework of Speech Act Theory. According to Austin, there are two types of sentences: performatives and constatives. Yule further presents that the action performed by producing an utterance consists of three related acts: the locutionary act, the illocutionary act, and the perlocutionary act. The paper then analyzes the speech acts of the husband, the hotel keeper, and the wife respectively. Through this analysis, the daughter image of the wife and the coldness of the husband reveal and accentuate the father image of the hotel keeper, who is considerate to his "daughter" and gives her selfless love. Furthermore, it opens a broader horizon for readers to understand and appreciate the story.
In this paper, we present an effective method for recognizing partial speech with the help of a Non-Audible Murmur (NAM) microphone, which is robust against noise. NAM is a kind of soft murmur so weak that even people near the speaker cannot hear it. It can be picked up from the mastoid and detected only with the help of a special type of microphone termed a NAM microphone. We can use this approach for impaired people who can hear sound but can speak only partial words (semi-mute) or incomplete words: partial speech can be recorded and recognized using the NAM microphone. This approach can be used to solve problems for paralyzed people who use a voice-controlled wheelchair that helps them move around without the help of others. Present voice-controlled wheelchair systems can recognize only fully spoken words and cannot recognize words spoken by semi-mute or partially speech-impaired people. Further, they use a normal microphone, which suffers severe degradation and external noise influence when used for recognizing partial speech inputs from impaired people. To overcome this problem, we use a NAM microphone along with a Tamil Speech Recognition Engine (TSRE) to improve the accuracy of the results. The proposed method was designed and implemented in a wheelchair-like model using an Arduino microcontroller kit. Experimental results have shown that 80% accuracy can be obtained with this method and also proved that recognizing partially spoken words using the NAM microphone is much more efficient than using a normal microphone.
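The recognizer-to-wheelchair step described above reduces to a dispatch table from a recognized command word to motor actions, as an Arduino sketch would implement it. The romanized Tamil command words and the motor encoding below are purely illustrative assumptions; the paper's actual command set and hardware interface are not specified here.

```python
# Hypothetical command dispatch for the voice-controlled wheelchair.
# Keys are illustrative romanized Tamil command words (assumed, not
# from the paper); values are (action, left-motor, right-motor), with
# 1 = forward, -1 = reverse, 0 = stop for each wheel.
COMMANDS = {
    "munnadi": ("forward",   1,  1),
    "pinnadi": ("backward", -1, -1),
    "idathu":  ("left",     -1,  1),   # counter-rotate wheels to turn
    "valathu": ("right",     1, -1),
    "nil":     ("stop",      0,  0),
}

def drive(recognized_word):
    """Map a recognizer output to motor commands; unknown words stop."""
    action, left, right = COMMANDS.get(recognized_word, ("stop", 0, 0))
    return {"action": action, "left": left, "right": right}
```

Defaulting every unrecognized word to "stop" is the safety-critical choice here: with partial-speech input the recognizer will mis-hear sometimes, and a mis-heard command must never keep the chair moving.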
Funding (Deep-STS paper): supported in part by the National Natural Science Foundation of China under Grants 62122069, 62071431, and 62201507.
Funding (compressed sensing of voiced speech paper): supported by the National Natural Science Foundation of China (No. 60971129), the National Research Program of China (973 Program) (No. 2011CB302303), and the Scientific Innovation Research Program of College Graduates in Jiangsu Province (No. CXLX11_0408).
Funding (speech emotion recognition paper): the Deanship of Scientific Research at Majmaah University supported this work under Project No. R-2022-166.
Funding (NMT with POS features paper): this work is supported by the National Natural Science Foundation of China (61872231, 61701297).
文摘Neural Machine Translation(NMT)based system is an important technology for translation applications.However,there is plenty of rooms for the improvement of NMT.In the process of NMT,traditional word vector cannot distinguish the same words under different parts of speech(POS).Aiming to alleviate this problem,this paper proposed a new word vector training method based on POS feature.It can efficiently improve the quality of translation by adding POS feature to the training process of word vectors.In the experiments,we conducted extensive experiments to evaluate our methods.The experimental result shows that the proposed method is beneficial to improve the quality of translation from English into Chinese.
文摘The co-articulation is one of the main reasons that makes the speech recognition difficult. However, the traditional Hidden Markov Models(HMM) can not model the co-articulation, because they depend on the first-order assumption. In this paper, for modeling the co-articulation, a more perfect HMM than traditional first order HMM is proposed on the basis of the authors’ previous works(1997, 1998) and they give a method in that this HMM is used in continuous speech recognition by means of multilayer perceptrons(MLP), i.e. the hybrid HMM/MLP method with triple MLP structure. The experimental result shows that this new hybrid HMM/MLP method decreases error rate in comparison with authors’ previous works.
文摘Automatic speech recognition, often incorrectly called voice recognition, is a computer based software technique that analyzes audio signals captured by a microphone and translates them into machine interpreted text. Speech processing is based on techniques that need local CPU or cloud computing with an Internet link. An activation word starts the uplink;“OK google”, “Alexa”, … and voice analysis is not usually suitable for autonomous limited CPU system (16 bits microcontroller) with low energy. To achieve this realization, this paper presents specific techniques and details an efficiency voice command method compatible with an embedded IOT low-power device.
文摘BACKGROUND Anti-N-methyl-D-aspartate receptor(anti-NMDAR)encephalitis is a treatable but frequently misdiagnosed autoimmune disease.Speech dysfunction,as one of the common manifestations of anti-NMDAR encephalitis,is usually reported as a symptom secondary to psychiatric symptoms or seizures rather than the initial symptom in a paroxysmal form.We report a case of anti-NMDAR encephalitis with paroxysmal speech disorder as a rare initial manifestation,and hope that it will contribute to the literature.CASE SUMMARY A 39-year-old man with anti-NMDAR encephalitis initially presented with paroxysmal nonfluent aphasia and was misdiagnosed with a transient ischemic attack and cerebral infarction successively.The patient subsequently presented with seizures,but no abnormalities were found on brain magnetic resonance imaging or electroencephalogram.Cerebrospinal fluid(CSF)analysis revealed mild pleocytosis and increased protein levels.Anti-NMDAR antibodies in serum and CSF were detected for a conclusive diagnosis.After immunotherapy,the patient made a full recovery.CONCLUSION This case suggests that paroxysmal speech disorder may be the presenting symptom of anti-NMDAR encephalitis in a young patient.
Abstract: Purpose: Starting from the difficulties facing the digitization of the Zhuang Brocade intangible cultural heritage in Guangxi, and from an analysis of the advantages of artificial intelligence art for the digitization and innovation of intangible cultural heritage, this study explores application paths for the digital inheritance and dissemination of Guangxi Zhuang Brocade based on current theory and practice of artificial intelligence art, providing a reference for inheriting and disseminating intangible cultural heritage through AI art. Method: Through in-depth analysis of the types, characteristics, and cultural connotations of Guangxi Zhuang Brocade patterns, machine learning is performed with StyleGAN's generative adversarial network, and digital artworks are generated by applying Clip-style. The feasibility of developing digital resources for the Zhuang Brocade intangible cultural heritage is explored through artistic practice, and an application process and implementation strategy for digital art innovation are proposed. Result: It is feasible to create NFT digital collections through AI art to realize application scenarios for the digital inheritance, innovation, cross-regional dissemination, and even industrialization of Guangxi Zhuang Brocade. Conclusion: AI art creation can provide new opportunities for the digital dissemination and inheritance of Zhuang Brocade while reflecting its cultural connotations and characteristics, and can ensure traceable development while protecting intellectual property rights. It continues and revives the value of Guangxi Zhuang Brocade and offers a reference for the inheritance and development of other intangible cultural heritage amid today's rapid media iteration.
Abstract: This study belongs to the field of the grammar of metaphor. Six types of animal words were investigated in order to find the connection between their parts of speech and metaphorical relationships, using the British National Corpus in combination with the Collins Dictionary. It is found that not all verbs formed by conversion from nominal animal words carry the conceptual features of the animal entities; in such cases, the resultant verbs do not form metaphors. Where the verbs carry over the conceptual features of the nominal animal words, metaphors occur. There are a few exceptions, such as fish in fish for, which forms a metaphor even though fish as a verb here does not carry over the conceptual features of the nominal fish. As for adjectives and adverbs derived from nominal animal words, they form metaphors in most cases by carrying over the conceptual features of the relevant animals. The study can be broadened to other entity nouns in order to find regularities about when their derivatives acquire metaphoricity.
Abstract: Judging from the references, Chinese scholars have produced dozens of terminology translations for the British analytic philosopher Austin's locutionary act, illocutionary act, and perlocutionary act. There is no unified translation, and scholars have not paid enough attention to the problem. As a result, teachers use the terms inconsistently in the classroom, leaving doctoral, graduate, and undergraduate students writing dissertations in confusion. This paper reviews and sorts 25 frequently used Chinese terminology translations of the Speech Act Tripartite Model since 1955, and puts forward suggestions for its Chinese terminology translation.
Abstract: This paper analyzes the father image of the hotel keeper in Hemingway's short story Cat in the Rain from the perspective of Speech Act Theory. First, it presents the framework of Speech Act Theory. According to Austin, there are two types of sentences: performatives and constatives. Yule further holds that the action performed by producing an utterance consists of three related acts: the locutionary act, the illocutionary act, and the perlocutionary act. The paper then analyzes in turn the speech acts of the husband, the hotel keeper, and the wife. Through this analysis, the daughter image of the wife and the coldness of the husband reveal and accentuate the father image of the hotel keeper, who is considerate to his "daughter" and gives her selfless love. Furthermore, the analysis opens a broader horizon for readers to understand and appreciate the story.
Abstract: This paper presents an effective method for recognizing partial speech with the help of a Non-Audible Murmur (NAM) microphone, which is robust against noise. NAM is a soft murmur so weak that even people near the speaker cannot hear it; it can be picked up from the mastoid only with a special type of microphone, termed a NAM microphone. This approach targets people who can hear but can speak only partial words (semi-mute) or incomplete words: their partial speech can be recorded and recognized using the NAM microphone. It can also help paralysed people who use a voice-controlled wheelchair to move around without the help of others. Existing voice-controlled wheelchair systems can recognize only fully spoken words and cannot recognize words spoken by semi-mute or partially speech-impaired people; moreover, they use a normal microphone, which suffers severe degradation and external-noise influence when used to recognize partial speech inputs from impaired people. To overcome this problem, the NAM microphone is used together with a Tamil Speech Recognition Engine (TSRE) to improve accuracy. The proposed method was designed and implemented in a wheelchair-like model using an Arduino microcontroller kit. Experimental results show that 80% accuracy can be obtained, and that recognizing partially spoken words with the NAM microphone is much more efficient than with a normal microphone.
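One way to turn a partially spoken word into a wheelchair command is unique-prefix matching against a small command vocabulary. The sketch below is a minimal illustration of that idea; the command set, minimum length, and prefix rule are assumptions for the example, not the paper's TSRE engine:

```python
# Hedged sketch: mapping a (possibly truncated) spoken word to a motion
# command, in the spirit of recognizing incomplete words from semi-mute
# users. Command names and the prefix rule are illustrative assumptions.
COMMANDS = {"forward": "MOVE_FWD", "backward": "MOVE_BACK",
            "left": "TURN_LEFT", "right": "TURN_RIGHT", "stop": "STOP"}

def match_partial(utterance, min_len=3):
    """Return the command whose word uniquely starts with the utterance."""
    u = utterance.lower().strip()
    if len(u) < min_len:
        return None                      # too short to disambiguate safely
    hits = [cmd for word, cmd in COMMANDS.items() if word.startswith(u)]
    return hits[0] if len(hits) == 1 else None

print(match_partial("forw"))   # → MOVE_FWD
print(match_partial("le"))     # → None (below the minimum length)
```

Rejecting very short or ambiguous fragments matters here because a wheelchair should fail safe: no motion command is issued unless the partial word maps to exactly one entry.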