Marine protected areas (MPAs) across various countries have contributed to safeguarding coastal and marine environments. Despite these efforts, marine non-native species (NNS) continue to threaten biodiversity and ecosystems, even within MPAs. Currently, there is a lack of comprehensive studies on the inventories, distribution patterns, and influencing factors of NNS within MPAs. Here we present a database containing over 15,000 occurrence records of 2714 marine NNS across 16,401 national or regional MPAs worldwide. To identify the primary mechanisms driving the occurrence of NNS, we use model selection with proxies representing colonization pressure, environmental variables, and MPA characteristics. Among the environmental predictors analyzed, sea surface temperature emerged as the sole factor strongly associated with NNS richness. Higher sea surface temperatures are linked to increased NNS richness, in line with global marine biodiversity trends. Furthermore, human activities help species overcome geographical barriers and migration constraints, thereby shaping the distribution patterns of marine introduced species and the environmental factors associated with them. As global climate change continues to alter sea temperatures, it is crucial to protect marine regions that are increasingly vulnerable to intense human activity and biological invasion.
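The model-selection step described above can be illustrated with a minimal, stdlib-only sketch: candidate single-predictor linear models are compared by AIC, and the predictor with the lowest AIC is retained. All data values and the candidate set here are illustrative assumptions, not values from the study's database.

```python
import math

def ols_fit(x, y):
    """Least-squares slope/intercept and residual sum of squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss

def aic(rss, n, k):
    """Gaussian AIC: n * ln(RSS/n) + 2k, with k fitted parameters."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical proxies: sea surface temperature tracks richness,
# shipping intensity is noise. Numbers are invented for illustration.
sst      = [12.0, 14.5, 17.0, 19.5, 22.0, 24.5, 27.0, 29.5]
shipping = [3.0, 1.0, 4.0, 2.0, 5.0, 1.5, 3.5, 2.5]
richness = [5.0, 7.0, 10.0, 12.0, 15.0, 17.0, 21.0, 22.0]

candidates = {"sst": sst, "shipping": shipping}
scores = {}
for name, xs in candidates.items():
    _, _, rss = ols_fit(xs, richness)
    scores[name] = aic(rss, len(richness), k=3)  # slope, intercept, variance

best = min(scores, key=scores.get)  # lowest AIC wins
```

With these toy data, the temperature model fits far better, so it gets the lowest AIC, mirroring the abstract's finding that sea surface temperature dominates.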
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: the ability to take natural-language instructions or task demonstrations and generate the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs across settings ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches supported by information retrieval. Furthermore, the model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, the evaluated models perform well in hate-text detection, beating the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, while the fine-tuned approach tends to produce many false positives.
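The zero-/few-shot setup described above amounts to assembling an in-context-learning prompt from labeled demonstrations plus the unlabeled test instance. The sketch below shows only that prompt-construction step; the demonstration texts, labels, and wording are invented for illustration, and no actual LLM call is made.

```python
def build_fewshot_prompt(examples, text):
    """Assemble an in-context-learning prompt: a task instruction,
    labeled demonstrations, then the unlabeled test instance."""
    lines = ["Classify each message as 'sexist' or 'not sexist'.", ""]
    for msg, label in examples:
        lines.append(f"Message: {msg}")
        lines.append(f"Label: {label}")
        lines.append("")
    lines.append(f"Message: {text}")
    lines.append("Label:")  # the model completes this line
    return "\n".join(lines)

# Invented demonstrations; a retrieval step would pick similar examples.
demos = [
    ("Women can't drive.", "sexist"),
    ("The meeting starts at noon.", "not sexist"),
]
prompt = build_fewshot_prompt(demos, "Great game last night!")
```

In the zero-shot variant the `demos` list is simply empty, so the prompt carries only the instruction and the test instance.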
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm performs multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local optima. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results show that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
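The Fisher-score filter stage mentioned above can be sketched in a few lines of stdlib Python. The toy features and labels below are invented, and the real algorithm also combines information gain and a multi-objective ranking, which this sketch omits.

```python
def fisher_score(values, labels):
    """Fisher score of one feature: between-class scatter over
    within-class scatter (higher = more discriminative)."""
    classes = sorted(set(labels))
    n = len(values)
    mean = sum(values) / n
    num = den = 0.0
    for c in classes:
        vc = [v for v, l in zip(values, labels) if l == c]
        mc = sum(vc) / len(vc)
        var_c = sum((v - mc) ** 2 for v in vc) / len(vc)
        num += len(vc) * (mc - mean) ** 2
        den += len(vc) * var_c
    return num / den if den else float("inf")

# Toy features: f1 separates the two classes cleanly, f2 does not.
labels = ["happy", "happy", "sad", "sad"]
f1 = [0.9, 1.1, 3.0, 3.2]
f2 = [1.0, 3.0, 1.1, 2.9]
ranked = sorted([("f1", fisher_score(f1, labels)),
                 ("f2", fisher_score(f2, labels))],
                key=lambda t: t[1], reverse=True)
```

A filter stage like this is cheap to run over thousands of acoustic features before the costlier wrapper search takes over.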
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of language used in such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During the training process of the MLPs, the WOA is employed to explore and determine the optimal set of weights, and the PSO algorithm then fine-tunes the weights to optimize the performance of the MLPs. In this approach, two separate MLP models are employed: one dedicated to predicting degrees of truth membership, the other to predicting degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
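How the gap between the two MLPs' outputs can signal indeterminacy is sketched below. This is one plausible reading of the neutrosophic scheme described above, not the paper's actual formula; the threshold and membership values are invented.

```python
def indeterminacy(truth, falsity):
    """One plausible measure: when the truth and false memberships
    are close, the prediction is indeterminate (assumption, not the
    paper's published definition)."""
    return 1.0 - abs(truth - falsity)

def classify(truth, falsity, threshold=0.5):
    """Label only when one membership clearly dominates and the
    indeterminacy stays below the (invented) threshold."""
    if indeterminacy(truth, falsity) >= threshold:
        return "uncertain"
    return "hateful" if truth > falsity else "not hateful"
```

Under this reading, a sample scored (0.55, 0.45) by the two networks is flagged as uncertain rather than forced into a class, which is the practical appeal of keeping truth and false memberships separate.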
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations, so this study aims to tackle the issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. Preprocessing of the input speech uses a 2D Convolutional Neural Network (CNN), applying convolutional operations to spectrograms, which afford a visual representation of how the frequency content of the audio signal changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids in faster convergence. Then five auditory features, MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz, are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding the irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed to select among the multiple audio-cue features. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% over the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin Database of Emotional Speech (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
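The SFS step above admits a compact greedy sketch: start from an empty subset and repeatedly add the feature that most improves a scoring function. The additive toy score and its weights below are illustrative assumptions, not the paper's classifier-based criterion.

```python
def sequential_forward_selection(features, score_fn, k):
    """Greedy SFS: repeatedly add the feature whose inclusion
    yields the best score for the current subset, up to k features."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy criterion: subset quality is the sum of per-feature weights,
# so SFS picks the heaviest features first (weights are invented).
weights = {"mfcc": 0.9, "chroma": 0.6, "mel": 0.7,
           "contrast": 0.3, "tonnetz": 0.4}
score = lambda subset: sum(weights[f] for f in subset)
picked = sequential_forward_selection(list(weights), score, k=3)
```

In practice `score_fn` would run the Bi-LSTM (or a cheaper proxy classifier) on a validation split, which is why wrapper methods like SFS/SBS are far more expensive than filter rankings.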
In air traffic control communications (ATCC), misunderstandings between pilots and controllers can result in fatal aviation accidents. Advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunication and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between the speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and to strengthen its capability to model long-distance auditory context dependencies. In addition, a two-stage training strategy is devised to derive semantics-aware acoustic representations effectively. The first stage pre-trains the speech-text multimodal encoding module to enhance inter-modal semantic alignment and long-distance auditory context dependencies. The second stage fine-tunes the entire network to bridge the input-modality gap between the training and inference phases and to boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinct semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
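The character error rate quoted above is the Levenshtein edit distance divided by the reference length. A standard dynamic-programming implementation (a generic metric, not taken from the paper's code) looks like this:

```python
def character_error_rate(reference, hypothesis):
    """CER = Levenshtein edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i          # delete all i reference chars
    for j in range(n + 1):
        dp[0][j] = j          # insert all j hypothesis chars
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n] / m
```

A CER of 6.54% thus means roughly 6.5 character edits per 100 reference characters, which is why CER rather than word error rate is the natural metric for Chinese-language sets like AISHELL-1.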
Reporting is essential in language use; it includes the re-expression of others' or one's own words, opinions, psychological activities, and so on. Grasping the translation methods of reported speech in German academic papers is very important for improving the accuracy of academic translation. This study takes the translation of "Internationalization of German Universities" (Die Internationalisierung der deutschen Hochschulen), an academic paper on higher education, as an example to explore methods of translating reported speech in German academic papers. It finds that the use of word-order conversion, part-of-speech conversion, and split translation can make the translation more accurate and fluent. This paper helps to grasp the rules and characteristics of translating reported speech in German academic papers, and also provides a reference for improving the quality of German-Chinese translation.
In recent years, the usage of social networking sites has considerably increased in the Arab world, empowering individuals to express their opinions, especially in politics. Furthermore, various organizations operating in Arab countries have embraced social media in their day-to-day business activities at different scales, which is attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is complicated to process because nearly 10,000 roots and more than 900 patterns act as the basis for verbs and nouns. Hate speech on online social networking sites has become a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text as hate speech or normal. To attain this, the CEHOML-HSD model follows several sub-processes. At the initial stage, the model performs data pre-processing with the help of the TF-IDF vectorizer. Secondly, the Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts written in Arabic. Lastly, the CEHO approach is employed to fine-tune the parameters of the SVM. The CEHO approach is developed by combining chaotic functions with the classical EHO algorithm, and this design of the CEHO algorithm for parameter tuning constitutes the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach. The comparative study outcomes established the supremacy of the proposed CEHOML-HSD model over other approaches.
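The TF-IDF pre-processing stage can be sketched without any ML library. The smoothing below follows a common (sklearn-style) convention, which the paper may or may not use, and the documents are toy English stand-ins for Arabic text.

```python
import math

def tf_idf(docs):
    """TF-IDF vectors: term frequency scaled by smoothed inverse
    document frequency, returned as one {term: weight} dict per doc."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = {}                              # document frequency per term
    for toks in tokenized:
        for t in set(toks):
            df[t] = df.get(t, 0) + 1
    vectors = []
    for toks in tokenized:
        vec = {}
        for t in set(toks):
            tf = toks.count(t) / len(toks)
            idf = math.log((1 + n) / (1 + df[t])) + 1  # smoothed
            vec[t] = tf * idf
        vectors.append(vec)
    return vectors

docs = ["peace and respect", "hate hate speech", "peace not hate"]
vecs = tf_idf(docs)
```

The resulting sparse vectors are exactly the kind of representation an SVM consumes in pipelines like the one described above.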
The teaching of English speech in universities aims to enhance oral communication ability, improve English communication skills, and expand English knowledge, and it occupies a core position in university English teaching. Taking the theory of second language acquisition as its background, this article analyzes the important role and value of this theory in university English speech teaching and explores how to apply it. It aims to strengthen the cultivation of skilled English talents and provide a brief reference for improving English speech teaching in universities.
This paper argues that in the age of 'World Englishes', it is not necessary to differentiate native-speaker teachers from non-native-speaker teachers. It concludes that non-native-speaker teachers can be as effective as their native colleagues and have an equal chance of achieving professional success, even though native-speaker teachers have great advantages in some aspects. It is time for employers, as well as ELT professionals, to look past the differences between native-speaker and non-native-speaker teachers and to optimize these unique resources.
We surveyed non-native insect species across the whole territory of Slovenia. Data on non-native species were collected in the field; we also used results of projects in which we participated, together with an overview of literature data in scientific publications. Correspondence Analysis (CA) of the data was carried out with the software Statgraphics Centurion XVI (U.S.A.). Up to 254 non-native insect species are present: around 83% are phytophagous (43% feed on woody plants, 40% on other plants); around 12% are non-phytophagous; and 5% are parasitoids or predators of other insects or of mammals. Among the phytophagous species, Hemiptera predominates (38.2%), followed by Coleoptera (29.8%) and Lepidoptera (14.5%). Non-native insects that do not feed on plants include Coleoptera (80%), Lepidoptera (6.5%), Hymenoptera (6.5%) and Diptera (6.5%). Most phytophagous species are associated with the introduction of the plants on which they specialize, but some have also shifted from introduced to native plant hosts. Thirty-six non-native phytophagous species (14.17% of all non-native insects) have become harmful pests of urban trees and crops; 20 appear on woody plants, but only Dryocosmus kuriphilus appears in urban forest areas. In the past decades, species such as D. kuriphilus, Leptoglossus occidentalis, Xylosandrus germanus, Gnathotrichus materiarius, Dasineura gleditchiae, Phyllonorycter issikii, Cinara curvipes and Ophiomyia kwansonis have been recorded in parks and forests. Some non-native species are spreading in Slovenian urban forests and affect economic, ecological and other forest and urban-forest functions. The number of harmful insects in forests is extremely small, probably due to the high diversity of the forest ecosystem, where close-to-nature forest management is practiced, which retains the forest's self-regulatory ability to control pests. Such management enables, for example, the reduction of D. kuriphilus through the expansion of its parasitoid, Torymus sinensis. We attempt to explain this phenomenon: we assume that T. sinensis was introduced into Slovenia as diapaused eggs within its host, D. kuriphilus.
Arabic is one of the world's oldest languages, characterized by rich and complicated grammatical formats. Furthermore, Arabic morphology can be perplexing because nearly 10,000 roots and 900 patterns form the basis for verbs and nouns. The Arabic language consists of distinct variations utilized in a community and in particular situations. Social media sites are a medium for expressing opinions and for social phenomena like racism, hatred, offensive language, and all kinds of verbal violence. Such conduct does not impact particular nations, communities, or groups only; it extends beyond such areas into people's everyday lives. This study introduces an Improved Ant Lion Optimizer with Deep Learning Driven Offensive and Hate Speech Detection (IALODL-OHSD) model on Arabic cross-corpora. The presented IALODL-OHSD model mainly aims to detect and classify offensive/hate speech expressed on social media. In the IALODL-OHSD model, a three-stage process is performed, namely pre-processing, word embedding, and classification. Primarily, data pre-processing is performed to transform the Arabic social media text into a useful format. In addition, the word2vec word embedding process is utilized to produce word embeddings. The attention-based cascaded long short-term memory (ACLSTM) model is utilized for the classification process. Finally, the IALO algorithm is exploited as a hyperparameter optimizer to boost classifier results. To illustrate the result analysis of the IALODL-OHSD model, a detailed set of simulations was performed. The extensive comparison study portrayed the enhanced performance of the IALODL-OHSD model over other approaches.
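"Improved" or "chaotic" metaheuristics like the IALO here and the CEHO in the earlier Arabic hate-speech abstract commonly inject chaos via the logistic map in place of uniform random draws. Below is a generic sketch of chaotic initialization; the seed, bounds, and usage are assumptions for illustration, with no claim to match either paper's exact scheme.

```python
def logistic_map(x0, r=4.0, steps=10):
    """Chaotic logistic-map sequence x_{t+1} = r * x_t * (1 - x_t);
    with r = 4 and x0 in (0, 1) the orbit stays in (0, 1)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs[1:]

def chaotic_init(dim, lo, hi, x0=0.7):
    """Spread an initial candidate over [lo, hi] using chaotic values
    instead of uniform randomness (a common chaotic-optimizer trick)."""
    return [lo + c * (hi - lo) for c in logistic_map(x0, steps=dim)]

position = chaotic_init(dim=5, lo=-1.0, hi=1.0)
```

The appeal is that chaotic sequences are deterministic yet cover the search space densely, which can improve the diversity of an optimizer's initial population.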
SUMMARY Postpartum psychosis is a condition characterised by rapid onset of psychotic symptoms several weeks after childbirth. Outside of its timing and descriptions of psychotic features, minimal research exists due to its relative rarity (1 to 2 per 1000 births in the USA), with greater emphasis placed on postpartum sadness and depression. In the existing literature, cultural differences and language barriers have not previously been taken into consideration, as there are no documented cases of postpartum psychosis in a non-English-speaking patient. Correctly differentiating postpartum psychosis from other postpartum psychiatric disorders requires adeptly evaluating for the presence of psychotic symptoms with in-depth history taking.
Automatic Speech Emotion Recognition (SER) is used to recognize emotion from speech automatically. Speech emotion recognition works well in a laboratory environment, but real-time emotion recognition is affected by variations in the gender, age, and cultural and acoustical background of the speaker. The acoustical resemblance between emotional expressions further increases the complexity of recognition. Many recent research works address these effects individually. Instead of addressing every influencing attribute individually, we design a system that reduces the effect arising from any factor. We propose a two-level hierarchical classifier named Interpreter of Responses (IR). The first level of IR is realized using Support Vector Machine (SVM) and Gaussian Mixture Model (GMM) classifiers. In the second level of IR, a discriminative SVM classifier is trained and tested with the meta-information of the first-level classifiers along with the input acoustical feature vector used in the primary classifiers. To train the system with a corpus of versatile nature, an integrated emotion corpus was composed using emotion samples from five speech corpora, namely EMO-DB, IITKGP-SESC, the SAVEE corpus, the Spanish emotion corpus, and CMU's Woogle corpus. The hierarchical classifier was trained and tested using MFCCs and Low-Level Descriptors (LLDs). The empirical analysis shows that the proposed classifier outperforms traditional classifiers. The proposed ensemble design is very generic and can be adapted even when the number and nature of features change; the first-level classifiers, GMM or SVM, may be replaced with any other learning algorithm.
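The two-level design above is a form of stacking: the second-level classifier's input concatenates the first-level outputs with the original acoustic vector. A minimal sketch of that meta-feature construction follows; the posterior scores and MFCC values are invented for illustration.

```python
def meta_features(first_level_outputs, acoustic_features):
    """Second-level input: concatenate each first-level classifier's
    output vector (e.g., SVM and GMM per-emotion scores) with the
    raw acoustic feature vector fed to the primary classifiers."""
    vec = []
    for probs in first_level_outputs:
        vec.extend(probs)
    vec.extend(acoustic_features)
    return vec

# Illustrative per-emotion scores from the two first-level models.
svm_posteriors = [0.7, 0.2, 0.1]
gmm_posteriors = [0.6, 0.3, 0.1]
mfcc_vector = [12.1, -3.4, 5.6]   # truncated toy MFCC frame
x2 = meta_features([svm_posteriors, gmm_posteriors], mfcc_vector)
```

Because the construction only concatenates vectors, swapping GMM or SVM for any other first-level learner, as the abstract notes, leaves the second level unchanged.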
Day by day, biometric-based systems play a larger role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions via voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals, and it also keeps track of places like households, hospitals, and remote areas. The proposed approach is able to detect seven emotions: worry, surprise, neutral, sadness, happiness, hate, and love. The key element in implementing speech emotion recognition is voice processing; once the emotion is recognized, the machine interface automatically triggers the buzzer and LED actions. The proposed system is trained and tested on various benchmark datasets, i.e., the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Acoustic-Phonetic Continuous Speech Corpus (TIMIT), and the Emotional Speech database (Emo-DB), and evaluated on various parameters, i.e., accuracy, error rate, and time. Compared with existing technologies, the proposed algorithm gave a better error rate and less time: the error rate and time decreased by 19.79% and 5.13 s for the RAVDESS dataset, 15.77% and 0.01 s for the Emo-DB dataset, and 14.88% and 3.62 s for the TIMIT dataset. The proposed model shows better accuracy, 81.02% for the RAVDESS dataset, 84.23% for the TIMIT dataset, and 85.12% for the Emo-DB dataset, compared to Gaussian Mixture Modeling (GMM) and Support Vector Machine (SVM) models.
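The emotion-to-peripheral mapping could look like the sketch below. The policy, which emotions trigger the buzzer and which LED color is used, is a guess for illustration, not taken from the paper.

```python
# Hypothetical alert policy: negative emotions raise an alert.
ALERT_EMOTIONS = {"worry", "sadness", "hate"}

def alert_actions(emotion):
    """Map a recognized emotion label to peripheral actions
    (buzzer on/off, LED color); the mapping itself is an assumption."""
    if emotion in ALERT_EMOTIONS:
        return {"buzzer": True, "led": "red"}
    return {"buzzer": False, "led": "green"}
```

On an embedded deployment, the returned dict would drive GPIO writes; keeping the policy in a single table makes it easy to adjust per setting (household vs. hospital).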
Patients with age-related hearing loss face hearing difficulties in daily life. The causes of age-related hearing loss are complex and include changes in peripheral hearing, central processing, and cognitive-related abilities. Furthermore, the factors by which aging relates to hearing loss via changes in auditory processing ability are still unclear. In this cross-sectional study, we evaluated 27 older adults (over 60 years old) with age-related hearing loss, 21 older adults (over 60 years old) with normal hearing, and 30 younger subjects (18-30 years old) with normal hearing. We used the outcome of the upper-threshold test, including the time-compressed threshold and the speech recognition threshold in noisy conditions, as a behavioral indicator of auditory processing ability. We also used electroencephalography to identify presbycusis-related abnormalities in the brain while the participants were in a spontaneous resting state. The time-compressed threshold and speech recognition threshold data indicated significant differences among the groups. In patients with age-related hearing loss, information masking (babble noise) had a greater effect than energy masking (speech-shaped noise) on processing difficulties. In terms of resting-state electroencephalography signals, we observed enhanced frontal lobe (Brodmann's area, BA11) activation in the older adults with normal hearing compared with the younger participants with normal hearing, and greater activation in the parietal (BA7) and occipital (BA19) lobes in the individuals with age-related hearing loss compared with the younger adults. Our functional connection analysis suggested that, compared with younger people, the older adults with normal hearing exhibited enhanced connections among networks, including the default mode network, sensorimotor network, cingulo-opercular network, occipital network, and frontoparietal network. These results suggest that both normal aging and the development of age-related hearing loss have a negative effect on advanced auditory processing capabilities, and that hearing loss accelerates the decline in speech comprehension, especially in speech-competition situations. Older adults with normal hearing may show increased compensatory recruitment of attentional resources, represented by a top-down active listening mechanism, while those with age-related hearing loss exhibit decompensation of network connections involving multisensory integration.
Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since it involves the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and to capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction process: a bidirectional Long Short-Term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of our proposed model, comprehensive experiments are conducted on three widely used benchmark datasets, including IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
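The "circulant" in MLCCT refers to circulant structure, where each row of a matrix is a cyclic shift of the previous one. Below is a minimal sketch of building such a matrix; the model's actual circulant interaction mechanism is more involved than this construction.

```python
def circulant(row):
    """Circulant matrix: row i is the first row rotated right i times.
    Entry (i, j) is row[(j - i) mod n]."""
    n = len(row)
    return [[row[(j - i) % n] for j in range(n)] for i in range(n)]

C = circulant([1, 2, 3])
```

Circulant structure is attractive for interaction layers because an n x n circulant matrix is fully specified by a single length-n vector, cutting parameters while still mixing every position with every other.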
This paper investigates the English language needs of non-native foreign students studying at the University of Otago, New Zealand. These students come from Asia (China, Thailand, Korea, Qatar, and Saudi Arabia, to name a few) and Europe. For these students, English is their second or third language, and they face proficiency problems. To them, mastery of English is most important because all courses at the university are taught in English; unless students are proficient enough to operate in English, they will lose out and face difficulties in securing good grades in their subjects. The findings of this research will provide insights for curriculum developers and English teachers at public universities, especially in Malaysia, that have been accepting students from Asia and the Middle East. The present curriculum needs to be reviewed, or a new curriculum needs to be designed, to meet the English needs of their non-native foreign students.
This article is devoted to the study of the composition, diversity and distribution of non-native plant elements in the intercontinental regions of Asia, taking the Trans-Baikal territory as an example. The number of non-native plants in the Trans-Baikal areas is determined by the degree of urbanization, a favorable climate, and the availability of skidding ways in their vicinity.
Grounded in the interactive relationship between intercultural communication (IC) and foreign language education, and in the recent salience of communicative language teaching (CLT) in foreign-language grammar instruction, the study reported in this paper deals with the issue of teaching Korean grammar to non-native speakers in the context of teaching Korean as a foreign language (TKFL). This paper examines and analyzes several Korean language textbooks prepared for foreign learners of Korean and used overseas, especially in Hong Kong (HK). It also evaluates the textbooks in terms of CLT and communicative competence. By doing so, we can further understand the methods of Korean grammar instruction provided to foreigners learning Korean as a second or foreign language.
Funding: Second Tibetan Plateau Scientific Expedition and Research (STEP) program [grant number 2019QZKK0501]; Third Xinjiang Scientific Expedition Program [grant number 2021xjkk0600]; Biodiversity Survey, Monitoring and Assessment Project of the Ministry of Ecology and Environment, China [grant number 2019HB2096001006]; Fundamental Research Funds for the Central Public-interest Scientific Institution [grant number 2020YSKY-008].
Abstract: Marine protected areas (MPAs) across various countries have contributed to safeguarding coastal and marine environments. Despite these efforts, marine non-native species (NNS) continue to threaten biodiversity and ecosystems, even within MPAs. Currently, there is a lack of comprehensive studies on the inventories, distribution patterns, and influencing factors of NNS within MPAs. Here we present a database containing over 15,000 occurrence records of 2714 marine NNS across 16,401 national or regional MPAs worldwide. To identify the primary mechanisms driving the occurrence of NNS, we use model selection with proxies representing colonization pressure, environmental variables, and MPA characteristics. Among the environmental predictors analyzed, sea surface temperature emerged as the sole factor strongly associated with NNS richness: higher sea surface temperatures are linked to increased NNS richness, aligning with global marine biodiversity trends. Furthermore, human activities help species overcome geographical barriers and migration constraints, influencing the distribution patterns of marine introduced species and the associated environmental factors. As global climate change continues to alter sea temperatures, it is crucial to protect marine regions that are increasingly vulnerable to intense human activities and biological invasions.
Funding: This work is part of the research projects LaTe4PoliticES (PID2022-138099OBI00), funded by MICIU/AEI/10.13039/501100011033 and the European Regional Development Fund (ERDF), A Way of Making Europe, and LT-SWM (TED2021-131167B-I00), funded by MICIU/AEI/10.13039/501100011033 and the European Union NextGenerationEU/PRTR. Mr. Ronghao Pan is supported by the Programa Investigo grant, funded by the Region of Murcia, the Spanish Ministry of Labour and Social Economy, and the European Union NextGenerationEU under the "Plan de Recuperación, Transformación y Resiliencia (PRTR)".
Abstract: Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One relevant capability is in-context learning: the ability to take instructions in natural language or task demonstrations and generate the expected outputs for test instances without additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under approaches ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, the Zephyr model achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that in-context learning had difficulty distinguishing between types of hate speech and figurative language, while the fine-tuned approach tends to produce many false positives.
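The zero-shot and few-shot setups described in the abstract above boil down to prompt construction: a task instruction, optionally followed by labelled demonstrations, then the unlabelled test instance. A minimal sketch of few-shot prompt assembly; the prompt wording, label names, and function name are illustrative assumptions, not the paper's actual prompts:

```python
# Hypothetical few-shot prompt builder for sexism/hate detection: labelled
# demonstrations are prepended to the test instance so an LLM can classify
# it without any gradient updates.

def build_few_shot_prompt(demonstrations, test_text):
    """Assemble instruction + k labelled examples + the unlabelled test case."""
    lines = ["Classify each text as SEXIST or NOT_SEXIST."]
    for text, label in demonstrations:
        lines.append(f"Text: {text}\nLabel: {label}")
    # The final block is left unlabelled for the model to complete.
    lines.append(f"Text: {test_text}\nLabel:")
    return "\n\n".join(lines)

demos = [
    ("Women belong in the kitchen.", "SEXIST"),
    ("The meeting starts at noon.", "NOT_SEXIST"),
]
prompt = build_few_shot_prompt(demos, "She can't drive, obviously.")
```

The resulting string would be sent to the model as-is; a zero-shot prompt is the same construction with an empty demonstration list.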
Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm performs multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
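The filter stage described above ranks features before any classifier is trained. As a sketch of one of the two named criteria, here is a pure-Python Fisher Score (between-class scatter divided by within-class scatter for a single feature); the paper's exact formulation may differ:

```python
# Fisher Score for one feature: features whose per-class means are far apart
# relative to their per-class variance score higher and are ranked first.

def fisher_score(feature_values, labels):
    """Ratio of between-class scatter to within-class scatter."""
    n = len(feature_values)
    overall_mean = sum(feature_values) / n
    between, within = 0.0, 0.0
    for c in set(labels):
        vals = [v for v, y in zip(feature_values, labels) if y == c]
        mean_c = sum(vals) / len(vals)
        var_c = sum((v - mean_c) ** 2 for v in vals) / len(vals)
        between += len(vals) * (mean_c - overall_mean) ** 2
        within += len(vals) * var_c
    return between / within if within else float("inf")

# A feature that separates the classes well scores higher than a noisy one.
good = fisher_score([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], [0, 0, 0, 1, 1, 1])
noisy = fisher_score([1.0, 5.0, 1.1, 5.2, 0.9, 4.8], [0, 0, 0, 1, 1, 1])
```

In the hybrid scheme, such filter scores only pre-rank candidates; the wrapper stage then evaluates subsets with the actual classifiers.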
Abstract: Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During the training process of the MLPs, the WOA is employed to explore and determine the optimal set of weights, and the PSO algorithm then adjusts the weights as fine-tuning to optimize the performance of the MLPs. Additionally, two separate MLP models are employed: one is dedicated to predicting degrees of truth membership, while the other focuses on predicting degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
Abstract: Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore tackles the aforementioned issue by systematically selecting multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The dataset used is taken from the EMO-DB database. Input speech is preprocessed using a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of how the frequency content of the audio signal changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then five auditory features (MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz) are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding irrelevant ones; in this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for selecting among the multiple audio-cue features. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity by achieving more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin Database of Emotional Speech (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
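The SFS step mentioned in the abstract above can be sketched as a greedy loop that repeatedly adds the single feature giving the largest score gain. The toy scoring function and feature utilities below are stand-in assumptions; in the paper, candidate subsets would be scored by the downstream classifier:

```python
# Greedy Sequential Forward Selection: start from an empty subset and add one
# feature at a time, always the one that maximizes the subset score.

def sequential_forward_selection(all_features, score_subset, k):
    selected = []
    while len(selected) < k:
        best_feat, best_score = None, float("-inf")
        for f in all_features:
            if f in selected:
                continue
            s = score_subset(selected + [f])
            if s > best_score:
                best_feat, best_score = f, s
        selected.append(best_feat)
    return selected

# Toy scorer: pretend MFCC and Chroma are the most informative cues.
utility = {"mfcc": 0.5, "chroma": 0.3, "contrast": 0.1, "tonnetz": 0.05}
score = lambda subset: sum(utility[f] for f in subset)
chosen = sequential_forward_selection(list(utility), score, 2)
```

SBS is the mirror image: start from the full feature set and greedily remove the feature whose removal hurts the score least.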
Funding: This research was funded by the Shenzhen Science and Technology Program (Grant No. RCBS20221008093121051); the General Higher Education Project of the Guangdong Provincial Education Department (Grant No. 2020ZDZX3085); the China Postdoctoral Science Foundation (Grant No. 2021M703371); and the Post-Doctoral Foundation Project of Shenzhen Polytechnic (Grant No. 6021330002K).
Abstract: In air traffic control communications (ATCC), misunderstandings between pilots and controllers can result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunication and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between the speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and to strengthen its capability for modeling long-distance acoustic context dependencies. In addition, a two-stage training strategy is devised to derive semantics-aware acoustic representations effectively. The first stage pre-trains the speech-text multimodal encoding module to enhance inter-modal semantic alignment and long-distance acoustic context dependencies. The second stage fine-tunes the entire network to bridge the input-modality gap between the training and inference phases and boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, with substantial relative gains of 28.76% and 23.82% over the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinct semantics.
The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
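The character error rates reported above are computed as the edit (Levenshtein) distance between the hypothesis and reference transcripts divided by the reference length. A minimal dynamic-programming sketch, with an ATC-style phrase as an illustrative example:

```python
# Character error rate (CER) via space-optimized Levenshtein distance:
# (substitutions + insertions + deletions) / len(reference).

def cer(reference, hypothesis):
    m, n = len(reference), len(hypothesis)
    dp = list(range(n + 1))          # dp[j] = distance for prefixes so far
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[j] = min(dp[j] + 1,       # deletion
                        dp[j - 1] + 1,   # insertion
                        prev + cost)     # substitution (or match)
            prev = cur
    return dp[n] / m

# "flight" misheard as "fight": one deleted character out of 24.
rate = cer("climb to flight level 90", "climb to fight level 90")
```

The same formula with word tokens instead of characters yields the word error rate.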
Abstract: Reporting is essential in language use, covering the re-expression of other people's or one's own words, opinions, psychological activities, and so on. Grasping the translation methods of reported speech in German academic papers is very important for improving the accuracy of academic paper translation. This study takes the translation of "Internationalization of German Universities" (Die Internationalisierung der deutschen Hochschulen), an academic paper on higher education, as an example to explore the translation methods of reported speech in German academic papers. It is found that the use of word-order conversion, part-of-speech conversion and split-translation methods can make the translation more accurate and fluent. This paper helps to grasp the rules and characteristics of the translation of reported speech in German academic papers, and also provides a reference for improving the quality of German-Chinese translation.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. This study is supported via funding from Prince Sattam bin Abdulaziz University, Project Number (PSAU/2024/R/1445).
Abstract: In recent years, the usage of social networking sites has considerably increased in the Arab world, empowering individuals to express their opinions, especially in politics. Furthermore, various organizations operating in Arab countries have embraced social media in their day-to-day business activities at different scales, which is attributed to business owners' understanding of social media's importance for business development. However, Arabic morphology is complicated to process, since nearly 10,000 roots and more than 900 patterns act as the basis for verbs and nouns. Hate speech on online social networking sites is a worldwide issue that reduces the cohesion of civil societies. Against this background, the current study develops a Chaotic Elephant Herd Optimization with Machine Learning for Hate Speech Detection (CEHOML-HSD) model for the Arabic language. The presented CEHOML-HSD model concentrates on identifying and categorising Arabic text into hate speech and normal text. To attain this, the model follows several sub-processes. At the initial stage, it performs data pre-processing with the help of a TF-IDF vectorizer. Secondly, a Support Vector Machine (SVM) model is utilized to detect and classify hate speech texts in Arabic. Lastly, the CEHO approach is employed to fine-tune the parameters of the SVM. The CEHO approach is developed by combining chaotic functions with the classical EHO algorithm; the design of the CEHO algorithm for parameter tuning constitutes the novelty of the work. A widespread experimental analysis was executed to validate the enhanced performance of the proposed CEHOML-HSD approach, and the comparative study established its supremacy over other approaches.
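The pre-processing stage described above turns raw text into TF-IDF weights before the SVM sees it. A minimal pure-Python sketch of that weighting; the smoothing convention is an assumption, and a library vectorizer would normally be used in practice:

```python
# TF-IDF weighting: term frequency within a document, scaled by how rare the
# term is across the corpus, so common filler words are down-weighted.
import math

def tfidf(documents):
    """Return one {term: weight} dict per tokenised document."""
    n_docs = len(documents)
    df = {}                                  # document frequency per term
    for doc in documents:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    vectors = []
    for doc in documents:
        vec = {}
        for term in doc:
            tf = doc.count(term) / len(doc)
            idf = math.log(n_docs / df[term]) + 1.0   # assumed smoothing
            vec[term] = tf * idf
        vectors.append(vec)
    return vectors

docs = [["hate", "speech", "online"], ["normal", "speech"]]
vecs = tfidf(docs)
```

The resulting sparse weight vectors are what the SVM is trained on; terms unique to one class of text (like "hate" here) get higher weight than terms shared by all documents.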
Abstract: The teaching of English public speaking in universities aims to enhance oral communication ability, improve English communication skills, and expand English knowledge, and it occupies a core position in university English teaching. Taking second language acquisition theory as its background, this article analyzes the important role and value of this theory in university English public speaking instruction and explores how to apply it in practice. It aims to strengthen the cultivation of skilled English talent and provide a brief reference for improving English public speaking teaching in universities.
Abstract: This paper argues that in the age of 'World Englishes', it is not necessary to differentiate native speaker teachers from non-native speaker teachers. It concludes that non-native speaker teachers can be as effective as their native colleagues and have an equal chance of achieving professional success, even though native speaker teachers have great advantages in some aspects. It is time for employers, as well as ELT professionals, to look past the glaring differences between native and non-native speaker teachers and optimize such unique resources.
Funding: Part of the project V4-1439 "Development of new methods of detection, diagnostics and prognosis for non-native organisms harmful to forest" (2014-2017) and the programme groups P4-0059 "Forest, forestry and renewable forest resources" and P4-0107 "Forest biology, ecology and technology".
Abstract: We surveyed non-native insect species in the whole territory of Slovenia. Data on non-native species were collected in the field; we also used results of projects in which we participated, together with an overview of literature data in scientific publications. Correspondence Analysis (CA) of the data was carried out with the software Statgraphics Centurion XVI, U.S.A. Up to 254 non-native insect species are present: around 83% are phytophagous (43% feed on woody plants, 40% on other plants); around 12% are non-phytophagous; and 5% are parasitoids or predators of other insects or mammals. Among the phytophagous species, Hemiptera predominates (38.2%), followed by Coleoptera (29.8%) and Lepidoptera (14.5%). Non-native insects that do not feed on plants include Coleoptera (80%), Lepidoptera (6.5%), Hymenoptera (6.5%) and Diptera (6.5%). Most phytophagous species are associated with the introduction of the plants on which they specialize, but some have also shifted from introduced to native plant hosts. Thirty-six non-native phytophagous species (14.17% of all non-native insects) have become harmful pests of urban trees and crops; 20 appear on woody plants, but only Dryocosmus kuriphilus appears in urban forest areas. In the past decades, species such as D. kuriphilus, Leptoglossus occidentalis, Xylosandrus germanus, Gnathotrichus materiarius, Dasineura gledichiae, Phyllonorycter issikii, Cinara curvipes and Ophiomyia kwansonis have been recorded in parks and forests. Some non-native species are spreading in Slovenian urban forests and affect economic, ecological and other forest and urban-forest functions. The number of harmful insects in forests is extremely small, probably due to the high diversity of the forest ecosystem, where close-to-nature forest management is practiced, which retains the forest's self-regulatory ability to control pests. Such management enables, for example, the reduction of D. kuriphilus through the expansion of its parasitoid, Torymus sinensis.
We attempt to explain this phenomenon: we assume that T. sinensis was introduced in Slovenia as diapaused eggs in its host, D. kuriphilus.
Funding: Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2022R263), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia. The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work under Grant Code 22UQU4340237DSR43.
Abstract: Arabic is the world's first language, characterized by its rich and complicated grammatical forms. Furthermore, Arabic morphology can be perplexing because nearly 10,000 roots and 900 patterns form the basis for verbs and nouns. The Arabic language consists of distinct variations utilized within a community and in particular situations. Social media sites are a medium for expressing opinions and for social phenomena like racism, hatred, offensive language, and all kinds of verbal violence. Such conduct does not impact particular nations, communities, or groups only, extending beyond such areas into people's everyday lives. This study introduces an Improved Ant Lion Optimizer with Deep Learning Driven Offensive and Hate Speech Detection (IALODL-OHSD) model on Arabic cross-corpora. The presented IALODL-OHSD model mainly aims to detect and classify offensive/hate speech expressed on social media. In the IALODL-OHSD model, a three-stage process is performed: pre-processing, word embedding, and classification. Primarily, data pre-processing is performed to transform the Arabic social media text into a useful format. In addition, the word2vec word-embedding process is utilized to produce word embeddings. An attention-based cascaded long short-term memory (ACLSTM) model is utilized for the classification process. Finally, the IALO algorithm is exploited as a hyperparameter optimizer to boost classifier results. A detailed set of simulations was performed to evaluate the IALODL-OHSD model, and the extensive comparison study portrayed its enhanced performance over other approaches.
Abstract: Postpartum psychosis is a condition characterised by the rapid onset of psychotic symptoms several weeks after childbirth. Outside of its timing and descriptions of psychotic features, minimal research exists due to its relative rarity (1 to 2 per 1000 births in the USA), with greater emphasis placed on postpartum sadness and depression. In the existing literature, cultural differences and language barriers have not previously been taken into consideration, as there are no documented cases of postpartum psychosis in a non-English-speaking patient. Correctly differentiating postpartum psychosis from other postpartum psychiatric disorders requires adeptly evaluating for the presence of psychotic symptoms with in-depth history taking.
Abstract: Automatic Speech Emotion Recognition (SER) is used to recognize emotion from speech automatically. Speech emotion recognition works well in a laboratory environment, but real-time emotion recognition is influenced by variations in gender, age, and the cultural and acoustic background of the speaker. The acoustic resemblance between emotional expressions further increases the complexity of recognition. Many recent research works address these effects individually. Instead of addressing every influencing attribute individually, we design a system that reduces the effect arising from any factor. We propose a two-level hierarchical classifier named Interpreter of Responses (IR). The first level of IR is realized using Support Vector Machine (SVM) and Gaussian Mixture Model (GMM) classifiers. In the second level of IR, a discriminative SVM classifier is trained and tested with the meta-information of the first-level classifiers along with the input acoustic feature vector used in the primary classifiers. To train the system on a corpus of versatile nature, an integrated emotion corpus was composed using emotion samples from five speech corpora: EMO-DB, IITKGP-SESC, the SAVEE corpus, a Spanish emotion corpus, and CMU's Woogle corpus. The hierarchical classifier was trained and tested using MFCCs and Low-Level Descriptors (LLDs). The empirical analysis shows that the proposed classifier outperforms traditional classifiers. The proposed ensemble design is very generic and can be adapted even when the number and nature of features change; the first-level GMM or SVM classifiers may be replaced with any other learning algorithm.
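The two-level design above can be sketched as follows: first-level classifiers each emit a label, and the second-level input concatenates that meta-information with the raw feature vector. The stub decision rules and feature values below are illustrative assumptions standing in for the paper's trained SVM and GMM, not the actual models:

```python
# Sketch of a two-level hierarchical classifier: the second level consumes
# the first-level predictions ("meta-information") plus the original features.

def first_level_svm(features):
    # Stub for a trained SVM: thresholds one hypothetical acoustic feature.
    return "happy" if features[0] > 0.5 else "sad"

def first_level_gmm(features):
    # Stub for a trained GMM: thresholds another hypothetical feature.
    return "happy" if features[1] > 0.5 else "sad"

def build_meta_input(features):
    """Second-level input = first-level outputs + raw feature vector."""
    return [first_level_svm(features), first_level_gmm(features)] + list(features)

meta = build_meta_input([0.8, 0.2, 0.6])
```

A discriminative classifier trained on such meta-inputs can learn when to trust each first-level model, which is what makes the ensemble robust when the base learners disagree.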
Funding: The Deanship of Scientific Research at Majmaah University supported this work under Project No. R-2022-166.
Abstract: Biometric-based systems play an increasingly vital role in our daily lives. This paper proposes an intelligent assistant intended to identify emotions via voice messages. A biometric system has been developed to detect human emotions based on voice recognition and to control a few electronic peripherals for alert actions. The proposed smart assistant aims to support people through buzzer and light-emitting diode (LED) alert signals and to keep track of places such as households, hospitals and remote areas. The proposed approach is able to detect seven emotions: worry, surprise, neutral, sadness, happiness, hate and love. The key element of the implementation is voice processing for speech emotion recognition; once the emotion is recognized, the machine interface automatically triggers alert actions via the buzzer and LEDs. The proposed system is trained and tested on various benchmark datasets, i.e., the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Acoustic-Phonetic Continuous Speech Corpus (TIMIT) and the Emotional Speech Database (Emo-DB), and is evaluated on various parameters, i.e., accuracy, error rate, and time. Compared with existing technologies, the proposed algorithm gave a better error rate and less time: the error rate and time are decreased by 19.79% and 5.13 s for the RAVDESS dataset, 15.77% and 0.01 s for the Emo-DB dataset, and 14.88% and 3.62 s for the TIMIT database. The proposed model shows better accuracy of 81.02% for the RAVDESS dataset, 84.23% for the TIMIT dataset and 85.12% for the Emo-DB dataset compared to Gaussian Mixture Modeling (GMM) and Support Vector Machine (SVM) models.
Funding: Supported by the National Natural Science Foundation of China, Nos. 82171138 (to YQZ) and 82071062 (to YXC); the Natural Science Foundation of Guangdong Province, No. 2021A1515012038 (to YXC); the Fundamental Research Funds for the Central Universities, No. 20ykpy91 (to YXC); and the Sun Yat-Sen Clinical Research Cultivating Program, No. SYS-Q-201903 (to YXC).
Abstract: Patients with age-related hearing loss face hearing difficulties in daily life. The causes of age-related hearing loss are complex and include changes in peripheral hearing, central processing, and cognitive-related abilities. Furthermore, the factors by which aging relates to hearing loss via changes in auditory processing ability are still unclear. In this cross-sectional study, we evaluated 27 older adults (over 60 years old) with age-related hearing loss, 21 older adults (over 60 years old) with normal hearing, and 30 younger subjects (18-30 years old) with normal hearing. We used the outcome of the upper-threshold test, including the time-compressed threshold and the speech recognition threshold in noisy conditions, as a behavioral indicator of auditory processing ability. We also used electroencephalography to identify presbycusis-related abnormalities in the brain while the participants were in a spontaneous resting state. The time-compressed threshold and speech recognition threshold data indicated significant differences among the groups. In patients with age-related hearing loss, information masking (babble noise) had a greater effect than energy masking (speech-shaped noise) on processing difficulties. In terms of resting-state electroencephalography signals, we observed enhanced frontal lobe (Brodmann's area, BA11) activation in the older adults with normal hearing compared with the younger participants with normal hearing, and greater activation in the parietal (BA7) and occipital (BA19) lobes in the individuals with age-related hearing loss compared with the younger adults. Our functional connection analysis suggested that, compared with younger people, the older adults with normal hearing exhibited enhanced connections among networks, including the default mode network, sensorimotor network, cingulo-opercular network, occipital network, and frontoparietal network. These results suggest that both normal aging and the development of age-related hearing loss have a negative effect on advanced auditory processing capabilities, and that hearing loss accelerates the decline in speech comprehension, especially in speech-competition situations. Older adults with normal hearing may show increased compensatory recruitment of attentional resources, represented by a top-down active listening mechanism, while those with age-related hearing loss exhibit decompensation of network connections involving multisensory integration.
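Functional connectivity between channels or regions is commonly estimated as the pairwise Pearson correlation of their time series; the sketch below illustrates that generic step only, since the study's exact connectivity measure is not specified in this abstract:

```python
# Pairwise Pearson correlation as a simple functional-connectivity estimate:
# each matrix entry measures how strongly two channels co-vary over time.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def connectivity_matrix(channels):
    """Symmetric matrix of channel-to-channel correlations."""
    k = len(channels)
    return [[pearson(channels[i], channels[j]) for j in range(k)]
            for i in range(k)]

# Two in-phase toy channels and one anti-phase channel.
chans = [[1, 2, 3, 4], [2, 4, 6, 8], [4, 3, 2, 1]]
conn = connectivity_matrix(chans)
```

Group differences in such matrices (e.g., stronger default-mode to frontoparietal entries) are what "enhanced connections among networks" refers to at the analysis level.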
Funding: The National Natural Science Foundation of China (No. 61872231); the National Key Research and Development Program of China (No. 2021YFC2801000); and the Major Research Plan of the National Social Science Foundation of China (No. 2000&ZD130).
Abstract: Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since it involves the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and to capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction and fusion. Self-supervised embedding models are introduced for feature extraction, giving a more powerful representation of the original data than approaches using spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature-interaction processes: a bidirectional Long Short-Term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of our proposed model, comprehensive experiments are conducted on three widely used benchmark datasets, including IEMOCAP, MELD and CMU-MOSEI. The competitive results verify the effectiveness of our approach.