Nowadays, millions of users use social media platforms every day. These services generate massive volumes of messages, which play a vital role in the social networking paradigm. An intelligent emotion-learning system is therefore needed for detecting emotion in these messages, as such a system could help in understanding users' feelings toward a particular discussion. This paper proposes a text-based emotion recognition approach that uses personal text data to recognize a user's current emotion. The proposed approach applies the Dominant Meaning Technique to recognize the user's emotion. The paper reports promising experimental results on the tested dataset based on the proposed algorithm.
This article presents an analysis of the patterns of interaction resulting from the positive and negative emotional events that occur in cities, considering cities as complex systems. Starting from the imaginaries, it explores how certain urban objects can act as emotional agents and how these events affect the urban system as a whole. An adaptive complex systems perspective is used to analyze these patterns. The results show patterns in the processes and dynamics that occur in cities based on the objects that affect the emotions of the people who live there. These patterns depend on the characteristics of the emotional charge of urban objects, but they can be generalized in the following process: (1) an immediate reaction by some individuals; (2) emotions are generated at the individual level and begin to generalize, evolving into a collective emotion; (3) a process of reflection is triggered in some individuals by the reading of collective emotions; (4) integration/signification in the community, at both the individual and collective levels, of the concepts, roles, and/or functions that give rise to the process in the system. It is therefore clear that emotions play a significant role in the development of cities, and these aspects should be considered in the design strategies of all kinds of projects for the city. Future extensions of this work could include a deeper analysis of specific emotional events in urban environments, as well as possible implications for urban policy and decision making.
With the development of intelligent agents pursuing humanisation, artificial intelligence must consider emotion, the most basic spiritual need in human interaction. Traditional emotional dialogue systems usually use an external emotional dictionary to select appropriate emotional words to add to the response, or concatenate emotional tags and semantic features in the decoding step to generate appropriate responses. However, selecting emotional words from a fixed emotional dictionary may result in a loss of diversity and consistency in the response. We propose a semantic and emotion-based dual latent variable generation model (Dual-LVG) for dialogue systems, which is able to generate appropriate emotional responses without an emotional dictionary. Different from previous work, the conditional variational autoencoder (CVAE) adopts the standard transformer structure. Dual-LVG then regularises the CVAE latent space by introducing a dual latent space of semantics and emotion. The content diversity and emotional accuracy of the generated responses are improved by learning emotion and semantic features respectively. Moreover, an average attention mechanism is adopted to better extract semantic features at the sequence level, and a semi-supervised attention mechanism is used in the decoding step to strengthen the model's fusion of emotional features. Experimental results show that Dual-LVG can successfully generate different content by controlling emotional factors.
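At the core of such a dual-latent design is the CVAE reparameterization trick applied twice, once per latent space, so the decoder can be conditioned on a semantic and an emotional code jointly. A minimal numpy sketch of that idea (all shapes and values are hypothetical, not the paper's actual Dual-LVG implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps (the CVAE reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical encoder outputs for one utterance: separate Gaussian
# posteriors for the semantic and the emotion latent spaces.
mu_sem, logvar_sem = np.zeros(16), np.zeros(16)     # semantic latent
mu_emo, logvar_emo = np.ones(8), np.full(8, -2.0)   # emotion latent

z_sem = reparameterize(mu_sem, logvar_sem, rng)
z_emo = reparameterize(mu_emo, logvar_emo, rng)

# A decoder would condition on both latent codes jointly.
z = np.concatenate([z_sem, z_emo])
print(z.shape)  # (24,)
```

Keeping the two posteriors separate is what lets emotion be controlled independently of content at generation time.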
An important factor in the course of daily medical diagnosis and treatment is the caregiver physician's understanding of patients' emotional states. However, patients usually avoid expressing their emotions when describing their somatic symptoms and complaints to their non-psychiatrist doctor. On the other hand, clinicians usually lack the required expertise (or time) and have a deficit in mining the patients' various verbal and non-verbal emotional signals. As a result, in many cases there is an emotion recognition barrier between the clinician and the patients, making all patients seem the same except for their different somatic symptoms. In particular, we aim to identify and combine the approaches of three major disciplines (psychology, linguistics, and data science) for detecting emotions from verbal communication, and propose an integrated solution for emotion recognition support. Such a platform may give the clinician emotional guides and indices based on verbal communication at consultation time.
BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer. However, there is debate concerning their differential effects on cognitive function, anxiety, and depression in patients undergoing this procedure. AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function, anxiety, depression, and organ function in patients undergoing radical resection of gastric cancer. METHODS A total of 80 patients were involved in this research. The subjects were divided into two groups: a propofol group and a sevoflurane group. Cognitive function was evaluated with the Loewenstein occupational therapy cognitive assessment (LOTCA), and anxiety and depression were assessed with the self-rating anxiety scale (SAS) and self-rating depression scale (SDS). Hemodynamic indicators, oxidative stress levels, and pulmonary function were also measured. RESULTS The LOTCA score at 1 d after surgery was significantly lower in the propofol group than in the sevoflurane group. Additionally, the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group. The sevoflurane group showed greater stability in heart rate and mean arterial pressure than the propofol group. Moreover, the sevoflurane group displayed better pulmonary function and less lung injury than the propofol group. CONCLUSION Both propofol and sevoflurane can be utilized as maintenance anesthesia during radical resection of gastric cancer. Propofol anesthesia has a minimal effect on patients' pulmonary function, consequently enhancing their postoperative recovery. Sevoflurane anesthesia causes less impairment of patients' cognitive function and mitigates negative emotions, leading to an improved postoperative mental state. Therefore, the selection of anesthetic agents should be based on the individual patient's specific circumstances.
Adolescents are considered one of the most vulnerable groups affected by suicide. Rapid changes in adolescents' physical and mental states, as well as in their lives, significantly and undeniably increase the risk of suicide. Psychological, social, family, individual, and environmental factors are important risk factors for suicidal behavior among teenagers and may contribute to suicide risk through various direct, indirect, or combined pathways. Social-emotional learning is considered a powerful intervention measure for addressing the crisis of adolescent suicide. When deliberately cultivated, fostered, and enhanced, self-awareness, self-management, social awareness, interpersonal skills, and responsible decision-making, as the five core competencies of social-emotional learning, can be used to effectively target various risk factors for adolescent suicide and provide necessary mental and interpersonal support. Among numerous suicide intervention methods, school-based interventions based on social-emotional competence have shown great potential in preventing and addressing suicide risk factors in adolescents. The characteristics of school-based interventions based on social-emotional competence, including their appropriateness, necessity, cost-effectiveness, comprehensiveness, and effectiveness, make these interventions an important means of addressing the crisis of adolescent suicide. To further determine the potential of school-based interventions based on social-emotional competence and better address the issue of adolescent suicide, additional financial support should be provided, the combination of social-emotional learning and other suicide prevention programs within schools should be fully leveraged, and cooperation between schools and families, society, and other environments should be maximized. These efforts should be considered future research directions.
Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition from facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distances between the landmarks that change their positions according to facial expressions, especially around the eyes, eyebrows, nose, and mouth. We then apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was compared with that of conventional algorithms using public databases. The results indicate that the proposed method achieves higher accuracy than traditional facial-expression-based methods. In particular, our experiments with the FER2013 database show that the proposed method is robust to lighting conditions and backgrounds, with an average of 25% higher performance than previous studies. Consequently, the proposed method is expected to recognize facial expressions, especially fear and anger, to help prevent severe accidents by detecting security-related or dangerous actions in advance.
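The landmark-distance representation described above can be sketched in a few lines: for N landmark coordinates, the pairwise Euclidean distances yield an N(N−1)/2-dimensional feature vector. A toy illustration (the landmark positions are made up):

```python
import numpy as np
from itertools import combinations

def landmark_distance_features(landmarks):
    """Build a feature vector from pairwise Euclidean distances
    between 2D facial landmarks (N points -> N*(N-1)/2 features)."""
    landmarks = np.asarray(landmarks, dtype=float)
    return np.array([np.linalg.norm(landmarks[i] - landmarks[j])
                     for i, j in combinations(range(len(landmarks)), 2)])

# Toy example: four landmarks (e.g. two eye corners and two mouth corners).
pts = [(0, 0), (3, 0), (0, 4), (3, 4)]
feats = landmark_distance_features(pts)
print(len(feats))  # 6
print(feats[0])    # 3.0 (distance between the first two points)
```

Because only relative distances enter the vector, the representation is insensitive to global illumination and background, which is consistent with the robustness the abstract reports.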
Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation and is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system that uses speech signals through a spatial-temporal convolution network to recognize the speaker's emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to recognize long-term dependencies in speech signals and then feed these temporal cues to a dense network to fuse the spatial features and recognize global information for final classification. The proposed network automatically extracts valid sequential cues from speech signals and performed better than state-of-the-art (SOTA) and traditional machine learning algorithms. The proposed method shows a high recognition rate compared with SOTA methods. The final unweighted accuracies of 80.84% and 92.31% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Database (EMO-DB) datasets, respectively, indicate the robustness and efficiency of the designed model.
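The building block of a TCN is a causal dilated 1-D convolution: each output depends only on current and past inputs, and dilation widens the receptive field so long-term dependencies can be captured when blocks are stacked. A minimal numpy sketch of that core operation (not the authors' actual network):

```python
import numpy as np

def causal_dilated_conv1d(x, weights, dilation):
    """Causal dilated 1-D convolution: y[t] depends only on
    x[t], x[t-d], x[t-2d], ... (left zero-padding keeps the length)."""
    k = len(weights)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(weights[j] * xp[t + pad - j * dilation]
                         for j in range(k))
                     for t in range(len(x))])

x = np.arange(8, dtype=float)  # a toy "speech feature" sequence
y = causal_dilated_conv1d(x, np.array([1.0, 1.0]), dilation=2)
print(y)  # y[t] = x[t] + x[t-2], with zeros before the sequence start
```

Causality matters for streaming speech: the filter never looks ahead of the current frame.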
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two key components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications. This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
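The accuracy, precision, and recall figures reported above follow the standard definitions from raw classification counts; a small illustration with hypothetical counts for one emotion class:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, recall, and F1 from true/false positive and
    false negative counts for a single class."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts for one emotion class in a test set.
p, r, f1 = precision_recall_f1(tp=90, fp=10, fn=10)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.9 0.9
```

Reporting precision and recall alongside accuracy, as the abstract does, guards against class-imbalance effects that accuracy alone can hide.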
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
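The Fisher score used in the filter stage ranks each feature by its between-class scatter relative to its within-class scatter, so discriminative features rise to the top before the wrapper stage runs. A toy numpy sketch of that ranking step (data and dimensions are illustrative):

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class scatter divided by
    within-class scatter (higher = more discriminative)."""
    X, y = np.asarray(X, float), np.asarray(y)
    overall = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Toy data: feature 0 separates the two classes, feature 1 is noise.
X = np.array([[0.0, 5.0], [0.1, 4.9], [1.0, 5.1], [1.1, 5.0]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)
ranking = np.argsort(scores)[::-1]  # best features first
print(ranking[0])  # 0
```

Sorting features this way gives high-ranked features a larger selection probability, exactly the behavior the abstract describes for the filter step.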
In smart classrooms, conducting multi-face expression recognition with existing hardware devices to assess students' group emotions can provide educators with more comprehensive and intuitive analysis of classroom effect, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, existing facial expression datasets mostly consist of single-face images, which are of low quality and lack specificity, further restricting the development of this research. This paper aims to propose an end-to-end high-performance multi-face expression recognition algorithm model suitable for smart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). To provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels collected from smart classrooms. In constructing E2E-MFERC, we introduce Re-parameterization visual geometry group (RepVGG) blocks and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability; combine them with a cross-stage partial network fusion module optimized by an attention mechanism (C2f_Attention) to strengthen the extraction of key information; adopt asymptotic feature pyramid network (AFPN) feature fusion tailored to classroom scenes and optimize the head prediction output size; and thereby achieve high-performance end-to-end multi-face expression detection. Finally, we apply the model to smart-classroom group emotion assessment and provide design references for classroom effect analysis evaluation metrics. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving on the mAP of same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8) by 6.8% and 2.5%, respectively, and the F1-score by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, can meet the practical needs of real-time multi-face expression analysis in classrooms, and serves the application of teaching effect assessment well.
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle that issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. The input speech is preprocessed with a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of the way the frequency content of the audio signal changes over time. The next step is normalization of the spectrogram data, which is crucial for Neural Network (NN) training as it aids faster convergence. Then five auditory features (MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz) are extracted from the spectrogram sequentially. The aim of feature selection is to retain only the dominant features and exclude the irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed to select among the multiple audio cues. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of the emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and The Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
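Sequential Forward Selection, one of the two wrapper strategies mentioned above, greedily grows the feature subset by adding whichever candidate most improves a scoring function (normally a classifier's validation accuracy). A schematic sketch, with a made-up additive "usefulness" table standing in for the classifier score:

```python
def sfs(features, score_fn, k):
    """Sequential Forward Selection: greedily add the feature that
    most improves score_fn(subset) until k features are chosen."""
    selected = []
    remaining = list(features)
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda f: score_fn(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical per-feature "usefulness" values standing in for a
# classifier's validation accuracy on each candidate subset.
usefulness = {"mfcc": 0.9, "chroma": 0.5, "tonnetz": 0.2, "contrast": 0.7}
score = lambda subset: sum(usefulness[f] for f in subset)

print(sfs(usefulness, score, k=2))  # ['mfcc', 'contrast']
```

SBS works in the mirror-image direction, starting from the full set and greedily removing the least useful feature; in practice both are run with the real classifier in the loop rather than a fixed table.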
With the development of modern society and the improvement of living standards, care for special needs children has been increasingly highlighted, and numerous corresponding measures such as welfare homes, special education schools, and youth care centers have emerged. Due to the lack of systematic emotional companionship, the mental health of special needs children is bound to be affected. Nowadays, emotional education, analysis, and evaluation are mostly done by psychologists and emotional analysts, and these measures are not widely available. Therefore, many researchers at home and abroad have focused on solving the psychological issues of such children and on their psychological assessment and emotional analysis in daily life. In this paper, a special children's psychological emotion analysis system based on a neural network is proposed, in which the system sends voice information to a cloud platform through intelligent wearable devices. To ensure that the collected data are valid, a series of preprocessing steps, such as Chinese word segmentation, de-emphasis, and so on, is applied before the data are put into the neural network model. The model is based on further research into transfer learning and the Bi-GRU model, which can meet the needs of Chinese text sentiment analysis. The completion rate of the final model test reached 97%, which means that it is ready for use. Finally, a web page is designed that can evaluate and detect abnormal psychological states, and at the same time a personal emotion database can be established.
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
With the rapid spread of information on the Internet and the proliferation of fake news, fake news detection has become more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlation between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the news text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. The model then uses domain features as an enhancement feature, with an attention mechanism to fully capture the more fine-grained emotional features associated with each domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and the MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value of this model is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the approach is advanced and feasible.
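The attention-based fusion described above amounts to weighting the semantic and emotional feature blocks by their relevance before combining them. A minimal numpy sketch of one plausible form, scaled dot-product attention over feature blocks (the vectors and the query are hypothetical, not the paper's learned parameters):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attention_fuse(feature_blocks, query):
    """Fuse feature vectors with scaled dot-product attention:
    each block is weighted by its relevance to a (learned) query."""
    F = np.stack(feature_blocks)              # (n_blocks, d)
    scores = F @ query / np.sqrt(len(query))  # relevance per block
    w = softmax(scores)
    return w @ F, w                           # fused vector (d,), weights

semantic = np.array([0.2, 0.8, 0.1, 0.4])  # e.g. Bi-LSTM text features
emotion  = np.array([0.9, 0.1, 0.7, 0.3])  # e.g. Senta-BiLSTM features
query    = np.array([1.0, 0.0, 1.0, 0.0])  # hypothetical learned query

fused, weights = attention_fuse([semantic, emotion], query)
print(weights.sum())  # ~1.0 (softmax weights)
```

In the actual model the query would be learned per domain, which is what lets the fusion emphasize domain-relevant emotional cues.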
BACKGROUND Stroke frequently results in oropharyngeal dysfunction (OD), leading to difficulties in swallowing and eating, as well as triggering negative emotions, malnutrition, and aspiration pneumonia, which can be detrimental to patients. However, routine nursing interventions often fail to address these issues adequately. Systemic and psychological interventions can improve dysphagia symptoms, relieve negative emotions, and improve quality of life. However, there are few clinical reports of systemic interventions combined with psychological interventions for stroke patients with OD. AIM To explore the effects of combining systemic and psychological interventions in stroke patients with OD. METHODS This retrospective study included 90 stroke patients with OD, admitted to the Second Affiliated Hospital of Qiqihar Medical College (January 2022–December 2023), who were divided into two groups: regular and coalition. Swallowing function grading (using a water swallow test), swallowing function [using the standardized swallowing assessment (SSA)], negative emotions [using the self-rating anxiety scale (SAS) and self-rating depression scale (SDS)], and quality of life (SWAL-QOL) were compared between the groups before and after the intervention; the incidence of aspiration pneumonia was recorded. RESULTS Post-intervention, the coalition group had a greater number of patients with grade 1 swallowing function compared to the regular group, while the number of patients with grade 5 swallowing function was lower than that in the regular group (P<0.05). Post-intervention, the SSA, SAS, and SDS scores of both groups decreased, with a more significant decrease observed in the coalition group (P<0.05). Additionally, the total SWAL-QOL score in both groups increased, with a more significant increase observed in the coalition group (P<0.05). During the intervention period, the total incidence of aspiration and aspiration pneumonia in the coalition group was lower than that in the regular group (4.44% vs 20.00%; P<0.05). CONCLUSION Systemic intervention combined with psychological intervention can improve dysphagia symptoms, alleviate negative emotions, enhance quality of life, and reduce the incidence of aspiration pneumonia in patients with OD.
Emotion recognition is a growing field with numerous applications in smart healthcare systems and Human-Computer Interaction (HCI). However, physical methods of emotion recognition, such as facial expressions, voice, and text data, do not always indicate true emotions, as users can falsify them. Among the physiological methods of emotion detection, the Electrocardiogram (ECG) is a reliable and efficient way of detecting emotions. ECG-enabled smart bands have proven effective in collecting emotional data in uncontrolled environments. Researchers use deep machine learning techniques for emotion recognition using ECG signals, but there is a need to develop efficient models by tuning the hyperparameters. Furthermore, most researchers focus on detecting emotions in individual settings, but there is a need to extend this research to group settings as well, since most emotions are experienced in groups. In this study, we developed a novel lightweight one-dimensional (1D) Convolutional Neural Network (CNN) model by reducing the number of convolution, max pooling, and classification layers. This optimization has led to more efficient emotion classification using ECG. We tested the proposed model's performance using ECG data from the AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups) dataset for both individual and group settings. The results showed that the model achieved an accuracy of 82.21% and 85.62% for valence and arousal classification, respectively, in individual settings. In group settings, the accuracy was even higher, at 99.56% and 99.68% for valence and arousal classification, respectively. By reducing the number of layers, the lightweight CNN model can process data more quickly and with less hardware complexity, making it suitable for implementation on mobile phone devices to detect emotions with improved accuracy and speed.
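A 1D CNN of the kind described is built from 1-D convolution and max-pooling stages; reducing the number of such stages is what makes the model lightweight. A minimal numpy sketch of one conv + pool stage (the filter and the signal are toy values, not trained weights or real ECG):

```python
import numpy as np

def conv1d(x, kernel):
    """Valid 1-D convolution (cross-correlation), the core op of a 1D CNN."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel)
                     for i in range(len(x) - k + 1)])

def maxpool1d(x, size):
    """Non-overlapping max pooling; size=2 roughly halves the resolution."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Toy "ECG" segment passed through one conv + pool stage.
signal = np.array([0., 1., 0., -1., 0., 2., 0., -2.])
feat = conv1d(signal, np.array([1., -1.]))  # simple difference filter
pooled = maxpool1d(feat, 2)
print(len(feat), len(pooled))  # 7 3
```

Each stage removed from the stack cuts both the multiply count and the intermediate buffer sizes, which is why a shallower network suits on-device (mobile) inference.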
BACKGROUND Panic disorder (PD) involves emotion dysregulation, but its underlying mechanisms remain poorly understood. Previous research suggests that implicit emotion regulation may play a central role in PD-related emotion dysregulation and symptom maintenance. However, there is a lack of studies exploring the neural mechanisms of implicit emotion regulation in PD using neurophysiological indicators. AIM To study the neural mechanisms of implicit emotion regulation in PD with event-related potentials (ERP). METHODS A total of 25 PD patients and 20 healthy controls (HC) underwent clinical evaluations. The study utilized a case-control design with random sampling, selecting participants for the case group from March to December 2018. Participants performed an affect labeling task, using affect labeling as the experimental condition and gender labeling as the control condition. ERP and behavioral data were recorded to compare the late positive potential (LPP) within and between the groups. RESULTS Both the PD and HC groups showed longer reaction times and decreased accuracy under affect labeling. In the HC group, late LPP amplitudes exhibited a dynamic pattern of initial increase followed by decrease. Importantly, a significant group × condition interaction effect was observed. Simple effect analysis revealed a reduction in the differences in late LPP amplitudes between the affect labeling and gender labeling conditions in the PD group compared to the HC group. Furthermore, among PD patients under affect labeling, the late LPP was negatively correlated with disease severity, symptom frequency, and intensity. CONCLUSION PD patients demonstrate abnormalities in implicit emotion regulation, hampering their ability to mobilize cognitive resources for downregulating negative emotions. The late LPP amplitude in response to affect labeling may serve as a potentially valuable clinical indicator of PD severity.
BACKGROUND: Acute pancreatitis (AP), a common acute abdominal disease, has a high incidence rate worldwide and is often accompanied by severe complications. Negative emotions lead to increased secretion of stress hormones, elevated blood sugar levels, and enhanced insulin resistance, which in turn increase the risk of AP and significantly affect the patient's quality of life. Therefore, exploring the effects of narrative nursing programs on the negative emotions of patients with AP is not only helpful in alleviating psychological stress and improving quality of life but also has significant implications for improving disease outcomes and prognosis. AIM: To construct a narrative nursing model for negative emotions in patients with AP and verify its efficacy in application. METHODS: Through Delphi expert consultation, a narrative nursing model for negative emotions in patients with AP was constructed. A non-randomized quasi-experimental design was used. A total of 92 patients with AP and negative emotions admitted to a tertiary hospital in Nantong City, Jiangsu Province, China from September 2022 to August 2023 were recruited by convenience sampling; the 46 patients admitted from September 2022 to February 2023 were included in the observation group, and the 46 patients admitted from March to August 2023 served as the control group. The observation group received the narrative nursing plan, while the control group was given routine nursing. The self-rating anxiety scale (SAS), self-rating depression scale (SDS), positive and negative affect scale (PANAS), caring behavior scale, patient satisfaction scale, and 36-item short form health survey questionnaire (SF-36) were used to evaluate emotions, satisfaction, and caring behaviors in the two groups on the day of discharge and at 1 and 3 months after discharge. RESULTS: According to the inclusion and exclusion criteria, 45 cases in the intervention group and 44 cases in the control group were eventually recruited and completed the study. On the day of discharge, the intervention group showed significantly lower SAS, SDS, and negative emotion scores (28.57 ± 4.52 vs 17.4 ± 4.44, P < 0.001), and evidently higher positive emotion, caring behavior, and satisfaction scores compared to the control group (P < 0.05). Repeated-measures analysis of variance showed significant between-group differences in the time effect, inter-group effect, and interaction effect of the SAS and PANAS scores, as well as in the time effect and inter-group effect of the SF-36 scores (P < 0.05); the SF-36 scores of the two groups at 3 months after discharge were higher than those at 1 month after discharge (P < 0.05). CONCLUSION: The application of narrative nursing protocols has demonstrated significant effectiveness in alleviating anxiety, ameliorating negative emotions, and enhancing satisfaction among patients with AP.
BACKGROUND: Studies have revealed that children's psychological, behavioral, and emotional problems are easily influenced by the family environment. In recent years, the family structure in China has undergone significant changes, with more families having two or three children. AIM: To explore the relationship between emotional behavior and parental job stress in only-child and non-only-child preschool children. METHODS: Children aged 3-6 in kindergartens in four main urban areas of Shijiazhuang were selected by stratified sampling for a questionnaire survey and divided into only-child and non-only-child groups. Their emotional behaviors and parental stress were compared. Only children and non-only children were paired in a 1:1 ratio by class and age (difference of no more than 6 months), and the matched data were compared. The relationship between children's emotional behavior and parents' job stress before and after matching was analyzed. RESULTS: Before matching, the mother's occupation, children's personality characteristics, and children's rearing patterns differed between the groups (P < 0.05). After matching 550 pairs, differences in the children's parenting styles remained, and there were significant differences in children's gender and parents' attitudes toward children between the two groups. The Strengths and Difficulties Questionnaire (SDQ) scores of children in the only-child group and the Parenting Stress Index-Short Form (PSI-SF) scores of their parents were significantly lower than those in the non-only-child group (P < 0.05). Pearson's correlation analysis showed that after matching, there was a positive correlation between children's parenting style and parents' attitudes toward their children (r = 0.096, P < 0.01), and the PSI-SF score was positively correlated with children's gender, parents' attitudes toward their children, and SDQ scores (r = 0.077, 0.193, 0.172, 0.222). CONCLUSION: Preschool children's emotional behavior problems and parental stress were significantly higher in multi-child families. Parental stress in differently structured families was associated with many factors, and preschool children's emotional behavior was positively correlated with parental stress.
Abstract: Nowadays, millions of users engage with social media systems every day. These services produce massive volumes of messages, which play a vital role in the social networking paradigm. An intelligent emotion learning system is therefore needed to detect emotion in these messages; such a system could help in understanding users' feelings toward a particular discussion. This paper proposes a text-based emotion recognition approach that uses personal text data to recognize a user's current emotion. The proposed approach applies the Dominant Meaning Technique to recognize the user's emotion. The paper reports promising experimental results on the tested dataset based on the proposed algorithm.
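The abstract above does not detail the Dominant Meaning Technique itself, but the general idea of text-based emotion recognition can be illustrated with a minimal lexicon-scoring sketch. The emotion word lists below are hypothetical placeholders, not the paper's actual resource:

```python
from collections import Counter

# Hypothetical emotion lexicon; the paper's actual dominant-meaning
# word sets are not specified in the abstract.
LEXICON = {
    "joy": {"happy", "great", "love", "wonderful"},
    "anger": {"hate", "angry", "furious", "annoying"},
    "sadness": {"sad", "lonely", "miss", "cry"},
}

def dominant_emotion(text: str) -> str:
    """Score each emotion by lexicon-word overlap and return the dominant one."""
    tokens = Counter(text.lower().split())
    scores = {emo: sum(tokens[w] for w in words) for emo, words in LEXICON.items()}
    return max(scores, key=scores.get)
```

For example, `dominant_emotion("I love this wonderful day")` scores "joy" highest because two of its lexicon words appear in the message.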
Abstract: This article presents an analysis of the patterns of interaction resulting from the positive and negative emotional events that occur in cities, considering cities as complex systems. It explores, from the perspective of imaginaries, how certain urban objects can act as emotional agents and how these events affect the urban system as a whole. An adaptive complex systems perspective is used to analyze these patterns. The results show patterns in the processes and dynamics that occur in cities based on the objects that affect the emotions of the people who live there. These patterns depend on the characteristics of the emotional charge of urban objects, but they can be generalized in the following process: (1) immediate reaction by some individuals; (2) emotions are generated at the individual level and begin to generalize, evolving into a collective emotion; (3) a process of reflection is triggered in some individuals by the reading of collective emotions; (4) integration/signification in the community, at both the individual and collective level, of the concepts, roles, and/or functions that give rise to the process in the system. It is therefore clear that emotions play a significant role in the development of cities, and these aspects should be considered in the design strategies of all kinds of projects for the city. Future extensions of this work could include a deeper analysis of specific emotional events in urban environments, as well as possible implications for urban policy and decision making.
Funding: Fundamental Research Funds for the Central Universities of China, Grant/Award Number: CUC220B009; National Natural Science Foundation of China, Grant/Award Numbers: 62207029, 62271454, 72274182.
Abstract: With the development of intelligent agents pursuing humanisation, artificial intelligence must consider emotion, the most basic spiritual need in human interaction. Traditional emotional dialogue systems usually use an external emotional dictionary to select appropriate emotional words to add to the response, or concatenate emotional tags and semantic features in the decoding step to generate appropriate responses. However, selecting emotional words from a fixed emotional dictionary may reduce the diversity and consistency of the response. We propose a semantic- and emotion-based dual latent variable generation model (Dual-LVG) for dialogue systems, which is able to generate appropriate emotional responses without an emotional dictionary. Different from previous work, the conditional variational autoencoder (CVAE) adopts the standard transformer structure. Dual-LVG then regularises the CVAE latent space by introducing a dual latent space of semantics and emotion. The content diversity and emotional accuracy of the generated responses are improved by learning emotion and semantic features respectively. Moreover, an average attention mechanism is adopted to better extract semantic features at the sequence level, and a semi-supervised attention mechanism is used in the decoding step to strengthen the fusion of the model's emotional features. Experimental results show that Dual-LVG can successfully generate different content by controlling emotional factors.
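The dual latent space idea can be sketched with the standard CVAE reparameterization trick: sample a separate code from the semantic and emotion posteriors and condition the decoder on both. The dimensions and encoder outputs below are illustrative assumptions, not the Dual-LVG architecture itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

# Hypothetical encoder outputs for one utterance: separate Gaussian
# posteriors for the semantic and emotion latent spaces (dims illustrative).
mu_sem, log_var_sem = np.zeros(16), np.zeros(16)
mu_emo, log_var_emo = np.ones(8), np.full(8, -2.0)

z_sem = sample_latent(mu_sem, log_var_sem, rng)
z_emo = sample_latent(mu_emo, log_var_emo, rng)
z = np.concatenate([z_sem, z_emo])  # decoder would condition on both codes
```

Keeping the two posteriors separate is what lets emotional factors be controlled independently of content at generation time.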
Abstract: An important factor in the course of daily medical diagnosis and treatment is the caregiver physician's understanding of patients' emotional states. However, patients usually avoid speaking out about their emotions when expressing their somatic symptoms and complaints to their non-psychiatrist doctor. On the other hand, clinicians usually lack the required expertise (or time) and have a deficit in mining the patients' various verbal and non-verbal emotional signals. As a result, in many cases there is an emotion recognition barrier between the clinician and the patients, making all patients seem the same except for their different somatic symptoms. We aim to identify and combine the approaches of three major disciplines (psychology, linguistics, and data science) for detecting emotions from verbal communication, and propose an integrated solution for emotion recognition support. Such a platform may give the clinician emotional guides and indices based on verbal communication at consultation time.
Abstract: BACKGROUND: Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer. However, there is debate concerning their differential effects on cognitive function, anxiety, and depression in patients undergoing this procedure. AIM: To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function, anxiety, depression, and organ function in patients undergoing radical resection of gastric cancer. METHODS: A total of 80 patients were involved in this research. The subjects were divided into two groups: the propofol group and the sevoflurane group. Cognitive function was evaluated with the Loewenstein occupational therapy cognitive assessment (LOTCA), and anxiety and depression were assessed with the aid of the self-rating anxiety scale (SAS) and self-rating depression scale (SDS). Hemodynamic indicators, oxidative stress levels, and pulmonary function were also measured. RESULTS: The LOTCA score at 1 day after surgery was significantly lower in the propofol group than in the sevoflurane group. Additionally, the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group. The sevoflurane group showed greater stability in heart rate as well as mean arterial pressure compared to the propofol group. Moreover, the sevoflurane group displayed better pulmonary function and less lung injury than the propofol group. CONCLUSION: Both propofol and sevoflurane can be utilized as maintenance anesthesia during radical resection of gastric cancer. Propofol anesthesia has a minimal effect on patients' pulmonary function, consequently enhancing their postoperative recovery. Sevoflurane anesthesia causes less impairment of patients' cognitive function and mitigates negative emotions, leading to an improved postoperative mental state. Therefore, the selection of anesthetic agents should be based on the individual patient's specific circumstances.
Abstract: Adolescents are considered one of the groups most vulnerable to suicide. Rapid changes in adolescents' physical and mental states, as well as in their lives, significantly and undeniably increase the risk of suicide. Psychological, social, family, individual, and environmental factors are important risk factors for suicidal behavior among teenagers and may contribute to suicide risk through various direct, indirect, or combined pathways. Social-emotional learning is considered a powerful intervention measure for addressing the crisis of adolescent suicide. When deliberately cultivated, fostered, and enhanced, self-awareness, self-management, social awareness, interpersonal skills, and responsible decision-making, as the five core competencies of social-emotional learning, can be used to effectively target various risk factors for adolescent suicide and provide necessary mental and interpersonal support. Among numerous suicide intervention methods, school-based interventions based on social-emotional competence have shown great potential in preventing and addressing suicide risk factors in adolescents. The characteristics of such interventions, including their appropriateness, necessity, cost-effectiveness, comprehensiveness, and effectiveness, make them an important means of addressing the crisis of adolescent suicide. To further determine the potential of school-based interventions based on social-emotional competence and better address the issue of adolescent suicide, additional financial support should be provided, the combination of social-emotional learning and other suicide prevention programs within schools should be fully leveraged, and cooperation between schools and families, society, and other environments should be maximized. These efforts should be considered future research directions.
Funding: Supported by the Healthcare AI Convergence R&D Program through the National IT Industry Promotion Agency of Korea (NIPA) funded by the Ministry of Science and ICT (No. S0102-23-1007), and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1A6A1A03015496).
Abstract: Emotion recognition based on facial expressions is one of the most critical elements of human-machine interfaces. Most conventional methods for emotion recognition from facial expressions use the entire facial image to extract features and then recognize specific emotions through a pre-trained model. In contrast, this paper proposes a novel feature vector extraction method using the Euclidean distances between landmarks that change their positions according to facial expressions, especially around the eyes, eyebrows, nose, and mouth. We then apply a new classifier using an ensemble network to increase emotion recognition accuracy. The emotion recognition performance was compared with conventional algorithms using public databases. The results indicate that the proposed method achieved higher accuracy than traditional facial-expression-based methods. In particular, our experiments with the FER2013 database show that the proposed method is robust to lighting conditions and backgrounds, with an average of 25% higher performance than previous studies. Consequently, the proposed method is expected to recognize facial expressions, especially fear and anger, to help prevent severe accidents by detecting security-related or dangerous actions in advance.
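The landmark-distance feature extraction described above can be sketched in a few lines: given detected (x, y) landmark coordinates, compute all pairwise Euclidean distances and keep the upper triangle as the feature vector. The five toy points below are illustrative; the paper uses landmarks around the eyes, eyebrows, nose, and mouth:

```python
import numpy as np

def landmark_distance_features(landmarks: np.ndarray) -> np.ndarray:
    """Pairwise Euclidean distances between (x, y) landmarks,
    flattened into a feature vector (upper triangle only)."""
    diff = landmarks[:, None, :] - landmarks[None, :, :]   # (n, n, 2)
    dist = np.sqrt((diff ** 2).sum(-1))                    # (n, n)
    iu = np.triu_indices(len(landmarks), k=1)
    return dist[iu]

# Toy set of 5 landmarks -> C(5, 2) = 10 distances
pts = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]])
feat = landmark_distance_features(pts)
```

Because distances change as expressions move the landmarks (e.g. mouth corners spreading in a smile), this vector captures expression geometry independently of absolute face position.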
Abstract: Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation and is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system that uses speech signals and a spatial-temporal convolution network to recognize the speaker's emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to recognize long-term dependencies in speech signals and then feed these temporal cues to a dense network to fuse the spatial features and recognize global information for final classification. The proposed network extracts valid sequential cues automatically from speech signals and performed better than state-of-the-art (SOTA) and traditional machine learning algorithms. Results of the proposed method show a high recognition rate compared with SOTA methods. The final unweighted accuracies of 80.84% and 92.31% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Database (EMO-DB) datasets, respectively, indicate the robustness and efficiency of the designed model.
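The core operation inside a TCN block is a dilated causal 1-D convolution: each output depends only on the current and past samples, and the dilation spaces out the taps so that stacked layers see long histories. A minimal numpy sketch (a single tap pattern, not the paper's full block with residual connections):

```python
import numpy as np

def causal_dilated_conv(x: np.ndarray, w: np.ndarray, dilation: int) -> np.ndarray:
    """1-D causal convolution with dilation: y[t] depends only on
    x[t], x[t-d], x[t-2d], ... (sequence is left-padded with zeros)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

x = np.arange(6, dtype=float)                         # [0, 1, 2, 3, 4, 5]
y = causal_dilated_conv(x, np.array([1.0, 1.0]), 2)   # y[t] = x[t] + x[t-2]
```

Doubling the dilation at each layer makes the receptive field grow exponentially with depth, which is how a TCN captures long-term dependencies in speech frames.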
Abstract: Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major key components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep-learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications. This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm performs multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
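The Fisher Score used in the filter stage ranks each feature by the ratio of between-class scatter to within-class scatter. A minimal numpy sketch with toy data (the exact weighting conventions vary between papers; this uses the common class-size-weighted form):

```python
import numpy as np

def fisher_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Per-feature Fisher Score: between-class variance over
    within-class variance (higher = more discriminative)."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)   # epsilon guards zero within-class variance

# Toy data: feature 0 separates the classes, feature 1 is pure noise
X = np.array([[0.0, 1], [0.1, 0], [1.0, 1], [0.9, 0]])
y = np.array([0, 0, 1, 1])
scores = fisher_score(X, y)
```

Here `scores[0]` comes out far larger than `scores[1]`, so a filter stage keeping top-ranked features would retain feature 0.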
Funding: The Science and Technology Project of State Grid Corporation of China under Grant No. 5700-202318292A-1-1-ZN.
Abstract: In smart classrooms, conducting multi-face expression recognition with existing hardware devices to assess students' group emotions can provide educators with more comprehensive and intuitive classroom effect analysis, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, existing facial expression datasets are mostly single-face images, which are of low quality and lack specificity, also restricting the development of this research. This paper aims to propose an end-to-end high-performance multi-face expression recognition algorithm model suitable for smart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). To provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels, collected from smart classrooms. In constructing E2E-MFERC, we introduce the Re-parameterization Visual Geometry Group (RepVGG) block and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability; combined with the cross-stage partial network fusion module optimized by an attention mechanism (C2f_Attention), this strengthens the ability to extract key information. The model adopts an asymptotic feature pyramid network (AFPN) for feature fusion tailored to classroom scenes and optimizes the head prediction output size, achieving high-performance end-to-end multi-face expression detection. Finally, we apply the model to smart classroom group emotion assessment and provide design references for classroom effect analysis evaluation metrics. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving on the mAP of same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8) by 6.8% and 2.5%, and the F1-score by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, meets the practical needs of real-time multi-face expression analysis in classrooms, and serves the application of teaching effect assessment well.
Abstract: Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore tackles the issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The dataset used is taken from the EMO-DB database. Preprocessing of the input speech is done using a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of how the audio signal's frequency content changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then five auditory features, MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz, are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for selecting features from the multiple audio cues. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since the deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM at capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin Database of Emotional Speech (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
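Sequential Forward Selection, as used above, is a greedy wrapper: starting from an empty set, it repeatedly adds the feature whose inclusion most improves a scoring model. The sketch below stands in a simple nearest-centroid accuracy for the paper's Bi-LSTM evaluation, purely to keep the example self-contained:

```python
import numpy as np

def nearest_centroid_accuracy(X, y, idx):
    """Resubstitution accuracy of a nearest-centroid rule using only
    the feature columns in idx (a crude stand-in for the paper's
    Bi-LSTM-based evaluation of a candidate feature subset)."""
    Xs = X[:, idx]
    cents = {c: Xs[y == c].mean(axis=0) for c in np.unique(y)}
    pred = [min(cents, key=lambda c: np.linalg.norm(x - cents[c])) for x in Xs]
    return np.mean(np.array(pred) == y)

def sfs(X, y, n_select):
    """Greedy Sequential Forward Selection over feature columns."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        best = max(remaining,
                   key=lambda f: nearest_centroid_accuracy(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy data: only column 2 separates the two classes
X = np.array([[0, 5, 0.0], [0, 6, 0.1], [0, 5, 1.0], [0, 6, 0.9]])
y = np.array([0, 0, 1, 1])
chosen = sfs(X, y, n_select=1)
```

SBS works the same way in reverse, starting from the full set and greedily removing the least useful feature at each step.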
Abstract: With the development of modern society and the improvement of living standards, care for special needs children has been increasingly highlighted, and numerous corresponding measures such as welfare homes, special education schools, and youth care centers have emerged. Due to the lack of systematic emotional companionship, the mental health of special needs children is bound to be affected. Nowadays, emotional education, analysis, and evaluation are mostly done by psychologists and emotional analysts, and these measures are not widely available. Therefore, many researchers at home and abroad have focused on solving the psychological issues of such children and on their psychological assessment and emotional analysis in daily life. In this paper, a neural-network-based psychological emotion analysis for special needs children is proposed, in which the system sends voice information to a cloud platform through intelligent wearable devices. To ensure that the collected data are valid, a series of pretreatments such as Chinese word segmentation and deduplication are applied before the data are put into the neural network model. The model builds on further research into transfer learning and the Bi-GRU model, and can meet the needs of Chinese text sentiment analysis. The completion rate of the final model test has reached 97%, which means that it is ready for use. Finally, a web page is designed that can evaluate and detect abnormal psychological states, and at the same time a personal emotion database can be established.
Funding: Supported by the Education and Teaching Reform Project of the First Clinical College of Chongqing Medical University, No. CMER202305, and the Natural Science Foundation of Tibet Autonomous Region, No. XZ2024ZR-ZY100(Z).
Abstract: This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Funding: The authors are highly thankful to the National Social Science Foundation of China (20BXW101, 18XXW015), the Innovation Research Project for the Cultivation of High-Level Scientific and Technological Talents (Top-Notch Talents of the Discipline) (ZZKY2022303), the National Natural Science Foundation of China (Nos. 62102451, 62202496), and the Basic Frontier Innovation Project of Engineering University of People's Armed Police (WJX202316). This work is also supported by the National Natural Science Foundation of China (No. 62172436), Engineering University of PAP's Funding for Scientific Research Innovation Team, Engineering University of PAP's Funding for Basic Scientific Research, Engineering University of PAP's Funding for Education and Teaching, and the Natural Science Foundation of Shaanxi Province (No. 2023-JCYB-584).
Abstract: With the rapid spread of Internet information and of fake news, fake news detection becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlation between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. The model then uses domain features as an enhancement and an attention mechanism to fully capture the finer-grained emotional features associated with each domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value of this model is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the approach is advanced and feasible.
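The attention-based fusion step can be illustrated with a minimal sketch: named feature vectors (here semantic and emotional) are weighted by softmax-normalized similarity to a query vector and summed. The query and the feature values below are hypothetical stand-ins for the model's learned domain representation:

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())   # shift for numerical stability
    return e / e.sum()

def attention_fuse(features: dict, query: np.ndarray) -> np.ndarray:
    """Fuse named feature vectors with attention weights derived from
    their dot-product similarity to a (hypothetical) domain query."""
    names = list(features)
    stacked = np.stack([features[n] for n in names])   # (n_feats, d)
    weights = softmax(stacked @ query)                 # (n_feats,)
    return weights @ stacked                           # weighted sum, (d,)

d = 4
feats = {
    "semantic": np.ones(d),        # placeholder Bi-LSTM output
    "emotion": np.full(d, 2.0),    # placeholder Senta-BiLSTM output
}
fused = attention_fuse(feats, query=np.ones(d))
```

Because the weights sum to one, the fused vector stays on the scale of its inputs while letting the domain query decide how much each feature stream contributes.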
Funding: Supported by the Qiqihar City Science and Technology Plan Joint Guidance Project, No. LSFGG-2022085.
Abstract: BACKGROUND: Stroke frequently results in oropharyngeal dysfunction (OD), leading to difficulties in swallowing and eating, as well as triggering negative emotions, malnutrition, and aspiration pneumonia, which can be detrimental to patients. However, routine nursing interventions often fail to address these issues adequately. Systemic and psychological interventions can improve dysphagia symptoms, relieve negative emotions, and improve quality of life, yet there are few clinical reports of systemic interventions combined with psychological interventions for stroke patients with OD. AIM: To explore the effects of combining systemic and psychological interventions in stroke patients with OD. METHODS: This retrospective study included 90 stroke patients with OD, admitted to the Second Affiliated Hospital of Qiqihar Medical College (January 2022 to December 2023), who were divided into two groups: regular and coalition. Swallowing function grading (using a water swallow test), swallowing function [using the standardized swallowing assessment (SSA)], negative emotions [using the self-rating anxiety scale (SAS) and self-rating depression scale (SDS)], and quality of life (SWAL-QOL) were compared between the groups before and after the intervention, and the incidence of aspiration pneumonia was recorded. RESULTS: Post-intervention, the coalition group had more patients with grade 1 swallowing function than the regular group, while the number of patients with grade 5 swallowing function was lower than in the regular group (P < 0.05). Post-intervention, the SSA, SAS, and SDS scores of both groups decreased, with a more significant decrease in the coalition group (P < 0.05). Additionally, the total SWAL-QOL score in both groups increased, with a more significant increase in the coalition group (P < 0.05). During the intervention period, the total incidence of aspiration and aspiration pneumonia in the coalition group was lower than that in the regular group (4.44% vs 20.00%; P < 0.05). CONCLUSION: Systemic intervention combined with psychological intervention can improve dysphagia symptoms, alleviate negative emotions, enhance quality of life, and reduce the incidence of aspiration pneumonia in patients with OD.
Abstract: Emotion recognition is a growing field with numerous applications in smart healthcare systems and Human-Computer Interaction (HCI). However, physical methods of emotion recognition, such as facial expressions, voice, and text data, do not always indicate true emotions, as users can falsify them. Among the physiological methods of emotion detection, the electrocardiogram (ECG) is a reliable and efficient way of detecting emotions. ECG-enabled smart bands have proven effective in collecting emotional data in uncontrolled environments. Researchers use deep machine learning techniques for emotion recognition using ECG signals, but there is a need to develop efficient models by tuning the hyperparameters. Furthermore, most researchers focus on detecting emotions in individual settings, but this research needs to be extended to group settings as well, since most emotions are experienced in groups. In this study, we developed a novel lightweight one-dimensional (1D) Convolutional Neural Network (CNN) model by reducing the number of convolution, max pooling, and classification layers. This optimization has led to more efficient emotion classification using ECG. We tested the proposed model's performance using ECG data from the AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups) dataset for both individual and group settings. The results showed that the model achieved an accuracy of 82.21% and 85.62% for valence and arousal classification, respectively, in individual settings. In group settings, the accuracy was even higher, at 99.56% and 99.68% for valence and arousal classification, respectively. By reducing the number of layers, the lightweight CNN model can process data more quickly and with less hardware complexity, making it suitable for implementation on mobile phone devices to detect emotions with improved accuracy and speed.
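As a rough illustration of the layer-reduced architecture the abstract describes (a single convolution block, a single max-pooling layer, and a single dense classifier), the forward pass can be sketched in plain NumPy. The filter count, kernel width, window length, and pooling size below are illustrative assumptions, not the study's tuned hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, kernels, bias):
    """Valid 1D convolution followed by ReLU. x: (L,), kernels: (F, K)."""
    F, K = kernels.shape
    # Sliding-window view gives a vectorised dot product per position.
    windows = np.lib.stride_tricks.sliding_window_view(x, K)  # (L-K+1, K)
    return np.maximum(windows @ kernels.T + bias, 0.0).T       # (F, L-K+1)

def max_pool1d(x, size):
    """Non-overlapping max pooling along the time axis. x: (F, L)."""
    F, L = x.shape
    usable = (L // size) * size
    return x[:, :usable].reshape(F, -1, size).max(axis=2)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# One conv block + one pooling layer + one dense classifier: the
# "lightweight" layout, with randomly initialised (untrained) weights.
ecg_segment = rng.standard_normal(1280)        # one raw ECG window (assumed length)
kernels = rng.standard_normal((8, 7)) * 0.1    # 8 filters of width 7 (assumed)
bias = np.zeros(8)

feat = max_pool1d(conv1d_relu(ecg_segment, kernels, bias), size=4)
flat = feat.ravel()
W = rng.standard_normal((2, flat.size)) * 0.01  # dense layer -> 2 classes
probs = softmax(W @ flat)                        # e.g. low vs high valence
print(probs.shape)  # (2,)
```

In a real pipeline the kernels and dense weights would be learned by backpropagation; this sketch only shows how few operations separate a raw ECG window from a two-class prediction once the layer count is reduced.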
Funding: Supported by the National Natural Science Foundation of China, No. 81871080; the Key R&D Program of Jining (Major Program), No. 2023YXNS004; the National Natural Science Foundation of China, No. 81401486; the Natural Science Foundation of Liaoning Province of China, No. 20170540276; and the Medicine and Health Science Technology Development Program of Shandong Province, No. 202003070713.
Abstract: BACKGROUND Panic disorder (PD) involves emotion dysregulation, but its underlying mechanisms remain poorly understood. Previous research suggests that implicit emotion regulation may play a central role in PD-related emotion dysregulation and symptom maintenance. However, there is a lack of studies exploring the neural mechanisms of implicit emotion regulation in PD using neurophysiological indicators. AIM To study the neural mechanisms of implicit emotion regulation in PD with event-related potentials (ERP). METHODS A total of 25 PD patients and 20 healthy controls (HC) underwent clinical evaluations. The study utilized a case-control design with random sampling, selecting participants for the case group from March to December 2018. Participants performed an affect labeling task, using affect labeling as the experimental condition and gender labeling as the control condition. ERP and behavioral data were recorded to compare the late positive potential (LPP) within and between the groups. RESULTS Both PD and HC groups showed longer reaction times and decreased accuracy under affect labeling. In the HC group, late LPP amplitudes exhibited a dynamic pattern of initial increase followed by decrease. Importantly, a significant group × condition interaction effect was observed. Simple effect analysis revealed a reduction in the difference in late LPP amplitudes between the affect labeling and gender labeling conditions in the PD group compared with the HC group. Furthermore, among PD patients under affect labeling, the late LPP was negatively correlated with disease severity, symptom frequency, and intensity. CONCLUSION PD patients demonstrate abnormalities in implicit emotion regulation, hampering their ability to mobilize cognitive resources for downregulating negative emotions. The late LPP amplitude in response to affect labeling may serve as a potentially valuable clinical indicator of PD severity.
Abstract: BACKGROUND Acute pancreatitis (AP), a common acute abdominal disease, has a high incidence rate worldwide and is often accompanied by severe complications. Negative emotions lead to increased secretion of stress hormones, elevated blood sugar levels, and enhanced insulin resistance, which in turn increase the risk of AP and significantly affect the patient's quality of life. Therefore, exploring the intervention effects of narrative nursing programs on the negative emotions of patients with AP not only helps alleviate psychological stress and improve quality of life but also has significant implications for improving disease outcomes and prognosis. AIM To construct a narrative nursing model for negative emotions in patients with AP and verify its efficacy in application. METHODS Through Delphi expert consultation, a narrative nursing model for negative emotions in patients with AP was constructed. A non-randomized quasi-experimental study design was used. A total of 92 patients with AP and negative emotions admitted to a tertiary hospital in Nantong City, Jiangsu Province, China from September 2022 to August 2023 were recruited by convenience sampling; the 46 patients admitted from September 2022 to February 2023 were included in the observation group, and the 46 patients admitted from March to August 2023 served as the control group. The observation group received the narrative nursing plan, while the control group received routine nursing. The self-rating anxiety scale (SAS), self-rating depression scale (SDS), positive and negative affect scale (PANAS), caring behavior scale, patient satisfaction scale, and 36-item short form health survey questionnaire (SF-36) were used to evaluate the emotions, satisfaction, and caring behaviors of the two groups on the day of discharge and at 1 and 3 months after discharge. RESULTS According to the inclusion and exclusion criteria, 45 cases in the intervention group and 44 cases in the control group were ultimately recruited and completed the study. On the day of discharge, the intervention group showed significantly lower SAS, SDS, and negative emotion scores (28.57±4.52 vs 17.4±4.44, P<0.001), and significantly higher positive emotion, caring behavior scale, and satisfaction scores compared with the control group (P<0.05). Repeated-measures analysis of variance showed significant between-group differences in the time effect, inter-group effect, and interaction effect of the SAS and PANAS scores, as well as in the time effect and inter-group effect of the SF-36 scores (P<0.05); the SF-36 scores of the two groups at 3 months after discharge were higher than those at 1 month after discharge (P<0.05). CONCLUSION The application of narrative nursing protocols has demonstrated significant effectiveness in alleviating anxiety, ameliorating negative emotions, and enhancing satisfaction among patients with AP.
Funding: Shijiazhuang City Science and Technology Research and Development Self-Raised Plan, No. 221460383.
Abstract: BACKGROUND Studies have revealed that children's psychological, behavioral, and emotional problems are easily influenced by the family environment. In recent years, the family structure in China has undergone significant changes, with more families having two or three children. AIM To explore the relationship between emotional behavior and parental job stress in only-child and non-only-child preschool children. METHODS Children aged 3-6 in kindergartens in four main urban areas of Shijiazhuang were selected by stratified sampling for a questionnaire and divided into only-child and non-only-child groups. Their emotional behaviors and parental pressure were compared. Only children and non-only children were paired in a 1:1 ratio by class and age (difference less than or equal to 6 months), and the matched data were compared. The relationship between children's emotional behavior and parents' job stress before and after matching was analyzed. RESULTS Before matching, the mother's occupation, children's personality characteristics, and children's rearing patterns differed between the groups (P<0.05). After matching 550 pairs, differences in the children's parenting styles remained. There were significant differences in children's gender and parents' attitudes toward children between the two groups. The Strengths and Difficulties Questionnaire (SDQ) scores of children in the only-child group and the Parenting Stress Index-Short Form (PSI-SF) scores of their parents were significantly lower than those in the non-only-child group (P<0.05). Pearson's correlation analysis showed that, after matching, children's parenting style was positively correlated with parents' attitudes toward their children (r=0.096, P<0.01), and the PSI-SF score was positively correlated with children's gender, parents' attitudes toward their children, and SDQ scores (r=0.077, 0.193, 0.172, 0.222). CONCLUSION Preschool children's emotional behavior and parental pressure were significantly higher in multi-child families. Parental pressure in differently structured families was associated with many factors, and preschool children's emotional behavior was positively correlated with parental pressure.
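The Pearson correlation analysis reported above can be reproduced in form, though not in data, with a short sketch; the scores below are made-up illustrative values, not the study's measurements:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd @ yd) / np.sqrt((xd @ xd) * (yd @ yd)))

# Hypothetical paired totals for six families (not the study's data):
psi_sf = [70, 82, 65, 90, 75, 88]   # parenting-stress (PSI-SF) totals
sdq    = [12, 17, 10, 20, 13, 18]   # child SDQ totals
r = pearson_r(psi_sf, sdq)
print(round(r, 3))
```

A positive r, as in the study's PSI-SF/SDQ result, means higher parenting stress tends to accompany higher child difficulty scores; the coefficient alone says nothing about the direction of causation.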