Context: The advent of Artificial Intelligence (AI) requires that most human skills be modeled before they can be implemented in algorithms. This requires a detailed and precise understanding of the interfaces of verbal and emotional communication. The progress of AI is significant on the verbal level but modest in the recognition of facial emotions, even though this ability is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in facial emotional expression is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and other mental health professionals, including social workers. It cannot be objectively verified and measured because reliable, valid, and consistently sensitive tools are lacking. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement and 2) operational concepts. We have developed software that uses computer morphing in an attempt to overcome these two obstacles, identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. This will then allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI in the hope of adding the dimension of recognition of facial emotional expressions.
Objective: To study: 1) categorical vs. dimensional aspects of ViFaEmRe, 2) universality vs. idiosyncrasy, 3) immediate vs. ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional-Visual-Acuity (EVA) in an individual observer and in a population aged 20 to 70 years. It can also measure the range and intensity of expressed emotions through three Face-Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and analyze the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-Graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us to build a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then decreases. It depends neither on the sex of the observer nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension varies from person to person. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual; it is dimensional for a population sample. Conclusions: First, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study.
From now on, these new functional parameters will allow us to identify and name “emotional” disorders or illnesses,* which can add a dimension to behavioral disorders in all pathologies that affect the brain. Second, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings pose a challenge to Artificial Intelligence: no global or regional algorithm for this task can simply be programmed into a robot, nor can AI yet compete with human abilities and judgment in this domain. *Here “emotional disorders” refers to disorders of emotional expression and recognition.
Breast cancer (BC) is the most common malignant tumor in women, and its treatment causes not only physical pain but also significant psychological distress. Psychological intervention (PI) has been recognized as an important approach to treating postoperative psychological disorders in BC patients. PI has been shown to have a significant therapeutic effect on postoperative psychological disorders, improving patients' negative emotions, enhancing their psychological resilience, and effectively improving their quality of life and treatment compliance.
BACKGROUND Breast cancer is among the most common malignancies worldwide. With progress in treatment methods, overall survival has been prolonged and the demand for quality care has increased. AIM To investigate the effect of individualized and continuous care intervention in patients with breast cancer. METHODS Two hundred patients with breast cancer who received systemic therapy at The First Affiliated Hospital of Hebei North University (January 2021 to July 2023) were retrospectively selected as research participants. Among them, 134 received routine care (routine group) and 66 received personalized and continuous care (intervention group). Self-Rating Anxiety Scale (SAS), Self-Rating Depression Scale (SDS), and Functional Assessment of Cancer Therapy-Breast (FACT-B) scores, as well as shoulder joint mobility, complication rate, and care satisfaction, were compared between the groups after care. RESULTS SAS and SDS scores were lower in the intervention group than in the routine group at one and three months after care. Total FACT-B scores and all five dimensions were higher in the intervention group than in the routine group at three months. The range of motion of shoulder anteflexion, posterior extension, abduction, internal rotation, and external rotation was greater in the intervention group than in the routine group one month after care. The incidence of postoperative complications was lower in the intervention group (18.18%) than in the routine group (34.33%; P < 0.05). Care satisfaction was higher in the intervention group (90.91%) than in the routine group (78.36%; P < 0.05). CONCLUSION Personalized and continuous care can alleviate negative emotions in patients with breast cancer, accelerate rehabilitation of limb function, decrease the incidence of complications, and improve quality of life and care satisfaction.
BACKGROUND Gastric cancer is a malignant digestive tract tumor that originates from the epithelium of the gastric mucosa and occurs in the gastric antrum, particularly along the lower curvature of the stomach. AIM To evaluate the impact of a positive web-based psychological intervention on emotions, psychological capital, and quality of survival in gastric cancer patients undergoing chemotherapy. METHODS From January 2020 to October 2023, 121 gastric cancer patients on chemotherapy admitted to our hospital were enrolled and divided by admission order into a control group (n = 60) and an observation group (n = 61). They received conventional nursing care alone or conventional nursing care combined with a web-based positive psychological intervention, respectively. The two groups were compared in terms of negative emotions, psychological capital, degree of cancer-related fatigue, and quality of survival. RESULTS After the intervention, the number of patients in the observation group with negative feelings toward chemotherapy was significantly lower than in the control group (P < 0.05); the Positive Psychological Capital Questionnaire score was considerably higher (P < 0.05); the degree of cancer-related fatigue was significantly lower (P < 0.05); and the Quality of Life Scale for Cancer Patients (QLQ-30) score was significantly higher (P < 0.05). CONCLUSION Implementing a web-based positive psychological intervention for gastric cancer chemotherapy patients can effectively improve negative emotions, enhance psychological capital, and improve quality of survival.
Objective: To explore the effect of outpatient nursing intervention on hypoglycemic treatment and the psychological state of diabetic patients. Methods: 148 patients who received outpatient treatment at our hospital from February 2022 to October 2023 were selected and divided, using the random number table method, into a control group and an observation group of 74 cases each. The control group received routine nursing intervention, and the observation group additionally received outpatient nursing intervention. Both groups were observed for the effects of hypoglycemic treatment and for psychological and emotional improvement before and after the intervention. Results: The health behavior scores of the control group were lower than those of the observation group; post-intervention fasting blood glucose, 2-h postprandial blood glucose, Self-Rating Anxiety Scale (SAS), and Self-Rating Depression Scale (SDS) scores of the control group were significantly higher than those of the observation group, and the differences were statistically significant (P < 0.01). Conclusion: Outpatient nursing intervention encouraged patients to adopt healthy behaviors and helped control blood glucose levels. Patients' anxiety, depression, and other adverse psychological states also improved; hence, outpatient nursing intervention is worthy of further promotion.
STEAM (science, technology, engineering, arts, and mathematics) education aims to cultivate innovative talents with multidimensional literacy through interdisciplinary integration and innovative practice. However, a lack of student motivation has emerged as a key factor hindering its effectiveness. This study explores the integrated application of positive emotions and flow experience in STEAM education from the perspective of positive psychology. It systematically explains how these factors enhance learning motivation and promote knowledge internalization, proposing feasible pathways for instructional design, resource provision, environment creation, and team building. The study provides theoretical insights and practical guidance for transforming STEAM education in the new era.
BACKGROUND Primiparas are usually at high risk of perinatal depression, which may cause prolonged labor, increased blood loss, and intensified pain, affecting maternal and fetal outcomes. Interventions are therefore necessary to improve maternal and fetal outcomes and alleviate primiparas' negative emotions (NEs). AIM To discuss the impact of nursing responsibility in midwifery combined with postural and psychological interventions on maternal and fetal outcomes and on primiparas' NEs. METHODS 115 primiparas admitted to Quanzhou Maternity and Child Healthcare Hospital between May 2020 and May 2022 were selected as participants. Among them, 56 (control group, Con) received conventional midwifery and routine nursing; the remaining 59 (research group, Res) received the nursing model of midwifery with postural and psychological interventions. The groups were compared in terms of delivery mode (cesarean, natural, or forceps-assisted), maternal and fetal outcomes (uterine inertia, postpartum hemorrhage, placental abruption, neonatal pulmonary injury, and neonatal asphyxia), NEs (Hamilton Anxiety/Depression Rating Scales, HAMA/HAMD), labor duration, and nursing satisfaction. RESULTS The Res exhibited a markedly higher natural delivery rate and nursing satisfaction than the Con. The Res also showed a lower incidence of adverse events (e.g., uterine inertia, postpartum hemorrhage, placental abruption, neonatal lung injury, and neonatal asphyxia) and a shortened duration of each stage of labor. Its post-interventional HAMA and HAMD scores were statistically lower than both the Con's scores and the pre-interventional values. CONCLUSION The nursing model of midwifery with postural and psychological interventions increases the natural delivery rate and reduces the duration of each labor stage. It is also conducive to improving maternal and fetal outcomes and mitigating primiparas' NEs, and thus deserves wider use in clinical practice.
In this case study, we hypothesized that sympathetic nerve activity would be higher during conversation with the PALRO robot, and that conversation would increase cerebral blood flow near Broca's area. The facial expressions of a human subject were recorded, and cerebral blood flow and heart rate variability were measured during interactions with the humanoid robot. These multimodal data were time-synchronized to quantitatively verify changes from the resting baseline in facial expression analysis, cerebral blood flow, and heart rate variability. In conclusion, the results for this subject indicated that sympathetic nervous activity was dominant, suggesting that the subject may have enjoyed and been excited while talking to the robot (normalized High Frequency < normalized Low Frequency: 0.22 ± 0.16 < 0.78 ± 0.16). Cerebral blood flow values were higher during conversation, and in the resting state after the experiment, than in the resting state before the experiment. Talking increased cerebral blood flow in the frontal region. As the subject was left-handed, it was confirmed that the right side of the brain, where this subject's Broca's area is located, was particularly activated (left < right: 0.15 ± 0.21 < 1.25 ± 0.17). In the sections where a “happy” facial emotion was recognized, the examiner-judged “happy” faces and the MTCNN “happy” results were generally consistent.
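The normalized High Frequency and Low Frequency values quoted in this abstract are each band's share of the combined LF+HF spectral power, which is why the two figures sum to 1. A minimal sketch of that normalization; the band powers below are illustrative, not the study's raw spectra:

```python
def normalized_lf_hf(lf_power, hf_power):
    """Express LF and HF spectral power as fractions of their combined power."""
    total = lf_power + hf_power
    return lf_power / total, hf_power / total

# Illustrative band powers in arbitrary units (not data from the study).
n_lf, n_hf = normalized_lf_hf(780.0, 220.0)
print(n_lf, n_hf)  # 0.78 0.22
```

Because the normalization divides by the same total, n_lf + n_hf is always 1, so reporting one component (as in "nHF < nLF") determines the other.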
BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer. However, their differential effects on cognitive function, anxiety, and depression in patients undergoing this procedure are debated. AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function, anxiety, depression, and organ function in patients undergoing radical resection of gastric cancer. METHODS A total of 80 patients were involved in this research and divided into two groups: a propofol group and a sevoflurane group. Cognitive function was evaluated with the Loewenstein Occupational Therapy Cognitive Assessment (LOTCA), and anxiety and depression were assessed with the Self-Rating Anxiety Scale (SAS) and Self-Rating Depression Scale (SDS). Hemodynamic indicators, oxidative stress levels, and pulmonary function were also measured. RESULTS The LOTCA score at 1 day after surgery was significantly lower in the propofol group than in the sevoflurane group. The SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group. The sevoflurane group showed greater stability in heart rate and mean arterial pressure than the propofol group, and displayed better pulmonary function and less lung injury. CONCLUSION Both propofol and sevoflurane can be used for maintenance anesthesia during radical resection of gastric cancer. Propofol anesthesia has a minimal effect on patients' pulmonary function, enhancing their postoperative recovery. Sevoflurane anesthesia causes less impairment of cognitive function and mitigates negative emotions, leading to an improved postoperative mental state. The choice of anesthetic agent should therefore be based on the individual patient's circumstances.
Adolescents are considered one of the groups most vulnerable to suicide. Rapid changes in adolescents' physical and mental states, as well as in their lives, significantly and undeniably increase the risk of suicide. Psychological, social, family, individual, and environmental factors are important risk factors for suicidal behavior among teenagers and may contribute to suicide risk through various direct, indirect, or combined pathways. Social-emotional learning is considered a powerful intervention for addressing the crisis of adolescent suicide. When deliberately cultivated, fostered, and enhanced, the five core competencies of social-emotional learning (self-awareness, self-management, social awareness, interpersonal skills, and responsible decision-making) can effectively target various risk factors for adolescent suicide and provide necessary mental and interpersonal support. Among the many suicide intervention methods, school-based interventions grounded in social-emotional competence have shown great potential for preventing and addressing suicide risk factors in adolescents. Their appropriateness, necessity, cost-effectiveness, comprehensiveness, and effectiveness make these interventions an important means of addressing the crisis of adolescent suicide. To further determine their potential and better address adolescent suicide, additional financial support should be provided, the combination of social-emotional learning with other suicide prevention programs within schools should be fully leveraged, and cooperation between schools and families, society, and other environments should be maximized. These efforts should be considered future research directions.
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region-based Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, a precision of 97.2%, and a recall of 96.31%. This work introduces a perceptive deep-learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications and present a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
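As a reminder of what the accuracy, precision, and recall figures quoted above measure, here is a minimal sketch computing them from per-class confusion counts; the counts are invented for illustration and are not this paper's data:

```python
# Hypothetical confusion counts for one emotion class in a multi-class FER
# evaluation (invented for illustration; the paper does not publish its counts).
tp, fp, fn, tn = 90, 3, 4, 903  # true/false positives, false/true negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)  # fraction of all predictions correct
precision = tp / (tp + fp)                  # of predicted positives, how many were right
recall = tp / (tp + fn)                     # of actual positives, how many were found

print(accuracy, round(precision, 3), round(recall, 3))
```

Note that accuracy can stay high even when precision or recall for a rare emotion class drops, which is why papers such as this one report all three.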
BACKGROUND Patients with chronic hepatitis B (CHB) experience various problems, including low psychological flexibility, negative emotions, and poor sleep quality; effective nursing interventions are therefore required to reduce adverse events. Acceptance and commitment therapy (ACT) combined with enabling cognitive-behavioral education (ECBE) can improve patients' psychological state and sleep, so we speculated that it may also be effective in patients with CHB. AIM To investigate the effects of different intervention methods on psychological flexibility, negative emotions, and sleep quality in patients with CHB. METHODS This retrospective study examined the clinical and evaluation data of 129 patients with CHB. Interventions were divided into a conventional group (routine nursing, n = 69) and a combination group (ACT combined with ECBE, n = 60). We observed changes in psychological flexibility, negative emotions, sleep quality, and self-care ability in both groups, evaluated using the Acceptance and Action Questionnaire-2nd Edition (AAQ-II), Self-Rating Anxiety Scale (SAS), Self-Rating Depression Scale (SDS), Pittsburgh Sleep Quality Index (PSQI), and Exercise of Self-Care Agency Scale (ESCA). RESULTS Compared with the conventional group, the combination group had a lower AAQ-II score (F between-group = 8.548, F time = 25.020, F interaction = 52.930; all P < 0.001) and lower SAS (t = 5.445) and SDS (t = 7.076) scores (all P < 0.001), as well as lower scores on the PSQI dimensions (t sleep quality = 4.581, t time to fall asleep = 2.826, t sleep duration = 2.436, t sleep efficiency = 5.787, t sleep disorder = 5.008, t hypnotic drugs = 3.786, t daytime dysfunction = 4.812; all P < 0.05). ESCA scores were higher for all dimensions (t health knowledge level = 6.994, t self-concept = 5.902, t self-responsibility = 19.820, t self-care skills = 8.470; all P < 0.001). CONCLUSION ACT combined with ECBE in patients with CHB can improve psychological flexibility and sleep quality, alleviate negative emotions, and improve self-care.
Emotion is an experience shared by human beings, but it is often expressed through conceptual metaphor, which is pervasive in concrete language. Drawing upon the theory of conceptual metaphor and emotion metaphors, this paper studies the translation of positive emotions in David Hawkes' version of The Story of the Stone, using the related daily expressions as the corpus. Following these theories, the paper categorizes positive-emotion metaphors into body metaphors, sensory metaphors, entity metaphors, and orientational metaphors to discuss the process of English translation and the metaphorical mechanism; analyzes the similarities and differences between Chinese and English expressions of positive emotions and their cultural, physiological, and psychological motivations; and finally attempts to summarize translation strategies for emotional expressions. The analysis of metaphorical mechanisms reveals that the common physiological and psychological experiences of human beings, together with cultural exchange, give rise to similarities, while each nation's unique cultural environment, antecedents, and historical development give rise to distinctiveness. The paper concludes that translators need to make full use of their subjectivity, understand both the surface and underlying meanings of the text, and establish a high level of mental compatibility among themselves, the readers, and the author.
In recent years, research on the estimation of human emotions has been active, and applications are expected in various fields. Biological reactions such as electroencephalography (EEG) and the root mean square of successive differences (RMSSD) are indicators that are less influenced by individual arbitrariness. The present study used EEG signals and RMSSD values to assess the emotions aroused by emotion-stimulating images, in order to investigate whether different emotions are associated with characteristic biometric signal fluctuations. Participants underwent EEG and RMSSD measurement while viewing emotionally stimulating images and answering questionnaires. Real-time emotion analysis software was used to identify the evoked emotions by placing them in the Circumplex Model of Affect based on the EEG signals and RMSSD values. Emotions other than happiness did not follow the Circumplex Model of Affect in this study. However, ventral attentional activity may have increased the RMSSD value for disgust, as the β/θ value increased in right-sided brain waves; right-sided brain wave results are therefore necessary when measuring disgust. Happiness can be assessed easily using the Circumplex Model of Affect for positive scene analysis. Improving the current analysis methods may facilitate the investigation of face-to-face communication using biometric signals in the future.
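RMSSD itself is a simple heart-rate-variability statistic: the square root of the mean of the squared differences between successive inter-beat (RR) intervals. A minimal sketch, with an invented RR series for illustration (not data recorded in the study):

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (in ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR series in milliseconds (not data from the study).
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # 14.36
```

Because RMSSD is driven by beat-to-beat differences rather than absolute heart rate, it primarily reflects short-term, parasympathetically mediated variability, which is why it is used here as an emotion indicator.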
Emotion recognition is a growing field with numerous applications in smart healthcare systems and Human-Computer Interaction (HCI). However, physical methods of emotion recognition, such as facial expressions, voice, and text data, do not always indicate true emotions, as users can falsify them. Among physiological methods of emotion detection, the Electrocardiogram (ECG) is a reliable and efficient way of detecting emotions. ECG-enabled smart bands have proven effective in collecting emotional data in uncontrolled environments. Researchers use deep machine learning techniques for emotion recognition using ECG signals, but efficient models still need to be developed by tuning the hyperparameters. Furthermore, most researchers focus on detecting emotions in individual settings, but this research should be extended to group settings as well, since most emotions are experienced in groups. In this study, we developed a novel lightweight one-dimensional (1D) Convolutional Neural Network (CNN) model by reducing the number of convolution, max-pooling, and classification layers. This optimization led to more efficient emotion classification using ECG. We tested the proposed model’s performance using ECG data from the AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups) dataset in both individual and group settings. The results showed that the model achieved an accuracy of 82.21% and 85.62% for valence and arousal classification, respectively, in individual settings. In group settings, the accuracy was even higher, at 99.56% and 99.68% for valence and arousal classification, respectively. By reducing the number of layers, the lightweight CNN model can process data more quickly and with less hardware complexity, making it suitable for implementation on mobile phone devices to detect emotions with improved accuracy and speed.
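The efficiency argument above rests on simple parameter arithmetic: each convolution layer removed takes its weights and biases with it. The paper does not publish its exact architecture, so the layer widths and kernel size below are hypothetical, chosen only to illustrate how pruning convolution blocks shrinks a 1D CNN:

```python
# Parameter-count arithmetic for 1D convolution and dense layers.
# All layer sizes here are illustrative assumptions, not the paper's model.

def conv1d_params(in_ch, out_ch, kernel):
    # weights (in_ch * out_ch * kernel) plus one bias per output channel
    return in_ch * out_ch * kernel + out_ch

def dense_params(in_features, out_features):
    # fully connected layer: weight matrix plus biases
    return in_features * out_features + out_features

# A deeper baseline vs. a "lightweight" variant with fewer conv blocks,
# both ending in a 2-way (e.g. high/low valence) classifier head.
baseline = (conv1d_params(1, 32, 5) + conv1d_params(32, 64, 5)
            + conv1d_params(64, 128, 5) + dense_params(128, 2))
lightweight = conv1d_params(1, 32, 5) + dense_params(32, 2)

print(baseline, lightweight)  # the lightweight variant is far smaller
```

Fewer parameters mean fewer multiply-accumulate operations per ECG window, which is what makes on-device (mobile) inference plausible.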
Introduction: Emotional intelligence, the capacity to cope with one’s emotions, makes it simpler to form good connections with others and perform caring duties. With emotional intelligence, nursing students can join a health team in a helpful and beneficial way. Nurses who can identify, control, and interpret both their own emotions and those of their patients provide better patient care. The purpose of this study was to assess emotional intelligence and to investigate the relationship and differences between emotional intelligence and demographic characteristics of nursing students. Methods: A cross-sectional study was carried out on 381 nursing students. Data collection was completed with the “Schutte Self Report Emotional Intelligence Test”. Data were analyzed with the Statistical Package for the Social Sciences. An independent t-test, ANOVA, Pearson correlation, and multiple linear regression were used. Results: The results revealed that the mean emotional intelligence score was 143.1 ± 21.6 (on a scale ranging from 33 to 165), which is high. The analysis also revealed that most of the participants, 348 (91.3%), had a high emotional intelligence level. This finding suggests that nursing students are emotionally intelligent and may be able to notice, analyze, control, manage, and harness emotion in an adaptive manner. Academic year was also a predictor of nursing students’ emotional intelligence. Furthermore, there was a positive relationship between age and emotional intelligence (p < 0.05). The students’ ability to use their EI increased as they rose through the nursing grades. Conclusion: This study confirmed that the emotional intelligence score of the nursing students was high. Academic year was a predictor of emotional intelligence, and a positive relationship was confirmed between emotional intelligence and the age of nursing students.
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired through acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest-neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
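The filter stage above ranks each feature before the wrapper search runs. The Fisher score it mentions is a standard ratio of between-class to within-class scatter; a minimal pure-Python sketch (the paper's exact combination with information gain is not specified, so this shows the Fisher score alone):

```python
# Minimal Fisher-score filter for ranking a single feature by how well it
# separates classes: between-class scatter over within-class scatter.

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_score(values, labels):
    """Higher score = class means far apart relative to within-class spread."""
    overall = mean(values)
    num = den = 0.0
    for c in set(labels):
        group = [v for v, y in zip(values, labels) if y == c]
        num += len(group) * (mean(group) - overall) ** 2
        den += len(group) * variance(group)
    return num / den if den else float("inf")

labels = [0, 0, 0, 1, 1, 1]
good = fisher_score([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], labels)   # well separated
noisy = fisher_score([1.0, 5.0, 3.0, 2.0, 4.0, 3.5], labels)  # overlapping
print(good > noisy)
```

Features scoring high under such a filter get a larger selection probability in the subsequent wrapper (optimizer) stage, which is what keeps the search tractable in high-dimensional acoustic feature spaces.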
In smart classrooms, conducting multi-face expression recognition based on existing hardware devices to assess students’ group emotions can provide educators with a more comprehensive and intuitive analysis of classroom effects, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, existing facial expression datasets consist mostly of single-face images, which are of low quality and lack specificity, also restricting the development of this research. This paper aims to propose an end-to-end, high-performance multi-face expression recognition algorithm model suitable for smart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). To provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels collected from smart classrooms. In constructing E2E-MFERC, we introduce Re-parameterization Visual Geometry Group (RepVGG) blocks and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability; combine them with a cross-stage partial network fusion module optimized by an attention mechanism (C2f_Attention) to strengthen the extraction of key information; adopt asymptotic feature pyramid network (AFPN) feature fusion tailored to classroom scenes and optimize the head prediction output size; and thereby achieve high-performance end-to-end multi-face expression detection. Finally, we apply the model to smart classroom group emotion assessment and provide design references for classroom effect analysis evaluation metrics. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving on the mAP of same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8) by 6.8% and 2.5%, respectively, and on the F1-score by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, can meet the practical needs of real-time multi-face expression analysis in classrooms, and serves the application of teaching effect assessment very well.
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as “the successor to the mobile Internet”, is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Abstract: Breast cancer (BC) is the most common malignant tumor in women, and the treatment process results not only in physical pain but also in significant psychological distress. Psychological intervention (PI) has been recognized as an important approach to treating postoperative psychological disorders in BC patients. PI has been proven to have a significant therapeutic effect on postoperative psychological disorders, improving patients’ negative emotions, enhancing their psychological resilience, and effectively improving their quality of life and treatment compliance.
Funding: Supported by the Zhangjiakou Science and Technology Plan Project, No. 2322112D.
Abstract: BACKGROUND Breast cancer is among the most common malignancies worldwide. With progress in treatment methods and levels, overall survival has been prolonged and the demand for quality care has increased. AIM To investigate the effect of an individualized and continuous care intervention in patients with breast cancer. METHODS Two hundred patients with breast cancer who received systemic therapy at The First Affiliated Hospital of Hebei North University (January 2021 to July 2023) were retrospectively selected as research participants. Among them, 134 received a routine care intervention (routine group) and 66 received personalized and continuous care (intervention group). Self-rating anxiety scale (SAS), self-rating depression scale (SDS), and Functional Assessment of Cancer Therapy-Breast (FACT-B) scores, as well as shoulder joint range of motion, complication rate, and care satisfaction, were compared between the two groups after care. RESULTS SAS and SDS scores were lower in the intervention group than in the routine group at one and three months after care. The total FACT-B score and its five dimensions were higher in the intervention group than in the routine group at three months of care. The range of motion of shoulder anteflexion, posterior extension, abduction, internal rotation, and external rotation was greater in the intervention group than in the routine group one month after care. The incidence of postoperative complications was 18.18% in the intervention group, lower than in the routine group (34.33%; P<0.05). Satisfaction with care was 90.91% in the intervention group, higher than in the routine group (78.36%; P<0.05). CONCLUSION Personalized and continuous care can alleviate negative emotions in patients with breast cancer, quicken the rehabilitation of limb function, decrease the incidence of complications, and improve quality of life and care satisfaction.
Abstract: BACKGROUND Gastric cancer is a malignant digestive tract tumor that originates from the epithelium of the gastric mucosa and occurs in the gastric antrum, particularly in the lower curvature of the stomach. AIM To evaluate the impact of a positive web-based psychological intervention on emotions, psychological capital, and quality of survival in gastric cancer patients on chemotherapy. METHODS From January 2020 to October 2023, 121 gastric cancer patients on chemotherapy admitted to our hospital were enrolled and divided into a control group (n=60) and an observation group (n=61) according to admission order. They were given conventional nursing care alone or conventional nursing care combined with a web-based positive psychological intervention, respectively. The two groups were compared in terms of negative emotions, psychological capital, degree of cancer-caused fatigue, and quality of survival. RESULTS After the intervention, the number of patients in the observation group who had negative feelings toward chemotherapy treatment was significantly lower than in the control group (P<0.05); the Positive Psychological Capital Questionnaire score was considerably higher than in the control group (P<0.05); the degree of cancer-caused fatigue was significantly lower than in the control group (P<0.05); and the Quality of Life Scale for Cancer Patients (QLQ-C30) score was significantly higher than in the control group (P<0.05). CONCLUSION Implementing a web-based positive psychological intervention for gastric cancer chemotherapy patients can effectively improve negative emotions, enhance psychological capital, and improve the quality of survival.
Abstract: Objective: To explore the effect of outpatient nursing interventions on the hypoglycemic treatment and psychological emotions of diabetic patients. Methods: 148 patients who came to our hospital for outpatient treatment from February 2022 to October 2023 were selected and divided into a control group and an observation group, with 74 cases per group, according to the random number table method. The control group received a routine nursing intervention, and the observation group received an outpatient nursing intervention in addition to that of the control group. The two groups were observed for the effects of hypoglycemic treatment and for psychological and emotional improvement before and after the outpatient nursing intervention. Results: The health behavior scores of the control group were lower than those of the observation group; the post-intervention fasting blood glucose, 2-h postprandial blood glucose, Self-Rating Anxiety Scale (SAS), and Self-Rating Depression Scale (SDS) scores of the control group were significantly higher than those of the observation group, and the differences were statistically significant (P<0.01). Conclusion: The outpatient nursing intervention encouraged patients to comply with healthy behaviors and helped control blood sugar levels. Patients’ anxiety, depression, and other adverse psychological states were also improved; hence, the outpatient nursing intervention is worthy of further promotion.
Funding: Key Scientific Research Project of Henan Provincial Colleges and Universities, “Construction of an Innovation and Entrepreneurship Education Ecosystem Model in Colleges and Universities Based on Ecological Theory” (24B880048); Research and Practice Project on Education and Teaching Reform in Henan Provincial Colleges and Universities (Employment and Innovation and Entrepreneurship Education), “Construction and Practice of a ‘3+N’ Practical Education System Based on Employment and Education Orientation” (2024SJGLX1083); Research and Practice Project on Teaching Reform in Higher Education in Henan Province, “Practical Exploration of the ‘3+3+X’ Collaborative Education Model for Mental Health Education in Medical Schools” (2024SJGLX0142); Research and Practice Project on Education and Teaching Reform at Xinxiang Medical University, “Practical Exploration of Conflicts and Countermeasures in Medical Students’ Internships, Postgraduate Entrance Exams, and Employment from the Perspective of the Conflict Between Work and Study” (2021-XYJG-98).
Abstract: STEAM (science, technology, engineering, arts, and mathematics) education aims to cultivate innovative talents with multidimensional literacy through interdisciplinary integration and innovative practice. However, a lack of student motivation has emerged as a key factor hindering its effectiveness. This study explores the integrated application of positive emotions and flow experience in STEAM education from the perspective of positive psychology. It systematically explains how these factors enhance learning motivation and promote knowledge internalization, proposing feasible pathways for instructional design, resource provision, environment creation, and team building. The study provides theoretical insights and practical guidance for transforming STEAM education in the new era.
Abstract: BACKGROUND Primiparas are usually at high risk of experiencing perinatal depression, which may cause prolonged labor, increased blood loss, and intensified pain, affecting maternal and fetal outcomes. Therefore, interventions are necessary to improve maternal and fetal outcomes and alleviate primiparas’ negative emotions (NEs). AIM To discuss the impact of nursing responsibility in midwifery and of postural and psychological interventions on maternal and fetal outcomes as well as primiparas’ NEs. METHODS 115 primiparas admitted to Quanzhou Maternity and Child Healthcare Hospital between May 2020 and May 2022 were selected as participants. Among them, 56 primiparas (control group, Con) received conventional midwifery and routine nursing. The remaining 59 (research group, Res) received the nursing model of midwifery with postural and psychological interventions. The two groups were comparatively analyzed in terms of delivery mode (cesarean, natural, or forceps-assisted), maternal and fetal outcomes (uterine inertia, postpartum hemorrhage, placental abruption, neonatal pulmonary injury, and neonatal asphyxia), NEs (Hamilton Anxiety/Depression Rating Scales, HAMA/HAMD), labor duration, and nursing satisfaction. RESULTS The Res exhibited a markedly higher natural delivery rate and nursing satisfaction than the Con. Additionally, the Res showed a lower incidence of adverse events (e.g., uterine inertia, postpartum hemorrhage, placental abruption, neonatal lung injury, and neonatal asphyxia) and a shortened duration of the various stages of labor. It also showed statistically lower post-interventional HAMA and HAMD scores than the Con and than pre-interventional values. CONCLUSION The nursing model of midwifery with postural and psychological interventions increases the natural delivery rate and reduces the duration of each labor stage. It is also conducive to improving maternal and fetal outcomes and mitigating primiparas’ NEs, and thus deserves popularity in clinical practice.
Abstract: In this case study, we hypothesized that sympathetic nerve activity would be higher during conversation with the PALRO robot, and that conversation would result in an increase in cerebral blood flow near Broca’s area. The facial expressions of a human subject were recorded, and cerebral blood flow and heart rate variability were measured during interactions with the humanoid robot. These multimodal data were time-synchronized to quantitatively verify the change from the resting baseline through facial expression analysis, cerebral blood flow, and heart rate variability. In conclusion, sympathetic nervous activity was dominant in this subject, suggesting that the subject may have enjoyed and been excited while talking to the robot (normalized High Frequency < normalized Low Frequency: 0.22 ± 0.16 < 0.78 ± 0.16). Cerebral blood flow values were higher during conversation and in the resting state after the experiment than in the resting state before the experiment. Talking increased cerebral blood flow in the frontal region. As the subject was left-handed, it was confirmed that the right side of the brain, where Broca’s area is located, was particularly activated (left < right: 0.15 ± 0.21 < 1.25 ± 0.17). In the sections where a “happy” facial emotion was recognized, the examiner-judged “happy” faces and the MTCNN “happy” results were also generally consistent.
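The paired values 0.78 and 0.22 above are normalized spectral powers of heart rate variability: each band's power divided by the LF+HF total, so the pair sums to 1. A minimal sketch of that normalization (the raw band powers below are hypothetical, and some HRV conventions instead normalize against total power minus VLF):

```python
# Normalized LF/HF powers: nLF = LF/(LF+HF), nHF = HF/(LF+HF).
# Band powers here are made-up illustrative numbers, not study data.

def normalized_lf_hf(lf_power, hf_power):
    total = lf_power + hf_power
    return lf_power / total, hf_power / total

nlf, nhf = normalized_lf_hf(780.0, 220.0)
print(round(nlf, 2), round(nhf, 2))  # 0.78 0.22 — the same nLF > nHF pattern
```

Under this convention, nLF > nHF is read as relative sympathetic dominance, which is the interpretation the case study draws.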
Abstract: BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer. However, there is debate concerning their differential effects on cognitive function, anxiety, and depression in patients undergoing this procedure. AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function, anxiety, depression, and organ function in patients undergoing radical resection of gastric cancer. METHODS A total of 80 patients were involved in this research, divided into two groups: a propofol group and a sevoflurane group. Cognitive function was evaluated with the Loewenstein Occupational Therapy Cognitive Assessment (LOTCA), and anxiety and depression were assessed with the aid of the Self-Rating Anxiety Scale (SAS) and the Self-Rating Depression Scale (SDS). Hemodynamic indicators, oxidative stress levels, and pulmonary function were also measured. RESULTS The LOTCA score at 1 d after surgery was significantly lower in the propofol group than in the sevoflurane group. Additionally, the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group. The sevoflurane group showed greater stability in heart rate and mean arterial pressure than the propofol group. Moreover, the sevoflurane group displayed better pulmonary function and less lung injury than the propofol group. CONCLUSION Both propofol and sevoflurane can be utilized as maintenance anesthesia during radical resection of gastric cancer. Propofol anesthesia has a minimal effect on patients’ pulmonary function, consequently enhancing their postoperative recovery. Sevoflurane anesthesia causes less impairment of patients’ cognitive function and mitigates negative emotions, leading to an improved postoperative mental state. Therefore, the selection of anesthetic agents should be based on each patient’s specific circumstances.
Abstract: Adolescents are considered one of the groups most vulnerable to suicide. Rapid changes in adolescents’ physical and mental states, as well as in their lives, significantly and undeniably increase the risk of suicide. Psychological, social, family, individual, and environmental factors are important risk factors for suicidal behavior among teenagers and may contribute to suicide risk through various direct, indirect, or combined pathways. Social-emotional learning is considered a powerful intervention measure for addressing the crisis of adolescent suicide. When deliberately cultivated, fostered, and enhanced, self-awareness, self-management, social awareness, interpersonal skills, and responsible decision-making, the five core competencies of social-emotional learning, can be used to effectively target various risk factors for adolescent suicide and provide necessary mental and interpersonal support. Among numerous suicide intervention methods, school-based interventions based on social-emotional competence have shown great potential in preventing and addressing suicide risk factors in adolescents. The characteristics of such interventions, including their appropriateness, necessity, cost-effectiveness, comprehensiveness, and effectiveness, make them an important means of addressing the crisis of adolescent suicide. To further determine the potential of school-based interventions based on social-emotional competence and better address the issue of adolescent suicide, additional financial support should be provided, the combination of social-emotional learning and other suicide prevention programs within schools should be fully leveraged, and cooperation between schools and families, society, and other environments should be maximized. These efforts should be considered future research directions.
Abstract: Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region-based Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major key components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach’s resilience and accuracy in identifying and categorizing facial expressions. The model’s overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep-learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications. This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
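The abstract reports precision and recall separately; their harmonic mean, the F1 score, is not stated but can be derived from those two numbers as a quick sanity check (this derivation is ours, not the paper's):

```python
# F1 is the harmonic mean of precision and recall:
# F1 = 2 * P * R / (P + R)

def f1_score(precision, recall):
    return 2 * precision * recall / (precision + recall)

# Plugging in the reported precision (97.2%) and recall (96.31%):
f1 = f1_score(0.972, 0.9631)
print(round(f1, 4))
```

The result (about 0.97) sits between the two inputs, closer to the lower one, which is the defining behavior of a harmonic mean.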
Abstract: BACKGROUND Patients with chronic hepatitis B (CHB) experience various problems, including low psychological flexibility, negative emotions, and poor sleep quality. Therefore, effective nursing interventions are required to reduce adverse events. Acceptance and commitment therapy (ACT) combined with enabling cognitive-behavioral education (ECBE) can improve patients’ psychological state and sleep. We therefore speculate that this may also be effective in patients with CHB. AIM To investigate the effects of different intervention methods on psychological flexibility, negative emotions, and sleep quality in patients with CHB. METHODS This retrospective study examined the clinical and evaluation data of 129 patients with CHB. Intervention methods were divided into a conventional group (routine nursing, n=69) and a combination group (ACT combined with ECBE, n=60). We observed changes in psychological flexibility, negative emotions, sleep quality, and self-care ability in both groups. Observation items were evaluated using the Acceptance and Action Questionnaire, 2nd Edition (AAQ-II), the Self-Rating Anxiety Scale (SAS), the Self-Rating Depression Scale (SDS), the Pittsburgh Sleep Quality Index (PSQI), and the Exercise of Self-Care Agency Scale (ESCA). RESULTS Compared with the conventional group, the AAQ-II score of the combination group was lower (F between-group effects = 8.548; F time effects = 25.020; F interaction effects = 52.930; all P<0.001), as were the SAS score (t=5.445) and SDS score (t=7.076) (all P<0.001) and the PSQI dimension scores (t sleep quality = 4.581, t time to fall asleep = 2.826, t sleep time = 2.436, t sleep efficiency = 5.787, t sleep disorder = 5.008, t hypnotic drugs = 3.786, t daytime dysfunction = 4.812; all P<0.05). The ESCA scores for all dimensions were higher (t health knowledge level = 6.994, t self-concept = 5.902, t self-responsibility = 19.820, t self-care skills = 8.470; all P<0.001). CONCLUSION ACT combined with ECBE in patients with CHB can improve psychological flexibility and sleep quality, alleviate negative emotions, and improve self-care.
Funding: “Comparative Study of the Cultivation of Chinese-English Translation Ability in China Mainland, Hong Kong SAR, and China Taiwan”, Hunan Provincial Philosophy and Social Science Foundation (Project number: 18JD71); “Research on Xi Jinping’s Overseas Signed Articles Based on Comparable Corpus”, Hunan Provincial Philosophy and Social Science Foundation (Project number: 21YBA016); “A Practical Exploration of Translation Theory Talents in Universities Based on Enterprise Training”, Industry-Academy Cooperative Educational Project of the Ministry of Education in 2023.
Abstract: Emotion is an experience shared by human beings, but it is often expressed in the form of conceptual metaphor, which is pervasive in concrete language. Drawing upon the theory of conceptual metaphor and emotion metaphors, this paper studies the translation of positive emotions in David Hawkes’ version of The Story of the Stone, with related daily expressions as the corpus. According to these theories, this paper categorizes positive-emotion metaphors into body metaphors, sensory metaphors, entity metaphors, and orientational metaphors to discuss the process of English translation and the metaphorical mechanism; analyzes the similarities and differences between Chinese and English expressions of positive emotions and their cultural, physiological, and psychological motives; and finally attempts to summarize translation strategies for emotional expressions. The analysis of metaphorical mechanisms reveals that the common physiological and psychological experiences of human beings and cultural exchanges give rise to similarities, while the unique cultural environments, antecedents, and historical development of each nation give rise to distinctiveness. This paper concludes that translators need to make full use of their subjectivity, understand the surface and underlying meanings of the text, and establish a high level of mental compatibility between themselves, the readers, and the author.
Abstract: In recent years, research on the estimation of human emotions has been active, and applications are expected in various fields. Biological reactions, such as electroencephalography (EEG) and the root mean square of successive differences (RMSSD), are indicators that are less influenced by individual arbitrariness. The present study used EEG and RMSSD signals to assess the emotions aroused by emotion-stimulating images, in order to investigate whether different emotions are associated with characteristic biometric signal fluctuations. The participants underwent EEG and RMSSD measurements while viewing emotionally stimulating images and answering questionnaires. Real-time emotion analysis software was used to identify the evoked emotions by placing them in the Circumplex Model of Affect based on the EEG signals and RMSSD values. Emotions other than happiness did not follow the Circumplex Model of Affect in this study. However, ventral attentional activity may have increased the RMSSD value for disgust, as the β/θ value increased in right-sided brain waves. Therefore, right-sided brain-wave results are necessary when measuring disgust. Happiness can be assessed easily using the Circumplex Model of Affect for positive scene analysis. Improving the current analysis methods may facilitate the investigation of face-to-face communication using biometric signals in the future.
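The RMSSD metric referenced in the abstract above is the root mean square of successive differences between adjacent RR (inter-beat) intervals. A minimal sketch of the computation, using made-up RR values rather than this study's data:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between adjacent RR intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative RR series in milliseconds (hypothetical values)
print(round(rmssd([800, 810, 790, 805, 795]), 2))  # → 14.36
```

Higher RMSSD reflects greater beat-to-beat variability, which is why it is used as a parasympathetic (relaxation-related) index.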
Abstract: Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communication. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions, even though this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and other mental health professionals, including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement and 2) operational concepts. We have developed software that uses computer morphing in an attempt to overcome these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. It will then allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI in the hope of adding the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional-Visual-Acuity (EVA) in an individual observer and in a population aged 20 to 70 years. It can also measure the range and intensity of expressed emotions through three Face-Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, "thymia" (defined elsewhere), and low levels of anxiety, and perform analysis of the six primary emotions. Results: We have individualized the following continuous parameters: 1) "Emotional-Visual-Acuity", 2) "Visual-Emotional-Feeling", 3) "Emotional-Quotient", 4) "Emotional-Decision-Making", 5) "Emotional-Decision-Making Graph" or "Individual-Gun-Trigger", 6) "Emotional-Fingerprint" or "Key-graph", 7) "Emotional-Fingerprint-Graph", 8) detecting "misunderstanding", and 9) detecting "error". This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from the ages of 20 to 55 years, then decreases. It depends neither on the sex of the observer nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual; it is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name "emotional" disorders or illnesses, which can add a dimension to behavioral disorders in all pathologies that affect the brain.
Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings pose a challenge to Artificial Intelligence: no globalist or regionalist algorithm can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here, "emotional disorders" refers to disorders of emotional expression and recognition.
Abstract: Emotion recognition is a growing field with numerous applications in smart healthcare systems and Human-Computer Interaction (HCI). However, physical methods of emotion recognition, such as facial expressions, voice, and text data, do not always indicate true emotions, as users can falsify them. Among the physiological methods of emotion detection, the electrocardiogram (ECG) is a reliable and efficient way of detecting emotions. ECG-enabled smart bands have proven effective in collecting emotional data in uncontrolled environments. Researchers use deep machine learning techniques for emotion recognition using ECG signals, but there is a need to develop efficient models by tuning the hyperparameters. Furthermore, most researchers focus on detecting emotions in individual settings, but there is a need to extend this research to group settings as well, since most emotions are experienced in groups. In this study, we developed a novel lightweight one-dimensional (1D) Convolutional Neural Network (CNN) model by reducing the number of convolution, max pooling, and classification layers. This optimization has led to more efficient emotion classification using ECG. We tested the proposed model's performance using ECG data from the AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups) dataset for both individual and group settings. The results showed that the model achieved an accuracy of 82.21% and 85.62% for valence and arousal classification, respectively, in individual settings. In group settings, the accuracy was even higher, at 99.56% and 99.68% for valence and arousal classification, respectively. By reducing the number of layers, the lightweight CNN model can process data more quickly and with less hardware complexity, making it suitable for implementation on mobile phone devices to detect emotions with improved accuracy and speed.
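For readers unfamiliar with the layer types the abstract above says were pruned, the two core operations of a 1D CNN (convolution and max pooling) can be sketched in plain Python. This toy single stage is illustrative only and is not the paper's model; the kernel and "ECG" window are made up:

```python
def conv1d(signal, kernel, stride=1):
    """Valid 1-D convolution (cross-correlation) of a signal with a kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

def max_pool1d(signal, size=2):
    """Non-overlapping 1-D max pooling: keep the largest value in each window."""
    return [max(signal[i:i + size]) for i in range(0, len(signal) - size + 1, size)]

def relu(xs):
    """Element-wise rectified linear activation."""
    return [max(0.0, x) for x in xs]

# Toy "ECG" window passed through one conv -> ReLU -> pool stage;
# the [-1, 2, -1] kernel responds to sharp peaks such as R-waves.
ecg = [0.0, 0.2, 1.0, 0.3, -0.1, 0.0, 0.1, 0.9, 0.2, 0.0]
features = max_pool1d(relu(conv1d(ecg, [-1.0, 2.0, -1.0])))
print(features)
```

Removing conv/pool stages, as the authors describe, directly shrinks both the parameter count and the per-window arithmetic, which is what makes the model cheap enough for a phone.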
Abstract: Introduction: Emotional intelligence, or the capacity to manage one's emotions, makes it simpler to form good connections with others and to perform caring duties. Emotional intelligence enables nursing students to join a health team in a helpful and beneficial way. Nurses who can identify, control, and interpret both their own emotions and those of their patients provide better patient care. The purpose of this study was to assess the emotional intelligence of nursing students and to investigate the relationships and differences between emotional intelligence and their demographic characteristics. Methods: A cross-sectional study was carried out on 381 nursing students. Data collection was completed with the "Schutte Self Report Emotional Intelligence Test". Data were analyzed with the Statistical Package for the Social Sciences, using an independent t-test, ANOVA, Pearson correlation, and multiple linear regression. Results: The mean emotional intelligence score was 143.1 ± 21.6 (on a scale ranging from 33 to 165), which is high. The analysis also revealed that most participants, 348 (91.3%), had a high emotional intelligence level. This finding suggests that nursing students are emotionally intelligent and may be able to notice, analyze, control, manage, and harness emotion in an adaptive manner. Academic year was also a predictor of nursing students' emotional intelligence. Furthermore, there was a positive relationship between age and emotional intelligence (p < 0.05). The students' ability to use their emotional intelligence increased as they rose through the nursing grades. Conclusion: This study confirmed that the emotional intelligence score of the nursing students was high. Academic year was a predictor of emotional intelligence, and a positive relationship was confirmed between emotional intelligence and age.
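The Pearson correlation used in the study above to relate age and emotional intelligence can be computed directly from paired observations. A minimal sketch; the age/score pairs below are hypothetical, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical age / EI-score pairs showing a positive association
ages = [19, 20, 21, 22, 23]
ei_scores = [130, 138, 141, 149, 152]
print(round(pearson_r(ages, ei_scores), 3))  # → 0.988
```

A value near +1, as in this toy example, corresponds to the positive age-EI relationship the abstract reports.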
Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
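The Fisher Score used in the filter step above rewards features whose class means are far apart relative to their within-class spread. A minimal sketch of the score for a single feature; the feature values and labels are made up for illustration:

```python
def fisher_score(values, labels):
    """Fisher Score of one feature: between-class scatter over within-class scatter."""
    overall_mean = sum(values) / len(values)
    between, within = 0.0, 0.0
    for c in set(labels):
        xs = [v for v, l in zip(values, labels) if l == c]
        mu = sum(xs) / len(xs)
        var = sum((x - mu) ** 2 for x in xs) / len(xs)
        between += len(xs) * (mu - overall_mean) ** 2
        within += len(xs) * var
    return between / within if within else float("inf")

# A feature that separates the two classes cleanly scores far higher than a noisy one
good = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]
noisy = [1.0, 5.0, 2.0, 4.0, 3.0, 3.5]
labels = [0, 0, 0, 1, 1, 1]
print(fisher_score(good, labels) > fisher_score(noisy, labels))  # → True
```

Ranking features by such a score (together with information gain, as the authors describe) cheaply prunes the candidate set before the more expensive wrapper search runs.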
Funding: the Science and Technology Project of State Grid Corporation of China under Grant No. 5700-202318292A-1-1-ZN.
Abstract: In smart classrooms, conducting multi-face expression recognition based on existing hardware devices to assess students' group emotions can provide educators with more comprehensive and intuitive classroom effect analysis, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, the existing facial expression datasets mostly contain single-face images, which are of low quality and lack specificity, further restricting the development of this research. This paper aims to propose an end-to-end high-performance multi-face expression recognition algorithm model suitable for smart classrooms, to construct a high-quality multi-face expression dataset to support algorithm research, and to apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). In order to provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels, collected from smart classrooms. In constructing E2E-MFERC, we introduce the Re-parameterization Visual Geometry Group (RepVGG) block and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability; combine them with the cross-stage partial network fusion module optimized by an attention mechanism (C2f_Attention) to strengthen the ability to extract key information; adopt asymptotic feature pyramid network (AFPN) feature fusion tailored to classroom scenes; and optimize the head prediction output size, achieving high-performance end-to-end multi-face expression detection.
Finally, we apply the model to smart classroom group emotion assessment and provide design references for classroom effect analysis evaluation metrics. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving on the mAP of the same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8) by 6.8% and 2.5%, respectively, and on the F1-score by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, can meet the practical needs of real-time multi-face expression analysis in classrooms, and serves the application of teaching effect assessment very well.
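The F1-score reported above is the harmonic mean of a detector's precision and recall. A quick sketch of the computation; the detection counts below are hypothetical, not the paper's raw data:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical detection counts: true positives, false positives, false negatives
tp, fp, fn = 77, 23, 22
precision = tp / (tp + fp)  # fraction of predicted faces that were correct
recall = tp / (tp + fn)     # fraction of actual faces that were found
print(round(f1_score(precision, recall), 2))  # → 0.77
```

Because the harmonic mean is dragged down by the smaller of the two inputs, a detector cannot inflate its F1-score by trading precision for recall or vice versa.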
Funding: Supported by the Education and Teaching Reform Project of the First Clinical College of Chongqing Medical University, No. CMER202305, and the Natural Science Foundation of Tibet Autonomous Region, No. XZ2024ZR-ZY100(Z).
Abstract: This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms in recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.