Relapse to methamphetamine (meth) use is associated with decision-making dysfunction. The present study investigated the impact of different emotions on the decision-making behavior of meth users. We used a 2 (gender: male, female) × 3 (emotion: positive, negative, neutral) × 5 (block: 1, 2, 3, 4, 5) mixed experimental design. The study involved 168 meth users who were divided into three groups (positive, negative, and neutral emotion) and tested with an emotional Iowa Gambling Task (IGT). The IGT performance of male users exhibited a decreasing trend from Block 1 to Block 3. Female meth users in the positive emotion group performed better on the IGT than females in the negative or neutral emotion groups and, under positive emotion, significantly better than male users. Under negative and neutral emotions, there was no significant gender difference in decision-making.
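For reference, IGT performance in block designs like this one is usually summarized as a per-block net score: advantageous minus disadvantageous deck selections. The Python sketch below computes that quantity, assuming the standard 100-trial task split into five blocks of 20; the random deck choices are illustrative, not the study's data.

```python
import numpy as np

def igt_net_scores(choices, block_size=20):
    """Iowa Gambling Task net score per block:
    (C + D picks) - (A + B picks), i.e., advantageous minus
    disadvantageous deck selections."""
    choices = np.asarray(list(choices))
    n_blocks = len(choices) // block_size
    scores = []
    for b in range(n_blocks):
        block = choices[b * block_size:(b + 1) * block_size]
        good = np.isin(block, ["C", "D"]).sum()
        bad = np.isin(block, ["A", "B"]).sum()
        scores.append(int(good - bad))
    return scores

# Example: 100 simulated trials -> five blocks of 20
rng = np.random.default_rng(0)
print(igt_net_scores(rng.choice(list("ABCD"), size=100)))
```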
BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer. However, there is debate concerning their differential effects on cognitive function, anxiety, and depression in patients undergoing this procedure. AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function, anxiety, depression, and organ function in patients undergoing radical resection of gastric cancer. METHODS A total of 80 patients were involved in this research and divided into two groups: a propofol group and a sevoflurane group. Cognitive function was evaluated with the Loewenstein Occupational Therapy Cognitive Assessment (LOTCA), and anxiety and depression were assessed with the Self-Rating Anxiety Scale (SAS) and Self-Rating Depression Scale (SDS). Hemodynamic indicators, oxidative stress levels, and pulmonary function were also measured. RESULTS The LOTCA score at 1 d after surgery was significantly lower in the propofol group than in the sevoflurane group. Additionally, the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group. The sevoflurane group showed greater stability in heart rate and mean arterial pressure than the propofol group, and displayed better pulmonary function and less lung injury. CONCLUSION Both propofol and sevoflurane can be used for maintenance anesthesia during radical resection of gastric cancer. Propofol anesthesia has a minimal effect on patients' pulmonary function, enhancing their postoperative recovery, whereas sevoflurane anesthesia causes less impairment of cognitive function and mitigates negative emotions, leading to an improved postoperative mental state. The selection of anesthetic agents should therefore be based on each patient's specific circumstances.
Adolescents are considered one of the most vulnerable groups affected by suicide. Rapid changes in adolescents' physical and mental states, as well as in their lives, significantly and undeniably increase the risk of suicide. Psychological, social, family, individual, and environmental factors are important risk factors for suicidal behavior among teenagers and may contribute to suicide risk through various direct, indirect, or combined pathways. Social-emotional learning is considered a powerful intervention measure for addressing the crisis of adolescent suicide. When deliberately cultivated, fostered, and enhanced, self-awareness, self-management, social awareness, interpersonal skills, and responsible decision-making, the five core competencies of social-emotional learning, can be used to effectively target various risk factors for adolescent suicide and provide necessary mental and interpersonal support. Among the numerous suicide intervention methods, school-based interventions built on social-emotional competence have shown great potential in preventing and addressing suicide risk factors in adolescents. The characteristics of these interventions, including their appropriateness, necessity, cost-effectiveness, comprehensiveness, and effectiveness, make them an important means of addressing the crisis of adolescent suicide. To further determine their potential and better address adolescent suicide, additional financial support should be provided, the combination of social-emotional learning with other suicide prevention programs within schools should be fully leveraged, and cooperation between schools and families, society, and other environments should be maximized. These efforts should be considered future research directions.
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN), designing a specialized FRCNN architecture tailored for facial emotion recognition that leverages its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications and present a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
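As a rough illustration of the first component, the Keras sketch below wires a frozen Inception V3 backbone into a feature extractor topped with a plain softmax classifier. It is a minimal stand-in that assumes a seven-class emotion label set; the paper's FRCNN categorization stage is not reproduced here.

```python
import tensorflow as tf

# Frozen, ImageNet-pretrained Inception V3 as the feature extractor
backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
backbone.trainable = False

model = tf.keras.Sequential([
    tf.keras.Input(shape=(299, 299, 3)),  # images pre-scaled to [-1, 1]
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(7, activation="softmax"),  # assumed 7 emotion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```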
In recent years, research on the estimation of human emotions has been active, and its application is expected in various fields. Biological reactions, such as electroencephalography (EEG) and the root mean square of successive differences (RMSSD), are indicators that are less influenced by individual arbitrariness. The present study used EEG and RMSSD signals to assess the emotions aroused by emotion-stimulating images, in order to investigate whether different emotions are associated with characteristic biometric signal fluctuations. Participants underwent EEG and RMSSD measurement while viewing emotionally stimulating images and answering questionnaires. Real-time emotion analysis software was used to identify the evoked emotions by mapping the EEG signals and RMSSD values onto the Circumplex Model of Affect. Emotions other than happiness did not follow the Circumplex Model of Affect in this study. However, ventral attentional activity may have increased the RMSSD value for disgust, as the β/θ value increased in right-sided brain waves; right-sided brain wave results are therefore necessary when measuring disgust. Happiness can be assessed easily using the Circumplex Model of Affect for positive scene analysis. Improving the current analysis methods may facilitate the investigation of face-to-face communication using biometric signals in the future.
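RMSSD itself has a simple closed form: the square root of the mean of the squared successive differences between adjacent RR intervals. A minimal Python implementation, with made-up interval values for illustration:

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                      # successive differences
    return float(np.sqrt(np.mean(diffs ** 2)))

rr = [812, 790, 805, 798, 820, 801]          # synthetic RR intervals
print(f"RMSSD = {rmssd(rr):.2f} ms")
```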
Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires a detailed and precise understanding of the interfaces of verbal and emotional communication. The progress of AI is significant on the verbal level but modest in the recognition of facial emotions, even though this ability is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the capacity for facial emotional expression is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and mental health professionals, including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, and 2) operational concepts. We have developed software that uses computer-morphing in an attempt to address these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements, which will then allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI, hoping to add the dimension of recognition of facial emotional expressions. Objective: To study 1) categorical vs dimensional aspects of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional Visual Acuity (EVA) in an individual observer and in a population aged 20 to 70 years. It can also measure the range and intensity of expressed emotions through three Face-Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, "thymia" (defined elsewhere), and low levels of anxiety, and perform analysis of the six primary emotions. Results: We have individualized the following continuous parameters: 1) "Emotional-Visual-Acuity", 2) "Visual-Emotional-Feeling", 3) "Emotional-Quotient", 4) "Emotional-Decision-Making", 5) "Emotional-Decision-Making Graph" or "Individual-Gun-Trigger", 6) "Emotional-Fingerprint" or "Key-graph", 7) "Emotional-Fingerprint-Graph", 8) detection of "misunderstanding", and 9) detection of "error". This allowed us to build a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then decreases. It depends neither on the sex of the observer nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual and dimensional for a population sample.
Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name "emotional" disorders* or illnesses, which can add a dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings pose a challenge to Artificial Intelligence: no globalist or regionalist algorithm can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here, "emotional disorders" refers to disorders of emotional expression and recognition.
Introduction: The epidemiology of hepatitis B virus (HBV) and hepatitis C virus (HCV) infections among drug users (DUs) is little known in West Africa. This study aimed to assess the prevalence of hepatitis B and C viruses among drug users in Burkina Faso. Methodology: This was a cross-sectional biological and behavioral survey conducted between June and August 2022 among drug users in Ouagadougou and Bobo Dioulasso, the two main cities of Burkina Faso. Respondent-driven sampling (RDS) was used to recruit drug users. Hepatitis B surface antigen was determined using lateral flow rapid test kits, and antibodies to hepatitis C virus in serum were determined using an enzyme-linked immunosorbent assay. Data were entered and analyzed using Stata 17 software. Weighted binary logistic regression was used to identify factors associated with hepatitis B and C infections, with a p-value threshold used to define statistical significance. Results: A total of 323 drug users were recruited, 97.5% of whom were male. The mean age was 32.7 years. Inhalation or smoking was the most common mode of drug use. The adjusted hepatitis B and hepatitis C prevalence among study participants were 11.1% and 2.3%, respectively. Marital status (p = 0.001) and nationality (p = 0.011) were significantly associated with hepatitis B infection. The type of drug used was not significantly associated with hepatitis B or hepatitis C infection. Conclusion: The prevalence of HBsAg and anti-HCV antibodies among DUs is comparable to that reported in the general population in Burkina Faso. This result suggests that the main routes of HBV and HCV contamination among DUs are similar to those in the general population, and could be explained by the low use of the injectable route by DUs in Burkina Faso.
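As a sketch of the analysis step, a weighted binary logistic regression can be fit in Python with per-respondent RDS weights passed as sample weights; the covariates and toy data below are illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: outcome = HBsAg positivity; columns = [married, national];
# w = hypothetical RDS-derived weights
X = np.array([[1, 1], [0, 1], [0, 0], [1, 1],
              [1, 0], [0, 1], [0, 1], [0, 0]])
y = np.array([1, 0, 0, 1, 0, 0, 1, 0])
w = np.array([1.2, 0.8, 1.0, 1.5, 0.9, 1.1, 0.7, 1.3])

clf = LogisticRegression().fit(X, y, sample_weight=w)
print("coefficients:", clf.coef_, "intercept:", clf.intercept_)
```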
Emotion recognition is a growing field with numerous applications in smart healthcare systems and Human-Computer Interaction (HCI). However, physical methods of emotion recognition, such as facial expressions, voice, and text data, do not always indicate true emotions, as users can falsify them. Among the physiological methods of emotion detection, the electrocardiogram (ECG) is a reliable and efficient way of detecting emotions, and ECG-enabled smart bands have proven effective in collecting emotional data in uncontrolled environments. Researchers use deep machine learning techniques for emotion recognition from ECG signals, but there is a need to develop efficient models by tuning the hyperparameters. Furthermore, most researchers focus on detecting emotions in individual settings; this research needs to be extended to group settings as well, since most emotions are experienced in groups. In this study, we developed a novel lightweight one-dimensional (1D) Convolutional Neural Network (CNN) model by reducing the number of convolution, max pooling, and classification layers. This optimization has led to more efficient emotion classification using ECG. We tested the proposed model's performance using ECG data from the AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups) dataset for both individual and group settings. The model achieved an accuracy of 82.21% and 85.62% for valence and arousal classification, respectively, in individual settings. In group settings, the accuracy was even higher, at 99.56% and 99.68% for valence and arousal classification, respectively. By reducing the number of layers, the lightweight CNN model can process data more quickly and with less hardware complexity, making it suitable for implementation on mobile phone devices to detect emotions with improved accuracy and speed.
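The Keras sketch below shows the general shape of such a lightweight 1D CNN: a couple of convolution and pooling stages feeding a single classification layer. The window length, filter counts, and binary valence target are assumptions for illustration, not the paper's exact architecture.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(2560, 1)),                # e.g., 10 s of ECG at 256 Hz
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),       # keeps the model small
    tf.keras.layers.Dense(1, activation="sigmoid"), # high vs low valence
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```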
BACKGROUND Acute pancreatitis (AP), a common acute abdominal disease, has a high incidence rate worldwide and is often accompanied by severe complications. Negative emotions lead to increased secretion of stress hormones, elevated blood sugar levels, and enhanced insulin resistance, which in turn increase the risk of AP and significantly affect the patient's quality of life. Therefore, exploring the effects of narrative nursing programs on the negative emotions of patients with AP is helpful not only in alleviating psychological stress and improving quality of life but also in improving disease outcomes and prognosis. AIM To construct a narrative nursing model for negative emotions in patients with AP and verify its efficacy in application. METHODS A narrative nursing model for negative emotions in patients with AP was constructed through Delphi expert consultation, and a non-randomized quasi-experimental study design was used. A total of 92 patients with AP and negative emotions admitted to a tertiary hospital in Nantong City, Jiangsu Province, China from September 2022 to August 2023 were recruited by convenience sampling; 46 patients admitted from September 2022 to February 2023 formed the observation group, and 46 patients admitted from March to August 2023 served as the control group. The observation group received the narrative nursing plan, while the control group received routine nursing. The Self-Rating Anxiety Scale (SAS), Self-Rating Depression Scale (SDS), Positive and Negative Affect Scale (PANAS), a caring behavior scale, a patient satisfaction scale, and the 36-item Short Form Health Survey (SF-36) were used to evaluate emotions, satisfaction, and caring behaviors in the two groups on the day of discharge and at 1 and 3 months after discharge. RESULTS According to the inclusion and exclusion criteria, 45 cases in the intervention group and 44 cases in the control group ultimately completed the study. On the day of discharge, the intervention group showed significantly lower SAS, SDS, and negative emotion scores (28.57±4.52 vs 17.4±4.44, P<0.001) and significantly higher positive emotion, caring behavior, and satisfaction scores than the control group (P<0.05). Repeated-measures analysis of variance showed significant between-group differences in the time effect, inter-group effect, and interaction effect of SAS and PANAS scores, as well as in the time effect and inter-group effect of SF-36 scores (P<0.05); the SF-36 scores of the two groups at 3 months after discharge were higher than those at 1 month after discharge (P<0.05). CONCLUSION The application of narrative nursing protocols has demonstrated significant effectiveness in alleviating anxiety, ameliorating negative emotions, and enhancing satisfaction among patients with AP.
Introduction: Emotional intelligence, or the capacity to manage one's emotions, makes it simpler to form good connections with others and to perform caring duties. Emotional intelligence enables nursing students to engage with a health team in a helpful and beneficial way. Nurses who can identify, control, and interpret both their own emotions and those of their patients provide better patient care. The purpose of this study was to assess emotional intelligence and to investigate the relationships and differences between emotional intelligence and demographic characteristics of nursing students. Methods: A cross-sectional study was carried out on 381 nursing students. Data were collected with the Schutte Self-Report Emotional Intelligence Test and analyzed with the Statistical Package for the Social Sciences. Independent t-tests, ANOVA, Pearson correlation, and multiple linear regression were used. Results: The emotional intelligence mean was 143.1 ± 21.6 (on a scale ranging from 33 to 165), which is high, and most of the participants (348, 91.3%) had a high emotional intelligence level. This finding suggests that nursing students are emotionally intelligent and may be able to notice, analyze, control, manage, and harness emotion in an adaptive manner. Academic year was a predictor of emotional intelligence, and there was a positive relationship between age and emotional intelligence (p < 0.05): the students' ability to use their emotional intelligence increased as they rose through the nursing grades. Conclusion: This study confirmed that the emotional intelligence score of the nursing students was high, that academic year was a predictor of emotional intelligence, and that emotional intelligence was positively related to the age of nursing students.
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which improves the diversity of solutions and avoids falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) against other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
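A minimal sketch of the filter stage: features are scored by mutual information (information gain) and a hand-rolled Fisher score, and the two rankings are averaged into one importance ordering. The simple rank averaging stands in for the paper's multi-objective ranking, and the synthetic data is for illustration only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    """Between-class over within-class variance, per feature."""
    classes = np.unique(y)
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / np.maximum(den, 1e-12)

X, y = make_classification(n_samples=300, n_features=20,
                           n_informative=5, random_state=0)
ig = mutual_info_classif(X, y, random_state=0)
fs = fisher_score(X, y)
# Convert each score to a rank (0 = best) and average the two rankings
rank = (np.argsort(np.argsort(-ig)) + np.argsort(np.argsort(-fs))) / 2
print("Top 5 features:", np.argsort(rank)[:5])
```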
In smart classrooms, multi-face expression recognition based on existing hardware can assess students' group emotions and provide educators with more comprehensive and intuitive analysis of classroom effectiveness, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, existing facial expression datasets consist mostly of single-face images of low quality and lacking specificity, which also restricts this research. This paper aims to propose an end-to-end, high-performance multi-face expression recognition model suitable for smart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). To provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels collected from smart classrooms. In constructing E2E-MFERC, we introduce Re-parameterization Visual Geometry Group (RepVGG) blocks and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability; combine them with a cross-stage partial network fusion module optimized by an attention mechanism (C2f_Attention) to strengthen the extraction of key information; adopt asymptotic feature pyramid network (AFPN) feature fusion tailored to classroom scenes; and optimize the head prediction output size, achieving high-performance end-to-end multi-face expression detection. Finally, we apply the model to smart-classroom group emotion assessment and provide design references for evaluation metrics in classroom effect analysis. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving on the mAP of same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8) by 6.8% and 2.5%, and on their F1-scores by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, meets the practical needs of real-time multi-face expression analysis in classrooms, and serves teaching effect assessment well.
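For context, the YOLOv8 baseline cited above can be run with the ultralytics package as below. This is the stock detector, not the E2E-MFERC model (whose weights are not public here), and the image path is hypothetical.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                   # stock baseline weights
results = model("classroom.jpg", conf=0.25)  # hypothetical classroom image
for r in results:
    print(r.boxes.xyxy)  # detected bounding boxes
    print(r.boxes.cls)   # predicted class indices
```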
With the rapid spread of Internet information and of fake news, fake news detection becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlation between different features, and effectively improves feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. Domain features are then used as an enhancement feature, with an attention mechanism to fully capture more fine-grained emotional features associated with each domain. Finally, the fused features are fed into the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the approach is advanced and feasible.
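The fusion idea can be sketched in Keras as a Bi-LSTM text encoder whose output is concatenated with an emotion-probability vector before an MLP and Softmax head. The dimensions, vocabulary size, and two-way emotion input are illustrative assumptions; the attention-based fusion and domain features of the full model are omitted.

```python
import tensorflow as tf

text_in = tf.keras.Input(shape=(128,), dtype="int32")  # token ids
emo_in = tf.keras.Input(shape=(2,))                    # P(positive), P(negative)

x = tf.keras.layers.Embedding(30000, 128)(text_in)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)  # semantic features
fused = tf.keras.layers.Concatenate()([x, emo_in])              # feature fusion
fused = tf.keras.layers.Dense(64, activation="relu")(fused)     # MLP
out = tf.keras.layers.Dense(2, activation="softmax")(fused)     # real vs fake

model = tf.keras.Model([text_in, emo_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```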
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Autonomous vehicles (AVs) hold immense promise in revolutionizing transportation, and their potential benefits extend to individuals with impairments, particularly those with vision and hearing impairments. However, accommodating these individuals in AVs requires developing advanced user interfaces. This paper describes an explorative study of a multimodal user interface for autonomous vehicles, specifically developed for passengers with sensory (vision and/or hearing) impairments. In a driving simulator, 32 volunteers with simulated sensory impairments were exposed to multiple drives in an autonomous vehicle while freely interacting with standard and inclusive variants of the infotainment and navigation system interface. The two user interfaces differed in graphical layout and voice messages, with the inclusive variant adopting inclusive design principles. Questionnaires and structured interviews were conducted to collect participants' impressions. The data analysis reports positive user experiences but also identifies technical challenges. Verified guidelines are provided for further development of inclusive user interface solutions.
BACKGROUND With an estimated 121 million abortions following unwanted pregnancies occurring worldwide each year, many countries are now committed to protecting women's reproductive rights. AIM To analyze the impact of emotional management and care on anxiety and on mastery of contraceptive knowledge in painless induced abortion (IA) patients. METHODS This study was a retrospective analysis of 84 patients undergoing IA at our hospital. According to the nursing method received, the patients were divided into a control group and an observation group, with 42 cases in each. Degree of pain, rate of postoperative uterine relaxation, surgical bleeding volume, and 1-h postoperative bleeding volume, as well as nursing satisfaction and mastery of contraceptive knowledge, were analyzed for the two groups. RESULTS After nursing, Self-Assessment Scale, Depression Self-Assessment Scale, and Hamilton Anxiety Scale scores were 39.18±2.18, 30.27±2.64, and 6.69±2.15, respectively, vs 45.63±2.66, 38.61±2.17, and 13.45±2.12, respectively, with the observation group being lower than the control group (P<0.05). In the comparison of visual analog scale scores, the observation group was lower than the control group (4.55±0.22 vs 3.23±0.41; P<0.05). The postoperative cervical relaxation rate, surgical bleeding volume, and 1-h postoperative bleeding volume were 25 (59.5), 31.72±2.23, and 22.41±1.23, respectively, vs 36 (85.7), 42.39±3.53, and 28.51±3.34, respectively, for the observation group compared to the control group. The observation group had a better nursing situation (P<0.05) and higher nursing satisfaction and contraceptive knowledge scores than the control group (P<0.05). CONCLUSION The application of emotional management in postoperative care of IA has an ideal effect.
During the 1980s, as part of a policy of liberalization following budgetary cuts linked to the implementation of structural adjustment programs, management responsibilities for hydro-agricultural developments (AHAs) were transferred from ONAHA to the cooperatives concerned. For lack of financial resources, but also because of poor management, everywhere in Niger we are witnessing an accelerated deterioration of the irrigation infrastructure of hydro-agricultural developments. Institutional studies of this situation led the State of Niger to initiate a reform of the governance of hydro-agricultural developments by strengthening the status of ONAHA, creating Associations of Irrigation Water Users (AUEI), and restructuring the old cooperatives. This research analyzes the creation of functional and sustainable Irrigation Water User Associations (AUEI) in Niger in a context of irrigation sector reform, based on the experience of the Konni AHA. It rests on a methodological approach combining documentary research with data collected from 115 farmers, selected purposively and directly concerned by the management of the irrigated area. The data were analyzed using a systemic approach and a diagnostic process. The results show that the main mission of the AUEI is to ensure better management of water, hydraulic equipment, and infrastructure on the Konni hydro-agricultural developments. The creation of the Konni AUEI was possible thanks to massive support from the populations and authorities during the implementation process. After its establishment, the AUEI experienced a period of lethargy due to the rehabilitation work on the AHA, but it is currently functional and operational in terms of associative life and governance. Constraints linked to the legal system, delays in completing the work, uncertainties over access to irrigation water, and problems linked to changing the mentality of certain ONAHA agents constitute the challenges that must be resolved in the short term for the operationalization of the Konni AUEI.
BACKGROUND The risks associated with negative doctor-patient relationships have seriously hindered the healthy development of medicine and healthcare and aroused widespread concern in society. The number of public comments on doctor-patient relationship risk events reflects the degree to which the public pays attention to such events. METHODS Thirty incidents of doctor-patient disputes were collected from Weibo and TikTok, and 3,655 related comments were extracted. The number of emotion words in each comment was counted, and a comment sentiment score was calculated. The Kruskal-Wallis H test was used to compare differences between variable groups across incident severity levels, Spearman's correlation analysis was used to examine associations between variables, and regression analysis was used to explore the factors influencing comment scores. RESULTS Public comments on media reports of doctor-patient disputes at all levels were dominated by "good" and "disgust" emotional states. There were significant differences in comment scores and in the counts of some emotion words between comments on disputes of varying severity. The comment score was positively correlated with the number of emotion words related to positive, good, and happy states, and negatively correlated with the number of emotion words related to negative, anger, disgust, fear, and sadness states. CONCLUSION The number of emotion words related to negative, anger, disgust, fear, and sadness directly influences comment scores, and the severity of the incident indirectly influences comment scores.
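The reported score-emotion association can be illustrated with a Spearman rank correlation, as in this sketch with fabricated values (not the study's data):

```python
from scipy.stats import spearmanr

scores = [0.8, 0.1, -0.5, 0.4, -0.9, 0.3, -0.2, 0.6]  # comment sentiment scores
neg_words = [0, 3, 6, 1, 8, 2, 4, 1]                  # negative emotion word counts
rho, p = spearmanr(scores, neg_words)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")       # expect rho < 0
```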
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance for a range of real-time applications, including virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel-Frequency Cepstral Coefficients (MFCCs), due to their ability to capture the periodic nature of audio signals effectively. Although such traits can help perceive and interpret emotional depictions appropriately, MFCCs have limitations on their own. This study therefore tackles the issue by systematically picking multiple audio cues, enhancing the classifier's efficacy in accurately discerning human emotions. The dataset used is taken from the EMO-DB database. Input speech is preprocessed using a 2D Convolutional Neural Network (CNN) that applies convolutional operations to spectrograms, which afford a visual representation of how the audio signal's frequency content changes over time. The spectrogram data are then normalized, which is crucial for Neural Network (NN) training as it aids faster convergence. Five auditory features, MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz, are then extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding irrelevant ones; here, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for multiple-audio-cue feature selection. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increase model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM at capturing the intricate tones of emotional content in speech signals. The effectiveness and resilience of the proposed SER model were evaluated in experiments comparing it to state-of-the-art SER techniques. The model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and The Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
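The five auditory cues map directly onto librosa feature extractors. The sketch below pulls all five from one clip and mean-pools each over time into a single utterance-level vector; the bundled librosa example clip and the mean-pooling step are stand-ins, not the paper's exact pipeline.

```python
import numpy as np
import librosa

y, sr = librosa.load(librosa.ex("trumpet"))  # stand-in for an EMO-DB utterance

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)
mel = librosa.feature.melspectrogram(y=y, sr=sr)
contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)

# Mean-pool each cue over time, then concatenate into one feature vector
features = np.concatenate([f.mean(axis=1)
                           for f in (mfcc, chroma, mel, contrast, tonnetz)])
print(features.shape)  # one fixed-length vector per utterance
```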
文摘The relapse of methamphetamine (meth) is associated with decision-making dysfunction. The present study aims to investigate theimpact of different emotions on the decision-making behavior of meth users. We used 2 (gender: male, female) × 3 (emotion:positive, negative, neutral) × 5 (block: 1, 2, 3, 4, 5) mixed experiment design. The study involved 168 meth users who weredivided into three groups: positive emotion, negative emotion and neutral emotion group, and tested by the emotional IowaGambling Task (IGT). The IGT performance of male users exhibited a decreasing trend from Block 1 to Block 3. Female methusers in positive emotion had the best performance in IGT than females in the other two groups. In positive emotion, the IGTperformance of female meth users was significantly better than that of men. Female meth users in positive emotion had betterdecision-making than those in negative or neutral emotion. Female meth users in positive emotion had better decision-makingperformance than males in positive emotion. In negative and neutral emotions, there was no significant gender difference indecision-making.
文摘BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer.However,there is a debate concerning their differential effects on cognitive function,anxiety,and depression in patients undergoing this procedure.AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function,anxiety,depression,and organ function in patients undergoing radical resection of gastric cancer.METHODS A total of 80 patients were involved in this research.The subjects were divided into two groups:Propofol group and sevoflurane group.The evaluation scale for cognitive function was the Loewenstein occupational therapy cognitive assessment(LOTCA),and anxiety and depression were assessed with the aid of the self-rating anxiety scale(SAS)and self-rating depression scale(SDS).Hemodynamic indicators,oxidative stress levels,and pulmonary function were also measured.RESULTS The LOTCA score at 1 d after surgery was significantly lower in the propofol group than in the sevoflurane group.Additionally,the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group.The sevoflurane group showed greater stability in heart rate as well as the mean arterial pressure compared to the propofol group.Moreover,the sevoflurane group displayed better pulmonary function and less lung injury than the propofol group.CONCLUSION Both propofol and sevoflurane could be utilized as maintenance anesthesia during radical resection of gastric cancer.Propofol anesthesia has a minimal effect on patients'pulmonary function,consequently enhancing their postoperative recovery.Sevoflurane anesthesia causes less impairment on patients'cognitive function and mitigates negative emotions,leading to an improved postoperative mental state.Therefore,the selection of anesthetic agents should be based on the individual patient's specific circumstances.
文摘Adolescents are considered one of the most vulnerable groups affected by suicide.Rapid changes in adolescents’physical and mental states,as well as in their lives,significantly and undeniably increase the risk of suicide.Psychological,social,family,individual,and environmental factors are important risk factors for suicidal behavior among teenagers and may contribute to suicide risk through various direct,indirect,or combined pathways.Social-emotional learning is considered a powerful intervention measure for addressing the crisis of adolescent suicide.When deliberately cultivated,fostered,and enhanced,selfawareness,self-management,social awareness,interpersonal skills,and responsible decision-making,as the five core competencies of social-emotional learning,can be used to effectively target various risk factors for adolescent suicide and provide necessary mental and interpersonal support.Among numerous suicide intervention methods,school-based interventions based on social-emotional competence have shown great potential in preventing and addressing suicide risk factors in adolescents.The characteristics of school-based interventions based on social-emotional competence,including their appropriateness,necessity,cost-effectiveness,comprehensiveness,and effectiveness,make these interventions an important means of addressing the crisis of adolescent suicide.To further determine the potential of school-based interventions based on social-emotional competence and better address the issue of adolescent suicide,additional financial support should be provided,the combination of socialemotional learning and other suicide prevention programs within schools should be fully leveraged,and cooperation between schools and families,society,and other environments should be maximized.These efforts should be considered future research directions.
文摘Facial emotion recognition(FER)has become a focal point of research due to its widespread applications,ranging from human-computer interaction to affective computing.While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets,recent strides in artificial intelligence and deep learning(DL)have ushered in more sophisticated approaches.The research aims to develop a FER system using a Faster Region Convolutional Neural Network(FRCNN)and design a specialized FRCNN architecture tailored for facial emotion recognition,leveraging its ability to capture spatial hierarchies within localized regions of facial features.The proposed work enhances the accuracy and efficiency of facial emotion recognition.The proposed work comprises twomajor key components:Inception V3-based feature extraction and FRCNN-based emotion categorization.Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy,showcasing the FRCNN approach’s resilience and accuracy in identifying and categorizing facial expressions.The model’s overall performance metrics are compelling,with an accuracy of 98.4%,precision of 97.2%,and recall of 96.31%.This work introduces a perceptive deep learning-based FER method,contributing to the evolving landscape of emotion recognition technologies.The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications.This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
文摘In recent years, research on the estimation of human emotions has been active, and its application is expected in various fields. Biological reactions, such as electroencephalography (EEG) and root mean square successive difference (RMSSD), are indicators that are less influenced by individual arbitrariness. The present study used EEG and RMSSD signals to assess the emotions aroused by emotion-stimulating images in order to investigate whether various emotions are associated with characteristic biometric signal fluctuations. The participants underwent EEG and RMSSD while viewing emotionally stimulating images and answering the questionnaires. The emotions aroused by emotionally stimulating images were assessed by measuring the EEG signals and RMSSD values to determine whether different emotions are associated with characteristic biometric signal variations. Real-time emotion analysis software was used to identify the evoked emotions by describing them in the Circumplex Model of Affect based on the EEG signals and RMSSD values. Emotions other than happiness did not follow the Circumplex Model of Affect in this study. However, ventral attentional activity may have increased the RMSSD value for disgust as the β/θ value increased in right-sided brain waves. Therefore, the right-sided brain wave results are necessary when measuring disgust. Happiness can be assessed easily using the Circumplex Model of Affect for positive scene analysis. Improving the current analysis methods may facilitate the investigation of face-to-face communication in the future using biometric signals.
文摘Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions even if this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe), suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, 2) operational concepts. We have invented a software that can use computer-morphing attempting to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to make our contribution to the progress of AI hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of recognition of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face and 5) creation of population references data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional Visual Acuity (EVA) in an individual observer and a population aged 20 to 70 years. Meanwhile, it can measure the range and intensity of expressed emotions through three Face- Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and perform analysis of the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual- Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detecting “misunderstanding” and 9) detecting “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from ages of 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe is idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made possible to bring out new concepts and new continuous measurements variables. 
The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name "emotional" disorders* or illnesses, adding a dimension to the behavioral disorders seen in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings pose a challenge to Artificial Intelligence: no globalist or regionalist algorithm can simply be programmed into a robot, nor can AI yet compete with human abilities and judgment in this domain. *Here, "emotional disorders" refers to disorders of emotional expression and recognition.
Abstract: Introduction: The epidemiology of both hepatitis B virus (HBV) and hepatitis C virus (HCV) infections among drug users (DUs) is little known in West Africa. The study aimed to assess the prevalence of hepatitis B and C viruses among drug users in Burkina Faso. Methodology: This was a cross-sectional biological and behavioral survey conducted between June and August 2022 among drug users in Ouagadougou and Bobo Dioulasso, the two main cities of Burkina Faso. Respondent-driven sampling (RDS) was used to recruit drug users. Hepatitis B surface antigen (HBsAg) was detected using lateral flow rapid test kits, and antibodies to hepatitis C virus in serum were detected using an enzyme-linked immunosorbent assay. Data were entered and analyzed using Stata 17 software. Weighted binary logistic regression was used to identify factors associated with hepatitis B and C infections, and a p-value < 0.05 was considered statistically significant. Results: A total of 323 drug users were recruited, of whom 97.5% were male. The mean age was 32.7 years. Inhalation or smoking was the route of use most often reported by drug users. The adjusted prevalences of hepatitis B and hepatitis C among study participants were 11.1% and 2.3%, respectively. Marital status (p = 0.001) and nationality (p = 0.011) were significantly associated with hepatitis B infection. The type of drug used was not significantly associated with hepatitis B or hepatitis C infection. Conclusion: The prevalences of HBsAg and anti-HCV antibodies among DUs are comparable to those reported in the general population in Burkina Faso. This result suggests that the main routes of HBV and HCV transmission among DUs are similar to those in the general population, and could be explained by the low use of the injectable route among DUs in Burkina Faso.
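For readers who want to reproduce the style of analysis described above, here is a hedged Python analogue of the weighted binary logistic regression step (the study itself used Stata 17). The file name, column names, and RDS weight column are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical survey extract: one row per recruited drug user.
df = pd.read_csv("du_survey.csv")

# Dummy-code the two covariates reported as significant for HBV.
X = pd.get_dummies(df[["marital_status", "nationality"]],
                   drop_first=True).astype(float)
X = sm.add_constant(X)

# Weighted binary logistic regression with RDS-derived weights.
model = sm.GLM(df["hbsag_positive"], X,
               family=sm.families.Binomial(),
               freq_weights=df["rds_weight"])
print(model.fit().summary())  # per-factor coefficients and p-values
```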
Abstract: Emotion recognition is a growing field that has numerous applications in smart healthcare systems and Human-Computer Interaction (HCI). However, physical methods of emotion recognition, such as facial expressions, voice, and text data, do not always indicate true emotions, as users can falsify them. Among the physiological methods of emotion detection, the electrocardiogram (ECG) is a reliable and efficient way of detecting emotions. ECG-enabled smart bands have proven effective in collecting emotional data in uncontrolled environments. Researchers use deep machine learning techniques for emotion recognition using ECG signals, but there is a need to develop efficient models by tuning the hyperparameters. Furthermore, most researchers focus on detecting emotions in individual settings, but there is a need to extend this research to group settings as well, since most emotions are experienced in groups. In this study, we developed a novel lightweight one-dimensional (1D) Convolutional Neural Network (CNN) model by reducing the number of convolution, max pooling, and classification layers. This optimization has led to more efficient emotion classification using ECG. We tested the proposed model's performance using ECG data from the AMIGOS (A Dataset for Affect, Personality and Mood Research on Individuals and Groups) dataset for both individual and group settings. The results showed that the model achieved an accuracy of 82.21% and 85.62% for valence and arousal classification, respectively, in individual settings. In group settings, the accuracy was even higher, at 99.56% and 99.68% for valence and arousal classification, respectively. By reducing the number of layers, the lightweight CNN model can process data more quickly and with less hardware complexity, making it suitable for implementation on mobile devices to detect emotions with improved accuracy and speed.
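The abstract does not give the exact architecture, but a "lightweight 1D CNN" of the kind described can be sketched in a few lines of PyTorch. The layer counts, kernel sizes, 1280-sample window, and binary valence/arousal head below are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class Light1DCNN(nn.Module):
    """Few conv/pool layers and a single small classification layer."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):          # x: (batch, 1, samples)
        return self.classifier(self.features(x))

model = Light1DCNN()
ecg = torch.randn(8, 1, 1280)      # eight toy ECG windows
print(model(ecg).shape)            # torch.Size([8, 2])
```

Fewer layers mean fewer parameters and activations to compute, which is what makes on-device (e.g., mobile) inference plausible.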
Abstract: BACKGROUND Acute pancreatitis (AP), a common acute abdominal disease, has a high incidence rate worldwide and is often accompanied by severe complications. Negative emotions lead to increased secretion of stress hormones, elevated blood sugar levels, and enhanced insulin resistance, which in turn increase the risk of AP and significantly affect the patient's quality of life. Therefore, exploring the effects of narrative nursing programs on the negative emotions of patients with AP is not only helpful in alleviating psychological stress and improving quality of life but also has significant implications for improving disease outcomes and prognosis. AIM To construct a narrative nursing model for negative emotions in patients with AP and verify its efficacy in application. METHODS Through Delphi expert consultation, a narrative nursing model for negative emotions in patients with AP was constructed. A non-randomized quasi-experimental study design was used. A total of 92 patients with AP and negative emotions admitted to a tertiary hospital in Nantong City, Jiangsu Province, China from September 2022 to August 2023 were recruited by convenience sampling; the 46 patients admitted from September 2022 to February 2023 formed the observation group, and the 46 patients admitted from March to August 2023 formed the control group. The observation group received the narrative nursing plan, while the control group received routine nursing. The self-rating anxiety scale (SAS), self-rating depression scale (SDS), positive and negative affect scale (PANAS), caring behavior scale, patient satisfaction scale, and 36-item short form health survey questionnaire (SF-36) were used to evaluate the emotions, satisfaction, and perceived caring behaviors of the two groups on the day of discharge and at 1 and 3 months after discharge. RESULTS According to the inclusion and exclusion criteria, 45 cases in the observation group and 44 in the control group were eventually included and completed the study. On the day of discharge, the observation group showed significantly lower SAS, SDS, and negative emotion scores (28.57±4.52 vs 17.4±4.44, P<0.001) and significantly higher positive emotion, caring behavior, and satisfaction scores than the control group (P<0.05). Repeated-measures analysis of variance showed significant between-group differences in the time, inter-group, and interaction effects of the SAS and PANAS scores, as well as in the time and inter-group effects of the SF-36 scores (P<0.05); the SF-36 scores of the two groups at 3 months after discharge were higher than those at 1 month after discharge (P<0.05). CONCLUSION The application of narrative nursing protocols has demonstrated significant effectiveness in alleviating anxiety, ameliorating negative emotions, and enhancing satisfaction among patients with AP.
Abstract: Introduction: Emotional intelligence, or the capacity to manage one's emotions, makes it simpler to form good connections with others and to perform caring duties. Emotional intelligence enables nursing students to join a health team in a helpful and beneficial way. Nurses who can identify, control, and interpret both their own emotions and those of their patients provide better patient care. The purpose of this study was to assess emotional intelligence and to investigate the relationships and differences between emotional intelligence and the demographic characteristics of nursing students. Methods: A cross-sectional study was carried out on 381 nursing students. Data were collected using the Schutte Self-Report Emotional Intelligence Test and analyzed with the Statistical Package for the Social Sciences, using independent t-tests, ANOVA, Pearson correlation, and multiple linear regression. Results: The mean emotional intelligence score was 143.1 ± 21.6 (on a scale ranging from 33 to 165), which is high. The analysis also revealed that most participants, 348 (91.3%), had a high emotional intelligence level. This finding suggests that nursing students are emotionally intelligent and may be able to notice, analyze, control, manage, and harness emotion in an adaptive manner. Academic year was a predictor of emotional intelligence, and there was a positive relationship between age and emotional intelligence (p < 0.05). The students' ability to use their emotional intelligence (EI) increased as they rose through the nursing grades. Conclusion: This study confirmed that the emotional intelligence scores of the nursing students were high. Academic year was a predictor of emotional intelligence, and a positive relationship was confirmed between emotional intelligence and age.
Abstract: Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired through acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use information gain and the Fisher score to sort the features extracted from the signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them; features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
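The filter stage described above can be illustrated with a short Python sketch: features are scored by information gain (estimated here with mutual information) and by the Fisher score, then given a combined rank that drives selection probability. The rank-sum combination is an illustrative simplification of the paper's multi-objective ranking.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Between-class variance over within-class variance, per feature."""
    mean_all = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mean_all) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

X = np.random.randn(200, 40)              # toy acoustic feature matrix
y = np.random.randint(0, 4, 200)          # toy emotion labels
ig = mutual_info_classif(X, y, random_state=0)
fs = fisher_score(X, y)

# Higher combined rank => higher probability of selection in the wrapper.
rank = np.argsort(np.argsort(ig)) + np.argsort(np.argsort(fs))
print(np.argsort(-rank)[:10])             # top-10 candidate features
```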
Funding: The Science and Technology Project of State Grid Corporation of China under Grant No. 5700-202318292A-1-1-ZN.
Abstract: In smart classrooms, conducting multi-face expression recognition based on existing hardware devices to assess students' group emotions can provide educators with more comprehensive and intuitive analysis of classroom effects, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, existing facial expression datasets consist mostly of single-face images, which are of low quality and lack specificity, further restricting the development of this research. This paper aims to propose an end-to-end, high-performance multi-face expression recognition algorithm model suitable for smart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). To provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels collected from smart classrooms. In constructing E2E-MFERC, we introduce the re-parameterization visual geometry group (RepVGG) block and SPD-Conv modules to enhance representational capability; combine them with the cross-stage partial network fusion module optimized by an attention mechanism (C2f_Attention) to strengthen the extraction of key information; adopt asymptotic feature pyramid network (AFPN) feature fusion tailored to classroom scenes and optimize the head prediction output size; and thereby achieve high-performance end-to-end multi-face expression detection. Finally, we apply the model to smart classroom group emotion assessment and provide design references for evaluation metrics in classroom effect analysis. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving on the mAP of same-scale You Only Look Once version 5 (YOLOv5) and version 8 (YOLOv8) by 6.8% and 2.5%, and on their F1-scores by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, can meet the practical needs of real-time multi-face expression analysis in classrooms, and serves the application of teaching effect assessment well.
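Of the modules named above, SPD-Conv is the easiest to sketch: in the YOLO-improvement literature it is usually implemented as a space-to-depth rearrangement followed by a non-strided convolution, so downsampling discards no pixels — useful for the small faces at the back of a classroom. This interpretation and the channel sizes below are assumptions, not the E2E-MFERC configuration.

```python
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    """Space-to-depth, then a stride-1 conv: lossless 2x downsampling."""
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.spd = nn.PixelUnshuffle(2)   # H x W -> H/2 x W/2, C -> 4C
        self.conv = nn.Sequential(
            nn.Conv2d(4 * c_in, c_out, 3, stride=1, padding=1, bias=False),
            nn.BatchNorm2d(c_out),
            nn.SiLU(),
        )

    def forward(self, x):
        return self.conv(self.spd(x))

x = torch.randn(1, 64, 80, 80)            # a toy backbone feature map
print(SPDConv(64, 128)(x).shape)          # torch.Size([1, 128, 40, 40])
```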
Funding: The authors are highly thankful to the National Social Science Foundation of China (20BXW101, 18XXW015); the Innovation Research Project for the Cultivation of High-Level Scientific and Technological Talents (Top-Notch Talents of the Discipline) (ZZKY2022303); the National Natural Science Foundation of China (Nos. 62102451, 62202496); and the Basic Frontier Innovation Project of Engineering University of People's Armed Police (WJX202316). This work is also supported by the National Natural Science Foundation of China (No. 62172436); Engineering University of PAP's Funding for Scientific Research Innovation Team; Engineering University of PAP's Funding for Basic Scientific Research; Engineering University of PAP's Funding for Education and Teaching; and the Natural Science Foundation of Shaanxi Province (No. 2023-JCYB-584).
Abstract: With the rapid spread of information on the Internet and the proliferation of fake news, fake news detection has become increasingly important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlation between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of the news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. The model then uses domain features as an enhancement feature, together with the attention mechanism, to capture more fine-grained emotional features associated with that domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the approach is both advanced and feasible.
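The fusion idea in this abstract can be made concrete with a small PyTorch sketch: a Bi-LSTM encodes the news text, the emotion-probability vector is projected into the same space, and an attention layer weighs the two views before an MLP/Softmax head. All dimensions and the two-slot attention are illustrative assumptions, not the paper's exact model.

```python
import torch
import torch.nn as nn

class EmotionSemanticFusion(nn.Module):
    def __init__(self, vocab=20000, emb=128, hid=64, n_emotions=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.emo_proj = nn.Linear(n_emotions, 2 * hid)  # pos/neg probabilities
        self.attn = nn.Linear(2 * hid, 1)               # scores each feature view
        self.mlp = nn.Sequential(nn.Linear(2 * hid, 64), nn.ReLU(),
                                 nn.Linear(64, 2))      # fake vs real logits

    def forward(self, tokens, emo_probs):
        h, _ = self.bilstm(self.embed(tokens))          # (B, T, 2*hid)
        sem = h.mean(dim=1)                             # pooled semantics
        emo = self.emo_proj(emo_probs)                  # (B, 2*hid)
        views = torch.stack([sem, emo], dim=1)          # (B, 2, 2*hid)
        w = torch.softmax(self.attn(views), dim=1)      # attention weights
        return self.mlp((w * views).sum(dim=1))         # logits, then Softmax

model = EmotionSemanticFusion()
logits = model(torch.randint(0, 20000, (4, 50)), torch.rand(4, 2))
print(logits.shape)                                     # torch.Size([4, 2])
```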
Funding: Supported by the Education and Teaching Reform Project of the First Clinical College of Chongqing Medical University, No. CMER202305, and the Natural Science Foundation of Tibet Autonomous Region, No. XZ2024ZR-ZY100(Z).
Abstract: This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Abstract: Autonomous vehicles (AVs) hold immense promise for revolutionizing transportation, and their potential benefits extend to individuals with impairments, particularly those with vision and hearing impairments. However, accommodating these individuals in AVs requires the development of advanced user interfaces. This paper describes an exploratory study of a multimodal user interface for autonomous vehicles, developed specifically for passengers with sensory (vision and/or hearing) impairments. In a driving simulator, 32 volunteers with simulated sensory impairments were exposed to multiple drives in an autonomous vehicle while freely interacting with standard and inclusive variants of the infotainment and navigation system interface. The two user interfaces differed in graphical layout and voice messages, which adopted inclusive design principles in the inclusive variant. Questionnaires and structured interviews were conducted to collect participants' impressions. The data analysis reports positive user experiences but also identifies technical challenges. Verified guidelines are provided for further development of inclusive user interface solutions.
Funding: The study was reviewed and approved by Wuhan Maternal and Child Healthcare Hospital (Approval No. 2024-013).
Abstract: BACKGROUND With an estimated 121 million abortions following unwanted pregnancies occurring worldwide each year, many countries are now committed to protecting women's reproductive rights. AIM To analyze the impact of emotional management and care on anxiety and mastery of contraceptive knowledge in patients undergoing painless induced abortion (IA). METHODS This study was a retrospective analysis of 84 patients undergoing IA at our hospital. According to the nursing method received, the patients were divided into a control group and an observation group, with 42 cases in each group. Degree of pain, rate of postoperative uterine relaxation, surgical bleeding volume, and bleeding volume at 1 h after surgery were compared between the two groups, and nursing satisfaction and mastery of contraceptive knowledge were analyzed. RESULTS After nursing, the Self-Assessment Scale, Depression Self-Assessment Scale, and Hamilton Anxiety Scale scores were 39.18±2.18, 30.27±2.64, and 6.69±2.15, respectively, vs 45.63±2.66, 38.61±2.17, and 13.45±2.12, respectively, with the observation group scoring lower than the control group (P<0.05). On the visual analog scale, the observation group scored lower than the control group (4.55±0.22 vs 3.23±0.41; P<0.05). The postoperative cervical relaxation rate, surgical bleeding volume, and 1-h postoperative bleeding volume were 25 (59.5%), 31.72±2.23, and 22.41±1.23, respectively, vs 36 (85.7%), 42.39±3.53, and 28.51±3.34, respectively, in the observation group compared with the control group. The observation group had a better nursing outcome (P<0.05), and higher nursing satisfaction and contraceptive knowledge mastery scores than the control group (P<0.05). CONCLUSION The application of emotional management in postoperative care for IA has an ideal effect.
Abstract: During the 1980s, as part of a policy of liberalization following budgetary cuts linked to the implementation of structural adjustment programs, management responsibilities for hydro-agricultural developments (AHAs) were transferred from ONAHA to the cooperatives concerned. Due to a lack of financial resources, but also because of poor management, everywhere in Niger we are witnessing an accelerated deterioration of the irrigation infrastructure of hydro-agricultural developments. Institutional studies of this situation led the State of Niger to initiate a reform of the governance of hydro-agricultural developments by strengthening the status of ONAHA, creating Associations of Irrigation Water Users (AUEIs), and restructuring the old cooperatives. This research aims to analyze the creation of functional and sustainable AUEIs in Niger in the context of this reform of the irrigation sector, based on the experience of the Konni AHA. It rests on a methodological approach combining documentary research with data collected from 115 farmers, selected by reasoned choice and directly concerned with the management of the irrigated area. The data collected were analyzed using the systemic approach and the diagnostic process. The results show that the main mission of the AUEI is to ensure better management of water, hydraulic equipment, and infrastructure in the hydro-agricultural developments of Konni. The creation of the Konni AUEI was possible thanks to massive support from the populations and authorities during the implementation process. After its establishment, the AUEI experienced a period of lethargy due to the rehabilitation work on the AHA, but it is currently functional and operational in terms of associative life and governance. The constraints linked to the legal system, the delay in the completion of the work, the uncertainties of access to irrigation water, and the problems linked to changing the mentality of certain ONAHA agents constitute the challenges that must be resolved in the short term for the full operationalization of the Konni AUEI.
Funding: Supported by the National Natural Science Foundation of China, No. 72374005; the Natural Science Foundation for the Higher Education Institutions of Anhui Province of China, No. 2023AH050561; and the Cultivation Programme for Young and Middle-aged Excellent Teachers in Anhui Province, No. YQZD2023021.
Abstract: BACKGROUND The risks associated with negative doctor-patient relationships have seriously hindered the healthy development of medical care and aroused widespread concern in society. The number of public comments on doctor-patient relationship risk events reflects the degree to which the public pays attention to such events. METHODS Thirty incidents of doctor-patient disputes were collected from Weibo and TikTok, and 3655 related comments were extracted. The number of emotion words in each comment was counted, and a comment sentiment score was calculated. The Kruskal-Wallis H test was used to compare differences between variable groups across incident severity levels, Spearman's correlation analysis was used to examine associations between variables, and regression analysis was used to explore the factors influencing comment scores. RESULTS Public comments on media reports of doctor-patient disputes at all levels were dominated by "good" and "disgust" emotional states. There was a significant difference in comment scores and in the number of some emotion words between comments on disputes of varying severity. The comment score was positively correlated with the number of emotion words related to positivity, goodness, and happiness, and negatively correlated with the number of emotion words related to negativity, anger, disgust, fear, and sadness. CONCLUSION The number of emotion words related to negativity, anger, disgust, fear, and sadness directly influences comment scores, and the severity of the incident indirectly influences comment scores.
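The two main tests named above are one-liners in SciPy; the sketch below runs them on toy stand-ins for the comment data (severity levels, sentiment scores, and negative-emotion word counts are simulated, not the study's data).

```python
import numpy as np
from scipy.stats import kruskal, spearmanr

rng = np.random.default_rng(0)

# Kruskal-Wallis H: do comment scores differ across three severity levels?
scores_by_level = [rng.normal(m, 1.0, 50) for m in (0.2, 0.0, -0.3)]
h, p = kruskal(*scores_by_level)
print(f"Kruskal-Wallis H={h:.2f}, p={p:.4f}")

# Spearman: negative-emotion word count vs comment sentiment score.
neg_words = rng.poisson(3, 150)
scores = -0.4 * neg_words + rng.normal(0, 1, 150)
rho, p2 = spearmanr(neg_words, scores)
print(f"Spearman rho={rho:.2f}, p={p2:.4f}")   # expect rho < 0
```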
Abstract: Machine learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they face a formidable obstacle in accurately discerning a speaker's emotional state. The examination of speakers' emotional states holds significant importance in a range of real-time applications, including virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel-Frequency Cepstral Coefficients (MFCCs), due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs alone have limitations. This study therefore aims to tackle that issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The dataset used is taken from the EMO-DB database. Preprocessing of the input speech is done using a two-dimensional (2D) Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of how the frequency content of the audio signal changes over time. The next step is normalization of the spectrogram data, which is crucial for neural network (NN) training as it aids faster convergence. The five auditory features, MFCCs, Chroma, Mel-spectrogram, Contrast, and Tonnetz, are then extracted sequentially from the spectrogram. The aim of feature selection is to retain only the dominant features while excluding the irrelevant ones; in this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed to select among the multiple audio-cue features. Finally, the feature sets composed by the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity by achieving more robust temporal modeling, it is more effective than a shallow Bi-LSTM at capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and The Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
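The five-feature extraction stage described above maps directly onto librosa calls. The sketch below collapses each feature over time with a mean, yielding one fixed-length vector per utterance for the SFS/SBS stage; the file path, sample rate, and mean-pooling are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np
import librosa

y, sr = librosa.load("utterance.wav", sr=16000)   # hypothetical clip

feats = np.concatenate([
    librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1),
    librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1),
    librosa.feature.melspectrogram(y=y, sr=sr, n_mels=40).mean(axis=1),
    librosa.feature.spectral_contrast(y=y, sr=sr).mean(axis=1),
    librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr).mean(axis=1),
])
print(feats.shape)   # one feature vector per utterance, input to SFS/SBS
```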