This study investigates historical and cultural effects on one component of emotional intelligence, the ability to recognize and report on one’s emotions. This study suggests a novel influence on emotional intelligence: an individual’s historical context. Samples of young adults from Kyrgyzstan, a former Soviet Republic in Central Asia, and the USA were assessed using the Toronto Alexithymia Scale (TAS-20) (Bagby, Parker, & Taylor, 1994) in 2002, again in 2012, and in 2018. A significant historical cohort effect, a significant interaction effect, and gender effects were found.
BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer. However, there is a debate concerning their differential effects on cognitive function, anxiety, and depression in patients undergoing this procedure. AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function, anxiety, depression, and organ function in patients undergoing radical resection of gastric cancer. METHODS A total of 80 patients were involved in this research. The subjects were divided into two groups: the propofol group and the sevoflurane group. The evaluation scale for cognitive function was the Loewenstein Occupational Therapy Cognitive Assessment (LOTCA), and anxiety and depression were assessed with the aid of the Self-Rating Anxiety Scale (SAS) and Self-Rating Depression Scale (SDS). Hemodynamic indicators, oxidative stress levels, and pulmonary function were also measured. RESULTS The LOTCA score at 1 d after surgery was significantly lower in the propofol group than in the sevoflurane group. Additionally, the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group. The sevoflurane group showed greater stability in heart rate as well as mean arterial pressure compared with the propofol group. Moreover, the sevoflurane group displayed better pulmonary function and less lung injury than the propofol group. CONCLUSION Both propofol and sevoflurane can be utilized as maintenance anesthesia during radical resection of gastric cancer. Propofol anesthesia has a minimal effect on patients' pulmonary function, consequently enhancing their postoperative recovery. Sevoflurane anesthesia causes less impairment of patients' cognitive function and mitigates negative emotions, leading to an improved postoperative mental state. Therefore, the selection of anesthetic agents should be based on the individual patient's specific circumstances.
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. The research aims to develop a FER system using a Faster Region Convolutional Neural Network (FRCNN) and design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition. The proposed work comprises two major key components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach’s resilience and accuracy in identifying and categorizing facial expressions. The model’s overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications. This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
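The two-stage design described above, pretrained Inception V3 features feeding an emotion classifier, can be illustrated with a short sketch. The snippet below is a minimal, hypothetical approximation and not the authors' implementation: it uses Keras's pretrained Inception V3 as a frozen feature extractor with a small dense head, omits the Faster R-CNN region-proposal stage entirely, and assumes a seven-class emotion label set and standard 299x299 inputs.

```python
# Minimal sketch (not the published model): frozen Inception V3 features
# feeding a small emotion-classification head. The region-proposal stage of
# the FRCNN pipeline is omitted; class count and input size are assumptions.
import tensorflow as tf

NUM_EMOTIONS = 7  # assumed label set (basic emotions + neutral)

backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg",
    input_shape=(299, 299, 3))
backbone.trainable = False  # reuse pretrained features only

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # dataset-dependent
```

The published system attaches detection and classification heads on top of region proposals; this sketch only conveys how pretrained features can be reused for emotion categorization.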
Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions, even though this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expression is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and mental health professionals, including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, and 2) operational concepts. We have developed software that uses computer morphing in an attempt to address these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI in the hope of adding the dimension of recognition of facial emotional expressions. Objective: To study 1) categorical vs. dimensional aspects of ViFaEmRe, 2) universality vs. idiosyncrasy, 3) immediate vs. ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional-Visual-Acuity (EVA) in an individual observer and in a population aged 20 to 70 years. It can also measure the range and intensity of expressed emotions through three Face-Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and analyze the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us to build a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual and dimensional for a population sample.
Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders* or illnesses, which can add a further dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings pose a challenge to Artificial Intelligence, which cannot have a single global or regional algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “emotional disorders” refers to disorders of emotional expression and recognition.
Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions, even though this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expression is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, and mental health professionals, including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe) suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, and 2) operational concepts. We have developed software that uses computer morphing in an attempt to address these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to contribute to the progress of AI in the hope of adding the dimension of recognition of facial emotional expressions. Objective: To study 1) categorical vs. dimensional aspects of ViFaEmRe, 2) universality vs. idiosyncrasy, 3) immediate vs. ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face, and 5) the creation of population reference data. Methods: With M.A.R.I.E., we 1) enable a rational, quantified measurement of Emotional-Visual-Acuity (EVA) in a) an individual observer and b) a population aged 20 to 70 years, 2) measure the range and intensity of expressed emotions with three Face-Tests, 3) quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and 4) analyze the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detection of “misunderstanding”, and 9) detection of “error”. This allowed us to build a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. EVA improves from ages 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor on the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe are idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual and dimensional for a population sample.
Conclusions: Firstly, M.A.R.I.E. has made it possible to bring out new concepts and new continuous measurement variables. The comparison between healthy and abnormal individuals makes it possible to appreciate the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders* or illnesses, which can add a further dimension to behavioral disorders in all pathologies that affect the brain. Secondly, ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings pose a challenge to Artificial Intelligence, which cannot have a single global or regional algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “emotional disorders” refers to disorders of emotional expression and recognition.
Interdisciplinary research plays a crucial role in addressing complex problems by integrating knowledge from multiple disciplines. This integration fosters innovative solutions and enhances understanding across various fields. This study explores the historical and sociological development of interdisciplinary research and maps its evolution through three distinct phases: pre-disciplinary, disciplinary, and post-disciplinary. It identifies key internal dynamics, such as disciplinary diversification, reorganization, and innovation, as primary drivers of this evolution. Additionally, this study highlights how external factors, particularly the urgency of World War II and the subsequent political and economic changes, have accelerated its advancement. The rise of interdisciplinary research has significantly reshaped traditional educational paradigms, promoting its integration across different educational levels. However, the inherent contradictions within interdisciplinary research present cognitive, emotional, and institutional challenges for researchers. Meanwhile, finding a balance between the breadth and depth of knowledge remains a critical challenge in interdisciplinary education.
Speech emotion recognition (SER) uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions. The number of features acquired with acoustic analysis is extremely high, so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system. The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy. First, we use the information gain and Fisher Score to sort the features extracted from signals. Then, we employ a multi-objective ranking method to evaluate these features and assign different importance to them. Features with high rankings have a large probability of being selected. Finally, we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection, which can improve the diversity of solutions and avoid falling into local traps. Using random forest and K-nearest neighbor classifiers, four English speech emotion datasets are employed to test the proposed algorithm (MBEO) as well as other multi-objective emotion identification techniques. The results illustrate that it performs well in inverted generational distance, hypervolume, Pareto solutions, and execution time, and that MBEO is appropriate for high-dimensional English SER.
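The filter stage described above, sorting features by information gain and Fisher Score before the wrapper search, can be sketched as follows. This is an illustrative approximation rather than the MBEO algorithm itself: the Fisher-score helper, the rank-averaging rule, and the data-loading name are assumptions, and scikit-learn's mutual information estimator stands in for information gain.

```python
# Illustrative sketch of the filter stage only (not the full MBEO optimizer):
# rank features by information gain and a Fisher score, then merge rankings.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def fisher_score(X, y):
    """Between-class vs. within-class variance ratio for each feature."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num, den = np.zeros(X.shape[1]), np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

def combined_ranking(X, y):
    ig = mutual_info_classif(X, y, random_state=0)  # information-gain proxy
    fs = fisher_score(X, y)
    # Average the two rank positions; a lower combined rank means higher importance.
    ranks = np.argsort(np.argsort(-ig)) + np.argsort(np.argsort(-fs))
    return np.argsort(ranks)  # feature indices, most important first

# Example: X, y = load_speech_features()  # hypothetical loader: (n_samples, n_features), labels
```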
In smart classrooms, conducting multi-face expression recognition based on existing hardware devices to assess students’ group emotions can provide educators with more comprehensive and intuitive classroom effect analysis, thereby continuously promoting the improvement of teaching quality. However, most existing multi-face expression recognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance, and insufficient generalization ability. In addition, the existing facial expression datasets are mostly single-face images, which are of low quality and lack specificity, also restricting the development of this research. This paper aims to propose an end-to-end high-performance multi-face expression recognition algorithm model suitable for smart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and apply the model to group emotion assessment to expand its application value. To this end, we propose an end-to-end multi-face expression recognition algorithm model for smart classrooms (E2E-MFERC). In order to provide high-quality and highly targeted data support for model research, we constructed a multi-face expression dataset in real classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels, collected from smart classrooms. In constructing E2E-MFERC, we introduce Re-parameterization Visual Geometry Group (RepVGG) blocks and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability; combine them with the cross stage partial network fusion module optimized by an attention mechanism (C2f_Attention) to strengthen the extraction of key information; adopt asymptotic feature pyramid network (AFPN) feature fusion tailored to classroom scenes and optimize the head prediction output size; and thereby achieve high-performance end-to-end multi-face expression detection. Finally, we apply the model to smart classroom group emotion assessment and provide design references for classroom effect analysis evaluation metrics. Experiments based on MFED show that the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively, improving the mAP of same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version 8 (YOLOv8) by 6.8% and 2.5%, respectively, and the F1-score by 0.06 and 0.04, respectively. The E2E-MFERC model has obvious advantages in both detection speed and accuracy, which can meet the practical needs of real-time multi-face expression analysis in classrooms and serve the application of teaching effect assessment well.
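As a toy illustration of the final group-emotion assessment step, the sketch below aggregates per-face emotion labels (as produced by a detector such as E2E-MFERC for one classroom frame) into a class-level distribution and a dominant emotion. The label set and the "positive" grouping are assumptions for illustration, not metrics defined in the paper.

```python
# Toy sketch: summarize per-face emotion labels from one classroom frame into
# a group-level distribution, a dominant emotion, and a positive-emotion ratio.
from collections import Counter

POSITIVE = {"happy", "surprised", "neutral"}  # assumed grouping, not from the paper

def assess_group_emotion(face_labels):
    """face_labels: list of per-face emotion strings for a single frame."""
    counts = Counter(face_labels)
    total = max(len(face_labels), 1)
    distribution = {k: v / total for k, v in counts.items()}
    dominant = counts.most_common(1)[0][0] if counts else None
    positive_ratio = sum(v for k, v in distribution.items() if k in POSITIVE)
    return {"distribution": distribution,
            "dominant": dominant,
            "positive_ratio": positive_ratio}

print(assess_group_emotion(["happy", "neutral", "confused", "happy"]))
```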
On September 5, 2022, a strong earthquake with a magnitude of MS6.8 struck Luding County in Sichuan Province, China, triggering thousands of landslides along the Dadu River in the northwest-southeast (NW-SE) direction. We investigated the reactivation characteristics of historical landslides within the epicentral area of the Luding earthquake to identify the initiation mechanism of earthquake-induced landslides. Records of both the newly triggered and the historical landslides were analyzed using manual and threshold methods; the spatial distribution of landslides was assessed in relation to topographical and geological factors using remote sensing images. This study sheds light on the spatial distribution patterns of landslides, especially those that occur above historical landslide areas. Our results revealed a similarity in the spatial distribution trends between historical landslides and new ones induced by the earthquake. These landslides tend to be concentrated within 0.2 km of the river and 2 km of the fault. Notably, both rivers and faults predominantly influenced the reactivation of historical landslides. Remarkably, the reactivated landslides are characterized by their small to medium size and are predominantly situated in historical landslide zones. The number of reactivated landslides surpassed that of previously documented historical landslides within the study area. We provide insights into the critical factors responsible for historical landslides during the 2022 Luding earthquake, thereby enhancing our understanding of the potential implications for future co-seismic hazard assessments and mitigation strategies.
The 2022 Honghe M_(S)5.0 seismic event is intriguing due to its occurrence in the south of the Red River Fault, an area historically lacking seismic activity greater than M_(S)5.0. To elucidate the seismogenic mechanism and scrutinize stress-triggered interactions, we calculated co-seismic and post-seismic Coulomb stress alterations induced by nine historical seismic events (M≥6.0). The analysis reveals that these substantial seismic events provoked co-seismic stress augmentations of 1.409 bar and post-seismic stress increments of 0.159 bar. Noteworthy seismic events, such as the 1833 Songming, 1877 Shiping, 1913 Eshan, and 1970 Tonghai earthquakes, catalyzed the occurrence of the Honghe earthquake. Areas of heightened future seismic risk include the southern region of the Red River Fault and the eastern segments of the Shiping-Jianshui and Qujiang faults. Additionally, we assessed the correlation between the spatial distribution of aftershocks and the Coulomb stress shift triggered by the mainshock, taking into account the influence of calculation parameter settings.
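For orientation, Coulomb stress change in such stress-transfer studies is conventionally computed as the shear stress change on a receiver fault plus the effective-friction-weighted normal stress change; the abstract does not spell out its exact formulation, so the standard form is given below as an assumed reference, with positive values bringing the receiver fault closer to failure.

```latex
% Standard Coulomb failure stress change (assumed conventional form):
\Delta \mathrm{CFS} = \Delta \tau + \mu' \, \Delta \sigma_n
% \Delta\tau     : shear stress change resolved onto the receiver fault
%                  (positive in the slip direction)
% \Delta\sigma_n : normal stress change (positive = unclamping)
% \mu'           : effective coefficient of friction
```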
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms in recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
With the rapid spread of Internet information and the spread of fake news, the detection of fake news becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. In order to solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, grasps the correlation between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the news text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. The model then uses domain features as an enhancement feature and an attention mechanism to fully capture the more fine-grained emotional features associated with that domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and the MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value of this model is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the approach is advanced and feasible.
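A compact sketch of the fusion idea described above follows: a Bi-LSTM encodes semantic features of the text, precomputed positive/negative emotion probabilities enter as a second stream, and a learned attention weighting mixes the two streams before an MLP and softmax classifier. All sizes, the two-class output, and the layer arrangement are assumptions for illustration; this is not the paper's exact architecture (the domain-feature enhancement, for instance, is omitted).

```python
# Minimal Keras sketch of emotion/semantic feature fusion with attention.
# Vocabulary size, sequence length, and layer widths are assumed values.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB, SEQ_LEN, EMO_DIM = 30000, 200, 2  # assumed sizes

text_in = layers.Input(shape=(SEQ_LEN,), dtype="int32", name="token_ids")
emo_in = layers.Input(shape=(EMO_DIM,), name="emotion_probs")  # pos/neg probabilities

x = layers.Embedding(VOCAB, 128)(text_in)
semantic = layers.Bidirectional(layers.LSTM(64))(x)       # semantic stream, (None, 128)
emotion = layers.Dense(128, activation="relu")(emo_in)    # project emotion stream to same width

# Attention over the two streams: score each stream, softmax, and mix.
stacked = layers.Lambda(lambda t: tf.stack(t, axis=1))([semantic, emotion])  # (None, 2, 128)
scores = layers.Dense(1)(stacked)                                            # (None, 2, 1)
weights = layers.Softmax(axis=1)(scores)
fused = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([stacked, weights])

hidden = layers.Dense(64, activation="relu")(fused)
out = layers.Dense(2, activation="softmax")(hidden)  # real vs. fake

model = tf.keras.Model([text_in, emo_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```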
Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker’s emotional state. The examination of the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle the aforementioned issue by systematically picking multiple audio cues, enhancing the classifier model’s efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database. Preprocessing of input speech is done using a 2D Convolutional Neural Network (CNN), which applies convolutional operations to spectrograms, as they afford a visual representation of the way the audio signal’s frequency content changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids in faster convergence. Then the five auditory features MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding the irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for the selection of multiple audio-cue features. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since the deep Bi-LSTM can hierarchically learn complex features and increases model capacity by achieving more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments, comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% over the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), the Berlin Database of Emotional Speech (EMO-DB), and The Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
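The five-feature extraction step named above maps directly onto librosa's feature API; the sketch below shows one plausible version, in which each feature matrix is simply time-averaged into a single per-utterance vector. This is an illustrative simplification under assumed parameters (sample rate, 40 MFCCs), not the authors' exact pipeline, which also includes spectrogram preprocessing, SFS/SBS selection, and a deep Bi-LSTM classifier.

```python
# Hedged sketch: extract MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz
# with librosa and time-average each into one feature vector per utterance.
import numpy as np
import librosa

def extract_features(path, sr=22050):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr)
    contrast = librosa.feature.spectral_contrast(y=y, sr=sr)
    tonnetz = librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr)
    # Time-average each feature matrix and concatenate into one vector.
    return np.concatenate([f.mean(axis=1)
                           for f in (mfcc, chroma, mel, contrast, tonnetz)])

# vec = extract_features("speech.wav")  # shape: (40 + 12 + 128 + 7 + 6,) = (193,)
```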
BACKGROUND Studies have revealed that children's psychological, behavioral, and emotional problems are easily influenced by the family environment. In recent years, the family structure in China has undergone significant changes, with more families having two or three children. AIM To explore the relationship between emotional behavior and parental job stress in only-child and non-only-child preschool children. METHODS Children aged 3-6 in kindergartens in four main urban areas of Shijiazhuang were selected by stratified sampling for a questionnaire and divided into only-child and non-only-child groups. Their emotional behaviors and parental pressure were compared. Only children and non-only children were paired in a 1:1 ratio by class and age (difference less than or equal to 6 months), and the matched data were compared. The relationship between children's emotional behavior and parents' job stress before and after matching was analyzed. RESULTS Before matching, the mother's occupation, children's personality characteristics, and children's rearing patterns differed between the groups (P<0.05). After matching 550 pairs, differences in the children's parenting styles remained. There were significant differences in children's gender and parents' attitudes toward their children between the two groups. The Strengths and Difficulties Questionnaire (SDQ) scores of children in the only-child group and the Parenting Stress Index-Short Form (PSI-SF) scores of their parents were significantly lower than those in the non-only-child group (P<0.05). Pearson's correlation analysis showed that after matching, there was a positive correlation between children's parenting style and parents' attitudes toward their children (r=0.096, P<0.01), and the PSI-SF score was positively correlated with children's gender, parents' attitudes toward their children, and SDQ scores (r=0.077, 0.193, 0.172, 0.222). CONCLUSION Preschool children's emotional behavior and parental pressure were significantly higher in multi-child families. Parental pressure in differently structured families was associated with many factors, and preschool children's emotional behavior was positively correlated with parental pressure.
Breast cancer (BC) is the most common malignant tumor in women, and the treatment process not only results in physical pain but also significant psychological distress in patients. Psychological intervention (PI) has been recognized as an important approach in treating postoperative psychological disorders in BC patients. It has been proven that PI has a significant therapeutic effect on postoperative psychological disorders, improving patients' negative emotions, enhancing their psychological resilience, and effectively enhancing their quality of life and treatment compliance.
The Antarctic Ice Sheet harbors more than 90% of the Earth's ice mass, with significant losses experienced through dynamic thinning, particularly in West Antarctica. A crucial aspect of investigating ice mass balance in historical periods preceding 1990 hinges on the utilization of ice velocities derived from optical satellite images. We employed declassified satellite images and Landsat images with normalized cross correlation based image matching, adopting an adaptive combination of skills and methods to overcome challenges encountered during the mapping of historical ice velocity in West Antarctica. A basin-wide synthesis velocity map encompassing the coastal regions of most large-scale glaciers and ice shelves in West Antarctica has been successfully generated. Our results for historical ice velocities cover over 70% of the grounding line in most of the West Antarctic basins. Through adjustments, we uncovered overestimations in ice velocity measurements over an extended period, transforming our ice velocity map into a spatially deterministic, temporally averaged version. Among all velocity measurements, Thwaites Glacier exhibited a notable spatial variation in the fastest ice flowline and velocity distribution. Overestimation distributions on Thwaites Glacier displayed a clear consistency with the positions of subsequent front calving events, offering insights into the instabilities of ice shelves.
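The core of normalized cross correlation based image matching can be sketched with scikit-image's template matcher: a patch from the earlier image is searched for within a window of the later image, and the offset of the correlation peak, scaled by pixel size and time separation, gives a velocity estimate. Patch size, search radius, pixel size, and time interval below are illustrative assumptions; the published workflow involves orthorectification, co-registration, and error-adjustment steps not shown here.

```python
# Hedged sketch of NCC feature tracking between two co-registered images.
import numpy as np
from skimage.feature import match_template

def track_displacement(img_t0, img_t1, row, col, half=32, search=96):
    """Track the patch centered at (row, col) in img_t0 into img_t1 (pixels)."""
    template = img_t0[row - half:row + half, col - half:col + half]
    window = img_t1[row - search:row + search, col - search:col + search]
    ncc = match_template(window, template)              # NCC correlation surface
    dr, dc = np.unravel_index(np.argmax(ncc), ncc.shape)
    # Offset of the correlation peak relative to the zero-displacement position.
    return dr - (search - half), dc - (search - half)

def velocity_m_per_yr(drow, dcol, pixel_size_m=15.0, dt_years=10.0):
    """Convert a pixel displacement into an average speed (assumed scales)."""
    return np.hypot(drow, dcol) * pixel_size_m / dt_years
```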
BACKGROUND Breast cancer is among the most common malignancies worldwide. With progress in treatment methods and levels, the overall survival period has been prolonged, and the demand for quality care has increased. AIM To investigate the effect of individualized and continuous care intervention in patients with breast cancer. METHODS Two hundred patients with breast cancer who received systemic therapy at The First Affiliated Hospital of Hebei North University (January 2021 to July 2023) were retrospectively selected as research participants. Among them, 134 received routine care intervention (routing group) and 66 received personalized and continuous care (intervention group). Self-rating anxiety scale (SAS), self-rating depression scale (SDS), and Functional Assessment of Cancer Therapy-Breast (FACT-B) scores, along with limb and shoulder joint mobility, complication rate, and care satisfaction, were compared between the two groups after care. RESULTS SAS and SDS scores were lower in the intervention group than in the routing group at one and three months after care. The total FACT-B score and the scores of its five dimensions in the intervention group were higher than those in the routing group at three months of care. The range of motion of shoulder anteflexion, posterior extension, abduction, internal rotation, and external rotation in the intervention group was higher than that in the routing group one month after care. The incidence of postoperative complications was lower in the intervention group (18.18%) than in the routing group (34.33%; P<0.05). Satisfaction with care was higher in the intervention group (90.91%) than in the routing group (78.36%; P<0.05). CONCLUSION Personalized and continuous care can alleviate negative emotions in patients with breast cancer, quicken rehabilitation of limb function, decrease the incidence of complications, and improve quality of life and care satisfaction.
BACKGROUND Sepsis is a serious infectious disease caused by various systemic inflammatory responses and is ultimately life-threatening. Patients usually experience depression and anxiety, which affect their sleep quality and post-traumatic growth levels. AIM To investigate the effects of one-hour bundle (H1B) management combined with psychological intervention in patients with sepsis. METHODS This retrospective analysis included 300 patients with sepsis who were admitted to Henan Provincial People’s Hospital between June 2022 and June 2023. According to the different intervention methods, the participants were divided into a simple group (SG, n=150) and a combined group (CG, n=150). H1B management was used in the SG, and H1B management combined with psychological intervention was used in the CG. Changes in negative emotion, sleep quality, post-traumatic growth, and prognosis were compared between the two groups before (T0) and after (T1) the intervention. RESULTS After the intervention (T1), the scores of the Hamilton Anxiety Scale and Hamilton Depression Scale in the CG were significantly lower than those in the SG (P<0.001). The sleep time, sleep quality, sleep efficiency, daytime dysfunction, and sleep disturbance dimension scores, as well as the total score, in the CG were significantly lower than those in the SG (P<0.001). The appreciation of life, mental changes, relationships with others, and personal strength dimension scores, as well as the total score, of the CG were significantly higher than those of the SG (P<0.001). The scores for mental health, general health status, physiological function, emotional function, physical pain, social function, energy, and physiological function in the CG were significantly higher than those in the SG (P<0.001). The mechanical ventilation time, intensive care unit stay time, and 28-d mortality of the CG were significantly lower than those of the SG (P<0.05). CONCLUSION H1B management combined with psychological intervention can effectively alleviate the negative emotions of patients with sepsis and increase their quality of sleep and life.
BACKGROUND Gastric cancer is a malignant digestive tract tumor that originates from the epithelium of the gastric mucosa and occurs in the gastric antrum, particularly in the lower curvature of the stomach. AIM To evaluate the impact of a positive web-based psychological intervention on emotions, psychological capital, and quality of survival in gastric cancer patients on chemotherapy. METHODS From January 2020 to October 2023, 121 gastric cancer patients on chemotherapy admitted to our hospital were collected and divided into a control group (n=60) and an observation group (n=61) according to the admission order. They were given conventional nursing care alone and conventional nursing care combined with a web-based positive psychological intervention, respectively. The two groups were compared in terms of negative emotions, psychological capital, degree of cancer-caused fatigue, and quality of survival. RESULTS After the intervention, the number of patients in the observation group who had negative feelings toward chemotherapy treatment was significantly lower than that in the control group (P<0.05); the Positive Psychological Capital Questionnaire score was considerably higher than that of the control group (P<0.05); the degree of cancer-caused fatigue was significantly lower than that of the control group (P<0.05); and the Quality of Life Scale for Cancer Patients (QLQ-30) score was significantly higher than that of the control group (P<0.05). CONCLUSION Implementing a web-based positive psychological intervention for gastric cancer chemotherapy patients can effectively improve negative emotions, enhance psychological capital, and improve the quality of survival.
Using multimodal metaphor theory, this article studies the multimodal metaphor of emotion. Emotions can be divided into positive emotions and negative emotions. Positive emotion metaphors include happiness metaphors and love metaphors, while negative emotion metaphors include anger metaphors, fear metaphors, and sadness metaphors. They intuitively represent the source domain through physical signs, sensory effects, orientation dynamics, and physical presentation close to actual life, and the emotional multimodal metaphors in emojis have narrative and social functions.
文摘This study investigates historical and cultural effects on one component of emotional intelligence,the ability to recognize and report on one’s emotions.This study suggests a novel influence on emotional intelligence,an individual’s historical context.Samples of young adults,from Kyrgyzstan,former Soviet Republic in Central Asia,and the USA were assessed using the Toronto Alexithymia Scale (TAS-20)(Bagby,Parker,& Taylor,1994) in 2002 and again in 2012,and in 2018.Significant historical cohort effect,significant interaction effect,and gender effects were found.
文摘BACKGROUND Propofol and sevoflurane are commonly used anesthetic agents for maintenance anesthesia during radical resection of gastric cancer.However,there is a debate concerning their differential effects on cognitive function,anxiety,and depression in patients undergoing this procedure.AIM To compare the effects of propofol and sevoflurane anesthesia on postoperative cognitive function,anxiety,depression,and organ function in patients undergoing radical resection of gastric cancer.METHODS A total of 80 patients were involved in this research.The subjects were divided into two groups:Propofol group and sevoflurane group.The evaluation scale for cognitive function was the Loewenstein occupational therapy cognitive assessment(LOTCA),and anxiety and depression were assessed with the aid of the self-rating anxiety scale(SAS)and self-rating depression scale(SDS).Hemodynamic indicators,oxidative stress levels,and pulmonary function were also measured.RESULTS The LOTCA score at 1 d after surgery was significantly lower in the propofol group than in the sevoflurane group.Additionally,the SAS and SDS scores of the sevoflurane group were significantly lower than those of the propofol group.The sevoflurane group showed greater stability in heart rate as well as the mean arterial pressure compared to the propofol group.Moreover,the sevoflurane group displayed better pulmonary function and less lung injury than the propofol group.CONCLUSION Both propofol and sevoflurane could be utilized as maintenance anesthesia during radical resection of gastric cancer.Propofol anesthesia has a minimal effect on patients'pulmonary function,consequently enhancing their postoperative recovery.Sevoflurane anesthesia causes less impairment on patients'cognitive function and mitigates negative emotions,leading to an improved postoperative mental state.Therefore,the selection of anesthetic agents should be based on the individual patient's specific circumstances.
文摘Facial emotion recognition(FER)has become a focal point of research due to its widespread applications,ranging from human-computer interaction to affective computing.While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets,recent strides in artificial intelligence and deep learning(DL)have ushered in more sophisticated approaches.The research aims to develop a FER system using a Faster Region Convolutional Neural Network(FRCNN)and design a specialized FRCNN architecture tailored for facial emotion recognition,leveraging its ability to capture spatial hierarchies within localized regions of facial features.The proposed work enhances the accuracy and efficiency of facial emotion recognition.The proposed work comprises twomajor key components:Inception V3-based feature extraction and FRCNN-based emotion categorization.Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy,showcasing the FRCNN approach’s resilience and accuracy in identifying and categorizing facial expressions.The model’s overall performance metrics are compelling,with an accuracy of 98.4%,precision of 97.2%,and recall of 96.31%.This work introduces a perceptive deep learning-based FER method,contributing to the evolving landscape of emotion recognition technologies.The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications.This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
文摘Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions even if this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe), suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, 2) operational concepts. We have invented a software that can use computer-morphing attempting to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to make our contribution to the progress of AI hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of recognition of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face and 5) creation of population references data. Methods: M.A.R.I.E. enables the rational, quantified measurement of Emotional Visual Acuity (EVA) in an individual observer and a population aged 20 to 70 years. Meanwhile, it can measure the range and intensity of expressed emotions through three Face- Tests, quantify the performance of a sample of 204 observers with hypernormal measures of cognition, “thymia” (defined elsewhere), and low levels of anxiety, and perform analysis of the six primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual- Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Decision-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”, 6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Fingerprint-Graph”, 8) detecting “misunderstanding” and 9) detecting “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from ages of 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe is idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made possible to bring out new concepts and new continuous measurements variables. 
The comparison between healthy and abnormal individuals makes it possible to take into consideration the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders or illnesses which can give additional dimension to behavioral disorders in all pathologies that affect the brain. Secondly, the ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings stack up against Artificial Intelligence, which cannot have a globalist or regionalist algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “Emotional disorders” refers to disorders of emotional expressions and recognition.
文摘Context: The advent of Artificial Intelligence (AI) requires modeling prior to its implementation in algorithms for most human skills. This observation requires us to have a detailed and precise understanding of the interfaces of verbal and emotional communications. The progress of AI is significant on the verbal level but modest in terms of the recognition of facial emotions even if this functionality is one of the oldest in humans and is omnipresent in our daily lives. Dysfunction in the ability for facial emotional expressions is present in many brain pathologies encountered by psychiatrists, neurologists, psychotherapists, mental health professionals including social workers. It cannot be objectively verified and measured due to a lack of reliable tools that are valid and consistently sensitive. Indeed, the articles in the scientific literature dealing with Visual-Facial-Emotions-Recognition (ViFaEmRe), suffer from the absence of 1) consensual and rational tools for continuous quantified measurement, 2) operational concepts. We have invented a software that can use computer-morphing attempting to respond to these two obstacles. It is identified as the Method of Analysis and Research of the Integration of Emotions (M.A.R.I.E.). Our primary goal is to use M.A.R.I.E. to understand the physiology of ViFaEmRe in normal healthy subjects by standardizing the measurements. Then, it will allow us to focus on subjects manifesting abnormalities in this ability. Our second goal is to make our contribution to the progress of AI hoping to add the dimension of recognition of facial emotional expressions. Objective: To study: 1) categorical vs dimensional aspects of recognition of ViFaEmRe, 2) universality vs idiosyncrasy, 3) immediate vs ambivalent Emotional-Decision-Making, 4) the Emotional-Fingerprint of a face and 5) creation of population references data. Methods: With M.A.R.I.E. enable a rational quantified measurement of Emotional-Visual-Acuity (EVA) of 1) a) an individual observer, b) in a population aged 20 to 70 years old, 2) measure the range and intensity of expressed emotions by 3 Face-Tests, 3) quantify the performance of a sample of 204 observers with hyper normal measures of cognition, “thymia,” (ibid. defined elsewhere) and low levels of anxiety 4) analysis of the 6 primary emotions. Results: We have individualized the following continuous parameters: 1) “Emotional-Visual-Acuity”, 2) “Visual-Emotional-Feeling”, 3) “Emotional-Quotient”, 4) “Emotional-Deci-sion-Making”, 5) “Emotional-Decision-Making Graph” or “Individual-Gun-Trigger”6) “Emotional-Fingerprint” or “Key-graph”, 7) “Emotional-Finger-print-Graph”, 8) detecting “misunderstanding” and 9) detecting “error”. This allowed us a taxonomy with coding of the face-emotion pair. Each face has specific measurements and graphics. The EVA improves from ages of 20 to 55 years, then decreases. It does not depend on the sex of the observer, nor the face studied. In addition, 1% of people endowed with normal intelligence do not recognize emotions. The categorical dimension is a variable for everyone. The range and intensity of ViFaEmRe is idiosyncratic and not universally uniform. The recognition of emotions is purely categorical for a single individual. It is dimensional for a population sample. Conclusions: Firstly, M.A.R.I.E. has made possible to bring out new concepts and new continuous measurements variables. 
The comparison between healthy and abnormal individuals makes it possible to take into consideration the significance of this line of study. From now on, these new functional parameters will allow us to identify and name “emotional” disorders or illnesses which can give additional dimension to behavioral disorders in all pathologies that affect the brain. Secondly, the ViFaEmRe is idiosyncratic, categorical, and a function of the identity of the observer and of the observed face. These findings stack up against Artificial Intelligence, which cannot have a globalist or regionalist algorithm that can be programmed into a robot, nor can AI compete with human abilities and judgment in this domain. *Here “Emotional disorders” refers to disorders of emotional expressions and recognition.
基金funded by the National Natural Science Foundation of China for Young Scholars(No.72304019)Peking University Health Science Center Project(No.2023YB46)+1 种基金the National Natural Science Foundation of China for Special Purpose(No.J2124013)the ISTIC-Clarivate Joint Laboratory for Scientometrics(No.IT2319).
文摘Interdisciplinary research plays a crucial role in addressing complex problems by integrating knowledge from multiple disciplines.This integration fosters innovative solutions and enhances understanding across various fields.This study explores the historical and sociological development of interdisciplinary research and maps its evolution through three distinct phases:pre-disciplinary,disciplinary,and post-disciplinary.It identifies key internal dynamics,such as disciplinary diversification,reorganization,and innovation,as primary drivers of this evolution.Additionally,this study highlights how external factors,particularly the urgency of World War II and the subsequent political and economic changes,have accelerated its advancement.The rise of interdisciplinary research has significantly reshaped traditional educational paradigms,promoting its integration across different educational levels.However,the inherent contradictions within interdisciplinary research present cognitive,emotional,and institutional challenges for researchers.Meanwhile,finding a balance between the breadth and depth of knowledge remains a critical challenge in interdisciplinary education.
文摘Speech emotion recognition(SER)uses acoustic analysis to find features for emotion recognition and examines variations in voice that are caused by emotions.The number of features acquired with acoustic analysis is extremely high,so we introduce a hybrid filter-wrapper feature selection algorithm based on an improved equilibrium optimizer for constructing an emotion recognition system.The proposed algorithm implements multi-objective emotion recognition with the minimum number of selected features and maximum accuracy.First,we use the information gain and Fisher Score to sort the features extracted from signals.Then,we employ a multi-objective ranking method to evaluate these features and assign different importance to them.Features with high rankings have a large probability of being selected.Finally,we propose a repair strategy to address the problem of duplicate solutions in multi-objective feature selection,which can improve the diversity of solutions and avoid falling into local traps.Using random forest and K-nearest neighbor classifiers,four English speech emotion datasets are employed to test the proposed algorithm(MBEO)as well as other multi-objective emotion identification techniques.The results illustrate that it performs well in inverted generational distance,hypervolume,Pareto solutions,and execution time,and MBEO is appropriate for high-dimensional English SER.
基金the Science and Technology Project of State Grid Corporation of China under Grant No.5700-202318292A-1-1-ZN.
文摘In smart classrooms, conducting multi-face expression recognition based on existing hardware devices to assessstudents’ group emotions can provide educators with more comprehensive and intuitive classroom effect analysis,thereby continuouslypromotingthe improvementof teaching quality.However,most existingmulti-face expressionrecognition methods adopt a multi-stage approach, with an overall complex process, poor real-time performance,and insufficient generalization ability. In addition, the existing facial expression datasets are mostly single faceimages, which are of low quality and lack specificity, also restricting the development of this research. This paperaims to propose an end-to-end high-performance multi-face expression recognition algorithm model suitable forsmart classrooms, construct a high-quality multi-face expression dataset to support algorithm research, and applythe model to group emotion assessment to expand its application value. To this end, we propose an end-to-endmulti-face expression recognition algorithm model for smart classrooms (E2E-MFERC). In order to provide highqualityand highly targeted data support for model research, we constructed a multi-face expression dataset inreal classrooms (MFED), containing 2,385 images and a total of 18,712 expression labels, collected from smartclassrooms. In constructing E2E-MFERC, by introducing Re-parameterization visual geometry group (RepVGG)block and symmetric positive definite convolution (SPD-Conv) modules to enhance representational capability;combined with the cross stage partial network fusion module optimized by attention mechanism (C2f_Attention),it strengthens the ability to extract key information;adopts asymptotic feature pyramid network (AFPN) featurefusion tailored to classroomscenes and optimizes the head prediction output size;achieves high-performance endto-end multi-face expression detection. Finally, we apply the model to smart classroom group emotion assessmentand provide design references for classroom effect analysis evaluation metrics. Experiments based on MFED showthat the mAP and F1-score of E2E-MFERC on classroom evaluation data reach 83.6% and 0.77, respectively,improving the mAP of same-scale You Only Look Once version 5 (YOLOv5) and You Only Look Once version8 (YOLOv8) by 6.8% and 2.5%, respectively, and the F1-score by 0.06 and 0.04, respectively. E2E-MFERC modelhas obvious advantages in both detection speed and accuracy, which can meet the practical needs of real-timemulti-face expression analysis in classrooms, and serve the application of teaching effect assessment very well.
Funding: Financially supported by the National Key R&D Program of China (No. 2022YFF0800604), the National Natural Science Foundation of China (No. 42207224), and the State Key Laboratory of Geohazard Prevention and Geoenvironment Protection Independent Research Project (SKLGP2022Z021).
Abstract: On September 5, 2022, a strong earthquake with a magnitude of MS6.8 struck Luding County in Sichuan Province, China, triggering thousands of landslides along the Dadu River in the northwest-southeast (NW-SE) direction. We investigated the reactivation characteristics of historical landslides within the epicentral area of the Luding earthquake to identify the initiation mechanism of earthquake-induced landslides. Records of both the newly triggered and the historical landslides were analyzed using manual and threshold methods, and the spatial distribution of landslides was assessed in relation to topographical and geological factors using remote sensing images. This study sheds light on the spatial distribution patterns of landslides, especially those that occur above historical landslide areas. Our results revealed a similarity in the spatial distribution trends between historical landslides and new ones induced by the earthquake. These landslides tend to be concentrated within 0.2 km of the river and 2 km of the fault. Notably, both rivers and faults predominantly influenced the reactivation of historical landslides. Remarkably, the reactivated landslides are small to medium in size and are predominantly situated in historical landslide zones. The number of reactivated landslides surpassed that of previously documented historical landslides within the study area. We provide insights into the critical factors responsible for the reactivation of historical landslides during the 2022 Luding earthquake, thereby enhancing our understanding of the potential implications for future co-seismic hazard assessments and mitigation strategies.
Funding: Funded by the Youth Seismic Regime Tracking Project of CEA (2023010129).
Abstract: The 2022 Honghe MS5.0 seismic event is intriguing because it occurred south of the Red River Fault, an area historically lacking seismic activity greater than MS5.0. To elucidate the seismogenic mechanism and scrutinize stress-triggering interactions, we calculated co-seismic and post-seismic Coulomb stress changes induced by nine historical seismic events (M ≥ 6.0). The analysis reveals that these substantial seismic events provoked co-seismic stress increases of 1.409 bar and post-seismic stress increments of 0.159 bar. Noteworthy seismic events, such as the 1833 Songming, 1877 Shiping, 1913 Eshan, and 1970 Tonghai earthquakes, promoted the occurrence of the Honghe earthquake. Areas of heightened future seismic risk include the southern region of the Red River Fault and the eastern segments of the Shiping-Jianshui and Qujiang faults. Additionally, we assessed the correlation between the spatial distribution of aftershocks and the Coulomb stress change triggered by the mainshock, taking into account the influence of calculation parameter settings.
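For readers unfamiliar with the quantity being mapped, the Coulomb failure stress change used in this kind of analysis is conventionally written as follows (generic notation, not specific to this study's parameter choices):

```latex
\Delta \mathrm{CFS} = \Delta \tau + \mu' \, \Delta \sigma_n
```

where \(\Delta \tau\) is the shear stress change resolved in the slip direction of the receiver fault, \(\Delta \sigma_n\) is the normal stress change (positive for unclamping), and \(\mu'\) is the effective friction coefficient. Positive values of \(\Delta \mathrm{CFS}\) indicate that the source events brought the receiver fault closer to failure.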
Funding: Supported by the Education and Teaching Reform Project of the First Clinical College of Chongqing Medical University, No. CMER202305, and the Natural Science Foundation of Tibet Autonomous Region, No. XZ2024ZR-ZY100(Z).
Abstract: This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain to significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Funding: The authors are highly thankful to the National Social Science Foundation of China (20BXW101, 18XXW015), the Innovation Research Project for the Cultivation of High-Level Scientific and Technological Talents (Top-Notch Talents of the Discipline) (ZZKY2022303), the National Natural Science Foundation of China (Nos. 62102451, 62202496), the Basic Frontier Innovation Project of Engineering University of People's Armed Police (WJX202316), and the Natural Science Foundation of Shaanxi Province (No. 2023-JCYB-584). This work is also supported by the National Natural Science Foundation of China (No. 62172436), Engineering University of PAP's Funding for Scientific Research Innovation Team, Engineering University of PAP's Funding for Basic Scientific Research, and Engineering University of PAP's Funding for Education and Teaching.
Abstract: With the rapid spread of Internet information and of fake news, fake news detection becomes more and more important. Traditional detection methods often rely on a single emotional or semantic feature to identify fake news, but these methods have limitations when dealing with news in specific domains. To solve the problem of weak feature correlation between data from different domains, a model for detecting fake news by integrating domain-specific emotional and semantic features is proposed. This method makes full use of the attention mechanism, captures the correlation between different features, and effectively improves the effect of feature fusion. The algorithm first extracts the semantic features of the news text through a Bi-LSTM (Bidirectional Long Short-Term Memory) layer to capture the contextual relevance of the text. Senta-BiLSTM is then used to extract emotional features and predict the probability of positive and negative emotions in the text. The model then uses domain features as an enhancement feature, together with an attention mechanism, to fully capture the more fine-grained emotional features associated with that domain. Finally, the fused features are taken as the input of the fake news detection classifier, combined with the multi-task representation of information, and the MLP and Softmax functions are used for classification. The experimental results show that on the Chinese dataset Weibo21, the F1 value of this model is 0.958, 4.9% higher than that of the sub-optimal model; on the English dataset FakeNewsNet, the F1 value of this model is 0.845, 1.8% higher than that of the sub-optimal model, demonstrating that the approach is advanced and feasible.
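A highly simplified PyTorch sketch of the fusion idea described above: a Bi-LSTM semantic encoder, externally supplied positive/negative sentiment probabilities, and a domain-conditioned attention query that pools token features before an MLP classifier. All dimensions, the vocabulary size, and the exact way domain features gate attention are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EmotionSemanticFusion(nn.Module):
    """Bi-LSTM semantic features + sentiment probabilities fused under a domain-aware attention query."""
    def __init__(self, vocab_size=30000, emb=128, hid=128,
                 senti_dim=2, n_domains=9, n_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb, padding_idx=0)
        self.bilstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.domain_emb = nn.Embedding(n_domains, 2 * hid)          # domain feature as query
        self.attn = nn.MultiheadAttention(2 * hid, num_heads=4, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(2 * hid + senti_dim, hid), nn.ReLU(),
            nn.Linear(hid, n_classes))

    def forward(self, tokens, senti_probs, domain_id):
        h, _ = self.bilstm(self.emb(tokens))           # (B, T, 2*hid) semantic features
        q = self.domain_emb(domain_id).unsqueeze(1)    # (B, 1, 2*hid) domain-conditioned query
        ctx, _ = self.attn(q, h, h)                    # attend over tokens for this domain
        fused = torch.cat([ctx.squeeze(1), senti_probs], dim=-1)
        return self.mlp(fused)                         # logits; softmax applied by the loss

# Toy usage with random inputs (hypothetical shapes: batch of 4, 50 tokens each)
model = EmotionSemanticFusion()
logits = model(torch.randint(1, 30000, (4, 50)),       # token ids
               torch.rand(4, 2),                       # positive/negative sentiment probabilities
               torch.randint(0, 9, (4,)))              # domain ids
print(logits.shape)  # torch.Size([4, 2])
```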
Abstract: Machine Learning (ML) algorithms play a pivotal role in Speech Emotion Recognition (SER), although they encounter a formidable obstacle in accurately discerning a speaker's emotional state. The examination of the emotional states of speakers holds significant importance in a range of real-time applications, including but not limited to virtual reality, human-robot interaction, emergency centers, and human behavior assessment. Accurately identifying emotions in the SER process relies on extracting relevant information from audio inputs. Previous studies on SER have predominantly utilized short-time characteristics such as Mel Frequency Cepstral Coefficients (MFCCs) due to their ability to capture the periodic nature of audio signals effectively. Although these traits may improve the ability to perceive and interpret emotional depictions appropriately, MFCCs have some limitations. This study therefore aims to tackle the aforementioned issue by systematically picking multiple audio cues, enhancing the classifier model's efficacy in accurately discerning human emotions. The utilized dataset is taken from the EMO-DB database; preprocessing of the input speech uses a 2D Convolutional Neural Network (CNN), applying convolutional operations to spectrograms, which afford a visual representation of how the audio signal's frequency content changes over time. The next step is spectrogram data normalization, which is crucial for Neural Network (NN) training as it aids faster convergence. Then the five auditory features, MFCCs, Chroma, Mel-Spectrogram, Contrast, and Tonnetz, are extracted from the spectrogram sequentially. The aim of feature selection is to retain only dominant features by excluding the irrelevant ones. In this paper, the Sequential Forward Selection (SFS) and Sequential Backward Selection (SBS) techniques were employed for selecting among the multiple audio cue features. Finally, the feature sets composed from the hybrid feature extraction methods are fed into a deep Bidirectional Long Short-Term Memory (Bi-LSTM) network to discern emotions. Since a deep Bi-LSTM can hierarchically learn complex features and increases model capacity through more robust temporal modeling, it is more effective than a shallow Bi-LSTM in capturing the intricate tones of emotional content present in speech signals. The effectiveness and resilience of the proposed SER model were evaluated by experiments comparing it to state-of-the-art SER techniques. The results indicated that the model achieved accuracy rates of 90.92%, 93%, and 92% on the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), Berlin Database of Emotional Speech (EMO-DB), and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets, respectively. These findings signify a prominent enhancement in the ability to identify emotional depictions in speech, showcasing the potential of the proposed model in advancing the SER field.
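The five audio cues named above can all be extracted with librosa; the sketch below stacks their time-averaged values into a single feature vector. The sampling rate, the time-averaging, the harmonic separation used for Tonnetz, and the example file path are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np
import librosa

def extract_cues(path, sr=22050):
    """Return a 1-D vector of time-averaged MFCC, Chroma, Mel, Contrast, and Tonnetz features."""
    y, sr = librosa.load(path, sr=sr)
    stft = np.abs(librosa.stft(y))
    mfcc = np.mean(librosa.feature.mfcc(y=y, sr=sr, n_mfcc=40), axis=1)
    chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=sr), axis=1)
    mel = np.mean(librosa.feature.melspectrogram(y=y, sr=sr), axis=1)
    contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=sr), axis=1)
    tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(y), sr=sr), axis=1)
    return np.concatenate([mfcc, chroma, mel, contrast, tonnetz])

# Hypothetical usage on one EMO-DB style recording (path is a placeholder):
# vec = extract_cues("emodb/wav/example_utterance.wav"); print(vec.shape)
```

A subset search in the spirit of SFS/SBS can then be run with, for example, sklearn.feature_selection.SequentialFeatureSelector (direction='forward' or 'backward') on top of such vectors before the retained features are passed to the Bi-LSTM classifier.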
Funding: Shijiazhuang City Science and Technology Research and Development Self-Raised Plan, No. 221460383.
Abstract: BACKGROUND Studies have revealed that children's psychological, behavioral, and emotional problems are easily influenced by the family environment. In recent years, the family structure in China has undergone significant changes, with more families having two or three children. AIM To explore the relationship between emotional behavior and parental job stress in only-child and non-only-child preschool children. METHODS Children aged 3-6 in kindergartens in four main urban areas of Shijiazhuang were selected by stratified sampling for a questionnaire and divided into only-child and non-only-child groups. Their emotional behaviors and parental pressure were compared. Only children and non-only children were paired in a 1:1 ratio by class and age (difference less than or equal to 6 months), and the matched data were compared. The relationship between children's emotional behavior and parents' job stress before and after matching was analyzed. RESULTS Before matching, the mother's occupation, children's personality characteristics, and children's rearing patterns differed between the groups (P<0.05). After matching 550 pairs, differences in the children's parenting styles remained. There were significant differences in children's gender and parents' attitudes toward children between the two groups. The Strengths and Difficulties Questionnaire (SDQ) scores of children in the only-child group and the Parenting Stress Index-Short Form (PSI-SF) scores of parents were significantly lower than those in the non-only-child group (P<0.05). Pearson's correlation analysis showed that after matching, there was a positive correlation between children's parenting style and parents' attitudes toward their children (r=0.096, P<0.01), and the PSI-SF score was positively correlated with children's gender, parents' attitudes toward their children, and SDQ scores (r=0.077, 0.193, 0.172, 0.222). CONCLUSION Preschool children's emotional behavior and parental pressure were significantly higher in multi-child families. Parental pressure in differently structured families was associated with many factors, and preschool children's emotional behavior was positively correlated with parental pressure.
Abstract: Breast cancer (BC) is the most common malignant tumor in women, and the treatment process results not only in physical pain but also in significant psychological distress. Psychological intervention (PI) has been recognized as an important approach to treating postoperative psychological disorders in BC patients. It has been proven that PI has a significant therapeutic effect on postoperative psychological disorders, improving patients' negative emotions, enhancing their psychological resilience, and effectively improving their quality of life and treatment compliance.
基金supported by the National Key Research and Development Program of China (Grant no.2021YFB3900105)the support from the Fundamental Research Funds for the Central Universitiesthe support by the National Key Research and Development Program of China (Grant no.2017YFA0603100).
Abstract: The Antarctic Ice Sheet harbors more than 90% of Earth's ice mass, with significant losses experienced through dynamic thinning, particularly in West Antarctica. Investigating the ice mass balance of historical periods preceding 1990 hinges on ice velocities derived from optical satellite images. We employed declassified satellite images and Landsat images with normalized cross-correlation based image matching, adopting an adaptive combination of skills and methods to overcome the challenges encountered when mapping historical ice velocity in West Antarctica. A basin-wide synthesis velocity map encompassing the coastal regions of most large-scale glaciers and ice shelves in West Antarctica has been successfully generated. Our historical ice velocity results cover over 70% of the grounding line in most West Antarctic basins. Through adjustments, we uncovered overestimations in ice velocity measurements over an extended period, turning our ice velocity map into a spatially deterministic, temporally averaged version. Among all velocity measurements, Thwaites Glacier exhibited notable spatial variation in the fastest ice flowline and velocity distribution. Overestimation distributions on Thwaites Glacier displayed clear consistency with the positions of subsequent front calving events, offering insights into the instabilities of ice shelves.
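Normalized cross-correlation (NCC) feature tracking of the kind described above can be sketched with scikit-image's match_template. The window size, synthetic imagery, pixel size, and time separation below are placeholders, not the study's actual parameters or data.

```python
import numpy as np
from skimage.feature import match_template

# Toy example: a 32x32 reference window tracked between two "acquisitions"
rng = np.random.default_rng(0)
img1 = rng.normal(size=(64, 64))
img2 = np.roll(img1, shift=(3, 5), axis=(0, 1))        # simulate 3 px / 5 px ice motion

ref = img1[16:48, 16:48]                                # reference chip from image 1
ncc = match_template(img2, ref)                         # NCC surface over image 2
dy, dx = np.unravel_index(np.argmax(ncc), ncc.shape)    # top-left corner of the best match
offset = (dy - 16, dx - 16)                             # displacement in pixels

# Placeholder geometry: pixel size in metres and time separation in years
pixel_size_m, dt_years = 15.0, 10.0
velocity_m_per_yr = np.hypot(*offset) * pixel_size_m / dt_years
print(offset, round(velocity_m_per_yr, 1))              # (3, 5) -> ~8.7 m/yr
```

In practice such matches are computed on a dense grid, filtered by the correlation peak strength, and then adjusted for co-registration and temporal averaging, which is where the overestimation corrections discussed above come into play.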
Funding: Supported by the Zhangjiakou Science and Technology Plan Project, No. 2322112D.
Abstract: BACKGROUND Breast cancer is among the most common malignancies worldwide. With progress in treatment methods and levels, the overall survival period has been prolonged, and the demand for quality care has increased. AIM To investigate the effect of an individualized and continuous care intervention in patients with breast cancer. METHODS Two hundred patients with breast cancer who received systemic therapy at The First Affiliated Hospital of Hebei North University (January 2021 to July 2023) were retrospectively selected as research participants. Among them, 134 received a routine care intervention (routine group) and 66 received personalized and continuous care (intervention group). Self-rating anxiety scale (SAS), self-rating depression scale (SDS), and Functional Assessment of Cancer Therapy-Breast (FACT-B) scores, as well as limb shoulder joint activity, complication rate, and care satisfaction, were compared between the two groups after care. RESULTS SAS and SDS scores were lower in the intervention group than in the routine group at one and three months after care. The total FACT-B score and its five dimension scores in the intervention group were higher than those in the routine group at three months of care. The range of motion of shoulder anteflexion, posterior extension, abduction, internal rotation, and external rotation in the intervention group was greater than that in the routine group one month after care. The incidence of postoperative complications was 18.18% in the intervention group, lower than that in the routine group (34.33%; P<0.05). Satisfaction with care was 90.91% in the intervention group, higher than that in the routine group (78.36%; P<0.05). CONCLUSION Personalized and continuous care can alleviate negative emotions in patients with breast cancer, quicken rehabilitation of limb function, decrease the incidence of complications, and improve quality of life and care satisfaction.
Funding: Supported by the Key R&D and Promotion Special Project (Science and Technology Research) in Henan Province in 2023, No. 232102310089.
Abstract: BACKGROUND Sepsis is a serious infectious disease driven by systemic inflammatory responses and is ultimately life-threatening. Patients usually experience depression and anxiety, which affect their sleep quality and post-traumatic growth levels. AIM To investigate the effects of one-hour bundle (H1B) management combined with psychological intervention in patients with sepsis. METHODS This retrospective analysis included 300 patients with sepsis who were admitted to Henan Provincial People's Hospital between June 2022 and June 2023. According to the intervention method, the participants were divided into a simple group (SG, n=150) and a combined group (CG, n=150). H1B management alone was used in the SG, and H1B management combined with psychological intervention was used in the CG. Changes in negative emotion, sleep quality, post-traumatic growth, and prognosis were compared between the two groups before (T0) and after (T1) the intervention. RESULTS After the intervention (T1), the Hamilton Anxiety Scale and Hamilton Depression Scale scores in the CG were significantly lower than those in the SG (P<0.001). The sleep time, sleep quality, sleep efficiency, daytime dysfunction, and sleep disturbance dimension scores, as well as the total score, in the CG were significantly lower than those in the SG (P<0.001). The appreciation of life, mental changes, relationship with others, and personal strength dimension scores, as well as the total score, of the CG were significantly higher than those of the SG (P<0.001). The scores for mental health, general health status, physiological function, emotional function, physical pain, social function, energy, and physiological function in the CG were significantly higher than those in the SG (P<0.001). The mechanical ventilation time, intensive care unit stay time, and 28-d mortality of the CG were significantly lower than those of the SG (P<0.05). CONCLUSION H1B management combined with psychological intervention can effectively alleviate the negative emotions of patients with sepsis and improve their quality of sleep and life.
Abstract: BACKGROUND Gastric cancer is a malignant digestive tract tumor that originates from the epithelium of the gastric mucosa and occurs in the gastric antrum, particularly along the lower curvature of the stomach. AIM To evaluate the impact of a positive web-based psychological intervention on emotions, psychological capital, and quality of survival in gastric cancer patients undergoing chemotherapy. METHODS From January 2020 to October 2023, 121 gastric cancer patients on chemotherapy admitted to our hospital were enrolled and divided into a control group (n=60) and an observation group (n=61) according to the order of admission. They were given conventional nursing care alone or conventional nursing care combined with a web-based positive psychological intervention, respectively. The two groups were compared in terms of negative emotions, psychological capital, degree of cancer-related fatigue, and quality of survival. RESULTS After the intervention, the number of patients in the observation group who had negative feelings toward chemotherapy treatment was significantly lower than that in the control group (P<0.05); the Positive Psychological Capital Questionnaire score was considerably higher than that of the control group (P<0.05); the degree of cancer-related fatigue was significantly lower than that of the control group (P<0.05); and the Quality of Life Scale for Cancer Patients (QLQ-30) score was significantly higher than that of the control group (P<0.05). CONCLUSION Implementing a web-based positive psychological intervention for gastric cancer chemotherapy patients can effectively improve negative emotions, enhance psychological capital, and improve quality of survival.
Abstract: Using multimodal metaphor theory, this article studies the multimodal metaphor of emotion. Emotions can be divided into positive and negative emotions. Positive emotion metaphors include happiness metaphors and love metaphors, while negative emotion metaphors include anger metaphors, fear metaphors, and sadness metaphors. They intuitively represent the source domain through physical signs, sensory effects, orientation dynamics, and physical presentations close to actual life, and the emotional multimodal metaphors in emojis have narrative and social functions.