Depression is a psychological disorder that may cause physical illness or lead to death. It has a strong impact on a person's socioeconomic life; therefore, effective and timely detection is essential. Besides speech and gait, facial expressions carry valuable clues to depression. This study proposes a depression detection system based on facial expression analysis. Facial features have been used for depression detection using a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN). We extracted micro-expressions with the Facial Action Coding System (FACS) as Action Units (AUs) correlated with sadness, disgust, and contempt for depression detection. A CNN-based model is also proposed in this study to automatically classify depressed subjects from images or videos in real time. Experiments were performed on a dataset obtained from Bahawal Victoria Hospital, Bahawalpur, Pakistan, labelled according to the patient health questionnaire depression scale (PHQ-8) to infer each patient's mental condition. The experiments revealed 99.9% validation accuracy for the proposed CNN model, while the extracted features obtained 100% accuracy with the SVM. Moreover, the results proved the superiority of the reported approach over state-of-the-art methods.
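The AU-to-emotion grouping described above can be sketched as a small feature builder. The specific AU numbers below (AU1/AU4/AU15 for sadness, AU9/AU10 for disgust, AU12/AU14 for contempt) follow common FACS convention and are an assumption, not necessarily the study's exact list:

```python
def depression_feature_vector(aus):
    # aus: dict mapping AU number -> intensity (e.g. from a FACS coder).
    # AU groupings below are a common-convention assumption:
    sad      = [1, 4, 15]   # inner brow raiser, brow lowerer, lip corner depressor
    disgust  = [9, 10]      # nose wrinkler, upper lip raiser
    contempt = [12, 14]     # lip corner puller, dimpler
    # One mean-intensity feature per emotion group
    return [sum(aus.get(a, 0.0) for a in group) / len(group)
            for group in (sad, disgust, contempt)]
```

A vector like this could then be fed to an SVM or similar classifier.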
Schizophrenia is a severe mental illness responsible for many of the world's disabilities. It significantly impacts human society; thus, rapid and efficient identification is required. This research aims to diagnose schizophrenia directly from high-resolution camera footage, which can capture the subtle facial micro-expressions that are difficult to spot with the naked eye. In a clinical study by a team of experts at Bahawal Victoria Hospital (BVH), Bahawalpur, Pakistan, 300 people with schizophrenia and 299 healthy subjects participated. Videos of these participants were captured and converted into frames using the OpenFace tool. Additionally, pose, gaze, Action Unit (AU), and landmark features were extracted into Comma-Separated Values (CSV) files. Aligned faces were used to detect schizophrenia with the proposed and pre-trained Convolutional Neural Network (CNN) models, i.e., VGG16, MobileNet, EfficientNet, GoogLeNet, and ResNet50. Moreover, the Vision Transformer, Swin Transformer, Big Transformer, and Vision Transformer without attention were also trained on the customized dataset. The CSV files were used to train models with logistic regression, decision tree, random forest, gradient boosting, and support vector machine classifiers. The parameters of the proposed CNN architecture were optimized using the Particle Swarm Optimization algorithm. The experimental results showed a validation accuracy of 99.6% for the proposed CNN model, demonstrating that the reported method is superior to previous methodologies. The model can be deployed in a real-time environment.
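Loading the extracted features back out of such a CSV export can be sketched with the standard library alone. The column-naming scheme (AU intensity columns such as `AU01_r`) mirrors OpenFace's usual output but is an assumption here:

```python
import csv
import io

def read_au_columns(csv_text):
    # Parse an OpenFace-style CSV and keep only the AU columns;
    # column names (e.g. "AU01_r") are assumed, not verified.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    au_cols = [c for c in rows[0] if c.strip().startswith("AU")]
    return [{c.strip(): float(r[c]) for c in au_cols} for r in rows]
```

The resulting per-frame dictionaries are ready to stack into a feature matrix for the classical classifiers mentioned above.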
This research explores the capacity of emerging technologies to enhance well-being. It involves the generation of 2D biophilically-driven geometries to represent human-response-oriented built environments and conducts inter- and intra-individual analyses to assess human responses using a range of technologies within the realms of facial micro-expression analysis and EEG biosensor use. The outcomes of this analysis allow for the grading of these geometries in terms of emotional valences, meditation levels, and subjective preferences. These graded geometries can subsequently be employed in specific architectural contexts, such as interior decor, wallpapers, furniture surfaces, or other architectural and interior components. It is an interdisciplinary effort that underscores the importance of incorporating emerging technological means with human-response-oriented design approaches to foster built environments that promote well-being.
Bipolar disorder is a serious mental condition that may be caused by any kind of stress or emotional upset experienced by the patient. It affects a large percentage of people globally, who fluctuate between depression and mania, or vice versa. A pleasant or unpleasant mood is more than a reflection of a state of mind. Diagnosis through physical examination is normally difficult due to the large patient-psychiatrist ratio, so automated procedures are the best option to diagnose bipolar disorder and verify its severity. In this research work, facial micro-expressions have been used for bipolar disorder detection with the proposed Convolutional Neural Network (CNN)-based model. The Facial Action Coding System (FACS) is used to extract micro-expressions, called Action Units (AUs), connected with sad, happy, and angry emotions. Experiments were conducted on a dataset collected from Bahawal Victoria Hospital, Bahawalpur, Pakistan, using the Patient Health Questionnaire-15 (PHQ-15) to infer each patient's mental state. The experimental results showed a validation accuracy of 98.99% for the proposed CNN model, while classification from the extracted features using Support Vector Machines (SVM), K-Nearest Neighbour (KNN), and Decision Tree (DT) classifiers obtained 99.9%, 98.7%, and 98.9% accuracy, respectively. Overall, the outcomes demonstrated the stated method's superiority over current best practices.
Facial micro-expressions are short and imperceptible expressions that involuntarily reveal the true emotions a person may be attempting to suppress, hide, disguise, or conceal. Such expressions can reflect a person's real emotions and have a wide range of applications in public safety and clinical diagnosis. The analysis of facial micro-expressions in video sequences through computer vision is still relatively recent. In this research, a comprehensive review of the databases and methods used for micro-expression spotting and recognition is conducted, and advanced technologies in this area are summarized. In addition, we discuss challenges that remain unresolved, alongside future work to be completed in the field of micro-expression analysis.
Micro-expressions are spontaneous, unconscious movements that reveal true emotions. Accurate facial movement information and network training methods are crucial for micro-expression recognition. However, most existing micro-expression recognition technologies focus on modeling a single category of micro-expression images and the neural network structure. Aiming at the problems of low recognition rate and weak model generalization in micro-expression recognition, a micro-expression recognition algorithm is proposed based on a graph convolution network (GCN) and a Transformer model. First, action unit (AU) features are detected, and the facial muscle nodes in the neighborhood are divided into three subsets for recognition. Then, a graph convolution layer is used to learn the layout of dependencies between AU nodes for micro-expression classification. Finally, the multiple attentional features of each facial action are enriched with the Transformer model to include more sequence information before calculating the overall correlation of each region. The proposed method is validated on the CASME II and CAS(ME)^2 datasets, and the recognition rate reaches 69.85%.
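A single graph-convolution step over AU nodes can be sketched as below. This uses the standard normalised propagation rule H' = ReLU(D^-1/2 (A+I) D^-1/2 H W) as a stand-in; the paper's exact layer may differ:

```python
import numpy as np

def gcn_layer(A, H, W):
    # A: (n, n) AU adjacency matrix; H: (n, f_in) node features;
    # W: (f_in, f_out) learnable weights. Standard normalised GCN step.
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                     # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D^-1/2
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
```

Stacking such layers lets information flow between dependent AU nodes before the Transformer stage.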
Micro-expression recognition has attracted growing research interest in the field of computer vision. However, a micro-expression usually lasts only a fraction of a second, so it is difficult to detect. This paper presents a new framework to recognize micro-expressions using a pyramid histogram of Centralized Gabor Binary Pattern from Three Orthogonal Panels (CGBP-TOP), an extension of the Local Gabor Binary Pattern from Three Orthogonal Panels feature. CGBP-TOP performs spatial and temporal analysis to capture the local facial characteristics of micro-expression image sequences. To preserve more local information of the face, CGBP-TOP is extracted over pyramid sub-regions of each micro-expression video frame. The combination of CGBP-TOP and the spatial pyramid effectively represents the facial movements of micro-expression image sequences. However, the dimension of the pyramid CGBP-TOP feature tends to be very high, which may lead to data redundancy. In addition, people of different genders often express micro-expressions differently. Therefore, to select the relevant micro-expression features, a gender-specific sparse multi-task learning method with an adaptive regularization term is adopted in this paper to learn a compact subset of the pyramid CGBP-TOP feature for micro-expression classification for each sex. Finally, extensive experiments on the widely used CASME II and SMIC databases demonstrate that our method can efficiently extract micro-expression motion features from micro-expression video clips, and that the proposed approach achieves results comparable with state-of-the-art methods.
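The "three orthogonal planes" idea underlying LBP-TOP-style features can be illustrated with a plain (non-Gabor, non-centralized) LBP code computed on the XY, XT, and YT planes of a video volume. This is a simplified sketch of the family of descriptors, not the paper's full CGBP-TOP:

```python
import numpy as np

def lbp_code(patch):
    # 8-neighbour LBP code for the centre pixel of a 3x3 patch
    c = patch[1, 1]
    neigh = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
             patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(int(v >= c) << i for i, v in enumerate(neigh))

def lbp_top(volume, t, y, x):
    # LBP codes on the three orthogonal planes through voxel (t, y, x)
    xy = volume[t, y-1:y+2, x-1:x+2]   # spatial plane
    xt = volume[t-1:t+2, y, x-1:x+2]   # horizontal-temporal plane
    yt = volume[t-1:t+2, y-1:y+2, x]   # vertical-temporal plane
    return [lbp_code(p) for p in (xy, xt, yt)]
```

Histograms of these codes over sub-regions give the spatio-temporal texture features that the pyramid variant concatenates at multiple scales.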
Aiming at the problems of short duration, low intensity, and difficult detection of micro-expressions (MEs), the global and local features of ME video frames are extracted by combining spatial and temporal feature extraction. Based on a traditional convolutional neural network (CNN) and long short-term memory (LSTM), a recognition method combining a global identification attention network (GIA), a block identification attention network (BIA), and bi-directional long short-term memory (Bi-LSTM) is proposed. In the BIA, each ME video frame is cropped, and training is carried out with 24 identification blocks (IBs), 10 IBs, and uncropped frames. To alleviate overfitting during training, we first extract the basic features of the preprocessed sequence through a transfer learning layer, and then extract the global and local spatial features of the output data through the GIA and BIA layers, respectively. In the BIA layer, the input data are cropped into local feature vectors with attention weights to extract the local features of the ME frames; in the GIA layer, the global features of the ME frames are extracted. Finally, after fusing the global and local feature vectors, the ME time-series information is extracted by the Bi-LSTM. The experimental results show that using IBs can significantly improve the model's ability to extract subtle facial features, and the model works best when 10 IBs are used.
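The block-cropping step can be sketched as a simple grid split. The grid shape is an assumption, since the abstract does not specify how the 10 or 24 blocks are laid out:

```python
import numpy as np

def crop_into_blocks(frame, rows, cols):
    # Split an (H, W) frame into rows*cols equal identification blocks (IBs);
    # any remainder pixels at the edges are dropped in this sketch.
    h, w = frame.shape
    bh, bw = h // rows, w // cols
    return [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]
```

Each block would then receive its own attention weight in the BIA layer.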
Pulse rate is one of the important characteristics of traditional Chinese medicine pulse diagnosis, and it is of great significance for determining the nature of cold and heat in diseases. The prediction of pulse rate from facial video is an exciting research field for obtaining palpation information through observational diagnosis. However, most studies focus on optimizing the algorithm on a small sample of participants without systematically investigating multiple influencing factors. A total of 209 participants and 2,435 facial videos, drawn from our self-constructed Multi-Scene Sign Dataset and public datasets, were used to perform a multi-level, multi-factor comprehensive comparison. The effects of different datasets, blood volume pulse signal extraction algorithms, regions of interest, time windows, color spaces, pulse rate calculation methods, and video recording scenes were analyzed. Furthermore, we proposed a blood volume pulse signal quality optimization strategy based on the inverse Fourier transform and an improved pulse rate estimation strategy based on signal-to-noise-ratio threshold sliding. We found that video-based pulse rate estimation performed better on the Multi-Scene Sign Dataset and the Pulse Rate Detection Dataset than on other datasets. Compared with the FastICA and Single Channel algorithms, the chrominance-based and plane-orthogonal-to-skin algorithms have stronger anti-interference ability and higher robustness. The five-organs fusion area and the full-face area performed better than single sub-regions, and fewer motion artifacts and better lighting improve the precision of pulse rate estimation.
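The final pulse-rate-from-signal step common to these pipelines can be sketched as a spectral peak search. This is a deliberately simplified stand-in (single channel, FFT peak in the plausible heart-rate band) rather than the chrominance or plane-orthogonal-to-skin algorithms themselves:

```python
import numpy as np

def estimate_pulse_rate(signal, fps):
    # signal: 1-D blood-volume-pulse trace sampled at the video frame rate.
    # Restrict to plausible heart rates (0.7-4 Hz, i.e. 42-240 bpm)
    # and return the dominant frequency in beats per minute.
    signal = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

Longer time windows narrow the frequency bins and therefore sharpen the estimate, which is one reason window length appears among the compared factors.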
Background The use of micro-expression recognition to recognize human emotions is one of the most critical challenges in human-computer interaction applications. In recent years, cross-database micro-expression recognition (CDMER) has emerged as a significant challenge in micro-expression recognition and analysis. Because the training and testing data in CDMER come from different micro-expression databases, CDMER is more challenging than conventional micro-expression recognition. Methods In this paper, an adaptive spatio-temporal attention neural network (ASTANN) using an attention mechanism is presented to address this challenge. To this end, the micro-expression databases SMIC and CASME II are first preprocessed using an optical flow approach, which extracts motion information between video frames that represents discriminative features of micro-expressions. After preprocessing, a novel adaptive framework with a spatio-temporal attention module was designed to assign spatial and temporal weights to enhance the most discriminative features. The deep neural network then extracts the cross-domain feature, in which the second-order statistics of the sample features in the source domain are aligned with those in the target domain by minimizing the correlation alignment (CORAL) loss, such that the source and target databases share similar distributions. Results To evaluate the performance of ASTANN, experiments were conducted on the SMIC and CASME II databases under the standard experimental evaluation protocol of CDMER. The experimental results demonstrate that ASTANN outperformed other methods in the relevant cross-database tasks.
Conclusions Extensive experiments were conducted on benchmark tasks, and the results show that ASTANN has superior performance compared with other approaches. This demonstrates the superiority of our method in solving the CDMER problem.
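The CORAL loss used for the domain alignment above has a compact closed form: the squared Frobenius distance between the source and target feature covariance matrices, scaled by 1/(4d²). A minimal sketch:

```python
import numpy as np

def coral_loss(Xs, Xt):
    # Xs: (n_s, d) source features, Xt: (n_t, d) target features.
    # CORAL loss = ||C_s - C_t||_F^2 / (4 d^2), where C is the covariance.
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)
    Ct = np.cov(Xt, rowvar=False)
    return np.sum((Cs - Ct) ** 2) / (4.0 * d * d)
```

Minimizing this term alongside the classification loss pushes the two databases toward similar second-order feature statistics.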
Aiming at the unsatisfactory performance of traditional micro-expression recognition algorithms, an efficient micro-expression recognition algorithm is proposed that uses convolutional neural networks (CNN) to extract the spatial features of micro-expressions and a long short-term memory network (LSTM) to extract time-domain features; the combination of CNN and LSTM forms the basis of micro-expression recognition. Among many CNN structures, the visual geometry group (VGG) network, with its small convolution kernels, was selected as the pre-network after comparison. Because deep learning training is difficult and prone to over-fitting, the dropout method and batch normalization are used in the VGG network. Two datasets, CASME and CASME II, are used for testing and comparison. To address the shortage of data, a starting frame is randomly determined and a fixed-length frame sequence is used as the standard; all sample frames of the entire dataset are read repeatedly to achieve traversal and data amplification. Finally, a high recognition rate of 67.48% is achieved.
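The random-start fixed-length sampling used for data amplification can be sketched in a few lines; the function name and interface are illustrative:

```python
import random

def sample_clip(num_frames, clip_len, rng=None):
    # Randomly choose a start frame so that a fixed-length clip
    # of clip_len consecutive frames fits inside the video.
    rng = rng or random.Random()
    start = rng.randint(0, num_frames - clip_len)
    return list(range(start, start + clip_len))
```

Repeatedly sampling clips this way traverses all frames of each sample over the course of training while presenting the network with many distinct sequences.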
A micro-expression lasts for a very short time, and its intensity is very subtle. Aiming at the problem of its low recognition rate, this paper proposes a new micro-expression recognition algorithm based on a three-dimensional convolutional neural network (3D-CNN), which can extract two-dimensional features in the spatial domain and one-dimensional features in the time domain simultaneously. The network structure is designed on the deep learning framework Keras, and the dropout method and batch normalization (BN) algorithm are effectively combined with the three-dimensional visual geometry group block (3D-VGG-Block) to reduce the risk of overfitting while improving training speed. To address the lack of samples in the dataset, two data amplification methods, image flipping and small-amplitude flipping, are used. Finally, the recognition rate on the dataset reaches 69.11%. Compared with the current international average micro-expression recognition rate of about 67%, the proposed algorithm has an obvious advantage in recognition rate.
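The image-flipping amplification step can be sketched for a batch of clips; the (N, T, H, W) layout is an assumption of this sketch:

```python
import numpy as np

def augment_flip(clips):
    # clips: (N, T, H, W) batch of micro-expression frame sequences.
    # Double the sample count by appending horizontally mirrored copies.
    flipped = clips[..., ::-1]          # mirror the width axis
    return np.concatenate([clips, flipped], axis=0)
```

Because a mirrored face carries the same expression label, this doubles the training set at no annotation cost.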
In this paper, we explore the process of emotional state transition, which is influenced by the emotional states of the interacting parties. First, the cognitive reasoning process and micro-expression recognition form the basis of the affective computing adjustment process. Second, threshold and attenuation functions are proposed to quantify emotional changes. In the actual environment, the emotional state of the robot and external stimuli are also quantified as transition probabilities. Finally, the Gaussian cloud distribution is introduced into the Gross model to calculate the emotional transition probabilities. The experimental results show that the model can effectively regulate emotional states in human-computer interaction and can significantly improve the humanoid and intelligent abilities of the robot. The model is consistent with experimental and simulation findings in psychology, and allows the robot to move beyond a purely mechanical emotional transfer process.
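The threshold and attenuation functions can be given minimal concrete forms. The exponential decay and the threshold value below are illustrative assumptions, not the paper's exact formulas:

```python
import math

def attenuation(e0, t, lam=0.5):
    # Emotional intensity decays exponentially back toward neutral;
    # lam (decay rate) is a hypothetical parameter.
    return e0 * math.exp(-lam * t)

def threshold_transfer(stimulus, theta=0.6):
    # A stimulus triggers a state transition only above threshold theta.
    return stimulus >= theta
```

Combined with stimulus-dependent transition probabilities, such functions let the modelled emotion rise sharply on strong stimuli and fade smoothly in their absence.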
Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop a FER system using a Faster Region-based Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep-learning-based FER method, contributing to the evolving landscape of emotion recognition technologies. The high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications. This research advances the field of FER and presents a compelling case for the practicality and efficacy of deep learning models in automating the understanding of facial emotions.
Background: The ear and face are indispensable and distinctive features for hearing and identification. Objectives: This study was designed to generate anthropometric data on the ear and facial indices of female Efik and Ibibio children in Cross River and Akwa Ibom States, and to show morphological, aesthetic, and ethnic differences. Methods: A total of 600 female children (300 Efik and 300 Ibibio) aged 2 to 10 years who met the inclusion criteria were chosen from selected primary schools in Calabar Municipality and Calabar South of Cross River State, and from Uyo and Itu of Akwa Ibom State, Nigeria. Standardized measurements of face length, face width, ear length, and ear width were taken with a spreading caliper; the facial (prosopic) and ear (auricular) indices were then determined. Results: Efik subjects presented a mean face length of 8.36 ± 0.06 cm, face width of 11.04 ± 0.04 cm, ear length of 4.92 ± 0.02 cm, and ear width of 3.06 ± 0.01 cm. Ibibio subjects had mean values for face length, face width, ear length, and ear width of 8.17 ± 0.05 cm, 10.75 ± 0.05 cm, 4.77 ± 0.03 cm, and 2.94 ± 0.02 cm, respectively. The mean facial and ear indices for Efik subjects were 75.68 ± 0.31 and 62.16 ± 0.27, respectively, while those for Ibibio subjects were 74.79 ± 0.36 and 61.80 ± 0.34. Statistical analysis demonstrated significant differences in face length, ear length, ear width, and facial index, with the Efik subjects having higher values than the Ibibio subjects. Conclusion: The results showed the hypereuryprosopic face as the prevalent face type among females of both ethnic groups, which can therefore be of importance in sex, ethnic, and racial differentiation, and in clinical practice, aesthetics, and forensic medicine.
The estimation of pain intensity is critical for the medical diagnosis and treatment of patients. With the development of image monitoring technology and artificial intelligence, automatic pain assessment based on facial expression and behavioral analysis shows potential value in clinical applications. This paper reports a framework of a convolutional neural network with a global and local attention mechanism (GLA-CNN) for the effective detection of pain intensity at four threshold levels using facial expression images. GLA-CNN includes two modules, the global attention network (GANet) and the local attention network (LANet). LANet is responsible for extracting representative local patch features of faces, while GANet extracts whole-face features to compensate for the correlative features between patches that would otherwise be ignored. In the end, the global correlational and local subtle features are fused for the final estimation of pain intensity. Experiments on the UNBC-McMaster Shoulder Pain database demonstrate that GLA-CNN outperforms other state-of-the-art methods. Additionally, a visualization analysis of the GLA-CNN feature maps intuitively shows that the network extracts not only local pain features but also global correlative facial ones. Our study demonstrates that pain assessment based on facial expression is a non-invasive and feasible method that can be employed as an auxiliary pain assessment tool in clinical practice.
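The local-patch attention idea can be illustrated with a minimal softmax-weighted pooling over patch features. This is a generic attention-pooling sketch, not the specific LANet/GANet design:

```python
import numpy as np

def attention_fuse(patch_feats, scores):
    # patch_feats: (P, d) features for P facial patches;
    # scores: (P,) learned relevance score per patch.
    # Softmax-normalise the scores and take the weighted sum.
    w = np.exp(scores - scores.max())   # subtract max for stability
    w = w / w.sum()
    return (w[:, None] * patch_feats).sum(axis=0)
```

A global branch would concatenate or add its whole-face feature to this pooled local vector before the final pain-level head.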
Background: Maxillofacial trauma mostly affects young adults. The injury assessment is difficult to establish in low-income countries because imaging means, particularly the CT scanner, are poorly available and less financially accessible. The aim of this study is to describe the epidemiological profile and the various computed tomography aspects of traumatic lesions of the face in patients received in the radiology department of Kira Hospital. Patients and methods: This is a descriptive retrospective study involving 104 patients of all ages, covering the period from December 2018 to November 2019, in the medical imaging department of Kira Hospital. We included any patient who had undergone a CT scan of the head and presented at least one lesion of the facial mass, whether or not associated with other cranioencephalic lesions. Results: Among the 384 patients received for head trauma, 104 patients (27.1% of cases) presented facial damage. The average age of our patients was 32.02 years, with extremes of 8 months and 79 years. In our study, 87 of the patients (83.6%) were male. A road accident was the circumstance in which facial trauma occurred in 79 patients (76% of cases). These injuries were accompanied by at least one bone fracture in 97 patients (93.3%). Patients with fractures of more than 3 facial bones accounted for 40.2% of cases, and those with fractures of 2 to 3 bones accounted for 44.6% of cases. The midface was the site of the fracture in 85 patients (87.6% of cases). Orbital wall fractures were noted in 57 patients (58.8% of cases), and the jawbone was the site of a fracture in 50 patients (51.5% of cases).
In the cranial vault, the fractures involved the extra-facial frontal bone (36.1% of cases) and the temporal bone (18.6% of cases). Cerebral contusion was noted in 41.2% of patients and pneumoencephaly in 15.5%. Extradural hematoma was present in 16 patients, and subdural hematoma affected 13 patients. Conclusion: Computed tomography is a diagnostic tool of choice in facial trauma patients. Most of these young patients present with multiple fractures localized to the mid-level of the face with concomitant involvement of the brain.
Automatically detecting learners' engagement levels helps to develop more effective online teaching and assessment programs, allowing teachers to provide timely feedback and make personalized adjustments based on students' needs to enhance teaching effectiveness. Traditional approaches mainly rely on single-frame multimodal facial spatial information, neglecting temporal emotional and behavioural features, and their accuracy is affected by significant pose variations. Additionally, convolutional padding can erode feature maps, weakening the representational capacity of feature extraction. To address these issues, we propose a hybrid neural network architecture, the redistributing facial features and temporal convolutional network (RefEIP). This network consists of three key components: first, the spatial attention mechanism large kernel attention (LKA) automatically captures local patches and mitigates the effects of pose variations; second, the feature organization and weight distribution (FOWD) module redistributes feature weights, eliminating the impact of white features and enhancing the representation in facial feature maps; finally, the modern temporal convolutional network (ModernTCN) module analyses the temporal changes across video frames to detect engagement levels. We constructed a near-infrared engagement video dataset (NEVD) to better validate the efficiency of the RefEIP network. Through extensive experiments and in-depth studies, we evaluated these methods on the NEVD and the Database for Affect in Situations of Elicitation (DAiSEE), achieving an accuracy of 90.8% on NEVD and 61.2% on DAiSEE in the four-class classification task, indicating significant advantages in addressing engagement video analysis problems.
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study conducts an extensive comparative analysis of machine learning classifiers for ASD detection using facial images, to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning-rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and its automated image-processing capabilities are utilized. The findings unequivocally establish VGG16 as the most effective classifier under a 5-fold cross-validation approach. Specifically, VGG16 with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, a precision of 85%, and a recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducted further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
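The 5-fold cross-validation protocol used above can be sketched without any ML library: split the sample indices into five near-equal validation folds, train on the remaining four each round, and average the scores:

```python
def kfold_indices(n, k=5):
    # Split indices 0..n-1 into k contiguous, near-equal validation folds;
    # the first n % k folds absorb one extra sample each.
    sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for s in sizes:
        folds.append(list(range(start, start + s)))
        start += s
    return folds
```

In practice the indices would be shuffled (and usually stratified by class) before splitting; this sketch shows only the partitioning logic.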
BACKGROUND Facial herpes is a common form of herpes simplex virus-1 infection and usually presents as vesicles near the mouth, nose, and periocular sites. In contrast, we observed a new facial presentation of herpes across the entire face without vesicles. CASE SUMMARY A 33-year-old woman with a history of varicella infection and shingles since an early age presented with sarcoidosis of the entire face and neuralgia without oral lesions. The patient was prescribed antiviral treatment with valacyclovir and acyclovir cream. One day after drug administration, the facial skin lesions and neurological pain improved. Herpes simplex without oral blisters can easily be misdiagnosed as pimples upon visual examination in an outpatient clinic. CONCLUSION As acute herpes simplex is accompanied by neuralgia, prompt diagnosis and prescription are necessary, considering the patient's pathological history and health condition.
Abstract: Depression is a psychological disorder that may cause physical illness or lead to death. It strongly affects a person's socioeconomic life; therefore, effective and timely detection is needed. Besides speech and gait, facial expressions carry valuable clues to depression. This study proposes a depression detection system based on facial expression analysis. Facial features are used for depression detection with a Support Vector Machine (SVM) and a Convolutional Neural Network (CNN). We extracted micro-expressions using the Facial Action Coding System (FACS) as Action Units (AUs) correlated with the sadness, disgust, and contempt features relevant to depression. A CNN-based model is also proposed to automatically classify depressed subjects from images or videos in real time. Experiments were performed on a dataset obtained from Bahawal Victoria Hospital, Bahawalpur, Pakistan, labelled according to the Patient Health Questionnaire depression scale (PHQ-8) to infer the mental condition of each patient. The experiments revealed 99.9% validation accuracy for the proposed CNN model, while the extracted features achieved 100% accuracy with SVM. Moreover, the results demonstrated the superiority of the reported approach over state-of-the-art methods.
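The SVM stage above can be sketched with a minimal hinge-loss linear SVM trained by sub-gradient descent on AU-style feature vectors. This is not the authors' implementation; the toy data, learning rate, and regularization constant are illustrative assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Sub-gradient descent on the L2-regularised hinge loss.
    X: (n, d) AU-intensity features; y: labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:                       # point violates the margin
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:                                # only shrink the weights
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    """Sign of the decision function gives the predicted class."""
    return np.sign(X @ w + b)
```

A real pipeline would feed per-frame AU intensities (e.g., from an AU extractor) into `X`, with depressed/control labels in `y`.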
Abstract: Schizophrenia is a severe mental illness responsible for many of the world's disabilities. It significantly impacts human society; thus, rapid and efficient identification is required. This research aims to diagnose schizophrenia directly from high-resolution camera footage, which can capture the subtle micro facial expressions that are difficult to spot with the naked eye. In a clinical study by a team of experts at Bahawal Victoria Hospital (BVH), Bahawalpur, Pakistan, 300 people with schizophrenia and 299 healthy subjects were enrolled. Videos of these participants were captured and converted into frames using the OpenFace tool. Additionally, pose, gaze, Action Units (AUs), and landmark features were extracted into Comma-Separated Values (CSV) files. Aligned faces were used to detect schizophrenia with the proposed and pre-trained Convolutional Neural Network (CNN) models, i.e., VGG16, MobileNet, EfficientNet, GoogLeNet, and ResNet50. Moreover, the Vision Transformer, Swin Transformer, Big Transformer, and Vision Transformer without attention were also trained on the customized dataset. The CSV files were used to train models with logistic regression, decision tree, random forest, gradient boosting, and support vector machine classifiers. The parameters of the proposed CNN architecture were optimized using the Particle Swarm Optimization algorithm. The experimental results showed a validation accuracy of 99.6% for the proposed CNN model, demonstrating that the reported method is superior to previous methodologies. The model can be deployed in a real-time environment.
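The Particle Swarm Optimization step can be illustrated with a generic PSO minimizer. The objective below is a stand-in; in the paper it would be a validation-loss function of the CNN hyperparameters:

```python
import numpy as np

def pso(f, dim, n=30, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimise f over [lo, hi]^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))           # particle positions
    v = np.zeros((n, dim))                      # particle velocities
    pbest = x.copy()                            # personal best positions
    pval = np.apply_along_axis(f, 1, x)         # personal best values
    g = pbest[pval.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

For hyperparameter search, each dimension of a particle would encode one tunable parameter (e.g., learning rate, dropout rate) and `f` would train and score the model.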
Abstract: This research explores the capacity of emerging technologies to enhance well-being. It involves the generation of 2D biophilically driven geometries to represent human-response-oriented built environments, and conducts inter- and intra-individual analyses to assess human responses using a range of technologies within the realms of facial micro-expression analysis and EEG biosensor use. The outcomes of this analysis allow these geometries to be graded in terms of emotional valence, meditation level, and subjective preference. The graded geometries can subsequently be employed in specific architectural contexts, such as interior decor, wallpapers, furniture surfaces, or other architectural and interior components. This interdisciplinary effort underscores the importance of combining emerging technological means with human-response-oriented design approaches to foster built environments that promote well-being.
Abstract: Bipolar disorder is a serious mental condition that may be triggered by stress or emotional upset experienced by the patient. It affects a large percentage of people globally, who fluctuate between depression and mania, or vice versa. A pleasant or unpleasant mood is more than a reflection of a state of mind. Diagnosis through physical examination alone is normally difficult because of the large patient-to-psychiatrist ratio, so automated procedures are the best option for diagnosing and verifying the severity of bipolar disorder. In this work, facial micro-expressions are used for bipolar detection with the proposed Convolutional Neural Network (CNN)-based model. The Facial Action Coding System (FACS) is used to extract micro-expressions, called Action Units (AUs), connected with sad, happy, and angry emotions. Experiments were conducted on a dataset collected from Bahawal Victoria Hospital, Bahawalpur, Pakistan, using the Patient Health Questionnaire-15 (PHQ-15) to infer each patient's mental state. The experimental results showed a validation accuracy of 98.99% for the proposed CNN model, while classification on the extracted features using Support Vector Machine (SVM), K-Nearest Neighbour (KNN), and Decision Tree (DT) classifiers obtained 99.9%, 98.7%, and 98.9% accuracy, respectively. Overall, the outcomes demonstrated the superiority of the stated method over current best practices.
Abstract: Facial micro-expressions are short and imperceptible expressions that involuntarily reveal the true emotions a person may be attempting to suppress, hide, disguise, or conceal. Such expressions can reflect a person's real emotions and have a wide range of applications in public safety and clinical diagnosis. The analysis of facial micro-expressions in video sequences through computer vision is still relatively recent. This research conducts a comprehensive review of the databases and methods used for micro-expression spotting and recognition, and summarizes advanced technologies in this area. In addition, we discuss the challenges that remain unresolved, alongside future work to be completed in the field of micro-expression analysis.
Funding: Supported by the Shaanxi Province Key Research and Development Project (2021GY-280) and the National Natural Science Foundation of China (Nos. 61834005, 61772417, 61802304).
Abstract: Micro-expressions are spontaneous, unconscious movements that reveal true emotions. Accurate facial movement information and network training methods are crucial for micro-expression recognition. However, most existing micro-expression recognition technologies focus on modeling a single category of micro-expression images and the neural network structure. Aiming at the problems of low recognition rate and weak model generalization in micro-expression recognition, a micro-expression recognition algorithm based on a graph convolution network (GCN) and a Transformer model is proposed. Firstly, action unit (AU) features are detected, and the facial muscle nodes in the neighborhood are divided into three subsets for recognition. Then, a graph convolution layer is used to learn the layout of dependencies between AU nodes for micro-expression classification. Finally, multiple attentional features of each facial action are enriched with the Transformer model to include more sequence information before calculating the overall correlation of each region. The proposed method is validated on the CASME II and CAS(ME)^2 datasets, and the recognition rate reaches 69.85%.
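A single graph-convolution layer over AU nodes, the core operation in such GCN pipelines, can be sketched as follows (the adjacency matrix, node features, and weights below are toy placeholders, not the paper's learned values):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 H W).
    A: (n, n) AU adjacency; H: (n, f) node features; W: (f, f') weights."""
    A_hat = A + np.eye(len(A))                  # add self-loops
    d = A_hat.sum(1)                            # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))      # symmetric normalisation
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)
```

Stacking such layers lets each AU node aggregate information from its graph neighbourhood before the Transformer stage models sequence-level correlations.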
Funding: This work is funded by the Natural Science Foundation of Jiangsu Province (No. BK20150471), the Natural Science Foundation of the Higher Education Institutions of Jiangsu Province (No. 17KJB520007), the Key Research and Development Program of Zhenjiang (Social Development, No. SH2018005), the Scientific Research Fund of Jiangsu University of Science and Technology (No. 1132921402, No. 1132931803), and the Basic Science and Frontier Technology Research Program of the Chongqing Municipal Science and Technology Commission (cstc2016jcyjA0407).
Abstract: Micro-expression recognition has attracted growing research interest in the field of computer vision. However, a micro-expression lasts only a very short time, and is thus difficult to detect. This paper presents a new framework to recognize micro-expressions using a pyramid histogram of Centralized Gabor Binary Pattern from Three Orthogonal Panels (CGBP-TOP), an extension of the Local Gabor Binary Pattern from Three Orthogonal Panels feature. CGBP-TOP performs spatial and temporal analysis to capture the local facial characteristics of micro-expression image sequences. To preserve more local information of the face, CGBP-TOP is extracted from pyramid sub-regions of the micro-expression video frames. The combination of CGBP-TOP and the spatial pyramid can faithfully represent the facial movements of micro-expression image sequences. However, the dimension of the pyramid CGBP-TOP tends to be very high, which may lead to data redundancy. In addition, people of different genders usually exhibit different micro-expression patterns. Therefore, to select the relevant micro-expression features, a gender-specific sparse multi-task learning method with an adaptive regularization term is adopted to learn a compact subset of the pyramid CGBP-TOP feature for micro-expression classification of each sex. Finally, extensive experiments on the widely used CASME II and SMIC databases demonstrate that our method can efficiently extract micro-expression motion features from video clips and achieves results comparable with the state-of-the-art methods.
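A simplified flavour of "three orthogonal planes" features (without the Gabor filtering and spatial pyramid of the paper's CGBP-TOP) can be sketched as follows. As an illustrative shortcut, LBP codes are computed only on the three central slices of the video volume rather than over all planes:

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP codes for the interior pixels of a 2-D array."""
    c = img[1:-1, 1:-1]                          # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int64)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]   # shifted neighbour view
        code |= (nb >= c).astype(np.int64) << bit
    return code

def lbp_top(volume):
    """Concatenate 256-bin LBP histograms from the XY, XT and YT planes
    of a (T, H, W) grey-level video volume (central slices only)."""
    T, H, W = volume.shape
    planes = [volume[T // 2],          # XY plane (one frame)
              volume[:, H // 2, :],    # XT plane
              volume[:, :, W // 2]]    # YT plane
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256) for p in planes]
    return np.concatenate(hists)       # 768-dimensional descriptor
```

The full CGBP-TOP would first convolve frames with a Gabor bank, compute such codes over every plane position, and pool histograms over pyramid sub-regions.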
Funding: Supported by the Natural Science Foundation of Hunan Province, China (Grant Nos. 2021JJ50058, 2022JJ50051), the Open Platform Innovation Foundation of the Hunan Provincial Education Department (Grant No. 20K046), and the Scientific Research Fund of the Hunan Provincial Education Department, China (Grant Nos. 21A0350, 21C0439, 19A133).
Abstract: Aiming at the short duration, low intensity, and difficult detection of micro-expressions (MEs), the global and local features of ME video frames are extracted by combining spatial and temporal feature extraction. Based on the traditional convolutional neural network (CNN) and long short-term memory (LSTM), a recognition method combining a global identification attention network (GIA), a block identification attention network (BIA), and bi-directional long short-term memory (Bi-LSTM) is proposed. In the BIA, each ME video frame is cropped, and training is carried out with 24 identification blocks (IBs), 10 IBs, and uncropped frames. To alleviate overfitting during training, the basic features of the preprocessed sequence are first extracted through a transfer learning layer, and then the global and local spatial features of the output data are extracted through the GIA layer and the BIA layer, respectively. In the BIA layer, the input data are cropped into local feature vectors with attention weights to extract the local features of the ME frames; in the GIA layer, the global features of the ME frames are extracted. Finally, after fusing the global and local feature vectors, the ME time-series information is extracted by the Bi-LSTM. The experimental results show that using IBs significantly improves the model's ability to extract subtle facial features, and the model works best with 10 IBs.
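Cropping a frame into identification blocks can be sketched as a simple grid split. The paper does not specify the exact IB layout, so the 4x6 grid used here for 24 blocks is an assumption:

```python
import numpy as np

def crop_identification_blocks(frame, rows, cols):
    """Split an H x W frame into rows*cols equally sized blocks
    (any remainder pixels at the right/bottom edges are dropped)."""
    H, W = frame.shape[:2]
    bh, bw = H // rows, W // cols
    return [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]
```

Each block would then be encoded to a local feature vector and weighted by the BIA attention before fusion with the global features.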
Funding: Supported by the Key Research Program of the Chinese Academy of Sciences (Grant No. ZDRW-ZS-2021-1-2).
Abstract: Pulse rate is one of the important characteristics of traditional Chinese medicine pulse diagnosis, and it is of great significance for determining the cold or heat nature of diseases. Predicting pulse rate from facial video is an exciting research direction for obtaining palpation information through observation. However, most studies focus on optimizing algorithms on small samples of participants without systematically investigating multiple influencing factors. A total of 209 participants and 2,435 facial videos, drawn from our self-constructed Multi-Scene Sign Dataset and public datasets, were used to perform a multi-level, multi-factor comprehensive comparison. The effects of different datasets, blood volume pulse signal extraction algorithms, regions of interest, time windows, color spaces, pulse rate calculation methods, and video recording scenes were analyzed. Furthermore, we propose a blood volume pulse signal quality optimization strategy based on the inverse Fourier transform, and an improvement strategy for pulse rate estimation based on signal-to-noise ratio threshold sliding. We found that video-based pulse rate estimation performed better on the Multi-Scene Sign Dataset and the Pulse Rate Detection Dataset than on the other datasets. Compared with the FastICA and Single Channel algorithms, the chrominance-based and plane-orthogonal-to-skin algorithms showed stronger anti-interference ability and higher robustness. The five-organ fusion area and the full-face area performed better than single sub-regions, and fewer motion artifacts and better lighting improved the precision of pulse rate estimation.
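A minimal frequency-domain pulse-rate estimator of the kind compared in such studies can be sketched as follows; the band limits and the synthetic test signal are illustrative, and a real pipeline would first extract the blood-volume-pulse signal from the facial region of interest:

```python
import numpy as np

def pulse_rate_bpm(signal, fs, lo=0.7, hi=3.0):
    """Estimate pulse rate as the dominant FFT peak inside the
    plausible heart-rate band (0.7-3.0 Hz, i.e. 42-180 BPM).
    signal: 1-D mean-intensity trace; fs: sampling rate in Hz."""
    x = signal - signal.mean()                  # remove the DC component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo) & (freqs <= hi)        # restrict to heart-rate band
    return 60.0 * freqs[band][power[band].argmax()]
```

The paper's SNR-threshold strategy would additionally reject windows whose in-band peak is not sufficiently stronger than the residual spectrum.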
Abstract: Background The use of micro-expression recognition to recognize human emotions is one of the most critical challenges in human-computer interaction applications. In recent years, cross-database micro-expression recognition (CDMER) has emerged as a significant challenge in micro-expression recognition and analysis. Because the training and testing data in CDMER come from different micro-expression databases, CDMER is more challenging than conventional micro-expression recognition. Methods In this paper, an adaptive spatio-temporal attention neural network (ASTANN) using an attention mechanism is presented to address this challenge. To this end, the micro-expression databases SMIC and CASME II are first preprocessed using an optical flow approach, which extracts motion information among video frames representing the discriminative features of micro-expressions. After preprocessing, a novel adaptive framework with a spatio-temporal attention module is designed to assign spatial and temporal weights that enhance the most discriminative features. The deep neural network then extracts cross-domain features, in which the second-order statistics of the sample features in the source domain are aligned with those in the target domain by minimizing the correlation alignment (CORAL) loss, so that the source and target databases share similar distributions. Results To evaluate the performance of ASTANN, experiments were conducted on the SMIC and CASME II databases under the standard experimental evaluation protocol of CDMER. The experimental results demonstrate that ASTANN outperformed other methods in the relevant cross-database tasks. Conclusions Extensive experiments were conducted on benchmark tasks, and the results show that ASTANN has superior performance compared with other approaches, demonstrating the superiority of our method in solving the CDMER problem.
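The CORAL loss used to align second-order statistics has a compact closed form, sketched here in NumPy over feature batches; the 1/(4 d^2) scaling follows the standard CORAL formulation:

```python
import numpy as np

def coral_loss(Xs, Xt):
    """CORAL loss: squared Frobenius distance between the source and
    target feature covariances, scaled by 1 / (4 d^2).
    Xs: (ns, d) source features; Xt: (nt, d) target features."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False)               # source covariance (d, d)
    Ct = np.cov(Xt, rowvar=False)               # target covariance (d, d)
    return float(((Cs - Ct) ** 2).sum() / (4 * d * d))
```

In training, this quantity is added to the classification loss so the network learns features whose covariance structure matches across databases.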
Funding: Supported by the Shaanxi Province Key Research and Development Project (No. 2021GY-280), the Shaanxi Province Natural Science Basic Research Program Project (No. 2021JM-459), the National Natural Science Foundation of China (Nos. 61834005, 61772417, 61802304, 61602377, 61634004), and the Shaanxi Province International Science and Technology Cooperation Project (No. 2018KW-006).
Abstract: Aiming at the unsatisfactory performance of traditional micro-expression recognition algorithms, an efficient micro-expression recognition algorithm is proposed, which uses convolutional neural networks (CNN) to extract the spatial features of micro-expressions and a long short-term memory network (LSTM) to extract the temporal features. The combination of CNN and LSTM forms the basis of micro-expression recognition. Among many CNN structures, the visual geometry group (VGG) network with small convolution kernels is selected as the pre-network after comparison. Because deep learning training is difficult and prone to overfitting, the dropout method and batch normalization are used within the VGG network. Two datasets, CASME and CASME II, are used for test comparison. To mitigate the insufficient size of the datasets, the starting frame is determined randomly, a fixed-length frame sequence is used as the standard, and all sample frames of the entire dataset are read repeatedly to achieve traversal and data amplification. Finally, a high recognition rate of 67.48% is achieved.
Funding: Supported by the Shaanxi Province Key Research and Development Project (No. 2021GY-280), the Shaanxi Province Natural Science Basic Research Program Project (No. 2021JM-459), the National Natural Science Foundation of China (Nos. 61834005, 61772417, 61802304, 61602377, 61634004), and the Shaanxi Province International Science and Technology Cooperation Project (No. 2018KW-006).
Abstract: A micro-expression lasts for a very short time and its intensity is very subtle. Aiming at its low recognition rate, this paper proposes a new micro-expression recognition algorithm based on a three-dimensional convolutional neural network (3D-CNN), which can simultaneously extract two-dimensional features in the spatial domain and one-dimensional features in the time domain. The network structure is designed on the deep learning framework Keras, and the dropout method and batch normalization (BN) algorithm are effectively combined with the three-dimensional visual geometry group block (3D-VGG-Block) to reduce the risk of overfitting while improving training speed. To address the lack of samples in the dataset, two methods, image flipping and small-amplitude flipping, are used for data amplification. Finally, the recognition rate on the dataset reaches 69.11%. Compared with the current international average micro-expression recognition rate of about 67%, the proposed algorithm has obvious advantages in recognition rate.
Abstract: In this paper, we explore the process of emotional state transition, a process that is influenced by the emotional states of the interaction objects. First of all, the cognitive reasoning process and micro-expression recognition form the basis of the affective computing adjustment process. Secondly, a threshold function and an attenuation function are proposed to quantify emotional changes. In the actual environment, the emotional state of the robot and the external stimulus are also quantified as the transition probability. Finally, the Gaussian cloud distribution is introduced into the Gross model to calculate the emotional transition probabilities. The experimental results show that the model can effectively regulate emotional states in human-computer interaction and significantly improve the humanoid and intelligent abilities of the robot. The model is consistent with psychological experiments and simulations, and allows the robot to move beyond a purely mechanical emotional transfer process.
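The abstract does not specify the threshold and attenuation functions, so the following is one plausible sketch under assumed forms: exponential decay of emotional intensity below the stimulus threshold, and bounded excitation above it.

```python
import numpy as np

def attenuate(intensity, decay=0.3, steps=1):
    """Exponential attenuation of emotional intensity without stimulus."""
    return intensity * np.exp(-decay * steps)

def transition(intensity, stimulus, threshold=0.5, gain=0.4, decay=0.3):
    """One emotional-state update step: a stimulus at or above the
    threshold excites the emotion (capped at 1.0); otherwise the
    emotion decays toward the calm state."""
    if stimulus >= threshold:
        return min(1.0, intensity + gain * stimulus)
    return attenuate(intensity, decay)
```

In the paper's full model, the update would instead draw transition probabilities from a Gaussian cloud distribution layered on the Gross regulation model.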
Abstract: Facial emotion recognition (FER) has become a focal point of research due to its widespread applications, ranging from human-computer interaction to affective computing. While traditional FER techniques have relied on handcrafted features and classification models trained on image or video datasets, recent strides in artificial intelligence and deep learning (DL) have ushered in more sophisticated approaches. This research aims to develop an FER system using a Faster Region Convolutional Neural Network (FRCNN) and to design a specialized FRCNN architecture tailored for facial emotion recognition, leveraging its ability to capture spatial hierarchies within localized regions of facial features. The proposed work enhances the accuracy and efficiency of facial emotion recognition and comprises two major components: Inception V3-based feature extraction and FRCNN-based emotion categorization. Extensive experimentation on Kaggle datasets validates the effectiveness of the proposed strategy, showcasing the FRCNN approach's resilience and accuracy in identifying and categorizing facial expressions. The model's overall performance metrics are compelling, with an accuracy of 98.4%, precision of 97.2%, and recall of 96.31%. This work introduces a perceptive deep learning-based FER method, contributing to the evolving landscape of emotion recognition technologies; the high accuracy and resilience demonstrated by the FRCNN approach underscore its potential for real-world applications.
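The reported accuracy, precision, and recall are all determined by the confusion counts of the classifier; a small helper makes the relationships explicit (the counts below are illustrative, not the paper's):

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from binary confusion counts:
    tp/fp = true/false positives, fn/tn = false/true negatives."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)                   # of predicted positives
    recall = tp / (tp + fn)                      # of actual positives
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```

For multi-class FER, these are computed per emotion class and then macro- or micro-averaged.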
Abstract: Background: The ear and face are indispensable and distinctive features for hearing and identification. Objectives: This study was designed to generate anthropometric data on the ear and facial indices of female Efik and Ibibio children in Cross River and Akwa Ibom States, and to show morphological, aesthetic, and ethnic differences. Methods: A total of 600 female children (300 Efik and 300 Ibibio) aged 2 to 10 years who met the inclusion criteria were chosen from selected primary schools in Calabar Municipality and Calabar South of Cross River State, and from Uyo and Itu of Akwa Ibom State, Nigeria. Standardized measurements of face length, face width, ear length, and ear width were taken with a spreading caliper; the facial (prosopic) and ear (auricular) indices were determined. Results: Efik subjects presented a mean face length of 8.36 ± 0.06 cm, face width of 11.04 ± 0.04 cm, ear length of 4.92 ± 0.02 cm, and ear width of 3.06 ± 0.01 cm. Ibibio subjects had mean values for face length, face width, ear length, and ear width of 8.17 ± 0.05 cm, 10.75 ± 0.05 cm, 4.77 ± 0.03 cm, and 2.94 ± 0.02 cm, respectively. The mean facial index and ear index for Efik subjects were 75.68 ± 0.31 and 62.16 ± 0.27, respectively, while the mean facial and ear indices for Ibibio subjects were 74.79 ± 0.36 and 61.80 ± 0.34, respectively. Statistical analysis demonstrated significant differences in face length, ear length, ear width, and facial index, with the Efik subjects having higher values than the Ibibio subjects. Conclusion: The results showed the hypereuryprosopic face as the prevalent face type among females of both ethnic groups, and can therefore be of importance in sex, ethnic, and racial differentiation, and in clinical practice, aesthetics, and forensic medicine.
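The facial (prosopic) and auricular indices used above have simple closed forms. Note that applying them to the reported mean lengths gives values close to, but not identical with, the reported mean indices, since a mean of per-subject ratios differs from a ratio of means:

```python
def facial_index(face_length_cm, face_width_cm):
    """Facial (prosopic) index = (face length / face width) x 100."""
    return 100.0 * face_length_cm / face_width_cm

def ear_index(ear_width_cm, ear_length_cm):
    """Auricular index = (ear width / ear length) x 100."""
    return 100.0 * ear_width_cm / ear_length_cm
```

For example, the Efik group means (face 8.36 x 11.04 cm, ear 4.92 x 3.06 cm) yield indices of about 75.7 and 62.2, consistent with the reported 75.68 and 62.16.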
Funding: Supported by the National Natural Science Foundation of China under Grant No. 62276051, the Natural Science Foundation of Sichuan Province under Grant No. 2023NSFSC0640, and the Medical Industry Information Integration Collaborative Innovation Project of the Yangtze Delta Region Institute under Grant No. U0723002.
Abstract: The estimation of pain intensity is critical for the medical diagnosis and treatment of patients. With the development of image monitoring technology and artificial intelligence, automatic pain assessment based on facial expression and behavioral analysis shows potential value in clinical applications. This paper reports a convolutional neural network with a global and local attention mechanism (GLA-CNN) for the effective detection of pain intensity at four threshold levels using facial expression images. GLA-CNN includes two modules: a global attention network (GANet) and a local attention network (LANet). LANet is responsible for extracting representative local patch features of faces, while GANet extracts whole-face features to compensate for the correlative features between patches that would otherwise be ignored. Finally, the global correlational and local subtle features are fused for the estimation of pain intensity. Experiments on the UNBC-McMaster Shoulder Pain database demonstrate that GLA-CNN outperforms other state-of-the-art methods. Additionally, a visualization analysis of GLA-CNN's feature maps intuitively shows that it extracts not only local pain features but also global correlative facial ones. Our study demonstrates that pain assessment based on facial expression is a non-invasive and feasible method that can be employed as an auxiliary pain assessment tool in clinical practice.
Abstract: Background: Maxillofacial trauma mostly affects young adults. Injury assessment is difficult to establish in low-income countries because of limited imaging resources, particularly CT scanners, which are poorly available and less financially accessible. The aim of this study is to describe the epidemiological profile and the various CT aspects of traumatic lesions of the face in patients received in the radiology department of Kira Hospital. Patients and methods: This is a descriptive retrospective study of 104 patients of all ages over a period of 2 years, from December 2018 to November 2019, in the medical imaging department of Kira Hospital. We included any patient who had undergone a CT scan of the head and presented at least one lesion of the facial mass, whether or not associated with other cranioencephalic lesions. Results: Among the 384 patients received for head trauma, 104 patients (27.1% of cases) presented facial damage. The average age of our patients was 32.02 years, with extremes of 8 months and 79 years. In our study, 87 of the patients (83.6%) were male. Road accidents were the circumstance in which facial trauma occurred in 79 patients (76% of cases). These injuries were accompanied by at least one bone fracture in 97 patients (93.3%). Patients with fractures of more than 3 facial bones accounted for 40.2% of cases, and those with fractures of 2 to 3 bones accounted for 44.6% of cases. The midface was the site of the fracture in 85 patients (87.6% of cases). Orbital wall fractures were noted in 57 patients (58.8% of cases), and the jawbone was the site of a fracture in 50 patients (51.5% of cases). In the vault, the fractures involved the extra-facial frontal bone (36.1% of cases) and the temporal bone (18.6% of cases). Cerebral contusion was noted in 41.2% of patients and pneumoencephaly in 15.5% of patients. Extradural hematoma was present in 16 patients, and subdural hematoma affected 13 patients.
Conclusion: Computed tomography is a diagnostic tool of choice in facial trauma patients. Most of these young patients present with multiple fractures localizing to the mid-level of the face with concomitant involvement of the brain.
Funding: Supported by the National Natural Science Foundation of China (No. 62367006) and the Graduate Innovative Fund of Wuhan Institute of Technology (Grant No. CX2023551).
Abstract: Automatically detecting learners' engagement levels helps to develop more effective online teaching and assessment programs, allowing teachers to provide timely feedback and make personalized adjustments based on students' needs to enhance teaching effectiveness. Traditional approaches mainly rely on single-frame multimodal facial spatial information, neglecting temporal emotional and behavioural features, and their accuracy is affected by significant pose variations. Additionally, convolutional padding can erode feature maps, reducing the representational capacity of feature extraction. To address these issues, we propose a hybrid neural network architecture, the redistributing facial features and temporal convolutional network (RefEIP). This network consists of three key components: first, the spatial attention mechanism large kernel attention (LKA) is used to automatically capture local patches and mitigate the effects of pose variations; second, the feature organization and weight distribution (FOWD) module redistributes feature weights, eliminating the impact of white features and enhancing the representation in facial feature maps; finally, the modern temporal convolutional network (ModernTCN) module analyses the temporal changes across video frames to detect engagement levels. We constructed a near-infrared engagement video dataset (NEVD) to better validate the efficiency of the RefEIP network. Through extensive experiments, we evaluated these methods on the NEVD and the Database for Affect in Situations of Elicitation (DAiSEE), achieving an accuracy of 90.8% on NEVD and 61.2% on DAiSEE in the four-class classification task, indicating significant advantages in addressing engagement video analysis problems.
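A TCN-style depthwise temporal convolution, the core operation behind modules such as ModernTCN, can be sketched as follows (the kernel sizes and features are placeholders, not the paper's configuration):

```python
import numpy as np

def depthwise_temporal_conv(x, kernels):
    """Depthwise 1-D convolution over time: each feature channel is
    convolved with its own kernel using 'same' zero-padding, as in
    TCN-style blocks.  x: (T, C) frame-level features; kernels: (C, K)
    with K odd so the output keeps length T."""
    T, C = x.shape
    K = kernels.shape[1]
    pad = K // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))        # zero-pad the time axis
    out = np.empty_like(x, dtype=float)
    for c in range(C):
        out[:, c] = np.convolve(xp[:, c], kernels[c], mode="valid")[:T]
    return out
```

Stacking such layers with increasing dilation lets the network aggregate engagement cues over progressively longer spans of video frames.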