OBJECTIVE: The objective of this study was to summarize and analyze the brain signal patterns of empathy for pain evoked by facial expressions of pain, using activation likelihood estimation, a meta-analysis method. DATA SOURCES: Studies on the underlying brain mechanism were retrieved from the Science Citation Index, ScienceDirect, PubMed, DeepDyve, the Cochrane Library, SinoMed, Wanfang, VIP, and China National Knowledge Infrastructure, as well as from other databases such as SpringerLink, AMA, Science Online, and Wiley Online. Searches covered publications up to 13 December 2016. DATA SELECTION: Studies meeting all of the following criteria were considered for inclusion: use of functional magnetic resonance imaging; neutral and pained facial expression stimuli; healthy adult participants over 18 years of age, with empathic ability no different from that of healthy adults and a pain-free baseline state; results reported in Talairach or Montreal Neurological Institute coordinates; multiple studies by the same team were admitted only if they used different raw data. OUTCOME MEASURES: Activation likelihood estimation was used to calculate the main brain regions jointly activated under the stimulation of pained facial expressions. RESULTS: Eight studies with 178 subjects were included. The meta-analysis suggested that the anterior cingulate cortex (BA32), anterior central gyrus (BA44), fusiform gyrus, and insula (BA13) were the major brain areas positively activated by pained facial expressions.
CONCLUSION: Our study shows that pained facial expressions alone, without viewing of painful stimuli, activated brain regions related to pain empathy, further helping to reveal the brain's mechanisms of pain empathy.
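The core idea of activation likelihood estimation can be illustrated in a few lines: each reported activation focus is blurred into a 3-D Gaussian probability map, and the maps are combined voxel-wise as a probabilistic union. The sketch below is a toy illustration only; the grid size, FWHM, and unnormalized kernels are assumptions, and it omits the permutation-based significance testing of real ALE software.

```python
import numpy as np

def ale_map(foci, shape=(20, 20, 20), fwhm=3.0):
    """Toy activation likelihood estimation: model each reported focus
    as a 3-D Gaussian modeled-activation map, then combine the maps at
    every voxel with the probabilistic union 1 - prod(1 - p_i)."""
    sigma = fwhm / 2.355                      # convert FWHM to std. dev.
    grid = np.indices(shape).reshape(3, -1).T  # all voxel coordinates
    survivor = np.ones(grid.shape[0])          # running prod(1 - p_i)
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
        p = np.exp(-d2 / (2 * sigma ** 2))     # unnormalized Gaussian
        survivor *= 1.0 - p
    return (1.0 - survivor).reshape(shape)

# Two nearby foci reinforce each other; an isolated focus stands alone.
m = ale_map([(5, 5, 5), (6, 5, 5), (15, 15, 15)])
```

Voxels near several reported foci approach an ALE value of 1, while voxels far from all foci stay near 0, which is what lets the method localize convergent activation across studies.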
Local binary pattern (LBP) is an important method for texture feature extraction in facial expression analysis. However, it suffers from high dimensionality, slow feature extraction, and a failure to extract effective local or global features. To solve these problems, a facial expression feature extraction method based on an improved LBP is proposed. First, LBP is converted into a double local binary pattern (DLBP). Then, by combining a Taylor expansion (TE) with DLBP, the DLBP-TE algorithm is obtained. Finally, the DLBP-TE algorithm, combined with an extreme learning machine (ELM), is applied to seven kinds of facial expression images, and the corresponding experiments are carried out on the Japanese Female Facial Expression (JAFFE) database. The results show that the proposed method can significantly improve the facial expression recognition rate.
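For reference, the basic 8-neighbour LBP operator that DLBP extends can be sketched as follows. This is a plain-Python illustration of standard LBP, not the authors' DLBP-TE code:

```python
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour local binary pattern on a grayscale image:
    each pixel's neighbours are thresholded against the centre value
    and packed clockwise into one byte (border pixels are skipped)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbour offsets, starting at the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di, j + dj] >= c:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out
```

The histogram of these codes over image blocks is the texture descriptor; the high dimensionality of that histogram is exactly the shortcoming the proposed DLBP-TE method targets.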
Based on the smart home and facial expression recognition, this paper presents a cognitive emotional model for an eldercare robot. Facial emotional features are extracted and recognized by combining a Gabor filter with the Local Binary Pattern algorithm (LBP) and the k-Nearest Neighbor algorithm (KNN). These facial emotional features then influence the robot's emotion state, which is described in the AVS emotion space. The optimizing effect of the smart home environment on the cognitive emotional model is analyzed using a simulated annealing algorithm (SA). Finally, the transition probability from any emotional state to a basic emotional state is obtained based on the cognitive reappraisal strategy and Euclidean distance. Simulations and experiments verify the model's effectiveness in reducing negative emotional states.
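The classification step here, a plain k-nearest-neighbour vote over the extracted feature vectors, can be sketched as below. The feature values and labels are invented for illustration; the paper's actual Gabor/LBP features are far higher-dimensional.

```python
import numpy as np

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour classification: find the k training
    vectors closest to x in Euclidean distance and return the
    majority label among them."""
    d = np.linalg.norm(np.asarray(train_X, dtype=float) - np.asarray(x, dtype=float), axis=1)
    nearest = np.argsort(d)[:k]                    # indices of k closest samples
    labels, counts = np.unique(np.asarray(train_y)[nearest], return_counts=True)
    return labels[np.argmax(counts)]               # majority vote
```

KNN needs no training phase beyond storing the feature vectors, which is one reason it pairs naturally with handcrafted descriptors such as Gabor and LBP features.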
A facial expression recognition system using Adaboost based on the Split Rectangle feature is proposed in this paper. This system provides more varied features, with greater speed and accuracy, than the Haar-like feature of Viola, which is commonly used with the Adaboost training algorithm. The Split Rectangle feature uses a mask-like shape composed of 2 independent rectangles, instead of the mask-like shape of the Haar-like feature, which is composed of 2-4 adjoined rectangles. The Split Rectangle feature involves fewer divergent operations than the Haar-like feature, and requires less computation overall because the sum of pixels covers only two rectangles. The Split Rectangle feature thus provides varied, fast features to Adaboost, which produces a strong classifier with increased accuracy and speed. In the experiment, the system achieved a processing speed of 5.92 ms and 84%-94% accuracy by learning 5 facial expressions, neutral, happiness, sadness, anger, and surprise, using Adaboost based on the Split Rectangle feature.
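The reason rectangle-sum features are cheap is the summed-area (integral) image: any rectangle sum costs four table lookups, so a two-rectangle feature needs only eight. A minimal sketch follows; the difference-of-sums form and the rectangle coordinates are assumptions about the feature's structure, not the paper's exact definition.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of all pixels in img[:y+1, :x+1]."""
    return np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)

def rect_sum(ii, top, left, h, w):
    """Sum of an h x w rectangle in O(1) via four table lookups."""
    pad = np.pad(ii, ((1, 0), (1, 0)))  # guard row/column of zeros
    return (pad[top + h, left + w] - pad[top, left + w]
            - pad[top + h, left] + pad[top, left])

def split_rect_feature(img, r1, r2):
    """Illustrative Split-Rectangle-style feature: difference of the sums
    of two independent rectangles, each given as (top, left, h, w)."""
    ii = integral_image(img)
    return rect_sum(ii, *r1) - rect_sum(ii, *r2)
```

Because the two rectangles are independent rather than adhered, the feature can compare arbitrary face regions (e.g. mouth vs. brow) at the same constant cost.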
Emotion recognition via facial expressions (ERFE) has attracted a great deal of interest with recent advances in artificial intelligence and pattern recognition. Most studies are based on 2D images, and their performance is usually computationally expensive. In this paper, we propose a real-time emotion recognition approach based on both 2D and 3D facial expression features captured by Kinect sensors. To capture the deformation of the 3D mesh during a facial expression, we combine the features of animation units (AUs) and feature point positions (FPPs) tracked by Kinect. A fusion algorithm based on improved emotional profiles (IEPs) and maximum confidence is proposed to recognize emotions from these real-time facial expression features. Experiments on both an emotion dataset and real-time video show the superior performance of our method.
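The abstract does not spell out the IEP fusion rule, but a generic maximum-confidence fusion over per-classifier score dictionaries can be sketched as below. The labels and score values are invented for illustration, standing in for the AU-based and FPP-based recognizers.

```python
def max_confidence_fusion(score_dicts):
    """Generic maximum-confidence fusion: each classifier returns a dict
    mapping emotion label -> confidence; the fused prediction is the
    single most confident (label, score) pair across all classifiers."""
    best_label, best_score = None, float('-inf')
    for scores in score_dicts:
        for lab, s in scores.items():
            if s > best_score:
                best_label, best_score = lab, s
    return best_label, best_score

label, score = max_confidence_fusion([
    {'happy': 0.62, 'neutral': 0.30},   # e.g. scores from the AU-based model
    {'happy': 0.55, 'surprise': 0.71},  # e.g. scores from the FPP-based model
])
```

Taking the single most confident vote lets whichever feature stream tracked the current expression best dominate the decision, which suits real-time input where one stream may momentarily lose the face.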
Funding: Supported by the National Natural Science Foundation of China, Nos. 81473769 and 81772430 (to WW), and a grant from the Training Program of Innovation and Entrepreneurship for Undergraduates of Southern Medical University, Guangdong Province, China, in 2016, No. 201612121057 (to WW).
Funding: Supported by the National Natural Science Foundation of China (Normal Project No. 61170115; Key Project No. 61432004), the National Key Technologies R&D Program of China (No. 2014BAF08B04), and the Foundation of the Beijing Engineering and Technology Center for Convergence Networks and Ubiquitous Services.
Funding: Supported by the Brain Korea 21 Project in 2010, the MKE (Ministry of Knowledge Economy), Korea, and the ITRC (Information Technology Research Center) support program supervised by the NIPA (National IT Industry Promotion Agency) (NIPA-2010-(C1090-1021-0010)).
Funding: Project supported by the National Natural Science Foundation of China (No. 61272211) and the Six Talent Peaks Project in Jiangsu Province of China (No. DZXX-026).